WO2011125314A1 - Video coding device and video decoding device - Google Patents


Info

Publication number
WO2011125314A1
Authority
WO
WIPO (PCT)
Prior art keywords
unit
prediction
image
encoding
binarization
Prior art date
Application number
PCT/JP2011/001955
Other languages
French (fr)
Japanese (ja)
Inventor
Yoshimi Moriya
Shunichi Sekiguchi
Kazuo Sugimoto
Kotaro Asai
Tokumichi Murakami
Original Assignee
Mitsubishi Electric Corporation
Priority date
Filing date
Publication date
Application filed by Mitsubishi Electric Corporation
Priority to TW100111976A (publication TW201143459A)
Publication of WO2011125314A1

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46 — Embedding additional information in the video signal during the compression process
    • H04N19/463 — Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
    • H04N19/10 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 — Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 — Selection of coding mode or of prediction mode
    • H04N19/13 — Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
    • H04N19/189 — Adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/196 — Adaptive coding specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters

Definitions

  • the present invention relates to a moving image encoding apparatus that divides a moving image into predetermined regions and performs encoding in units of regions, and a moving image decoding device that decodes encoded moving images in units of predetermined regions.
  • each frame of the video signal is divided into units of block data (called macroblocks) that combine a 16×16-pixel luminance signal with the corresponding 8×8-pixel color difference signals, and a compression method based on a motion compensation technique and an orthogonal transform / transform coefficient quantization technique is employed.
  • Motion compensation technology is a technology that reduces the redundancy of signals in the time direction for each macroblock using a high correlation existing between video frames.
  • a previously encoded frame is stored as a reference image, a search is performed within a predetermined search range in the reference image for the block region having the smallest differential power with respect to the current macroblock that is the target of motion compensation prediction, and the shift between the spatial position of the current macroblock and the spatial position of the search-result block in the reference image is encoded as a motion vector.
  • when a coding mode that divides the macroblock is selected, small moving objects can be handled locally.
  • in Patent Document 1, when a frame contains many moving objects smaller than the fixed macroblock, the ratio of selecting the coding mode that divides the macroblock becomes high; if a small macroblock size is selected in advance, the code amount related to the coding mode instructing division within the macroblock, which occurs when a larger macroblock size is selected, does not arise, so the code amount can be reduced while maintaining the same coding efficiency.
  • however, in Patent Document 1, determining the optimum macroblock size requires advanced preprocessing for accurately evaluating the image content before encoding, and there was a problem that the amount of processing related to this preprocessing became huge.
  • the present invention has been made to solve the above-described problems, and an object of the present invention is to obtain a moving image encoding device and a moving image decoding device that can perform compression encoding efficiently while suppressing the code amount related to overhead such as the encoding mode, even with a macroblock size that is set in advance regardless of the image content.
  • the encoding control unit selects, from among the encoding modes, an encoding mode with a predetermined block division type based on encoding efficiency, and outputs the selected encoding mode as a multilevel signal.
  • the variable length encoding unit includes a binarization unit that converts the encoding mode of the multilevel signal selected by the encoding control unit into a binary signal, using a binarization table that specifies the correspondence between the multilevel signal representing the encoding mode and the binary signal;
  • an arithmetic coding processing unit that outputs a coded bit string by arithmetically coding the binary signal converted by the binarization unit, and multiplexes the coded bit string into a bitstream;
  • and a binarization table update unit that updates the correspondence between the multilevel signal and the binary signal of the binarization table based on the frequency with which the encoding control unit selects each encoding mode.
  • the variable length decoding unit includes an arithmetic decoding processing unit that arithmetically decodes the coded bit string representing the coding mode multiplexed in the bitstream to generate a binary signal, and converts the coding mode represented by the generated binary signal into a multilevel signal using a binarization table that specifies the correspondence between the binary signal representing the coding mode and the multilevel signal.
  • since the multilevel signal representing the encoding mode is converted into a binary signal, arithmetically encoded, and multiplexed into the bitstream, a moving image encoding device and a moving image decoding device can be obtained that perform compression encoding efficiently while suppressing the code amount related to overhead such as the encoding mode, even with a macroblock size that is set in advance regardless of the image content.
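The frequency-driven binarization-table idea described above can be sketched as follows. This is a hypothetical illustration, not the patent's actual table: the mode names, the truncated-unary code shape, and the selection counts are all assumptions made for the example.

```python
from collections import Counter

def build_binarization_table(counts):
    """Assign a truncated-unary binary signal to each coding mode so that
    more frequently selected modes receive shorter bin strings."""
    ordered = sorted(counts, key=lambda mode: -counts[mode])
    # rank 0 -> "1", rank 1 -> "01", rank 2 -> "001", ...
    return {mode: "0" * rank + "1" for rank, mode in enumerate(ordered)}

# Hypothetical selection frequencies gathered by the encoding control unit
counts = Counter({"mb_mode0": 50, "mb_mode1": 10, "mb_mode2": 5, "mb_mode3": 2})
table = build_binarization_table(counts)
# the most frequently selected mode now maps to the shortest binary signal
```

Updating the table on both the encoder and decoder side from the same selection statistics keeps the two binarization tables synchronized without transmitting the table itself.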
  • FIG. 1 is a block diagram showing the configuration of the moving image encoding device according to Embodiment 1 of the present invention.
  • FIG. 2A is a diagram showing an example of the encoding mode of a picture subjected to predictive encoding in the time direction, and FIG. 2B is a diagram showing another example of such an encoding mode.
  • FIG. 3 is a block diagram showing the internal configuration of the motion compensated prediction unit of the video encoding device according to Embodiment 1.
  • FIG. 4 is a diagram explaining the method of determining the predicted value of the motion vector according to the encoding mode.
  • A diagram shows an example of adapting the transform block size according to the encoding mode.
  • A block diagram shows the internal configuration of the transform/quantization unit of the video encoding device according to Embodiment 1.
  • A block diagram shows the configuration of the moving image decoding device according to Embodiment 1 of the present invention.
  • A block diagram shows the internal configuration of the variable length encoding unit of the moving image encoding device according to Embodiment 2 of the present invention.
  • A diagram shows an example of a binarization table, in its state before an update.
  • A diagram shows an example of a probability table.
  • A diagram shows an example of a state transition table.
  • A diagram illustrates the procedure for generating context identification information.
  • FIG. 13A is a diagram illustrating a binarization table in binary tree representation, and FIG. 13B is a diagram illustrating the positional relationship between an encoding target macroblock and its peripheral blocks.
  • A diagram shows an example of a binarization table, in its state after an update.
  • A block diagram shows the internal configuration of the interpolated image generation unit provided in the motion compensated prediction unit of the moving image encoding device according to Embodiment 3 of the present invention.
  • Embodiment 1.
  • each frame image of video is used as an input, motion compensation prediction is performed between adjacent frames, and compression processing by orthogonal transformation / quantization is performed on the obtained prediction difference signal.
  • a moving picture coding apparatus that generates a bit stream by performing variable length coding and a moving picture decoding apparatus that decodes the bit stream will be described.
  • FIG. 1 is a block diagram showing a configuration of a moving picture coding apparatus according to Embodiment 1 of the present invention.
  • The moving image encoding device of FIG. 1 includes: a block dividing unit 2 that divides each frame image of the input video signal 1 into macroblock images of the macroblock size 4, further divides each macroblock image into one or more sub-blocks according to the encoding mode 7, and outputs the result as a macro/sub-block image 5; an intra prediction unit 8 that receives the macro/sub-block image 5 and performs intra-frame prediction using the image signal of the intra prediction memory 28 to generate a predicted image 11; a motion compensated prediction unit 9 that receives the macro/sub-block image 5 and performs motion compensated prediction using the reference image 15 of the motion compensated prediction frame memory 14 to generate a predicted image 17; a switching unit 6 that inputs the macro/sub-block image 5 to either the intra prediction unit 8 or the motion compensated prediction unit 9 according to the encoding mode 7; a subtraction unit 12 that generates the prediction difference signal 13 by subtracting, from the macro/sub-block image 5 output from the block dividing unit 2, the predicted image 11 or 17 output from either the intra prediction unit 8 or the motion compensated prediction unit 9; a transform/quantization unit 19 that performs transform and quantization processing on the prediction difference signal 13 to generate the compressed data 21; a variable length encoding unit 23 that entropy-encodes the compressed data 21 and multiplexes it into the bit stream 30; an inverse quantization/inverse transform unit 22 that performs inverse quantization and inverse transform processing on the compressed data 21 to generate the local decoded prediction difference signal 24; and an adding unit 25 that adds the predicted image 11 or 17 output from either the intra prediction unit 8 or the motion compensated prediction unit 9 to the output of the inverse quantization/inverse transform unit 22.
  • the encoding control unit 3 outputs the information necessary for the processing of each unit (the macroblock size 4, encoding mode 7, optimal encoding mode 7a, prediction parameter 10, optimal prediction parameters 10a and 18a, compression parameter 20, and optimal compression parameter 20a). Details of the macroblock size 4 and the encoding mode 7 are described below; details of the other information are described later.
  • the encoding control unit 3 specifies to the block dividing unit 2 the macroblock size 4 of each frame image of the input video signal 1, and, for each encoding target macroblock, indicates all the encoding modes 7 selectable according to the picture type.
  • the encoding control unit 3 selects a predetermined encoding mode from a set of encoding modes; this encoding mode set is arbitrary, and for example a predetermined encoding mode can be selected from the set shown in FIG. 2A or FIG. 2B described below.
  • FIG. 2A is a diagram illustrating an example of a coding mode of a P (Predictive) picture that performs predictive coding in the time direction.
  • mb_modes 0 to 2 are modes (inter) for encoding macroblocks (M ⁇ L pixel blocks) by inter-frame prediction.
  • mb_mode0 is a mode in which one motion vector is assigned to the entire macroblock
  • mb_mode1 and 2 are modes in which the macroblock is equally divided horizontally or vertically, and different motion vectors are assigned to the divided sub-blocks.
  • mb_mode3 is a mode in which a macroblock is divided into four and different coding modes (sub_mb_mode) are assigned to the divided subblocks.
  • sub_mb_mode0 to 4 are coding modes respectively assigned to the sub-blocks (m×l pixel blocks) obtained by dividing the macroblock into four when mb_mode3 is selected as the macroblock coding mode; sub_mb_mode0 is a mode (intra) in which the sub-block is encoded by intra-frame prediction.
  • the other modes are modes for inter-frame encoding (inter)
  • sub_mb_mode1 is a mode in which one motion vector is assigned to the entire subblock
  • sub_mb_mode2 and 3 are modes in which the sub-block is equally divided horizontally or vertically, respectively, and a different motion vector is assigned to each divided sub-block
  • sub_mb_mode4 is a mode in which the sub-block is divided into four and a different motion vector is assigned to each divided sub-block.
  • FIG. 2B is a diagram illustrating another example of the coding mode of the P picture that performs predictive coding in the time direction.
  • mb_modes 0 to 6 are modes (inter) for encoding macroblocks (M ⁇ L pixel blocks) by inter-frame prediction.
  • mb_mode0 is a mode in which one motion vector is assigned to the entire macroblock
  • mb_mode1 to 6 are modes in which the macroblock is divided horizontally, vertically, or diagonally and a different motion vector is assigned to each divided sub-block
  • mb_mode7 is a mode in which the macroblock is divided into four and different coding modes (sub_mb_mode) are assigned to the divided sub-blocks.
  • sub_mb_mode0 to 8 are coding modes respectively assigned to the sub-blocks (m×l pixel blocks) obtained by dividing the macroblock into four when mb_mode7 is selected as the macroblock coding mode; sub_mb_mode0 is a mode (intra) in which the sub-block is encoded by intra-frame prediction.
  • the other modes are inter-frame encoding modes (inter)
  • sub_mb_mode1 is a mode in which one motion vector is assigned to the entire sub-block
  • sub_mb_mode2 to 7 are modes in which the sub-block is divided horizontally, vertically, or diagonally, respectively, and a different motion vector is assigned to each divided sub-block
  • sub_mb_mode8 is a mode in which the sub-block is divided into four and a different motion vector is assigned to each divided sub-block.
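As a rough illustration of the FIG. 2B mode hierarchy just described, the number of motion vectors a macroblock carries can be derived from its mode. The partition counts below are read off the mode descriptions above and are an assumption for illustration, not a table from the patent (sub_mb_mode0 is intra, so it contributes no motion vector).

```python
# Partitions (hence motion vectors) implied by each macroblock mode:
# mb_mode0 -> 1, mb_mode1..6 -> 2 (horizontal/vertical/diagonal splits),
# mb_mode7 -> delegates to four sub_mb_modes.
MB_PARTITIONS = {0: 1, 1: 2, 2: 2, 3: 2, 4: 2, 5: 2, 6: 2}
SUB_PARTITIONS = {0: 0, 1: 1, 2: 2, 3: 2, 4: 2, 5: 2, 6: 2, 7: 2, 8: 4}

def motion_vector_count(mb_mode, sub_modes=()):
    """Total motion vectors for one macroblock under the FIG. 2B mode set."""
    if mb_mode == 7:                # four sub-blocks, each with its own sub mode
        return sum(SUB_PARTITIONS[s] for s in sub_modes)
    return MB_PARTITIONS[mb_mode]
```

For example, mb_mode7 with sub-modes (0, 1, 2, 8) yields 0 + 1 + 2 + 4 = 7 motion vectors for the macroblock.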
  • the block dividing unit 2 divides each frame image of the input video signal 1 input to the moving image encoding device into macroblock images of the macroblock size 4 specified by the encoding control unit 3. Further, when the encoding mode 7 specified by the encoding control unit 3 indicates that different coding modes are assigned to the sub-blocks into which the macroblock is divided (sub_mb_mode1 to 4 in FIG. 2A, or sub_mb_mode1 to 8 in FIG. 2B), the block dividing unit 2 divides the macroblock image into the sub-block images indicated by the encoding mode 7. Therefore, the block image output from the block dividing unit 2 is either a macroblock image or a sub-block image depending on the encoding mode 7. Hereinafter, this block image is referred to as the macro/sub-block image 5.
  • when the frame size of each frame of the input video signal 1 is not an integer multiple of the macroblock size 4 in the horizontal or vertical direction, a frame (extended frame) is generated for each frame of the input video signal 1 by expanding pixels in the horizontal or vertical direction until the frame size becomes an integer multiple of the macroblock size. For example, when expanding pixels in the vertical direction, there are methods such as repeatedly filling in the pixels at the bottom edge of the original frame, or filling with pixels having a fixed pixel value (gray, black, white, etc.).
  • an extension frame generated for each frame of the input video signal 1 and having an integer multiple of the macroblock size is input to the block dividing unit 2 in place of each frame image of the input video signal 1.
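A minimal sketch of this frame-extension step, assuming a frame represented as a list of pixel rows and using the edge-repetition method mentioned above (the function and variable names are illustrative):

```python
def extend_frame(frame, mb_size):
    """Pad a 2-D frame (list of pixel rows) so both dimensions become
    integer multiples of mb_size, repeating the edge pixels."""
    h, w = len(frame), len(frame[0])
    new_w = -(-w // mb_size) * mb_size   # ceil to the next multiple
    new_h = -(-h // mb_size) * mb_size
    rows = [row + [row[-1]] * (new_w - w) for row in frame]  # extend right edge
    rows += [rows[-1][:] for _ in range(new_h - h)]          # repeat bottom row
    return rows

frame = [[1, 2, 3], [4, 5, 6]]   # a 2x3 toy frame
ext = extend_frame(frame, 4)     # padded up to 4x4
```

The fixed-value variants (gray, black, white) would simply substitute a constant for the repeated edge pixel.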
  • the macroblock size 4 and the frame size (horizontal size and vertical size) of each frame of the input video signal 1 are output to the variable length encoding unit 23 so that they can be multiplexed into the bitstream in units of sequences, each consisting of one or more frames, or in units of pictures.
  • the value of the macro block size may be specified by a profile or the like without being multiplexed directly into the bit stream.
  • identification information for identifying the profile in sequence units is multiplexed into the bit stream.
  • the switching unit 6 is a switch for switching the input destination of the macro / sub-block image 5 in accordance with the encoding mode 7.
  • when the encoding mode 7 is a mode for encoding by intra-frame prediction (hereinafter referred to as the intra-frame prediction mode), the switching unit 6 inputs the macro/sub-block image 5 to the intra prediction unit 8; when the encoding mode 7 is a mode for encoding by inter-frame prediction (hereinafter referred to as the inter-frame prediction mode), it inputs the macro/sub-block image 5 to the motion compensated prediction unit 9.
  • the intra prediction unit 8 performs intra-frame prediction on the input macro/sub-block image 5 in units of the encoding target macroblock specified by the macroblock size 4, or in units of the sub-block specified by the encoding mode 7.
  • at that time, the intra prediction unit 8 generates a predicted image 11 for each of the intra prediction modes included in the prediction parameter 10 instructed from the encoding control unit 3, using the image signal in the frame stored in the intra prediction memory 28.
  • the encoding control unit 3 designates the intra prediction mode as the prediction parameter 10 corresponding to the encoding mode 7.
  • as intra prediction modes, there are, for example, a mode in which the macroblock or sub-block is handled in 4×4-pixel block units and a predicted image is generated using the pixels around each unit block of the image signal in the intra prediction memory 28, a mode in which a predicted image is likewise generated in other pixel-block units, a mode in which a predicted image is generated from an image obtained by reducing the macroblock or sub-block, and the like.
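One of the simplest prediction rules of this kind can be sketched as follows. This is a generic DC-style prediction shown purely for illustration; the patent does not specify this exact rule, and the array shapes and rounding are assumptions.

```python
def dc_intra_4x4(top, left):
    """DC-style intra prediction: fill a 4x4 block with the rounded mean of
    the four reconstructed pixels above and the four to the left."""
    dc = (sum(top) + sum(left) + 4) >> 3   # +4 rounds the division by 8
    return [[dc] * 4 for _ in range(4)]

# surrounding pixels as they might sit in the intra prediction memory
pred = dc_intra_4x4([10, 10, 10, 10], [20, 20, 20, 20])
```

Directional modes differ only in how the surrounding pixels are propagated into the block instead of being averaged.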
  • the motion compensated prediction unit 9 specifies the reference image 15 to be used for predicted image generation from the one or more frames of reference image data stored in the motion compensated prediction frame memory 14, performs motion compensated prediction according to the encoding mode 7 instructed from the encoding control unit 3 using the reference image 15 and the macro/sub-block image 5, and generates the prediction parameter 18 and the predicted image 17.
  • the motion compensated prediction unit 9 generates, as the prediction parameter 18 corresponding to the encoding mode 7, the motion vectors, the identification number (reference image index) of the reference image indicated by each motion vector, and so on. Details of the method of generating the prediction parameter 18 are described later.
  • the subtraction unit 12 subtracts either the predicted image 11 or the predicted image 17 from the macro / sub-block image 5 to obtain a predicted difference signal 13.
  • the prediction difference signals 13 generated for all the intra prediction modes specified by the prediction parameter 10 are evaluated by the encoding control unit 3, and the optimal prediction parameter 10a including the optimal intra prediction mode is determined.
  • for the evaluation, for example, an encoding cost J2, described later, is calculated using the compressed data 21 obtained by transforming and quantizing the prediction difference signal 13, and the intra prediction mode that minimizes the encoding cost J2 is selected.
  • the encoding control unit 3 evaluates the prediction difference signals 13 generated in the intra prediction unit 8 or the motion compensated prediction unit 9 for all the modes included in the encoding mode 7, and, based on the evaluation result, determines from among the encoding modes 7 the optimal encoding mode 7a that yields the best encoding efficiency.
  • the encoding control unit 3 determines the optimum prediction parameters 10a, 18a and the optimum compression parameter 20a corresponding to the optimum coding mode 7a from the prediction parameters 10, 18 and the compression parameter 20. Each determination procedure will be described later.
  • the intra prediction mode is included in the prediction parameter 10 and the optimal prediction parameter 10a.
  • the prediction parameter 18 and the optimal prediction parameter 18a include a motion vector, a reference image identification number (reference image index) indicated by each motion vector, and the like.
  • the compression parameter 20 and the optimum compression parameter 20a include a transform block size, a quantization step size, and the like.
  • the encoding control unit 3 outputs the optimal encoding mode 7a, the optimal prediction parameters 10a and 18a, and the optimal compression parameter 20a for the macroblock or sub-block to be encoded to the variable length encoding unit 23. The encoding control unit 3 also outputs the optimal compression parameter 20a among the compression parameters 20 to the transform/quantization unit 19 and the inverse quantization/inverse transform unit 22.
  • the transform/quantization unit 19 selects, from among the plurality of prediction difference signals 13 generated for all the modes included in the encoding mode 7, the prediction difference signal 13 corresponding to the predicted image 11 or 17 generated based on the optimal encoding mode 7a and the optimal prediction parameters 10a and 18a determined by the encoding control unit 3 (hereinafter referred to as the optimal prediction difference signal 13a). The transform/quantization unit 19 calculates transform coefficients by performing transform processing such as DCT (Discrete Cosine Transform) on the optimal prediction difference signal 13a based on the transform block size of the optimal compression parameter 20a determined by the encoding control unit 3, quantizes the transform coefficients based on the quantization step size of the optimal compression parameter 20a instructed from the encoding control unit 3, and outputs the quantized transform coefficients as the compressed data 21 to the inverse quantization/inverse transform unit 22 and the variable length encoding unit 23.
  • the inverse quantization/inverse transform unit 22 performs inverse quantization on the compressed data 21 input from the transform/quantization unit 19 using the optimal compression parameter 20a, followed by inverse transform processing such as inverse DCT, to generate the local decoded prediction difference signal 24 of the optimal prediction difference signal 13a, and outputs it to the adding unit 25.
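The quantization and inverse-quantization round trip just described can be sketched as follows. This is a simplified scalar quantizer for illustration; the rounding rule and step handling are assumptions, not the exact formulas of the transform/quantization unit 19.

```python
def quantize(coeffs, qstep):
    """Divide each transform coefficient by the quantization step size,
    rounding toward zero, to produce the levels carried in compressed data."""
    return [int(c / qstep) for c in coeffs]

def dequantize(levels, qstep):
    """Inverse quantization as performed before the inverse transform:
    scale the levels back by the step size (the rounding loss remains)."""
    return [lv * qstep for lv in levels]

levels = quantize([100, -35, 7], 10)   # small coefficients vanish
recon = dequantize(levels, 10)
```

The irreversible loss (here, 7 becoming 0 and -35 becoming -30) is exactly why the encoder keeps a local decoded signal: prediction must be formed from the same degraded data the decoder will see.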
  • the adding unit 25 adds the local decoded prediction difference signal 24 and the predicted image 11 or the predicted image 17 to generate the local decoded image signal 26, outputs the local decoded image signal 26 to the loop filter unit 27, and stores it in the intra prediction memory 28. This local decoded image signal 26 becomes the image signal for intra-frame prediction.
  • the loop filter unit 27 performs a predetermined filtering process on the local decoded image signal 26 input from the adder unit 25 and stores the local decoded image 29 after the filtering process in the motion compensated prediction frame memory 14. This locally decoded image 29 becomes the reference image 15 for motion compensation prediction.
  • the filtering process by the loop filter unit 27 may be performed in units of macroblocks of the input local decoded image signal 26, or may be performed collectively after the local decoded image signals 26 corresponding to one screen's worth of macroblocks have been input.
  • the variable length encoding unit 23 entropy-encodes the compressed data 21 output from the transform/quantization unit 19, the optimal encoding mode 7a output from the encoding control unit 3, the optimal prediction parameters 10a and 18a, and the optimal compression parameter 20a, and generates the bit stream 30 indicating the encoding result.
  • the optimal prediction parameters 10a and 18a and the optimal compression parameter 20a are encoded in units corresponding to the encoding mode indicated by the optimal encoding mode 7a.
  • as described above, the moving image encoding device is configured so that the motion compensated prediction unit 9 and the transform/quantization unit 19 each operate in cooperation with the encoding control unit 3, and thereby the encoding mode, prediction parameters, and compression parameters (that is, the optimal encoding mode 7a, the optimal prediction parameters 10a and 18a, and the optimal compression parameter 20a) that yield the best encoding efficiency are determined.
  • the procedure by which the encoding control unit 3 determines the encoding mode, prediction parameters, and compression parameters that yield the best encoding efficiency is described in the following order: 1. prediction parameters, 2. compression parameters, 3. encoding mode.
  • Prediction parameter determination procedure: when the encoding mode 7 is the inter-frame prediction mode, the procedure for determining the prediction parameter 18, which includes the motion vectors related to inter-frame prediction, the identification number (reference image index) of the reference image indicated by each motion vector, and so on, is described.
  • in the motion compensated prediction unit 9, in cooperation with the encoding control unit 3, the prediction parameter 18 is determined for each of all the encoding modes 7 (for example, the set of encoding modes shown in FIG. 2A or FIG. 2B) instructed from the encoding control unit 3 to the motion compensated prediction unit 9. The detailed procedure is described below.
  • FIG. 3 is a block diagram showing the internal configuration of the motion compensation prediction unit 9.
  • the motion compensation prediction unit 9 illustrated in FIG. 3 includes a motion compensation region division unit 40, a motion detection unit 42, and an interpolated image generation unit 43.
  • its inputs are the encoding mode 7 input from the encoding control unit 3, the macro/sub-block image 5 input from the switching unit 6, and the reference image 15 input from the motion compensated prediction frame memory 14.
  • the motion compensation region dividing unit 40 divides the macro/sub-block image 5 input from the switching unit 6 into blocks serving as motion compensation units in accordance with the encoding mode 7 instructed from the encoding control unit 3, and outputs the resulting motion compensation region block image 41 to the motion detection unit 42.
  • the interpolated image generation unit 43 specifies the reference image 15 to be used for predicted image generation from the one or more frames of reference image data stored in the motion compensated prediction frame memory 14, and the motion detection unit 42 detects the motion vector 44 within a predetermined motion search range on the specified reference image 15.
  • the motion vector is detected with virtual-sample precision, as in the MPEG-4 AVC standard. In this detection method, virtual samples (pixels) are created by interpolation between the integer pixels of the reference image's pixel information (referred to as integer pixels) and used as the predicted image. In the MPEG-4 AVC standard, virtual samples with 1/8-pixel accuracy can be generated and used.
  • a virtual sample with 1/2-pixel accuracy is generated by interpolation using a 6-tap filter on six integer pixels in the vertical or horizontal direction.
  • a virtual sample with 1/4-pixel accuracy is generated by an interpolation operation using an average-value filter on adjacent 1/2-pixel or integer pixels.
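The two interpolation steps just described can be sketched as follows. The (1, -5, 20, 20, -5, 1)/32 tap set is the well-known H.264/AVC half-pel filter; the rounding offsets and the clipping to an 8-bit pixel range are assumptions made for this sketch.

```python
def half_pel(p):
    """1/2-pel virtual sample from six integer pixels along one direction,
    using the 6-tap filter (1, -5, 20, 20, -5, 1) / 32 with rounding."""
    a, b, c, d, e, f = p
    val = (a - 5 * b + 20 * c + 20 * d - 5 * e + f + 16) >> 5
    return max(0, min(255, val))           # clip to the 8-bit pixel range

def quarter_pel(x, y):
    """1/4-pel virtual sample as the rounded average of two neighbours."""
    return (x + y + 1) >> 1

h = half_pel([10, 10, 10, 10, 10, 10])     # a flat area stays flat
q = quarter_pel(10, 11)
```

Note that the filter taps sum to 32, so a constant region passes through unchanged, which is the sanity check usually applied to interpolation filters.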
  • the interpolation image generation unit 43 generates a predicted image 45 of virtual pixels according to the accuracy of the motion vector 44 instructed from the motion detection unit 42.
  • a motion vector detection procedure with virtual pixel accuracy will be described.
  • Motion vector detection procedure I: the interpolated image generation unit 43 generates a predicted image 45 for each integer-pixel-accuracy motion vector 44 within the predetermined motion search range of the motion compensation region block image 41.
  • the predicted image 45 (predicted image 17) generated with integer-pixel accuracy is output to the subtracting unit 12, and is subtracted from the motion compensation region block image 41 (macro/sub-block image 5) by the subtracting unit 12 to become the prediction difference signal 13.
  • the encoding control unit 3 evaluates the prediction efficiency with respect to the prediction difference signal 13 and the integer pixel precision motion vector 44 (prediction parameter 18).
  • the prediction cost J1 is calculated from the following equation (1), and the integer-pixel-accuracy motion vector 44 that minimizes the prediction cost J1 within the predetermined motion search range is determined.
  • J1 = D1 + λR1   (1)
  • here, D1 and R1 are used as evaluation values: D1 is the sum of absolute differences (SAD) within the macroblock or sub-block of the prediction difference signal, R1 is the estimated code amount of the motion vector and of the identification number of the reference image pointed to by the motion vector, and λ is a positive number.
  • The code amount of the motion vector is obtained by predicting the motion vector value in each mode of FIG. 2A or FIG. 2B from the values of nearby motion vectors, and then entropy-coding the prediction difference value based on a probability distribution, or by performing a code amount estimation equivalent to that entropy coding.
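Equation (1) trades off distortion against rate. The sketch below is a hypothetical illustration: the helper names are invented, `rate_bits` stands in for the estimated code amount R1, and one-dimensional pixel lists stand in for blocks.

```python
def sad(block, pred):
    """D1: sum of absolute differences between the block and its prediction."""
    return sum(abs(a - b) for a, b in zip(block, pred))

def prediction_cost_j1(block, pred, rate_bits, lam):
    """J1 = D1 + lambda * R1 from equation (1); lam is a positive number."""
    return sad(block, pred) + lam * rate_bits

# example: SAD of 1, rate of 4 bits, lambda 0.5 -> J1 = 1 + 2.0
assert prediction_cost_j1([10, 20], [10, 21], rate_bits=4, lam=0.5) == 3.0
```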
  • FIG. 4 is a diagram for explaining a method for determining a motion vector prediction value (hereinafter referred to as a prediction vector) in each encoding mode 7 shown in FIG. 2B.
  • In FIG. 4, the prediction vector PMV of a rectangular block is calculated from the following equation (2), in which median() corresponds to a median filter process and is a function that outputs the median value of the motion vectors MVa, MVb, and MVc.
  • PMV = median(MVa, MVb, MVc)   (2)
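Equation (2) is typically applied per vector component. The sketch below assumes motion vectors are (x, y) tuples, which is an illustrative representation, not the patent's.

```python
def median3(a, b, c):
    """median() of equation (2): the middle one of three values."""
    return sorted((a, b, c))[1]

def prediction_vector(mva, mvb, mvc):
    """PMV = median(MVa, MVb, MVc), taken component-wise over (x, y)."""
    return (median3(mva[0], mvb[0], mvc[0]),
            median3(mva[1], mvb[1], mvc[1]))

# x components 1, 3, 2 -> median 2; y components 4, 2, 9 -> median 4
assert prediction_vector((1, 4), (3, 2), (2, 9)) == (2, 4)
```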
  • Motion vector detection procedure II: The interpolation image generation unit 43 generates a predicted image 45 for one or more 1/2 pixel accuracy motion vectors 44 positioned around the integer pixel accuracy motion vector determined in "Motion vector detection procedure I".
  • The predicted image 45 (predicted image 17) generated with 1/2 pixel accuracy is subtracted from the motion compensation region block image 41 (macro/sub-block image 5) by the subtraction unit 12 to obtain the prediction difference signal 13.
  • The encoding control unit 3 evaluates the prediction efficiency with respect to the prediction difference signal 13 and the 1/2 pixel accuracy motion vector 44 (prediction parameter 18), and determines, from the one or more 1/2 pixel accuracy motion vectors positioned around the integer pixel accuracy motion vector, the 1/2 pixel accuracy motion vector 44 that minimizes the prediction cost J1.
  • Motion vector detection procedure III: Similarly, the encoding control unit 3 and the motion compensation prediction unit 9 determine, from the one or more 1/4 pixel accuracy motion vectors positioned around the 1/2 pixel accuracy motion vector determined in "Motion vector detection procedure II", the 1/4 pixel accuracy motion vector 44 that minimizes the prediction cost J1.
  • Motion vector detection procedure IV: In the same manner, the encoding control unit 3 and the motion compensation prediction unit 9 continue detecting motion vectors with virtual pixel accuracy until a predetermined accuracy is reached.
  • In the above description, motion vector detection with virtual pixel accuracy is performed until a predetermined accuracy is reached; however, a threshold may be determined for the prediction cost, for example, and when the prediction cost J1 becomes smaller than the predetermined threshold, the detection of the motion vector with virtual pixel accuracy may be stopped before the predetermined accuracy is reached.
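Procedures I to IV amount to a coarse-to-fine greedy refinement with optional early termination on a cost threshold. The sketch below is a simplified illustration, with the hypothetical callback `cost_fn` standing in for equation (1); it is not the patent's implementation.

```python
def refine_motion_vector(cost_fn, start_mv, levels=3, threshold=None):
    """Refine a motion vector by halving the search step each level
    (integer -> 1/2 -> 1/4 pel ...), optionally stopping early once
    the cost falls below a threshold."""
    best_mv, best_cost = start_mv, cost_fn(start_mv)
    step = 0.5  # first refinement level: 1/2-pixel accuracy
    for _ in range(levels):
        if threshold is not None and best_cost < threshold:
            break  # early termination before the finest accuracy
        # examine the neighbours at this accuracy, greedily updating the best
        for dx in (-step, 0.0, step):
            for dy in (-step, 0.0, step):
                cand = (best_mv[0] + dx, best_mv[1] + dy)
                c = cost_fn(cand)
                if c < best_cost:
                    best_mv, best_cost = cand, c
        step /= 2  # next level: 1/4 pixel, then 1/8 pixel, ...
    return best_mv, best_cost

# toy cost: distance to a "true" motion of (1.25, -0.5)
cost = lambda mv: abs(mv[0] - 1.25) + abs(mv[1] + 0.5)
mv, c = refine_motion_vector(cost, start_mv=(1.0, 0.0))
assert mv == (1.25, -0.5) and c == 0.0
```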
  • Note that the motion vector may refer to a pixel outside the frame defined by the reference frame size; in that case, the pixels outside the frame must be generated. One method of generating pixels outside the frame is to fill them in with the pixels at the screen edge.
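Filling in with screen-edge pixels is equivalent to clamping the referenced coordinates, as in this minimal sketch (representing the frame as a list of rows is an assumption made for illustration):

```python
def sample_padded(frame, x, y):
    """Read frame[y][x], repeating edge pixels for out-of-frame positions."""
    h, w = len(frame), len(frame[0])
    x = min(w - 1, max(0, x))  # clamp to [0, w-1]
    y = min(h - 1, max(0, y))  # clamp to [0, h-1]
    return frame[y][x]

frame = [[1, 2],
         [3, 4]]
assert sample_padded(frame, -5, 0) == 1  # left of the frame -> left edge
assert sample_padded(frame, 3, 5) == 4   # below right -> bottom-right corner
```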
  • When the frame size of each frame of the input video signal 1 is not an integer multiple of the macroblock size, and a frame whose size has been extended to an integer multiple of the macroblock size (an extended frame) is input instead of each frame of the input video signal 1, the expanded size (extended frame size) is used as the reference frame size; otherwise, the frame size of the reference frame is the frame size of the original input video signal.
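Rounding a frame dimension up to the next integer multiple of the macroblock size can be sketched as follows (the function name is illustrative):

```python
def extended_size(frame_w, frame_h, mb_size):
    """Extended frame size: each dimension rounded up to a multiple of mb_size."""
    round_up = lambda v: ((v + mb_size - 1) // mb_size) * mb_size
    return round_up(frame_w), round_up(frame_h)

# 1920x1080 with 32x32 macroblocks: 1080 is not a multiple of 32
assert extended_size(1920, 1080, 32) == (1920, 1088)
```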
  • By the above procedure, for each of the motion compensation region block images 41 obtained by dividing the macro/sub-block image 5 into blocks serving as the units of motion compensation indicated by the encoding mode 7, the motion compensation prediction unit 9 determines a motion vector of the predetermined virtual pixel accuracy, and outputs that motion vector and the identification number of the reference image indicated by it as the prediction parameter 18.
  • The motion compensation prediction unit 9 outputs the predicted image 45 (predicted image 17) generated with the prediction parameter 18 to the subtraction unit 12, which subtracts it from the macro/sub-block image 5 to obtain the prediction difference signal 13.
  • The prediction difference signal 13 output from the subtraction unit 12 is output to the transform/quantization unit 19.
  • FIG. 5 is a diagram illustrating an example of adaptation of the transform block size according to the encoding mode 7 illustrated in FIG. 2B. In FIG. 5, a 32×32 pixel block is used as an example of the M×L pixel block.
  • When the mode designated by the encoding mode 7 is one of mb_mode 0 to 6, either 16×16 or 8×8 pixels can be adaptively selected as the transform block size. In the case of mb_mode 7, the transform block size can be adaptively selected from 8×8 or 4×4 pixels for each of the 16×16 pixel sub-blocks obtained by dividing the macroblock into four.
  • The set of transform block sizes selectable for each encoding mode can be defined from any rectangular block sizes that are equal to or smaller than the size of the sub-blocks into which the macroblock is equally divided by that encoding mode.
  • FIG. 6 is a diagram showing another example of adaptation of the transform block size according to the coding mode 7 shown in FIG. 2B.
  • In FIG. 6, when the encoding mode 7 designates the above-described mb_mode 0, 5, or 6, a transform block size corresponding to the shape of the sub-block serving as the unit of motion compensation can be selected in addition to 16×16 and 8×8 pixels.
  • In the case of mb_mode 0, the transform block size can be adaptively selected from 16×16, 8×8, and 32×32 pixels; in the case of mb_mode 5, from 16×16, 8×8, and 16×32 pixels; and in the case of mb_mode 6, from 16×16, 8×8, and 32×16 pixels. Although illustration is omitted, in the case of mb_mode 7, the transform block size can be adaptively selected from 16×16, 8×8, and 16×32 pixels; in the case of mb_mode 1 to 4, the selection may be adapted so that the non-rectangular region is selected from 16×16 and 8×8 pixels and the rectangular region from 8×8 and 4×4 pixels.
  • The encoding control unit 3 sets the transform block size set corresponding to the encoding mode 7, as illustrated in FIG. 5 or FIG. 6, as the compression parameter 20. That is, a set of transform block sizes selectable according to the macroblock encoding mode 7 is determined in advance so that the transform block size can be adaptively selected in units of macroblocks or sub-blocks. Similarly, a set of selectable transform block sizes may be determined in advance according to the encoding mode 7 of the sub-blocks obtained by dividing the macroblock (sub_mb_mode 1 to 8 in FIG. 2B), so that the transform block size can be adaptively selected for each divided sub-block. In either case, the encoding control unit 3 only needs to determine in advance the set of transform block sizes corresponding to the encoding mode 7 so that one size can be adaptively selected from it.
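Such a predefined, mode-dependent set shared by encoder and decoder can be sketched as a lookup table. The dictionary below mirrors the sizes quoted above for the FIG. 6 example, but its layout and the index-based signalling are illustrative assumptions, not the patent's syntax.

```python
# transform block size sets per coding mode (contents quoted from the text)
TRANSFORM_SIZE_SETS = {
    "mb_mode0": [(16, 16), (8, 8), (32, 32)],
    "mb_mode5": [(16, 16), (8, 8), (16, 32)],
    "mb_mode6": [(16, 16), (8, 8), (32, 16)],
}

def transform_size_from_id(mode, size_id):
    """Decoder side: recover the transform block size from an ID
    (an index into the shared, predefined set)."""
    return TRANSFORM_SIZE_SETS[mode][size_id]

# only the index needs to be signalled, not the size itself
assert transform_size_from_id("mb_mode5", 2) == (16, 32)
```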
  • The transform/quantization unit 19, in cooperation with the encoding control unit 3, determines the optimum transform block size from among these transform block sizes for each macroblock unit designated by the macroblock size 4, or for each sub-block unit obtained by further dividing the macroblock according to the encoding mode 7. The detailed procedure is described below.
  • FIG. 7 is a block diagram showing the internal configuration of the transform / quantization unit 19.
  • The transform/quantization unit 19 illustrated in FIG. 7 includes a transform block size dividing unit 50, a transform unit 52, and a quantization unit 54.
  • Its input data are the compression parameter 20 (transform block size, quantization step size, etc.) input from the encoding control unit 3 and the prediction difference signal 13 input from the subtraction unit 12.
  • The transform block size dividing unit 50 divides the prediction difference signal 13 of each macroblock or sub-block for which the transform block size is to be determined into blocks corresponding to the transform block size of the compression parameter 20, and outputs them to the transform unit 52 as transform target blocks 51. When a plurality of transform block sizes are selected and designated for one macroblock or sub-block by the compression parameter 20, the transform target blocks 51 of the respective transform block sizes are sequentially output to the transform unit 52.
  • The transform unit 52 performs transform processing on the input transform target block 51 in accordance with a transform method such as DCT, an integer transform in which the DCT transform coefficients are approximated by integers, or the Hadamard transform, and outputs the generated transform coefficients 53 to the quantization unit 54.
  • The quantization unit 54 quantizes the input transform coefficients 53 according to the quantization step size of the compression parameter 20 instructed by the encoding control unit 3, and outputs the compressed data 21, which are the quantized transform coefficients, to the inverse quantization/inverse transform unit 22 and the encoding control unit 3. Note that, when a plurality of transform block sizes are selected and designated for one macroblock or sub-block by the compression parameter 20, the transform unit 52 and the quantization unit 54 perform the above processing for all of those transform block sizes and output the respective sets of compressed data 21.
  • The compressed data 21 output from the quantization unit 54 is input to the encoding control unit 3 and used for evaluating the encoding efficiency with respect to the transform block size of the compression parameter 20.
  • Using the compressed data 21 obtained for all the transform block sizes selectable for each of the encoding modes included in the encoding mode 7, the encoding control unit 3 calculates, for example, the encoding cost J2 from the following equation (3), and selects the transform block size that minimizes the encoding cost J2.
  • J2 = D2 + λR2   (3)
  • Here, D2 and R2 are used as evaluation values. As D2, the compressed data 21 obtained for a transform block size is input to the inverse quantization/inverse transform unit 22, and the sum of squared distortion between the macro/sub-block image 5 and the local decoded image signal 26, obtained by adding the predicted image 17 to the local decoded prediction difference signal 24 produced by inverse transform/inverse quantization processing of that compressed data 21, is used. As R2, the code amount (or estimated code amount) obtained by actually encoding, with the variable length encoding unit 23, the compressed data 21 obtained for the transform block size together with the encoding mode 7 and the prediction parameters 10 and 18 relating to that compressed data 21 is used.
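Equation (3) parallels equation (1) but with squared distortion. A hypothetical sketch, with one-dimensional lists standing in for reconstructed blocks:

```python
def ssd(block, recon):
    """D2: sum of squared distortion between source and local decoded block."""
    return sum((a - b) ** 2 for a, b in zip(block, recon))

def encoding_cost_j2(block, recon, rate_bits, lam):
    """J2 = D2 + lambda * R2 from equation (3)."""
    return ssd(block, recon) + lam * rate_bits

# of two candidate transform block sizes, pick the one with smaller J2
j2_a = encoding_cost_j2([10, 20], [11, 20], rate_bits=10, lam=1.0)  # 1 + 10
j2_b = encoding_cost_j2([10, 20], [10, 20], rate_bits=20, lam=1.0)  # 0 + 20
assert j2_a < j2_b
```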
  • After determining the optimal encoding mode 7a according to "3. Encoding mode determination procedure" described later, the encoding control unit 3 selects the transform block size corresponding to the determined optimal encoding mode 7a, includes it in the optimal compression parameter 20a, and outputs it to the variable length coding unit 23.
  • The variable length coding unit 23 entropy-codes the optimal compression parameter 20a and multiplexes it into the bitstream 30.
  • At this time, the transform block size is selected from the transform block size set defined in advance (as illustrated in FIGS. 5 and 6) according to the optimal encoding mode 7a of the macroblock or sub-block. Identification information such as an ID may therefore be assigned to each transform block size included in the set, and this identification information may be entropy-coded as the transform block size information and multiplexed into the bitstream 30. In this case, the identification information of the transform block size set is also set on the decoding device side. When the transform block size can be determined automatically from the set on the decoding device side, the transform block size information need not be multiplexed into the bitstream 30.
  • 3. Encoding mode determination procedure: When the prediction parameters 10 and 18 and the compression parameter 20 have been determined for each encoding mode 7 according to the above "1. Prediction parameter determination procedure" and "2. Compression parameter determination procedure", the encoding control unit 3 further uses the compressed data 21 obtained by transforming and quantizing the prediction difference signal 13 produced with each encoding mode 7 and the prediction parameters 10 and 18 and compression parameter 20 at that time, determines from the above equation (3) the encoding mode 7 that minimizes the encoding cost J2, and selects that encoding mode 7 as the optimal encoding mode 7a of the macroblock.
  • Note that the optimal encoding mode 7a may be determined from all the encoding modes obtained by adding the skip mode, as a macroblock or sub-block mode, to the encoding modes shown in FIG. 2A or FIG. 2B.
  • The skip mode is a mode in which, on the encoding device side, a predicted image motion-compensated using the motion vector of an adjacent macroblock or sub-block is used as the local decoded image signal; since no prediction parameters or compression parameters other than the encoding mode need to be calculated and multiplexed into the bitstream, encoding with a reduced code amount is possible. On the decoding device side, a predicted image motion-compensated using the motion vectors of adjacent macroblocks or sub-blocks is output as the decoded image signal by the same procedure as on the encoding device side. For the extended region, the encoding mode may be determined so that only the skip mode is selected, thereby suppressing the amount of code consumed in the extended region.
  • As described above, the encoding control unit 3 outputs to the variable length coding unit 23 the optimal encoding mode 7a giving the optimum encoding efficiency, determined by the above "1. Prediction parameter determination procedure", "2. Compression parameter determination procedure", and "3. Encoding mode determination procedure"; it also selects the prediction parameters 10 and 18 corresponding to the optimal encoding mode 7a as the optimal prediction parameters 10a and 18a, likewise selects the corresponding compression parameter 20 as the optimal compression parameter 20a, and outputs them to the variable length coding unit 23.
  • The variable length encoding unit 23 entropy-encodes the optimal encoding mode 7a, the optimal prediction parameters 10a and 18a, and the optimal compression parameter 20a, and multiplexes them into the bitstream 30.
  • The optimal prediction difference signal 13a obtained from the predicted image 11 or 17 based on the determined optimal encoding mode 7a, optimal prediction parameters 10a and 18a, and optimal compression parameter 20a is transformed and quantized by the transform/quantization unit 19 as described above into the compressed data 21, which is entropy-coded by the variable length coding unit 23 and multiplexed into the bitstream 30.
  • The compressed data 21 also passes through the inverse quantization/inverse transform unit 22 and the addition unit 25 to become the local decoded image signal 26, which is input to the loop filter unit 27.
  • FIG. 8 is a block diagram showing the configuration of the video decoding apparatus according to Embodiment 1 of the present invention.
  • The moving picture decoding apparatus shown in FIG. 8 includes: a variable length decoding unit 61 that entropy-decodes, from the bitstream 60, the optimal encoding mode 62 in units of macroblocks, and the optimal prediction parameter 63, the compressed data 64, and the optimal compression parameter 65 in units of the macroblocks or sub-blocks divided according to the decoded optimal encoding mode 62; an intra prediction unit 69 that, when the optimal prediction parameter 63 is input, generates a predicted image 71 using the intra prediction mode included in the optimal prediction parameter 63 and the decoded image signal in the frame stored in the intra prediction memory 77; a motion compensation prediction unit 70 that performs motion compensation prediction using the motion vector included in the optimal prediction parameter 63 and the reference image 76 in the motion compensated prediction frame memory 75 specified by the reference image index included in the optimal prediction parameter 63, to generate a predicted image 72; a switching unit 68 that inputs the optimal prediction parameter 63 decoded by the variable length decoding unit 61 to either the intra prediction unit 69 or the motion compensation prediction unit 70 according to the decoded optimal encoding mode 62; an inverse quantization/inverse transform unit 66 that performs inverse quantization and inverse transform processing on the compressed data 64 using the optimal compression parameter 65 to generate the prediction difference signal decoded value 67; an addition unit 73 that adds the predicted image 71 or 72 output from the intra prediction unit 69 or the motion compensation prediction unit 70 to the prediction difference signal decoded value 67 to generate a decoded image 74; an intra prediction memory 77 that stores the decoded image 74; a loop filter unit 78 that performs filtering on the decoded image 74 to generate a reproduced image 79; and a motion compensated prediction frame memory 75 that stores the reproduced image 79.
  • The variable length decoding unit 61 performs entropy decoding processing on the bitstream 60 and decodes the macroblock size in units of sequences, each consisting of pictures of one or more frames, or in units of pictures. When the macroblock size is not directly multiplexed into the bitstream, it is determined based on profile identification information decoded from the bitstream in sequence units. Based on the decoded macroblock size and frame size, the number of macroblocks included in each frame is determined, and the optimal encoding mode 62, the optimal prediction parameter 63, the compressed data 64, and the optimal compression parameter 65 (transform block size information, quantization step size, and the like) of each macroblock included in the frame are decoded.
  • The optimal encoding mode 62, optimal prediction parameter 63, compressed data 64, and optimal compression parameter 65 decoded on the decoding device side correspond, respectively, to the optimal encoding mode 7a, the optimal prediction parameters 10a and 18a, the compressed data 21, and the optimal compression parameter 20a encoded on the encoding device side.
  • The transform block size information of the optimal compression parameter 65 is identification information that specifies the transform block size selected on the encoding device side, in units of macroblocks or sub-blocks, from the transform block size set defined in advance according to the encoding mode 7; the decoding device side specifies the transform block size of the macroblock or sub-block from the optimal encoding mode 62 and this transform block size information.
  • The inverse quantization/inverse transform unit 66 uses the compressed data 64 and the optimal compression parameter 65 input from the variable length decoding unit 61 to perform inverse quantization and inverse transform processing in units of the blocks specified by the transform block size information, and calculates the prediction difference signal decoded value 67.
  • For the motion vector, the variable length decoding unit 61 determines a prediction vector by the process shown in FIG. 4, referring to the motion vectors of already-decoded neighboring blocks, and obtains the decoded value of the motion vector by adding to it the prediction difference value decoded from the bitstream 60. The variable length decoding unit 61 includes the decoded value of the motion vector in the optimal prediction parameter 63 and outputs it to the switching unit 68.
  • The switching unit 68 is a switch that switches the input destination of the optimal prediction parameter 63 according to the optimal encoding mode 62. When the optimal encoding mode 62 input from the variable length decoding unit 61 indicates the intra-frame prediction mode, the switching unit 68 outputs the optimal prediction parameter 63 (intra prediction mode) input from the variable length decoding unit 61 to the intra prediction unit 69; when the optimal encoding mode 62 indicates the inter-frame prediction mode, it outputs the optimal prediction parameter 63 (motion vector, identification number (reference image index) of the reference image indicated by each motion vector, and the like) to the motion compensation prediction unit 70.
  • The intra prediction unit 69 refers to the decoded image signal in the frame (decoded image 74a) stored in the intra prediction memory 77, and generates and outputs a predicted image 71 corresponding to the intra prediction mode indicated by the optimal prediction parameter 63.
  • The method of generating the predicted image 71 in the intra prediction unit 69 is the same as the operation of the intra prediction unit 8 on the encoding device side; however, whereas the intra prediction unit 8 generates predicted images for all the intra prediction modes instructed by the encoding mode 7, the intra prediction unit 69 differs in that it generates only the predicted image 71 corresponding to the intra prediction mode indicated by the optimal encoding mode 62.
  • The motion compensation prediction unit 70 generates and outputs a predicted image 72 from one or more reference images 76 stored in the motion compensated prediction frame memory 75, based on the motion vector, reference image index, and the like indicated by the input optimal prediction parameter 63.
  • The method of generating the predicted image 72 in the motion compensation prediction unit 70 excludes, from the operations of the motion compensation prediction unit 9 on the encoding device side, the process of searching for a motion vector over a plurality of reference images (the process corresponding to the operations of the motion detection unit 42 and the interpolation image generation unit 43 shown in FIG. 3); only the process of generating the predicted image 72 according to the optimal prediction parameter 63 given from the variable length decoding unit 61 is performed. As in the encoding device, when the motion vector refers to a pixel outside the frame defined by the reference frame size, the motion compensation prediction unit 70 generates the predicted image 72 by filling in the pixels outside the frame with the pixels at the edge of the screen.
  • The reference frame size may be defined by the size obtained by extending the decoded frame size to an integral multiple of the decoded macroblock size, or may be defined by the decoded frame size itself; the decoding device side determines the reference frame size accordingly.
  • The addition unit 73 adds either the predicted image 71 or the predicted image 72 to the prediction difference signal decoded value 67 output from the inverse quantization/inverse transform unit 66 to generate the decoded image 74. The decoded image 74 is stored in the intra prediction memory 77 so that it can be used as a reference image (decoded image 74a) for generating intra predicted images of subsequent macroblocks, and is also input to the loop filter unit 78.
  • the loop filter unit 78 performs the same operation as the loop filter unit 27 on the encoding device side, generates a reproduced image 79, and outputs it from the moving image decoding device. Further, the reproduced image 79 is stored in the motion compensated prediction frame memory 75 for use as a reference image 76 for subsequent prediction image generation. Note that the size of the reproduced image obtained after decoding all the macroblocks in the frame is an integral multiple of the macroblock size. When the size of the reproduced image is larger than the decoded frame size corresponding to the frame size of each frame of the video signal input to the encoding device, the reproduced image includes an extended region in the horizontal direction or the vertical direction. In this case, a decoded image obtained by removing the decoded image of the extended area portion from the reproduced image is output from the decoding device.
  • Note that the decoded image of the extended region portion of the reproduced image stored in the motion compensated prediction frame memory 75 is not referred to in subsequent predicted image generation; therefore, the decoded image obtained by removing the extended region portion from the reproduced image may be stored in the motion compensated prediction frame memory 75.
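Cropping the extended region from the reproduced image before output can be sketched as follows (the row-major frame representation and function name are illustrative assumptions):

```python
def crop_extended_region(reproduced, frame_w, frame_h):
    """Drop the vertical/horizontal extension rows and columns so the
    output matches the decoded frame size."""
    return [row[:frame_w] for row in reproduced[:frame_h]]

# a 4x4 reproduced image (macroblock multiple) for a 3x2 decoded frame size
rep = [[1, 2, 3, 0],
       [4, 5, 6, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]]
assert crop_extended_region(rep, 3, 2) == [[1, 2, 3], [4, 5, 6]]
```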
  • As described above, according to the moving picture encoding apparatus of Embodiment 1, a transform block size set including a plurality of transform block sizes is determined in advance according to the size of the macroblock or sub-block for the macro/sub-block image 5 divided according to the macroblock encoding mode 7; the encoding control unit 3 selects from that set the one transform block size giving the best encoding efficiency and includes it in the optimal compression parameter 20a; and the transform/quantization unit 19 divides the optimal prediction difference signal 13a into blocks of the transform block size included in the optimal compression parameter 20a and performs transform and quantization processing to generate the compressed data 21. Therefore, compared with the conventional method in which the transform block size is fixed regardless of the macroblock or sub-block, the quality of the encoded video can be improved at an equal code amount.
  • Also, the variable length encoding unit 23 is configured to multiplex into the bitstream 30 the transform block size adaptively selected from the set of transform block sizes according to the encoding mode 7; correspondingly, in the moving picture decoding apparatus, the variable length decoding unit 61 decodes the optimal compression parameter 65 from the bitstream 60 in units of macroblocks or sub-blocks, and the inverse quantization/inverse transform unit 66 determines the transform block size based on the transform block size information included in the optimal compression parameter 65 and performs inverse transform and inverse quantization processing on the compressed data 64 in units of that transform block size. Therefore, the moving picture decoding apparatus can decode the compressed data by selecting the transform block size used on the encoding device side from a set of transform block sizes defined in the same way as in the moving picture encoding apparatus, and can correctly decode the bitstream encoded by the moving picture encoding apparatus according to Embodiment 1.
  • Embodiment 2. In this Embodiment 2, a modification of the variable length coding unit 23 of the moving picture encoding apparatus according to Embodiment 1 and a corresponding modification of the variable length decoding unit 61 of the moving picture decoding apparatus according to Embodiment 1 are described.
  • FIG. 9 is a block diagram showing an internal configuration of the variable length coding unit 23 of the moving picture coding apparatus according to Embodiment 2 of the present invention.
  • In FIG. 9, the same or equivalent parts as in FIG. 1 are denoted by the same reference numerals. The configuration of the moving picture encoding apparatus according to Embodiment 2 is the same as that of Embodiment 1, and the operation of each component other than the variable length encoding unit 23 is also the same as in Embodiment 1; therefore, FIGS. 1 to 8 are referred to.
  • In the following description, as in Embodiment 1, the apparatus configuration and the processing method are described on the assumption that the encoding mode set shown in FIG. 2A is used; needless to say, the present invention is also applicable to an apparatus configuration and processing method that assume the encoding mode set shown in FIG. 2B.
  • The variable length encoding unit 23 shown in FIG. 9 includes: a binarization table memory 105 that stores a binarization table specifying the correspondence between the index values of the multilevel signal representing the encoding mode 7 (or the optimal prediction parameters 10a and 18a and the optimal compression parameter 20a) and binary signals; a binarization unit 92 that uses the binarization table to convert into the binary signal 103 the index value of the multilevel signal of the optimal encoding mode 7a (or the optimal prediction parameters 10a and 18a and the optimal compression parameter 20a) selected by the encoding control unit 3; and an arithmetic coding processing operation unit 104 that arithmetically encodes the binary signal 103 converted by the binarization unit 92, referring to the context identification information 102 generated by the context generation unit 99 as well as the context information memory 96, the probability table memory 97, and the state transition table memory 98, and multiplexes the resulting coded bits 111 into the bitstream. It further includes a frequency information generation unit 93, which counts the occurrence frequency of the selected parameter values, and a binarization table update unit 95, both described later.
  • Hereinafter, the variable length coding procedure of the variable length coding unit 23 is described using, as an example of an entropy-coded parameter, the macroblock optimal encoding mode 7a output from the encoding control unit 3. The optimal prediction parameters 10a and 18a and the optimal compression parameter 20a, which are also parameters to be encoded, can be variable-length encoded by the same procedure as the optimal encoding mode 7a; a description thereof is therefore omitted.
  • The encoding control unit 3 outputs the context information initialization flag 91, the type signal 100, the peripheral block information 101, and the binarization table update flag 113; details of each piece of information are described later. The initialization unit 90 initializes the context information 106 stored in the context information memory 96 in accordance with the context information initialization flag 91 instructed by the encoding control unit 3, setting it to its initial state; details of the initialization processing by the initialization unit 90 are also described later.
  • The binarization unit 92 refers to the binarization table stored in the binarization table memory 105, converts the index value of the multilevel signal representing the type of the optimal encoding mode 7a input from the encoding control unit 3 into the binary signal 103, and outputs it to the arithmetic coding processing operation unit 104.
  • FIG. 10 is a diagram illustrating an example of a binarization table held in the binarization table memory 105.
  • The "coding modes" shown in FIG. 10 are the coding modes (mb_mode 0 to 3) shown in FIG. 2A plus the skip mode (mb_skip: a mode in which, on the encoding device side, a predicted image motion-compensated using the motion vector of an adjacent macroblock is used as the decoded image). The index values of these coding modes are each binarized into 1 to 3 bits and stored as the "binary signal".
  • In the following, each bit of the binary signal is referred to by a "bin" number. A small index value is assigned to a coding mode with a high occurrence frequency, and its binary signal is also set short, down to 1 bit.
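A binarization table of this shape can be sketched as below. FIG. 10 itself is not reproduced here, so the table contents are hypothetical placeholders; only the stated properties (1 to 3 bins per index, the most frequent mode getting the shortest code) are taken from the text.

```python
# hypothetical binarization table: coding-mode index -> binary signal
BINARIZATION_TABLE = {
    0: "0",    # assumed most frequent mode -> shortest code, 1 bin
    1: "10",
    2: "110",
    3: "111",
}

def binarize(index):
    """Binarization unit 92: multilevel index -> binary signal (string of bins)."""
    return BINARIZATION_TABLE[index]

assert binarize(0) == "0"
assert len(binarize(3)) == 3  # the longest code uses 3 bins
```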
  • The optimal encoding mode 7a output from the encoding control unit 3 is input to the binarization unit 92 and also to the frequency information generation unit 93.
  • The frequency information generation unit 93 counts the occurrence frequency of the index values of the encoding modes included in the optimal encoding mode 7a (that is, the selection frequency of the encoding modes selected by the encoding control unit 3), generates the frequency information 94, and outputs it to the binarization table update unit 95 described later.
  • The probability table memory 97 is a memory holding a table that stores a plurality of combinations of the symbol value ("0" or "1") with the higher occurrence probability (MPS: Most Probable Symbol) of each bin included in the binary signal 103 and that occurrence probability.
  • MPS Most Probable Symbol
  • FIG. 11 is a diagram showing an example of a probability table held in the probability table memory 97. In FIG. 11, “probability table numbers” are assigned to discrete probability values (“occurrence probabilities”) between 0.5 and 1.0.
  • the state transition table memory 98 is a memory holding a table that stores a plurality of combinations of a “probability table number” stored in the probability table memory 97 and the state transition from the probability state before encoding the MPS value “0” or “1” indicated by that probability table number to the probability state after encoding.
  • FIG. 12 is a diagram illustrating an example of a state transition table held in the state transition table memory 98.
  • “Probability table number”, “probability transition after LPS encoding”, and “probability transition after MPS encoding” in FIG. 12 each correspond to the probability table numbers shown in FIG. 11.
  • For example, take the probability state of “probability table number 1” surrounded by a frame in FIG. 12, whose MPS occurrence probability is 0.527 from FIG. 11. When an LPS (Least Probable Symbol, the symbol with the lower occurrence probability) occurs, “probability transition after LPS encoding” gives the next probability state; that is, the occurrence probability of the MPS is reduced by the occurrence of an LPS. Conversely, when an MPS occurs, “probability transition after MPS encoding” gives a transition to probability table number 2 (MPS occurrence probability 0.550 from FIG. 11); that is, the occurrence probability of the MPS is increased by the occurrence of an MPS.
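This MPS/LPS state transition can be sketched in code. Only the two entries quoted in the text (probability table numbers 1 and 2 with MPS probabilities 0.527 and 0.550, and the MPS transition 1 → 2) are taken from FIGS. 11 and 12; all other entries and the LPS-transition targets are placeholders.

```python
# Fragment of the probability table (FIG. 11): table number -> MPS probability.
# Only entries 1 and 2 are quoted in the text; entry 3 is a placeholder.
OCCURRENCE_PROBABILITY = {1: 0.527, 2: 0.550, 3: 0.575}

# Fragment of the state transition table (FIG. 12):
# table number -> (number after LPS encoding, number after MPS encoding).
# The MPS transition 1 -> 2 is from the text; the rest are placeholders.
STATE_TRANSITION = {1: (1, 2), 2: (1, 3), 3: (2, 3)}

def next_table_number(table_number: int, symbol: int, mps: int) -> int:
    """Probability state after encoding one symbol in this context."""
    after_lps, after_mps = STATE_TRANSITION[table_number]
    return after_mps if symbol == mps else after_lps
```

Encoding an MPS moves the context toward a higher MPS probability; encoding an LPS moves it back toward 0.5.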
  • the context generation unit 99 refers to the type signal 100 indicating the type of the parameter to be encoded (the optimal encoding mode 7a, the optimal prediction parameters 10a and 18a, or the optimal compression parameter 20a) input from the encoding control unit 3 and to the peripheral block information 101, and generates context identification information 102 for each bin of the binary signal 103 obtained by binarizing the encoding target parameter.
  • the type signal 100 is the optimum encoding mode 7a of the encoding target macroblock.
  • the peripheral block information 101 is the optimal encoding mode 7a of the macroblock adjacent to the encoding target macroblock.
  • FIG. 13A is a diagram showing the binarization table of FIG. 10 in binary tree representation.
  • a thick-framed encoding target macroblock shown in FIG. 13B and peripheral blocks A and B adjacent to the encoding target macroblock will be described as an example.
  • a black circle is called a node, and a line connecting the nodes is called a path.
  • An index of a multilevel signal to be binarized is assigned to the end node of the binary tree.
  • the depth of the binary tree corresponds to the bin number, and the bit string obtained by combining the symbols (0 or 1) assigned to the paths from the root node to a terminal node is the binary signal 103 corresponding to the index of the multilevel signal assigned to that terminal node.
  • one or more pieces of context identification information are prepared according to the information of the peripheral blocks A and B.
  • the context generation unit 99 refers to the peripheral block information 101 of the adjacent peripheral blocks A and B, and selects one of the three pieces of context identification information C0, C1, and C2 according to the following equation (4).
  • the context generation unit 99 outputs the selected context identification information as the context identification information 102.
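The selection among C0, C1, and C2 can be sketched as follows. Equation (4) itself is not reproduced in this excerpt, so the rule below is an assumed common pattern: counting how many of the neighbouring blocks A and B were coded in the mode of interest, and using that count to pick the context.

```python
# Sketch of context selection for one bin. Equation (4) is not given in this
# excerpt; the assumed rule counts how many of the neighbouring blocks A and B
# were coded in the mode of interest, selecting one of the three contexts.
C0, C1, C2 = 0, 1, 2

def select_context(mode_a: int, mode_b: int, mode_of_interest: int) -> int:
    """Return a context id based on the modes of peripheral blocks A and B."""
    count = int(mode_a == mode_of_interest) + int(mode_b == mode_of_interest)
    return (C0, C1, C2)[count]
```

The selected id plays the role of the context identification information 102 for that bin.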
  • the context information identified by the context identification information 102 holds an MPS value (0 or 1) and a probability table number that approximates the occurrence probability, and is in an initial state. This context information is stored in the context information memory 96.
  • the arithmetic coding processing operation unit 104 arithmetically codes the 1 to 3 bit binary signal 103 input from the binarizing unit 92 for each bin to generate an encoded bit string 111 and multiplexes the encoded bit string 111 into the bit stream 30.
  • an arithmetic coding procedure based on the context information will be described.
  • the arithmetic coding processing operation unit 104 refers to the context information memory 96 to obtain context information 106 based on the context identification information 102 corresponding to the bin 0 of the binary signal 103. Subsequently, the arithmetic coding processing calculation unit 104 refers to the probability table memory 97 and specifies the MPS occurrence probability 108 of bin 0 corresponding to the probability table number 107 held in the context information 106.
  • the arithmetic coding processing operation unit 104 arithmetically encodes the symbol value 109 (0 or 1) of bin 0 based on the MPS value (0 or 1) held in the context information 106 and the identified MPS occurrence probability 108. Subsequently, the arithmetic coding processing calculation unit 104 refers to the state transition table memory 98 and, based on the probability table number 107 held in the context information 106 and the symbol value 109 of bin 0 that was just arithmetically encoded, obtains the probability table number 110 after the symbol encoding of bin 0.
  • the arithmetic coding processing calculation unit 104 updates the probability table number of the context information 106 of bin 0 stored in the context information memory 96, from its value before encoding the symbol of bin 0 (that is, the probability table number 107) to the value after the state transition (that is, the probability table number 110 obtained from the state transition table memory 98).
  • the arithmetic coding processing calculation unit 104 arithmetically encodes bins 1 and 2 based on the context information 106 identified by their respective context identification information 102, in the same manner as for bin 0, and updates the context information 106 after the symbol encoding of each bin.
  • the arithmetic encoding processing operation unit 104 outputs the encoded bit string 111 obtained by arithmetically encoding all bin symbols, and the variable length encoding unit 23 multiplexes it into the bit stream 30.
  • the context information 106 identified by the context identification information 102 is updated every time a symbol is arithmetically encoded. That is, it means that the probability state of each node transitions for each symbol encoding.
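The per-bin update cycle described above can be summarised in code. The range-coding arithmetic itself is omitted; only the context-memory bookkeeping is shown, and the state transition entries are placeholders rather than the contents of FIG. 12.

```python
# Per-bin context update: after each symbol of the binary signal is
# (notionally) arithmetically coded, the probability table number of the
# context identified for that bin is advanced through the state transition
# table. Transition entries are placeholders; the interval subdivision of
# the arithmetic coder is omitted.
STATE_TRANSITION = {1: (1, 2), 2: (1, 3), 3: (2, 3)}  # assumed values

def code_binary_signal(binary_signal: str, context_ids, context_memory):
    """Update each bin's context (a dict with 'mps' and 'table') in place."""
    for bin_no, bit in enumerate(binary_signal):
        ctx = context_memory[context_ids[bin_no]]
        after_lps, after_mps = STATE_TRANSITION[ctx["table"]]
        ctx["table"] = after_mps if int(bit) == ctx["mps"] else after_lps
```

After every symbol, the probability state of the corresponding node has transitioned, exactly as the text describes.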
  • the initialization of the context information 106, that is, the resetting of the probability state, is performed by the initialization unit 90 described above.
  • the initialization unit 90 performs initialization in response to an instruction by the context information initialization flag 91 of the encoding control unit 3, and this initialization is performed at the head of the slice or the like.
  • a plurality of sets may be prepared in advance for the initial state of each piece of context information 106 (the MPS value and the initial value of the probability table number approximating the occurrence probability), and the encoding control unit 3 may include information selecting one of these sets in the context information initialization flag 91 when instructing the initialization unit 90.
  • the binarization table update unit 95 updates the binarization table stored in the binarization table memory 105 in accordance with the binarization table update flag 113 instructed by the encoding control unit 3, referring to the frequency information 94 generated by the frequency information generation unit 93, which indicates the occurrence frequency of the index values of the encoding target parameter (here, the optimal encoding mode 7a). The procedure for updating the binarization table by the binarization table update unit 95 is described below.
  • in accordance with the occurrence frequency of the coding modes specified by the optimal encoding mode 7a, which is the encoding target parameter, the correspondence between coding modes and index values in the binarization table is updated so that the coding mode with the highest occurrence frequency is binarized into a short code word, thereby reducing the code amount.
  • FIG. 14 is a diagram illustrating an example of the binarization table after the update, assuming that the state of the binarization table before the update is the state illustrated in FIG. 10. For example, when the occurrence frequency of mb_mode3 is the highest, the binarization table update unit 95 assigns the smallest index value to mb_mode3 so that a binary signal with a short code word is assigned to it.
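The reassignment rule can be sketched as follows; mode names and frequency counts are illustrative.

```python
# Sketch of the update rule of the binarization table update unit 95: the
# coding mode with the highest occurrence frequency in the frequency
# information receives the smallest index value, and hence the shortest
# binary signal. Mode names and counts below are illustrative.
def update_index_assignment(frequency_info: dict) -> dict:
    """Reassign index values so the most frequent mode gets index 0."""
    by_frequency = sorted(frequency_info, key=frequency_info.get, reverse=True)
    return {mode: index for index, mode in enumerate(by_frequency)}
```

Applied to frequency information in which mb_mode3 dominates, the function maps mb_mode3 to index 0, mirroring the FIG. 14 example.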
  • when the binarization table is updated, the binarization table update unit 95 needs to generate binarization table update identification information 112 enabling the decoding device to identify the updated binarization table, and to multiplex it into the bit stream 30. For example, when a plurality of binarization tables are provided for each encoding target parameter, an ID is assigned to each of them in advance on both the encoding device side and the decoding device side, and the binarization table update unit 95 outputs the ID of the updated binarization table as the binarization table update identification information 112 and multiplexes it into the bitstream 30.
  • the encoding control unit 3 refers to the frequency information 94 of the encoding target parameter at the head of a slice, and outputs the binarization table update flag 113 when it determines that the occurrence frequency distribution of the encoding target parameter has changed beyond a predetermined allowable range.
  • the variable length coding unit 23 multiplexes the binarization table update flag 113 into the slice header of the bit stream 30. Further, when the binarization table update flag 113 indicates “the binarization table is updated”, the variable length coding unit 23 also multiplexes into the bitstream 30 the binarization table update identification information 112 indicating which of the binarization tables for the coding mode, the compression parameters, and the prediction parameters has been updated.
  • the encoding control unit 3 may instruct the update of the binarization table at a timing other than the head of a slice; for example, it may output the binarization table update flag 113 at the head of an arbitrary macroblock. In that case, the binarization table update unit 95 needs to output information specifying the macroblock position at which the binarization table was updated, and the variable length encoding unit 23 needs to multiplex that information into the bitstream 30 as well.
  • when the encoding control unit 3 outputs the binarization table update flag 113 to the binarization table update unit 95 to update the binarization table, it also outputs the context information initialization flag 91 to the initialization unit 90 to initialize the context information memory 96.
  • FIG. 15 is a block diagram showing an internal configuration of the variable length decoding unit 61 of the video decoding apparatus according to Embodiment 2 of the present invention.
  • the configuration of the video decoding device according to the second embodiment is the same as that of the first embodiment, and the operation of each component other than the variable length decoding unit 61 is also the same, so FIGS. 1 to 8 are referred to.
  • the variable length decoding unit 61 illustrated in FIG. 15 includes: an arithmetic decoding processing operation unit 127 that arithmetically decodes the encoded bit string 133 representing the optimal encoding mode 62 (or the optimal prediction parameter 63 or the optimal compression parameter 65) multiplexed into the bitstream 60, referring to the context identification information 126 generated by the context generation unit 122, the context information memory 128, the probability table memory 131, and the state transition table memory 135, to generate a binary signal 137; a binarization table memory 143 storing a binarization table 139 that specifies the correspondence between the binary signal and the multilevel signal of the optimal encoding mode 62 (or the optimal prediction parameter 63 or the optimal compression parameter 65); and an inverse binarization unit 138 that uses the binarization table 139 to convert the binary signal 137 generated by the arithmetic decoding processing operation unit 127 into the decoded value 140 of the multilevel signal.
  • the variable length decoding procedure of the variable length decoding unit 61 will be described taking as an example of the entropy-decoded parameter the optimal encoding mode 62 of a macroblock included in the bitstream 60.
  • the optimal prediction parameter 63 and the optimal compression parameter 65 that are parameters to be decoded may be variable-length decoded in the same procedure as in the optimal encoding mode 62, and thus description thereof is omitted.
  • the bitstream 60 includes the context initialization information 121, the encoded bit string 133, the binarization table update flag 142, and the binarization table update identification information 144 multiplexed on the encoding device side. Details of each piece of information will be described later.
  • the initialization unit 120 initializes the context information stored in the context information memory 128 at the head of the slice or the like. Alternatively, the initialization unit 120 prepares a plurality of sets in advance for the initial state of the context information (the initial value of the MPS value and the probability table number approximating the occurrence probability), and the decoded value of the context initialization information 121 You may make it select the initial state corresponding to to from a set.
  • the context generation unit 122 refers to the type signal 123 indicating the type of parameters to be decoded (optimal encoding mode 62, optimal prediction parameter 63, and optimal compression parameter 65) and the peripheral block information 124, and provides context identification information 126. Generate.
  • the type signal 123 is a signal representing the type of parameter to be decoded, and it is determined according to the syntax held in the variable-length decoding unit 61 what the decoding target parameter is. Therefore, it is necessary to hold the same syntax on the encoding device side and the decoding device side.
  • on the encoding device side, the encoding control unit 3 holds the syntax and sequentially outputs to the variable length encoding unit 23 the type of the parameter to be encoded next (that is, the type signal 100) together with its parameter value (index value).
  • the peripheral block information 124 is information such as the coding mode obtained by decoding a macroblock or subblock; it is stored in a memory (not shown) in the variable length decoding unit 61 for use as peripheral block information 124 in the subsequent decoding of macroblocks and subblocks, and is output to the context generation unit 122 as necessary.
  • the generation procedure of the context identification information 126 by the context generation unit 122 is the same as the operation of the context generation unit 99 on the encoding device side.
  • the context generation unit 122 on the decoding device side also generates context identification information 126 for each bin of the binarization table 139 referred to by the inverse binarization unit 138.
  • the context information of each bin holds an MPS value (0 or 1) and a probability table number for specifying the occurrence probability of the MPS as probability information for arithmetic decoding of the bin.
  • the probability table memory 131 and the state transition table memory 135 store the same probability table (FIG. 11) and state transition table (FIG. 12) as the probability table memory 97 and the state transition table memory 98 on the encoding device side.
  • the arithmetic decoding processing calculation unit 127 performs arithmetic decoding on the encoded bit string 133 multiplexed in the bit stream 60 for each bin to generate a binary signal 137 and outputs the binary signal 137 to the inverse binarization unit 138.
  • the arithmetic decoding processing calculation unit 127 refers to the context information memory 128 to obtain context information 129 based on the context identification information 126 corresponding to each bin of the encoded bit string 133. Subsequently, the arithmetic decoding processing calculation unit 127 refers to the probability table memory 131 and specifies the MPS occurrence probability 132 of each bin corresponding to the probability table number 130 held in the context information 129.
  • the arithmetic decoding processing calculation unit 127 arithmetically decodes the encoded bit string 133 input to it, based on the MPS value (0 or 1) held in the context information 129 and the identified MPS occurrence probability 132, to obtain the symbol value 134 (0 or 1) of each bin.
  • the arithmetic decoding processing calculation unit 127 then refers to the state transition table memory 135 and, in the same procedure as the arithmetic coding processing calculation unit 104 on the encoding device side, obtains the probability table number 136 after the symbol decoding of each bin (that is, after the state transition) based on the decoded symbol value 134 of each bin and the probability table number 130 held in the context information 129.
  • the arithmetic decoding processing calculation unit 127 updates the probability table number of the context information 129 of each bin stored in the context information memory 128, from its value before the symbol decoding of that bin (that is, the probability table number 130) to the value after the state transition (that is, the probability table number 136 obtained from the state transition table memory 135).
  • the arithmetic decoding processing calculation unit 127 outputs a binary signal 137 obtained by combining the bin symbols obtained as a result of the arithmetic decoding to the inverse binarization unit 138.
  • the inverse binarization unit 138 selects, from the binarization tables prepared for each type of decoding target parameter and stored in the binarization table memory 143, the same binarization table 139 as used at the time of encoding, and outputs the decoded value 140 of the decoding target parameter obtained from the binary signal 137 input from the arithmetic decoding processing calculation unit 127.
  • the binarization table 139 is the same as the binarization table on the encoding apparatus side shown in FIG. 10.
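The matching step performed by the inverse binarization unit can be sketched as follows; the code words are hypothetical stand-ins for the table of FIG. 10, chosen only to be prefix-free.

```python
# Inverse binarization sketch: the decoded bins are accumulated one at a
# time and matched against the binarization table until a complete code
# word is found, yielding the decoded index of the multilevel signal.
# The code words are illustrative placeholders for FIG. 10.
BINARIZATION_TABLE = {0: "0", 1: "10", 2: "110", 3: "111"}

def inverse_binarize(binary_signal: str) -> int:
    """Recover the multilevel-signal index from a binary signal."""
    prefix = ""
    for bit in binary_signal:
        prefix += bit
        for index, code in BINARIZATION_TABLE.items():
            if code == prefix:
                return index
    raise ValueError("no code word matches the binary signal")
```

Because the table is prefix-free, the first complete match is unambiguous, which is what lets the decoder stop requesting bins as soon as a code word is recognised.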
  • the binarization table update unit 141 updates the binarization table stored in the binarization table memory 143 based on the binarization table update flag 142 and the binarization table update identification information 144 decoded from the bitstream 60.
  • the binarization table update flag 142 is information corresponding to the binarization table update flag 113 on the encoding device side; it is included in the header information of the bitstream 60 and indicates whether or not the binarization table has been updated. When the decoded value of the binarization table update flag 142 indicates “the binarization table is updated”, the binarization table update identification information 144 is further decoded from the bitstream 60.
  • the binarization table update identification information 144 is information corresponding to the binarization table update identification information 112 on the encoding device side, and identifies the binarization table of the parameter updated on the encoding device side. For example, as described above, when a plurality of binarization tables are provided in advance for each encoding target parameter, the ID for identifying each encoding target parameter and the ID of each binarization table are shared in advance between the encoding device side and the decoding device side.
  • the binarization table update unit 141 updates the binarization table corresponding to the ID value in the binarization table update identification information 144 decoded from the bitstream 60.
  • for example, suppose the binarization table memory 143 is prepared in advance with the two types of binarization tables shown in FIGS. 10 and 14, and that the state of the binarization table before the update is the state shown in FIG. 10. When the binarization table update unit 141 performs the update process according to the binarization table update flag 142 and the binarization table update identification information 144, it selects the binarization table corresponding to the ID included in the binarization table update identification information 144, so the state of the updated binarization table becomes the state shown in FIG. 14, the same as the binarization table after the update on the encoding device side.
  • as described above, the video encoding device according to the second embodiment is configured so that the encoding control unit 3 selects and outputs the encoding target parameters (the optimal encoding mode 7a, the optimal prediction parameters 10a and 18a, and the optimal compression parameter 20a) with which the coding efficiency is optimal; the binarization unit 92 of the variable length encoding unit 23 converts the encoding target parameter expressed as a multilevel signal into the binary signal 103 using the binarization table of the binarization table memory 105; the arithmetic encoding processing operation unit 104 arithmetically encodes the binary signal 103 and outputs the encoded bit string 111; the frequency information generation unit 93 generates the frequency information 94 of the encoding target parameter; and the binarization table update unit 95 updates the correspondence between the multilevel signal and the binary signal in the binarization table based on the frequency information 94. Therefore, compared with the conventional method in which the binarization table is always fixed, the code amount can be reduced for the same quality of the coded video.
  • the binarization table update unit 95 also multiplexes into the bitstream the binarization table update flag 113 indicating whether or not the binarization table is updated and the binarization table update identification information 112 for identifying the binarization table after the update. Correspondingly, the video decoding device according to the second embodiment is configured so that the arithmetic decoding processing operation unit 127 of the variable length decoding unit 61 arithmetically decodes the encoded bit string 133 multiplexed into the bitstream 60 to generate the binary signal 137; the inverse binarization unit 138 converts the binary signal 137 into the multilevel-signal decoded value 140 using the binarization table 139 of the binarization table memory 143; and the binarization table update unit 141 updates the predetermined binarization table of the binarization table memory 143 based on the binarization table update flag 142 and the binarization table update identification information 144 decoded from the header information multiplexed in the bitstream 60. Therefore, since the video decoding device can update the binarization table by the same procedure as the video encoding device and inverse-binarize the encoding target parameter, it can correctly decode a bitstream encoded by the video encoding device according to Embodiment 2.
  • Embodiment 3. In the third embodiment, a modified example of the predicted image generation process by the motion compensation prediction of the motion compensation prediction unit 9 in the video encoding device and the video decoding device according to the first and second embodiments will be described.
  • the motion compensation prediction unit 9 of the video encoding apparatus according to the third embodiment will be described.
  • the configuration of the video encoding device according to the third embodiment is the same as that of the first or second embodiment, and the operation of each component other than the motion compensation prediction unit 9 is also the same, so FIGS. 1 to 15 are referred to.
  • the motion compensation prediction unit 9 has the same configuration and performs the same operation as in the first and second embodiments, except for the configuration and operation related to the predicted image generation process with virtual pixel accuracy. That is, in the first and second embodiments, as shown in FIG. 3, the interpolated image generation unit 43 of the motion compensation prediction unit 9 generates reference image data with virtual pixel accuracy, such as half-pixel or quarter-pixel accuracy, and generates the predicted image 45 based on that reference image data, creating virtual pixels by an interpolation operation with a 6-tap filter using six integer pixels in the vertical or horizontal direction, as in the MPEG-4 AVC standard. In contrast, the motion compensation prediction unit 9 according to the third embodiment generates a reference image 207 with virtual pixel accuracy by enlarging, through super-resolution processing, the integer-pixel-accuracy reference image 15 stored in the motion compensated prediction frame memory 14, and generates the predicted image based on this reference image 207 with virtual pixel accuracy.
  • as in the first and second embodiments, the interpolated image generation unit 43 of the third embodiment designates one or more frames of the reference image 15 from the motion compensated prediction frame memory 14, and the motion detection unit 42 detects a motion vector 44 within a predetermined motion search range on the designated reference image 15.
  • the motion vector is detected with virtual pixel accuracy, as in the MPEG-4 AVC standard; in this detection method, virtual samples (pixels) are created by interpolation between the pixels possessed by the reference image (referred to as integer pixels) and used as the reference image.
  • in the third embodiment, a configuration will be described in which the motion compensation prediction unit 9 generates, by super-resolution processing, a reference image 207 with virtual pixel accuracy from the reference image data stored in the motion compensated prediction frame memory 14, and the motion detection unit 42 uses it to perform the motion vector search process.
  • FIG. 16 is a block diagram showing an internal configuration of the interpolated image generating unit 43 of the motion compensated predicting unit 9 of the moving picture coding apparatus according to Embodiment 3 of the present invention.
  • the interpolated image generation unit 43 shown in FIG. 16 includes: an image enlargement processing unit 205 that enlarges the reference image 15 in the motion compensated prediction frame memory 14; an image reduction processing unit 200 that reduces the reference image 15; a high-frequency feature extraction unit 201a that extracts a feature amount of a high-frequency region component from the output of the image reduction processing unit 200; a high-frequency feature extraction unit 201b that extracts a feature amount of a high-frequency region component from the reference image 15; a correlation calculation unit 202 that calculates a correlation value between the feature amounts; a high-frequency component estimation unit 203 that estimates a high-frequency component from the correlation value and the prior-learning data in a high-frequency component pattern memory 204; and an adding unit 206 that generates the reference image 207 with virtual pixel accuracy by correcting the high-frequency component of the enlarged image using the estimated high-frequency component.
  • when the reference image 15 in the range used for the motion search process is input to the interpolated image generation unit 43 from the reference image data stored in the motion compensated prediction frame memory 14, the reference image 15 is input to the image reduction processing unit 200, the high-frequency feature extraction unit 201b, and the image enlargement processing unit 205, respectively.
  • the image reduction processing unit 200 generates from the reference image 15 a reduced image having 1/N of its size in the vertical and horizontal directions (N is a power of 2, such as 2 or 4), and outputs it to the high-frequency feature extraction unit 201a.
  • This reduction processing is realized by a general image reduction filter.
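As a minimal stand-in for such a general image reduction filter, block averaging over N × N blocks can be sketched as follows; a production filter would low-pass filter before subsampling, so this is an illustrative simplification.

```python
# Minimal stand-in for the "general image reduction filter" of the image
# reduction processing unit 200: reduce a greyscale image, given as a list
# of rows, to 1/N size in each direction by averaging N x N blocks
# (N a power of two such as 2 or 4). Illustrative only: a production
# reduction filter would apply a proper low-pass filter before subsampling.
def reduce_image(img, n):
    out = []
    for y in range(0, len(img), n):
        row = []
        for x in range(0, len(img[0]), n):
            block = [img[y + dy][x + dx] for dy in range(n) for dx in range(n)]
            row.append(sum(block) // (n * n))
        out.append(row)
    return out
```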
  • the high-frequency feature extraction unit 201a extracts a first feature amount related to a high-frequency component such as an edge component from the reduced image generated by the image reduction processing unit 200.
  • as the first feature amount, for example, a parameter indicating the DCT or wavelet transform coefficient distribution in a local block can be used.
  • the high-frequency feature extraction unit 201b performs high-frequency feature extraction similar to the high-frequency feature extraction unit 201a, and extracts, from the reference image 15, a second feature amount having a frequency component region different from that of the first feature amount.
  • the second feature amount is output to the correlation calculation unit 202 and also output to the high frequency component estimation unit 203.
  • when the first feature amount is input from the high-frequency feature extraction unit 201a and the second feature amount is input from the high-frequency feature extraction unit 201b, the correlation calculation unit 202 calculates the correlation value of the high-frequency component region between the reference image 15 and the reduced image on the basis of the feature amounts in units of local blocks. As this correlation value, for example, the distance between the first feature amount and the second feature amount can be used.
  • the high-frequency component estimation unit 203 identifies a pre-learned high-frequency component pattern from the high-frequency component pattern memory 204 using the second feature amount input from the high-frequency feature extraction unit 201b and the correlation value input from the correlation calculation unit 202, and estimates and generates the high-frequency component to be included in the reference image 207 with virtual pixel accuracy. The generated high-frequency component is output to the adding unit 206.
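Since the text gives the distance between the first and second feature amounts as one example of the correlation value, it can be sketched as a Euclidean distance between per-block feature vectors; the vector form of the feature amounts is an assumption for illustration.

```python
# Sketch of the correlation value between the first and second feature
# amounts: the text names their distance as one example, so a Euclidean
# distance between per-block feature vectors (assumed representation) is
# used here. A smaller distance means a stronger correlation.
import math

def feature_distance(feature1, feature2):
    """Euclidean distance between two feature vectors of equal length."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(feature1, feature2)))
```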
  • the image enlargement processing unit 205 generates an enlarged image obtained by enlarging the input reference image 15 to N times its size vertically and horizontally, either by an interpolation operation using a 6-tap filter with six integer pixels in the vertical or horizontal direction, as in the half-pixel-accuracy sample generation process of the MPEG-4 AVC standard, or by an enlargement filter process such as a bilinear filter.
  • the adding unit 206 adds the high-frequency component input from the high-frequency component estimation unit 203 to the enlarged image input from the image enlargement processing unit 205, that is, corrects the high-frequency component of the enlarged image, thereby generating an enlarged reference image of N times the size vertically and horizontally.
  • the interpolated image generation unit 43 uses this enlarged reference image data as the reference image 207 with virtual pixel accuracy of 1/N pixel.
  • the interpolated image generation unit 43 may also control the generation result of the reference image 207 with virtual pixel accuracy by switching whether or not the high-frequency component output from the high-frequency component estimation unit 203 is added to the enlarged image output from the image enlargement processing unit 205.
When the adding unit 206 selectively determines whether or not to add the high-frequency component output from the high-frequency component estimation unit 203, a predicted image 45 is generated for both the addition and non-addition cases, motion compensated prediction is performed, and the results are encoded to determine which is more efficient. Control information indicating whether the addition was performed is then multiplexed into the bit stream 30.
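The decision described above can be pictured as a two-pass trial: predict and encode the block both with and without the high-frequency correction, keep whichever is cheaper, and record a one-bit flag for the bitstream. The following sketch is illustrative; `encode_cost` stands in for the actual prediction and encoding pipeline and its cost measure, which the text does not spell out here.

```python
def choose_hf_addition(block, enlarged_ref, hf_component, encode_cost):
    """Trial-encode with and without the high-frequency correction and
    return (addition_flag, chosen_reference). `encode_cost(block, ref)` is
    an illustrative stand-in returning the bit cost of encoding `block`
    against reference `ref`."""
    ref_plain = list(enlarged_ref)
    ref_corrected = [e + h for e, h in zip(enlarged_ref, hf_component)]
    cost_plain = encode_cost(block, ref_plain)
    cost_corrected = encode_cost(block, ref_corrected)
    if cost_corrected < cost_plain:
        return True, ref_corrected   # flag would be multiplexed as control information
    return False, ref_plain
```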
Alternatively, the interpolation image generation unit 43 may control the addition process of the adding unit 206 by determining it uniquely from other parameters to be multiplexed into the bit stream 30. When determining from other parameters, for example, the type of encoding mode 7 shown in FIG. 2A or 2B may be used.
Depending on the type of encoding mode, the interpolation image generation unit 43 either considers that the super-resolution effect is low and controls the adding unit 206 not to add the high-frequency component output from the high-frequency component estimation unit 203, or considers that the super-resolution effect is high and controls the adding unit 206 to add it. Besides the encoding mode, parameters such as the magnitude of the motion vector or the variation of the motion vector field in the surrounding area may also be used.
In this case, since the interpolation image generation unit 43 of the motion compensation prediction unit 9 determines the parameter type in common with the decoding device side, the control information of the addition process need not be multiplexed directly into the bit stream 30, and the compression efficiency can be increased.
The reference image may also be converted into the virtual-pixel-accuracy reference image 207 by the super-resolution processing described above before being stored in the motion compensated prediction frame memory 14. In this configuration, the memory size required for the motion compensated prediction frame memory 14 increases, but it becomes unnecessary to perform the super-resolution processing sequentially during the motion vector search and predicted image generation, so the load of the motion compensated prediction processing itself can be reduced. Furthermore, the frame encoding process and the generation process of the virtual-pixel-accuracy reference image 207 can be performed in parallel, and the processing speed can be increased.
Motion vector detection procedure I′: The interpolated image generation unit 43 generates a predicted image 45 for a motion vector 44 with integer pixel accuracy within the predetermined motion search range of the motion compensation region block image 41. The predicted image 45 (predicted image 17) generated with integer pixel accuracy is output to the subtracting unit 12 and subtracted from the motion compensation region block image 41 (macro/sub-block image 5) by the subtracting unit 12, yielding the prediction difference signal 13. The encoding control unit 3 evaluates the prediction efficiency for the prediction difference signal 13 and the integer-pixel-accuracy motion vector 44 (prediction parameter 18). Since this prediction efficiency may be evaluated by equation (1) described in Embodiment 1, the description is omitted.
Motion vector detection procedure II: For motion vectors 44 with 1/2-pixel accuracy positioned around the integer-pixel-accuracy motion vector determined in "motion vector detection procedure I′", the interpolated image generation unit 43 generates a predicted image 45 using the virtual-pixel-accuracy reference image 207 generated internally by the interpolated image generation unit 43 shown in FIG. 16. The predicted image 45 (predicted image 17) generated with 1/2-pixel accuracy is subtracted from the motion compensation region block image 41 (macro/sub-block image 5) by the subtracting unit 12 to obtain the prediction difference signal 13.
The encoding control unit 3 evaluates the prediction efficiency for the prediction difference signal 13 and the 1/2-pixel-accuracy motion vector 44 (prediction parameter 18), and determines, from among one or more 1/2-pixel-accuracy motion vectors positioned around the integer-pixel-accuracy motion vector, the 1/2-pixel-accuracy motion vector 44 that minimizes the prediction cost J1.
Motion vector detection procedure III: Likewise, the encoding control unit 3 and the motion compensation prediction unit 9 determine, from among one or more 1/4-pixel-accuracy motion vectors positioned around the 1/2-pixel-accuracy motion vector determined in "motion vector detection procedure II", the 1/4-pixel-accuracy motion vector 44 that minimizes the prediction cost J1.
Motion vector detection procedure IV: Similarly, the encoding control unit 3 and the motion compensation prediction unit 9 continue detecting virtual-pixel-accuracy motion vectors until the predetermined accuracy is reached.
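Procedures I′ through IV amount to a coarse-to-fine refinement: start from the best integer-pixel vector, then at each stage halve the step (1/2-pixel, 1/4-pixel, and so on) and keep the neighbouring candidate that minimizes the prediction cost. A schematic sketch, where `cost` stands in for the cost J1 of equation (1); the function and its interface are illustrative, not the patent's normative search.

```python
def refine_motion_vector(cost, mv_int, max_denom=4):
    """Coarse-to-fine search: from the best integer-pel vector mv_int=(x, y),
    halve the step and, at each accuracy level, keep the candidate (the
    current best or one of its 8 neighbours) that minimizes cost(mv)."""
    best, step = mv_int, 1.0
    while step > 1.0 / max_denom:
        step /= 2.0  # 1/2-pel, then 1/4-pel, ...
        candidates = [best] + [(best[0] + dx * step, best[1] + dy * step)
                               for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                               if (dx, dy) != (0, 0)]
        best = min(candidates, key=cost)
    return best
```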
In this way, the motion compensation prediction unit 9 outputs, as the prediction parameter 18, the virtual-pixel-accuracy motion vector of the predetermined accuracy determined for each motion compensation region block image 41, obtained by dividing the macro/sub-block image 5 into blocks serving as the motion compensation units indicated by the encoding mode 7, together with the identification number of the reference image indicated by that motion vector. The motion compensation prediction unit 9 outputs the predicted image 45 (predicted image 17) generated with the prediction parameter 18 to the subtracting unit 12, where it is subtracted from the macro/sub-block image 5 to obtain the prediction difference signal 13. The prediction difference signal 13 output from the subtracting unit 12 is output to the transform/quantization unit 19. The subsequent processes are the same as those described in Embodiment 1, and their description is omitted.
The configuration of the moving picture decoding apparatus according to Embodiment 3 is the same as that of the moving picture decoding apparatuses according to Embodiments 1 and 2, except that the configuration and operation related to the virtual-pixel-accuracy predicted image generation processing in the motion compensation prediction unit 70 differ; therefore, FIGS. 1 to 16 are referred to.
In Embodiments 1 and 2, when the motion compensation prediction unit 70 generates a predicted image based on a reference image with virtual pixel accuracy such as half-pixel or quarter-pixel accuracy, virtual pixels are generated by interpolation using a 6-tap filter of six integer pixels in the vertical or horizontal direction, as in the MPEG-4 AVC standard. In contrast, the motion compensation prediction unit 70 according to Embodiment 3 generates a virtual-pixel-accuracy reference image by enlarging, through super-resolution processing, the integer-pixel-accuracy reference image 76 stored in the motion compensated prediction frame memory 75.
The motion compensation prediction unit 70 generates and outputs the predicted image 72 from the reference image 76 stored in the motion compensated prediction frame memory 75, based on the motion vector included in the input optimum prediction parameter 63 and the identification number (reference image index) of the reference image indicated by each motion vector.
The adding unit 73 adds the predicted image 72 input from the motion compensation prediction unit 70 to the prediction difference signal decoded value 67 input from the inverse quantization/inverse transform unit 66 to generate the decoded image 74.
The method by which the motion compensation prediction unit 70 generates the predicted image 72 corresponds, among the operations of the motion compensation prediction unit 9 on the encoding device side, to the operations excluding the process of searching for a motion vector over a plurality of reference images (the operations of the motion detection unit 42 and the interpolated image generation unit 43 shown in FIG. 3); only the process of generating the predicted image 72 according to the optimum prediction parameter 63 given from the variable length decoding unit 61 is performed. Here, to generate a predicted image with virtual pixel accuracy, the motion compensation prediction unit 70 performs processing similar to that shown in FIG. 16 on the reference image 76 specified by the reference image identification number (reference image index) on the motion compensated prediction frame memory 75 to generate a virtual-pixel-accuracy reference image, and generates the predicted image 72 using the decoded motion vector.
When the encoding device side selectively determines whether or not to add the high-frequency component output from the high-frequency component estimation unit 203 shown in FIG. 16 to the enlarged image, the decoding device side either extracts from the bit stream 60 the control information indicating the presence or absence of the addition processing, or determines it uniquely from other parameters, and thereby controls the addition processing in the motion compensation prediction unit 70. When determining from other parameters, the encoding mode 7, the magnitude of the motion vector, the variation of the motion vector field in the surrounding area, and the like can be used, as on the encoding device side described above. If the motion compensation prediction unit 70 determines the parameter type in common with the encoding device side, the encoding device need not multiplex the control information of the addition process directly into the bit stream 30, and the compression efficiency can be improved.
The generation of the virtual-pixel-accuracy reference image may be carried out only when the motion vector included in the optimum prediction parameter 18a output from the encoding device side (that is, the optimum prediction parameter 63 on the decoding device side) indicates virtual pixel accuracy. In that case, the motion compensation prediction unit 9 generates the predicted image 17 by switching, according to the motion vector, between using the reference image 15 of the motion compensated prediction frame memory 14 and having the interpolation image generation unit 43 generate and use the virtual-pixel-accuracy reference image 207.
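The switch described here needs only the fractional part of the motion vector: if both components fall on integer positions, the stored reference image can be used directly; otherwise the virtual-pixel-accuracy reference must be generated. A sketch assuming the common convention (not stated in the text) that motion vectors are stored as integers in 1/4-pixel units:

```python
def needs_virtual_pixel_reference(mv_quarter_pel):
    """mv_quarter_pel = (mvx, mvy) in 1/4-pel units. Integer-pixel
    displacements are multiples of 4; any remainder means the
    virtual-pixel-accuracy reference is required."""
    mvx, mvy = mv_quarter_pel
    return mvx % 4 != 0 or mvy % 4 != 0
```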
Alternatively, the processing shown in FIG. 16 may be performed on the reference image before it is stored in the motion compensated prediction frame memory 75, so that the virtual-pixel-accuracy reference image, with the enlargement processing and high-frequency component correction applied, is stored in the motion compensated prediction frame memory 75. In this configuration, the memory size to be prepared as the motion compensated prediction frame memory 75 increases, but when motion vectors point to pixels at the same virtual sample position many times, the processing shown in FIG. 16 need not be performed repeatedly, so the amount of computation can be reduced.
The motion compensation prediction unit 70 may also be configured to perform the processing shown in FIG. 16 only over the range of displacement indicated by the motion vectors. The range of displacement indicated by the motion vectors may be made known to the decoding device side by, for example, multiplexing a value range indicating it into the bit stream 60 and transmitting it, or by agreement between the encoding device side and the decoding device side in operation.
As described above, according to the moving picture encoding apparatus of Embodiment 3, the motion compensation prediction unit 9 includes the interpolation image generation unit 43, which performs enlargement processing on the reference image 15 in the motion compensated prediction frame memory 14 and corrects its high-frequency component to generate the virtual-pixel-accuracy reference image 207, and generates the predicted image 17 by switching, according to the motion vector, between using the reference image 15 and generating and using the virtual-pixel-accuracy reference image 207. Therefore, even when the input video signal 1, which contains many high-frequency components such as fine edges, is highly compressed, the predicted image 17 produced by motion compensated prediction can be generated from a reference image containing many high-frequency components, and compression encoding can be performed efficiently.
According to the moving picture decoding apparatus of Embodiment 3, the motion compensation prediction unit 70 likewise includes an interpolation image generation unit that generates a virtual-pixel-accuracy reference image by the same procedure as the moving picture encoding apparatus, and generates the predicted image 72 by switching, according to the motion vector multiplexed in the bit stream 60, between using the reference image 76 of the motion compensated prediction frame memory 75 and generating and using a virtual-pixel-accuracy reference image. Therefore, the bit stream encoded by the moving picture encoding apparatus according to Embodiment 3 can be correctly decoded.
Although in Embodiment 3 the virtual-pixel-accuracy reference image 207 is generated by super-resolution processing based on the technique disclosed in the above-cited reference (2000), the super-resolution processing itself is not limited to this technique, and any other super-resolution technique may be applied to generate the virtual-pixel-accuracy reference image 207.
When the moving picture encoding apparatus according to Embodiments 1 to 3 is implemented on a computer, a moving picture encoding program describing the processing contents of the block dividing unit 2, the encoding control unit 3, the switching unit 6, the intra prediction unit 8, the motion compensation prediction unit 9, the motion compensated prediction frame memory 14, the transform/quantization unit 19, the inverse quantization/inverse transform unit 22, the variable length coding unit 23, the loop filter unit 27, and the intra prediction memory 28 may be stored in the memory of the computer, and the CPU of the computer may execute the moving picture encoding program stored in the memory. Similarly, when the moving picture decoding apparatus according to Embodiments 1 to 3 is implemented on a computer, a moving picture decoding program describing the processing contents of the variable length decoding unit 61, the inverse quantization/inverse transform unit 66, the switching unit 68, the intra prediction unit 69, the motion compensation prediction unit 70, the motion compensated prediction frame memory 75, the intra prediction memory 77, and the loop filter unit 78 may be stored in the memory of the computer, and the CPU of the computer may execute the moving picture decoding program stored in the memory.
The moving picture encoding apparatus and moving picture decoding apparatus according to the present invention can perform compression encoding efficiently while suppressing the code amount related to overhead such as the encoding mode, even with a macroblock size set in advance regardless of the image content. They are therefore suitable as a moving picture encoding apparatus that divides a moving picture into predetermined regions and encodes it in units of those regions, and as a moving picture decoding apparatus that decodes an encoded moving picture in units of predetermined regions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

In the disclosed devices, a frequency information generation unit (93) counts the occurrence frequency of the optimal coding mode (7a) and generates frequency information (94), and a binarization table updating unit (95) updates the correspondence between multilevel signals and binary signals in the binarization table in a binarization table memory (105). The binarization unit (92) converts the multilevel optimal coding mode signal (7a) into a binary signal (103) using the duly updated binarization table, and encodes it using an arithmetic coding processing computation unit (104).

Description

Moving picture encoding apparatus and moving picture decoding apparatus
The present invention relates to a moving picture encoding apparatus that divides a moving picture into predetermined regions and performs encoding in units of those regions, and a moving picture decoding apparatus that decodes an encoded moving picture in units of predetermined regions.
Conventionally, international standard video coding systems such as MPEG (Moving Picture Experts Group) and ITU-T H.26x compress each frame of a video signal in units of block data (called macroblocks), each combining a 16×16-pixel luminance signal with the corresponding 8×8 pixels of chrominance signals, based on motion compensation technology and orthogonal transform / transform coefficient quantization technology.
Motion compensation technology reduces the signal redundancy in the time direction for each macroblock by exploiting the high correlation that exists between video frames. A previously encoded frame is stored in memory as a reference image, a block region having the smallest difference power with respect to the current macroblock subject to motion compensated prediction is searched for within a predetermined search range of the reference image, and the displacement between the spatial position of the current macroblock and the spatial position of the found block in the reference image is encoded as a motion vector.
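The search just described is classic block matching: within a search window of a previously encoded reference frame, find the displaced block minimizing the difference power with respect to the current macroblock, and encode the displacement as the motion vector. A minimal sketch over 2-D lists, using the sum of absolute differences as an illustrative difference measure (not the normative search of any standard):

```python
def block_matching(cur, ref, bx, by, bsize, search):
    """Find the motion vector (dx, dy) within +/- `search` that minimizes
    the sum of absolute differences between the current block at (bx, by)
    and the displaced block in the reference frame `ref`."""
    h, w = len(ref), len(ref[0])
    best_mv, best_sad = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            rx, ry = bx + dx, by + dy
            if rx < 0 or ry < 0 or rx + bsize > w or ry + bsize > h:
                continue  # candidate block falls outside the reference frame
            sad = sum(abs(cur[by + j][bx + i] - ref[ry + j][rx + i])
                      for j in range(bsize) for i in range(bsize))
            if sad < best_sad:
                best_mv, best_sad = (dx, dy), sad
    return best_mv, best_sad
```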
In such conventional international standard video coding systems, because the macroblock size is fixed, the region covered by a macroblock tends to become localized, particularly as the image resolution increases. As a result, cases arise in which neighboring macroblocks end up with the same coding mode or are assigned the same motion vector. In such cases, the overhead of the coding mode information, motion vector information, and so on that must be encoded increases even though the prediction efficiency does not improve, so the coding efficiency of the encoder as a whole decreases.
To address this problem, there has conventionally been an apparatus that switches the macroblock size depending on the resolution or content of the image (see, for example, Patent Document 1). The moving picture encoding apparatus according to Patent Document 1 can encode while adaptively switching the macroblock size per slice, per frame, per sequence, and so on, according to the image content, resolution, profile, and the like.
In the conventional international standard video coding systems and the apparatus described in Patent Document 1, when objects with different motions exist within a macroblock, a coding mode that divides the macroblock can be selected so as to handle locally small moving objects. In the international standard video coding systems, however, in addition to the coding mode of the macroblock division itself, coding mode information instructing the division within the macroblock also had to be encoded.
In Patent Document 1, by contrast, when many moving objects smaller than the determined macroblock exist in a frame, the coding mode that divides the macroblock is selected with high frequency. Selecting a small macroblock size in advance therefore avoids the code amount associated with the coding mode instructing division within a macroblock, which arises when a larger macroblock size is selected, so the code amount can be reduced at the same coding efficiency.
International Publication No. WO2007/034918
However, in the invention according to Patent Document 1, determining the optimum macroblock size requires advanced preprocessing for accurately evaluating the image content before encoding, and there was a problem that the amount of processing involved in this preprocessing becomes enormous.
The present invention has been made to solve the above problems, and an object of the present invention is to obtain a moving picture encoding apparatus and a moving picture decoding apparatus that can perform compression encoding efficiently while suppressing the code amount related to overhead such as the encoding mode, even with a macroblock size set in advance regardless of the image content.
In the moving picture encoding apparatus according to the present invention, an encoding control unit selects, from among encoding modes, an encoding mode specifying a predetermined block division type based on encoding efficiency, and outputs it as a multilevel signal. The variable length encoding unit comprises: a binarization unit that converts the multilevel-signal encoding mode selected by the encoding control unit into a binary signal, using a binarization table specifying the correspondence between the multilevel signal representing the encoding mode and the binary signal; an arithmetic encoding processing computation unit that arithmetically encodes the binary signal converted by the binarization unit, outputs an encoded bit string, and multiplexes the encoded bit string into a bit stream; and a binarization table updating unit that updates the correspondence between the multilevel signals and the binary signals in the binarization table based on the frequency with which each encoding mode is selected by the encoding control unit.
In the moving picture decoding apparatus according to the present invention, the variable length decoding unit comprises: an arithmetic decoding processing computation unit that arithmetically decodes an encoded bit string representing the encoding mode multiplexed in the bit stream to generate a binary signal; and an inverse binarization unit that converts the encoding mode represented by the binary signal generated by the arithmetic decoding processing computation unit into a multilevel signal, using a binarization table specifying the correspondence between the binary signal representing the encoding mode and the multilevel signal.
According to the present invention, the multilevel signal representing the encoding mode is converted into a binary signal, arithmetically encoded, and multiplexed into the bit stream, using a binarization table that is updated based on the frequency with which each encoding mode is selected by the encoding control unit. It is therefore possible to obtain a moving picture encoding apparatus and a moving picture decoding apparatus that can perform compression encoding efficiently while suppressing the code amount related to overhead such as the encoding mode, even with a macroblock size set in advance regardless of the image content.
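The frequency-based table adaptation described here can be pictured as counting how often each coding mode is selected and reassigning the shortest binary codewords to the most frequent modes. The sketch below is illustrative only; the truncated-unary codeword set and the function name are not taken from the patent.

```python
from collections import Counter

def update_binarization_table(selection_counts, codewords):
    """Reassign binary codewords so that the most frequently selected
    coding modes receive the shortest codewords. `selection_counts` maps
    mode -> count; `codewords` is a list of bin strings sorted by length
    (an illustrative truncated-unary set such as ['0', '10', '110'])."""
    ranked = [mode for mode, _ in Counter(selection_counts).most_common()]
    return {mode: codewords[i] for i, mode in enumerate(ranked)}
```

Both the encoder and the decoder must apply the same update rule at the same points so that their tables stay synchronized.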
• A block diagram showing the configuration of the moving picture encoding apparatus according to Embodiment 1 of the present invention.
• A diagram showing an example of coding modes for a picture subject to temporal predictive coding.
• A diagram showing another example of coding modes for a picture subject to temporal predictive coding.
• A block diagram showing the internal configuration of the motion compensation prediction unit of the moving picture encoding apparatus according to Embodiment 1.
• A diagram explaining the method of determining the predicted value of a motion vector according to the coding mode.
• A diagram showing an example of transform block size adaptation according to the coding mode.
• A diagram showing another example of transform block size adaptation according to the coding mode.
• A block diagram showing the internal configuration of the transform/quantization unit of the moving picture encoding apparatus according to Embodiment 1.
• A block diagram showing the configuration of the moving picture decoding apparatus according to Embodiment 1 of the present invention.
• A block diagram showing the internal configuration of the variable length coding unit of the moving picture encoding apparatus according to Embodiment 2 of the present invention.
• A diagram showing an example of the binarization table, in its state before updating.
• A diagram showing an example of the probability table.
• A diagram showing an example of the state transition table.
• Diagrams explaining the procedure for generating context identification information: FIG. 13(a) shows the binarization table in binary tree representation, and FIG. 13(b) shows the positional relationship between the macroblock to be encoded and its neighboring blocks.
• A diagram showing an example of the binarization table, in its state after updating.
• A block diagram showing the internal configuration of the variable length decoding unit of the moving picture decoding apparatus according to Embodiment 2 of the present invention.
• A block diagram showing the internal configuration of the interpolation image generation unit provided in the motion compensation prediction unit of the moving picture encoding apparatus according to Embodiment 3 of the present invention.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.

Embodiment 1.

In the first embodiment, a moving picture encoding apparatus that takes each frame image of a video as input, performs motion compensated prediction between adjacent frames, applies compression processing by orthogonal transform and quantization to the resulting prediction difference signal, and then performs variable length coding to generate a bit stream, and a moving picture decoding apparatus that decodes that bit stream, will be described.
FIG. 1 is a block diagram showing the configuration of the moving picture encoding apparatus according to Embodiment 1 of the present invention. The moving picture encoding apparatus shown in FIG. 1 includes: a block dividing unit 2 that divides each frame image of the input video signal 1 into a plurality of blocks of macroblock size 4 and outputs macro/sub-block images 5 in which each macroblock image is divided into one or more sub-blocks according to the encoding mode 7; an intra prediction unit 8 that, when the macro/sub-block image 5 is input, predicts it within the frame using the image signal in the intra prediction memory 28 to generate a predicted image 11; a motion compensation prediction unit 9 that, when the macro/sub-block image 5 is input, performs motion compensated prediction on it using the reference image 15 of the motion compensated prediction frame memory 14 to generate a predicted image 17; a switching unit 6 that inputs the macro/sub-block image 5 to either the intra prediction unit 8 or the motion compensation prediction unit 9 according to the encoding mode 7; a subtracting unit 12 that subtracts the predicted image 11 or 17 output by the intra prediction unit 8 or the motion compensation prediction unit 9 from the macro/sub-block image 5 output by the block dividing unit 2 to generate a prediction difference signal 13; a transform/quantization unit 19 that performs transform and quantization processing on the prediction difference signal 13 to generate compressed data 21; a variable length coding unit 23 that entropy-encodes the compressed data 21 and multiplexes it into the bit stream 30; an inverse quantization/inverse transform unit 22 that performs inverse quantization and inverse transform processing on the compressed data 21 to generate a locally decoded prediction difference signal 24; an adding unit 25 that adds to this the predicted image 11 or 17 output by the intra prediction unit 8 or the motion compensation prediction unit 9 to generate a locally decoded image signal 26; an intra prediction memory 28 that stores the locally decoded image signal 26; a loop filter unit 27 that filters the locally decoded image signal 26 to generate a locally decoded image 29; and a motion compensated prediction frame memory 14 that stores the locally decoded image 29.
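The data flow of FIG. 1 is the standard hybrid coding loop: predict, subtract, transform/quantize, entropy-code, then locally reconstruct so that future predictions use the same reference the decoder will have. The sketch below passes each unit in as an illustrative stand-in function; it is a schematic of the loop, not the patent's implementation.

```python
def encode_block(block, predict, transform_quantize, inverse_transform,
                 entropy_code, ref_store):
    """One pass of the FIG. 1 loop for a single macro/sub-block, with each
    unit supplied as an illustrative stand-in function."""
    predicted = predict(ref_store)                        # intra unit 8 / inter unit 9
    residual = [b - p for b, p in zip(block, predicted)]  # subtracting unit 12
    compressed = transform_quantize(residual)             # unit 19 -> compressed data 21
    bits = entropy_code(compressed)                       # variable length coding unit 23
    decoded_residual = inverse_transform(compressed)      # inverse quant./transform unit 22
    local_decoded = [p + r for p, r in zip(predicted, decoded_residual)]  # adding unit 25
    ref_store.append(local_decoded)  # memories 28 / 14 (loop filter 27 omitted for brevity)
    return bits, local_decoded
```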
The coding control unit 3 outputs the information required for the processing of each unit (the macroblock size 4, coding mode 7, optimal coding mode 7a, prediction parameter 10, optimal prediction parameters 10a and 18a, compression parameter 20, and optimal compression parameter 20a). The macroblock size 4 and the coding mode 7 are described in detail below; the other information is described later.
The coding control unit 3 specifies to the block dividing unit 2 the macroblock size 4 of each frame image of the input video signal 1 and, for each macroblock to be coded, indicates all the coding modes 7 selectable according to the picture type.
The coding control unit 3 can select a particular coding mode from a set of coding modes. This set is arbitrary; for example, a mode may be selected from the set shown in FIG. 2A or FIG. 2B below.
FIG. 2A is a diagram showing examples of coding modes for a P (Predictive) picture, which is predictively coded in the temporal direction. In FIG. 2A, mb_mode0 to mb_mode2 are modes (inter) in which a macroblock (an M×L pixel block) is coded by inter-frame prediction. mb_mode0 assigns one motion vector to the entire macroblock, while mb_mode1 and mb_mode2 divide the macroblock into equal halves horizontally or vertically, respectively, and assign a different motion vector to each resulting sub-block.
mb_mode3 divides the macroblock into four and assigns a different coding mode (sub_mb_mode) to each resulting sub-block.
sub_mb_mode0 to sub_mb_mode4 are the coding modes assigned to the sub-blocks (m×l pixel blocks) obtained by dividing the macroblock into four when mb_mode3 is selected as the macroblock coding mode. sub_mb_mode0 codes a sub-block by intra-frame prediction (intra); the remaining modes code it by inter-frame prediction (inter). sub_mb_mode1 assigns one motion vector to the entire sub-block; sub_mb_mode2 and sub_mb_mode3 divide the sub-block into equal halves horizontally or vertically, respectively, and assign a different motion vector to each resulting block; and sub_mb_mode4 divides the sub-block into four and assigns a different motion vector to each resulting block.
FIG. 2B is a diagram showing another set of examples of coding modes for a P picture, which is predictively coded in the temporal direction. In FIG. 2B, mb_mode0 to mb_mode6 are modes (inter) in which a macroblock (an M×L pixel block) is coded by inter-frame prediction. mb_mode0 assigns one motion vector to the entire macroblock, while mb_mode1 to mb_mode6 divide the macroblock horizontally, vertically, or diagonally and assign a different motion vector to each resulting sub-block.
mb_mode7 divides the macroblock into four and assigns a different coding mode (sub_mb_mode) to each resulting sub-block.
sub_mb_mode0 to sub_mb_mode8 are the coding modes assigned to the sub-blocks (m×l pixel blocks) obtained by dividing the macroblock into four when mb_mode7 is selected as the macroblock coding mode. sub_mb_mode0 codes a sub-block by intra-frame prediction (intra); the remaining modes code it by inter-frame prediction (inter). sub_mb_mode1 assigns one motion vector to the entire sub-block; sub_mb_mode2 to sub_mb_mode7 divide the sub-block horizontally, vertically, or diagonally and assign a different motion vector to each resulting block; and sub_mb_mode8 divides the sub-block into four and assigns a different motion vector to each resulting block.
The block dividing unit 2 divides each frame image of the input video signal 1 input to the moving picture coding device into macroblock images of the macroblock size 4 specified by the coding control unit 3. Further, when the coding mode 7 specified by the coding control unit 3 includes a mode that assigns different coding modes to the sub-blocks into which a macroblock is divided (sub_mb_mode1 to sub_mb_mode4 in FIG. 2A, or sub_mb_mode1 to sub_mb_mode8 in FIG. 2B), the block dividing unit 2 divides the macroblock image into the sub-block images indicated by the coding mode 7. The block image output by the block dividing unit 2 is therefore either a macroblock image or a sub-block image, according to the coding mode 7. This block image is hereinafter referred to as the macro/sub-block image 5.
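As a concrete illustration of the block division just described, the following minimal Python sketch splits a frame into macroblock images of a given size. All names are hypothetical (not part of the patent), and the frame dimensions are assumed to already be integer multiples of the macroblock size, as the extended frame described below guarantees:

```python
def divide_into_macroblocks(frame, mb_size):
    # frame: list of rows (each a list of pixel values); both dimensions
    # are assumed to be integer multiples of mb_size.
    h, w = len(frame), len(frame[0])
    blocks = []
    for y in range(0, h, mb_size):
        for x in range(0, w, mb_size):
            # carve out one mb_size x mb_size macroblock image
            blocks.append([row[x:x + mb_size] for row in frame[y:y + mb_size]])
    return blocks

# an 8x8 toy frame divided into four 4x4 macroblocks
frame = [[r * 8 + c for c in range(8)] for r in range(8)]
blocks = divide_into_macroblocks(frame, 4)
```

In a real encoder each of these macroblock images would then be subdivided further into the sub-blocks dictated by the selected coding mode 7.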
When the horizontal or vertical size of each frame of the input video signal 1 is not an integer multiple of the corresponding horizontal or vertical dimension of the macroblock size 4, a frame (extended frame) is generated for each frame of the input video signal 1 by extending the pixels horizontally or vertically until the frame size becomes an integer multiple of the macroblock size. The pixels of the extended region can be generated, for example, by repeating the pixels at the bottom edge of the original frame when extending vertically, or by filling with pixels of a fixed value (gray, black, white, etc.); likewise, when extending horizontally, the pixels at the right edge of the original frame can be repeated, or pixels of a fixed value (gray, black, white, etc.) can be used. The extended frame thus generated for each frame of the input video signal 1, whose frame size is an integer multiple of the macroblock size, is input to the block dividing unit 2 in place of that frame image.
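The edge-repetition variant of the frame extension described above might be sketched as follows. This is an illustrative Python fragment with hypothetical names, not the patent's implementation:

```python
def extend_frame(frame, mb_size):
    # Pad a frame on the right and bottom by repeating the edge pixels
    # until both dimensions are integer multiples of mb_size.
    h, w = len(frame), len(frame[0])
    new_w = -(-w // mb_size) * mb_size  # ceiling to the next multiple
    new_h = -(-h // mb_size) * mb_size
    # extend horizontally by repeating the rightmost pixel of each row
    rows = [row + [row[-1]] * (new_w - w) for row in frame]
    # extend vertically by repeating the bottom row
    rows += [list(rows[-1]) for _ in range(new_h - h)]
    return rows

frame = [[1, 2, 3], [4, 5, 6]]   # a 2x3 frame, extended to 4x4
ext = extend_frame(frame, 4)
```

The fixed-value variant would simply append pixels of a constant (gray, black, white, etc.) instead of copies of the edge pixels.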
The macroblock size 4 and the frame size (horizontal and vertical) of each frame of the input video signal 1 are output to the variable-length coding unit 23 so that they can be multiplexed into the bitstream per sequence (a unit consisting of one or more pictures) or per picture.
Alternatively, the macroblock size value may be defined by a profile or the like instead of being multiplexed directly into the bitstream. In that case, identification information for identifying the profile is multiplexed into the bitstream per sequence.
The switching unit 6 is a switch that switches the destination of the macro/sub-block image 5 according to the coding mode 7. When the coding mode 7 is a mode that codes by intra-frame prediction (hereinafter, intra-frame prediction mode), the switching unit 6 inputs the macro/sub-block image 5 to the intra prediction unit 8; when the coding mode 7 is a mode that codes by inter-frame prediction (hereinafter, inter-frame prediction mode), it inputs the macro/sub-block image 5 to the motion-compensated prediction unit 9.
The intra prediction unit 8 performs intra-frame prediction on the input macro/sub-block image 5 in units of the macroblock to be coded specified by the macroblock size 4, or of the sub-block specified by the coding mode 7. For every intra prediction mode included in the prediction parameter 10 indicated by the coding control unit 3, the intra prediction unit 8 generates a predicted image 11 using the in-frame image signal stored in the intra prediction memory 28.
The prediction parameter 10 is described in detail here. When the coding mode 7 is the intra-frame prediction mode, the coding control unit 3 specifies an intra prediction mode as the prediction parameter 10 corresponding to that coding mode 7. The intra prediction modes include, for example: a mode that treats the interior of the macroblock or sub-block in units of 4×4 pixel blocks and generates a predicted image using the pixels surrounding each unit block of the image signal in the intra prediction memory 28; a mode that does the same in units of 8×8 pixel blocks; a mode that does the same in units of 16×16 pixel blocks; and a mode that generates a predicted image from a reduced image of the macroblock or sub-block.
The motion-compensated prediction unit 9 specifies, from among the one or more frames of reference image data stored in the motion-compensated prediction frame memory 14, the reference image 15 to be used for predicted image generation, and performs motion-compensated prediction according to the coding mode 7 indicated by the coding control unit 3 using this reference image 15 and the macro/sub-block image 5, generating a prediction parameter 18 and a predicted image 17.
The prediction parameter 18 is described in detail here. When the coding mode 7 is the inter-frame prediction mode, the motion-compensated prediction unit 9 obtains, as the prediction parameter 18 corresponding to that coding mode 7, motion vectors, the identification number of the reference image pointed to by each motion vector (the reference image index), and so on. The method of generating the prediction parameter 18 is described in detail later.
The subtracting unit 12 subtracts either the predicted image 11 or the predicted image 17 from the macro/sub-block image 5 to obtain a prediction difference signal 13. A prediction difference signal 13 is generated for each of the predicted images 11 that the intra prediction unit 8 generates for all the intra prediction modes specified by the prediction parameter 10.
The prediction difference signals 13 generated for all the intra prediction modes specified by the prediction parameter 10 are evaluated by the coding control unit 3, which determines an optimal prediction parameter 10a including the optimal intra prediction mode. As an evaluation method, for example, a coding cost J2 (described later) is computed using the compressed data 21 obtained by transforming and quantizing the prediction difference signal 13, and the intra prediction mode that minimizes this coding cost J2 is selected.
The coding control unit 3 evaluates the prediction difference signals 13 generated by the intra prediction unit 8 or the motion-compensated prediction unit 9 for all the modes included in the coding mode 7 and, based on the evaluation results, determines from among the coding modes 7 the optimal coding mode 7a that yields the best coding efficiency. The coding control unit 3 likewise determines, from among the prediction parameters 10 and 18 and the compression parameter 20, the optimal prediction parameters 10a and 18a and the optimal compression parameter 20a corresponding to the optimal coding mode 7a. The respective determination procedures are described later.
As noted above, in the intra-frame prediction mode the prediction parameter 10 and the optimal prediction parameter 10a include an intra prediction mode, whereas in the inter-frame prediction mode the prediction parameter 18 and the optimal prediction parameter 18a include motion vectors, the identification number of the reference image pointed to by each motion vector (the reference image index), and so on.
The compression parameter 20 and the optimal compression parameter 20a include a transform block size, a quantization step size, and so on.
As a result of these determination procedures, the coding control unit 3 outputs the optimal coding mode 7a, the optimal prediction parameters 10a and 18a, and the optimal compression parameter 20a for the macroblock or sub-block to be coded to the variable-length coding unit 23. It also outputs the optimal compression parameter 20a among the compression parameters 20 to the transform/quantization unit 19 and the inverse quantization/inverse transform unit 22.
From among the plurality of prediction difference signals 13 generated for all the modes included in the coding mode 7, the transform/quantization unit 19 selects the prediction difference signal 13 corresponding to the predicted image 11 or 17 generated based on the optimal coding mode 7a and the optimal prediction parameters 10a and 18a determined by the coding control unit 3 (hereinafter, the optimal prediction difference signal 13a). It computes transform coefficients by applying a transform such as the DCT (discrete cosine transform) to this optimal prediction difference signal 13a, based on the transform block size of the optimal compression parameter 20a determined by the coding control unit 3, then quantizes the transform coefficients based on the quantization step size of the optimal compression parameter 20a indicated by the coding control unit 3, and outputs the quantized transform coefficients as compressed data 21 to the inverse quantization/inverse transform unit 22 and the variable-length coding unit 23.
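To make the transform/quantization round trip concrete, the following Python sketch applies an orthonormal one-dimensional DCT-II to a toy residual, quantizes the coefficients with a scalar quantization step, and reconstructs the residual as the inverse quantization/inverse transform unit would. This is a simplified stand-in (1-D instead of the 2-D block transform; all names hypothetical), not the document's implementation:

```python
import math

def dct_1d(x):
    # Orthonormal DCT-II: the 1-D core of the block DCT mentioned in the text.
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def idct_1d(X):
    # Inverse (DCT-III), the exact inverse of dct_1d for the orthonormal scaling.
    n = len(X)
    out = []
    for i in range(n):
        s = 0.0
        for k in range(n):
            scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
            s += scale * X[k] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
        out.append(s)
    return out

def quantize(coeffs, qstep):
    # Uniform scalar quantization with the step size from the compression parameters.
    return [round(c / qstep) for c in coeffs]

def dequantize(levels, qstep):
    # Inverse quantization, as also performed on the decoder side.
    return [lv * qstep for lv in levels]

residual = [10.0, -3.0, 4.0, 1.0]
qstep = 2.0
levels = quantize(dct_1d(residual), qstep)   # stands in for compressed data 21
recon = idct_1d(dequantize(levels, qstep))   # stands in for the locally decoded residual 24
```

Because the transform is orthonormal, the per-coefficient quantization error of at most qstep/2 bounds the reconstruction error of the residual.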
The inverse quantization/inverse transform unit 22 inverse-quantizes the compressed data 21 input from the transform/quantization unit 19 using the optimal compression parameter 20a and applies an inverse transform such as the inverse DCT, thereby generating a locally decoded prediction difference signal 24 of the optimal prediction difference signal 13a, which it outputs to the adding unit 25.
The adding unit 25 adds the locally decoded prediction difference signal 24 to the predicted image 11 or 17 to generate a locally decoded image signal 26, which it outputs to the loop filter unit 27 and stores in the intra prediction memory 28. This locally decoded image signal 26 serves as the image signal for intra-frame prediction.
The loop filter unit 27 applies a predetermined filtering process to the locally decoded image signal 26 input from the adding unit 25 and stores the filtered locally decoded image 29 in the motion-compensated prediction frame memory 14. This locally decoded image 29 serves as the reference image 15 for motion-compensated prediction. The filtering may be performed per macroblock of the input locally decoded image signal 26, or for one screen at a time after the locally decoded image signals 26 corresponding to one screen's worth of macroblocks have been input.
The variable-length coding unit 23 entropy-codes the compressed data 21 output from the transform/quantization unit 19 together with the optimal coding mode 7a, the optimal prediction parameters 10a and 18a, and the optimal compression parameter 20a output from the coding control unit 3, generating a bitstream 30 representing the coded results. The optimal prediction parameters 10a and 18a and the optimal compression parameter 20a are coded in units according to the coding mode indicated by the optimal coding mode 7a.
As described above, in the moving picture coding device according to Embodiment 1, the motion-compensated prediction unit 9 and the transform/quantization unit 19 each operate in cooperation with the coding control unit 3 to determine the coding mode, prediction parameters, and compression parameters that yield the best coding efficiency (that is, the optimal coding mode 7a, the optimal prediction parameters 10a and 18a, and the optimal compression parameter 20a).
The procedures by which the coding control unit 3 determines the coding mode, prediction parameters, and compression parameters yielding the best coding efficiency are described below, in the order: 1. prediction parameters, 2. compression parameters, 3. coding mode.
1. Prediction parameter determination procedure
This section describes the procedure for determining the prediction parameter 18, which, when the coding mode 7 is the inter-frame prediction mode, includes the motion vectors used for inter-frame prediction, the identification number of the reference image pointed to by each motion vector (the reference image index), and so on.
The motion-compensated prediction unit 9, in cooperation with the coding control unit 3, determines a prediction parameter 18 for each of the coding modes 7 (for example, the set of coding modes shown in FIG. 2A or FIG. 2B) indicated to it by the coding control unit 3. The detailed procedure is described below.
FIG. 3 is a block diagram showing the internal configuration of the motion-compensated prediction unit 9. The motion-compensated prediction unit 9 shown in FIG. 3 includes a motion compensation region dividing unit 40, a motion detection unit 42, and an interpolated image generation unit 43. Its input data are the coding mode 7 input from the coding control unit 3, the macro/sub-block image 5 input from the switching unit 6, and the reference image 15 input from the motion-compensated prediction frame memory 14.
The motion compensation region dividing unit 40 divides the macro/sub-block image 5 input from the switching unit 6 into blocks serving as the units of motion compensation, according to the coding mode 7 indicated by the coding control unit 3, and outputs each resulting motion compensation region block image 41 to the motion detection unit 42.
The interpolated image generation unit 43 specifies, from among the one or more frames of reference image data stored in the motion-compensated prediction frame memory 14, the reference image 15 to be used for predicted image generation, and the motion detection unit 42 detects a motion vector 44 within a predetermined motion search range on the specified reference image 15. The motion vector is detected with virtual-sample precision, as in the MPEG-4 AVC standard and elsewhere. In this detection method, virtual samples (pixels) are created between the integer pixels of the reference image by interpolation from the pixel information the reference image holds (referred to as integer pixels), and these are used in the predicted image. The MPEG-4 AVC standard allows virtual samples of 1/8-pixel precision to be generated and used. In the MPEG-4 AVC standard, the 1/2-pixel-precision virtual samples are generated by an interpolation using a 6-tap filter over six integer pixels in the vertical or horizontal direction, and the 1/4-pixel-precision virtual samples are generated by an interpolation using an averaging filter over adjacent 1/2-pixel or integer pixels.
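A minimal sketch of the half-pel and quarter-pel interpolation just described follows. The 6-tap weights (1, -5, 20, 20, -5, 1)/32 are the coefficients used by MPEG-4 AVC and are shown only for illustration; the edge clamping and all names are assumptions of this sketch:

```python
def half_pel_interpolate_row(pixels):
    # 6-tap interpolation of the half-pel samples between the integer
    # pixels of one row, in the MPEG-4 AVC style described in the text.
    taps = (1, -5, 20, 20, -5, 1)
    half = []
    for i in range(len(pixels) - 1):
        # gather the six integer pixels around position i + 0.5,
        # clamping indices at the row edges (an assumption of this sketch)
        window = [pixels[min(max(i + k - 2, 0), len(pixels) - 1)]
                  for k in range(6)]
        val = sum(t * p for t, p in zip(taps, window))
        half.append(min(max((val + 16) >> 5, 0), 255))  # round, scale, clip
    return half

def quarter_pel(a, b):
    # Quarter-pel samples: rounded average of adjacent half-pel/integer samples.
    return (a + b + 1) >> 1

half = half_pel_interpolate_row([100, 100, 100, 100, 100, 100])
```

A constant row interpolates to the same constant, since the taps sum to 32 and the result is right-shifted by 5.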
In the motion-compensated prediction unit 9 of this Embodiment 1 as well, the interpolated image generation unit 43 generates a predicted image 45 of virtual pixels according to the precision of the motion vector 44 indicated by the motion detection unit 42. An example of the virtual-pixel-precision motion vector detection procedure follows.
Motion vector detection procedure I
The interpolated image generation unit 43 generates a predicted image 45 for each integer-pixel-precision motion vector 44 within the predetermined motion search range of the motion compensation region block image 41. The predicted image 45 (predicted image 17) generated at integer-pixel precision is output to the subtracting unit 12, which subtracts it from the motion compensation region block image 41 (macro/sub-block image 5) to yield a prediction difference signal 13. The coding control unit 3 evaluates the prediction efficiency of the prediction difference signal 13 and the integer-pixel-precision motion vector 44 (prediction parameter 18). In this evaluation, a prediction cost J1 is computed, for example, by the following equation (1), and the integer-pixel-precision motion vector 44 that minimizes J1 within the predetermined motion search range is determined:
  J1 = D1 + λR1   (1)
Here D1 and R1 are used as evaluation values: D1 is the sum of absolute differences (SAD) of the prediction difference signal within the macroblock or sub-block, R1 is the estimated code amount of the motion vector and of the identification number of the reference image pointed to by that motion vector, and λ is a positive number.
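Equation (1) can be illustrated with a small Python sketch (all names hypothetical): D1 is computed as the SAD between the block being coded and a candidate prediction, and the cost adds λ times an estimated rate for the motion vector and reference index:

```python
def sad(block, pred):
    # D1: sum of absolute differences between the block being coded
    # and the motion-compensated prediction, over all pixels
    return sum(abs(a - b)
               for row_a, row_b in zip(block, pred)
               for a, b in zip(row_a, row_b))

def prediction_cost(block, pred, rate_bits, lam):
    # J1 = D1 + lambda * R1, equation (1); rate_bits stands in for the
    # estimated code amount of the motion vector and reference image index
    return sad(block, pred) + lam * rate_bits

block = [[10, 10], [10, 10]]
cand = [[10, 10], [10, 11]]      # SAD = 1
cost = prediction_cost(block, cand, rate_bits=8, lam=0.5)
```

With λ = 0.5 and an 8-bit rate estimate, the cost of a candidate with SAD 1 is 1 + 0.5·8 = 5, so a candidate with slightly worse distortion but a much cheaper vector can still win the comparison.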
In obtaining the evaluation value R1, the code amount of a motion vector is determined either by predicting the motion vector value for each mode in FIG. 2A or FIG. 2B from the values of nearby motion vectors and entropy-coding the prediction difference value based on a probability distribution, or by performing an equivalent code amount estimate.
FIG. 4 is a diagram explaining the method of determining the prediction value of the motion vector (hereinafter, the prediction vector) for each coding mode 7 shown in FIG. 2B. In FIG. 4, for a rectangular block such as mb_mode0 or sub_mb_mode1, the prediction vector PMV of that block is computed by the following equation (2), using the already-coded motion vectors MVa, MVb, and MVc located to its left (position A), above it (position B), and above-right of it (position C). median() corresponds to a median filter: it is a function that outputs the median value of the motion vectors MVa, MVb, and MVc:
  PMV = median(MVa, MVb, MVc)   (2)
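Equation (2) can be sketched as follows, taking the median component-wise over the three neighbouring vectors. The component-wise interpretation (as in MPEG-4 AVC) and all names are assumptions of this sketch:

```python
def median_of_3(a, b, c):
    # median of three scalars
    return sorted([a, b, c])[1]

def predict_motion_vector(mva, mvb, mvc):
    # PMV = median(MVa, MVb, MVc), equation (2): component-wise median
    # of the coded motion vectors at positions A (left), B (above),
    # and C (above-right) of the current block.
    return (median_of_3(mva[0], mvb[0], mvc[0]),
            median_of_3(mva[1], mvb[1], mvc[1]))

pmv = predict_motion_vector((2, 0), (4, -1), (3, 5))
```

Only the prediction difference MV − PMV then needs to be entropy-coded, which is what keeps the cost of the evaluation value R1 small.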
On the other hand, for the diagonally shaped blocks mb_mode1, sub_mb_mode2, mb_mode2, sub_mb_mode3, mb_mode3, sub_mb_mode4, mb_mode4, and sub_mb_mode5, the positions A, B, and C over which the median is taken are changed according to the diagonal shape so that the same processing as for rectangular blocks can be applied. The prediction vector PMV can thus be computed according to the shape of each motion vector allocation region without changing the computation method itself, keeping the cost of the evaluation value R1 small.
Motion vector detection procedure II
The interpolated image generation unit 43 generates predicted images 45 for the one or more 1/2-pixel-precision motion vectors 44 surrounding the integer-pixel-precision motion vector determined in "Motion vector detection procedure I". Thereafter, as in "Motion vector detection procedure I", the predicted image 45 (predicted image 17) generated at 1/2-pixel precision is subtracted from the motion compensation region block image 41 (macro/sub-block image 5) by the subtracting unit 12 to obtain a prediction difference signal 13. The coding control unit 3 then evaluates the prediction efficiency of this prediction difference signal 13 and the 1/2-pixel-precision motion vector 44 (prediction parameter 18), and determines, from among the one or more 1/2-pixel-precision motion vectors surrounding the integer-pixel-precision motion vector, the 1/2-pixel-precision motion vector 44 that minimizes the prediction cost J1.
Motion vector detection procedure III
The coding control unit 3 and the motion-compensated prediction unit 9 likewise determine, for 1/4-pixel-precision motion vectors, the 1/4-pixel-precision motion vector 44 that minimizes the prediction cost J1 from among the one or more 1/4-pixel-precision motion vectors surrounding the 1/2-pixel-precision motion vector determined in "Motion vector detection procedure II".
Motion vector detection procedure IV
 In the same manner, the encoding control unit 3 and the motion compensation prediction unit 9 continue detecting virtual-pixel-precision motion vectors until a predetermined precision is reached.
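The stagewise refinement of procedures I to IV can be sketched as a search over the candidates surrounding the best vector of the previous stage. The following fragment is an illustrative sketch only: the cost function `cost_j1` stands in for the prediction cost J1 of the text, vectors are kept in quarter-pel units, and a 3×3 candidate neighbourhood per stage is an assumption for illustration.

```python
def refine_motion_vector(cost_j1, mv_int, max_level=2):
    """Refine an integer-precision motion vector to fractional precision.

    cost_j1:   function mapping an (x, y) vector in quarter-pel units to a cost.
    mv_int:    best integer-precision vector, in quarter-pel units (multiples of 4).
    max_level: 1 = half-pel (procedure II), 2 = quarter-pel (procedure III).
    """
    best = mv_int
    for level in range(1, max_level + 1):
        step = 4 >> level  # 2 = half-pel offsets, 1 = quarter-pel offsets
        candidates = [(best[0] + dx * step, best[1] + dy * step)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        best = min(candidates, key=cost_j1)  # keep the vector minimizing J1
    return best
```

In use, `cost_j1` would evaluate the prediction difference signal for the candidate vector; the toy cost below simply measures distance to a known target vector.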
 Note that although in this embodiment virtual-pixel-precision motion vectors are detected until the predetermined precision is reached, a threshold may, for example, be set for the prediction cost, and when the prediction cost J1 falls below that threshold, the detection of virtual-pixel-precision motion vectors may be terminated before the predetermined precision is reached.
 Note that a motion vector may refer to pixels outside the frame defined by the reference frame size. In that case, pixels outside the frame must be generated; one method of generating them is to fill with the pixels at the screen edge.
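One plausible realization of the "fill with screen-edge pixels" method is to clamp the sampling coordinates to the frame boundary, so that any out-of-frame reference replicates the nearest edge pixel. The sketch below is an illustrative assumption, not the normative behaviour of the device:

```python
def ref_pixel(frame, x, y):
    """Return frame[y][x], replicating edge pixels for out-of-frame coordinates.

    frame: 2-D list of pixel values, indexed as frame[row][column].
    """
    h, w = len(frame), len(frame[0])
    xc = min(max(x, 0), w - 1)  # clamp column to [0, w-1]
    yc = min(max(y, 0), h - 1)  # clamp row to [0, h-1]
    return frame[yc][xc]
```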
 Note that when the frame size of each frame of the input video signal 1 is not an integer multiple of the macroblock size and an extended frame is input in place of each frame of the input video signal 1, the size extended to an integer multiple of the macroblock size (the size of the extended frame) becomes the frame size of the reference frame. On the other hand, when only the locally decoded portion corresponding to the original frame is referred to as pixels within the frame, without referring to the locally decoded portion of the extension region, the frame size of the reference frame is the frame size of the original input video signal.
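The extended frame size referred to here is the original frame size rounded up to the next integer multiple of the macroblock size, which can be computed as follows (an illustrative helper, not part of the described device):

```python
def extended_size(frame_w, frame_h, mb_size):
    """Round each frame dimension up to an integer multiple of mb_size."""
    ext_w = (frame_w + mb_size - 1) // mb_size * mb_size
    ext_h = (frame_h + mb_size - 1) // mb_size * mb_size
    return ext_w, ext_h
```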
 In this way, for each motion compensation region block image 41 obtained by dividing the macro/sub-block image 5 into blocks serving as the units of motion compensation indicated by the encoding mode 7, the motion compensation prediction unit 9 outputs, as the prediction parameter 18, the determined virtual-pixel-precision motion vector of the predetermined precision and the identification number of the reference image that the motion vector points to. The motion compensation prediction unit 9 also outputs the predicted image 45 (predicted image 17) generated with that prediction parameter 18 to the subtraction unit 12, where it is subtracted from the macro/sub-block image 5 to obtain the prediction difference signal 13. The prediction difference signal 13 output from the subtraction unit 12 is output to the transform/quantization unit 19.
2. Compression parameter determination procedure
 This section describes the procedure for determining the compression parameter 20 (transform block size) used when transform/quantization processing is applied to the prediction difference signal 13 generated on the basis of the prediction parameter 18 determined for each encoding mode 7 in "1. Prediction parameter determination procedure" above.
 FIG. 5 is a diagram illustrating an example of adapting the transform block size according to the encoding mode 7 shown in FIG. 2B. In FIG. 5, a 32×32 pixel block is used as an example of the M×L pixel block. When the mode designated by the encoding mode 7 is one of mb_mode0 to 6, the transform block size can be adaptively selected from either 16×16 or 8×8 pixels. When the encoding mode 7 is mb_mode7, the transform block size can be adaptively selected from 8×8 or 4×4 pixels for each of the 16×16 pixel sub-blocks obtained by dividing the macroblock into four.
 Note that the set of transform block sizes selectable for each encoding mode can be defined from among arbitrary rectangular block sizes no larger than the sub-block size into which the encoding mode equally divides the block.
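The adaptation of FIG. 5 can be viewed as a lookup table from the encoding mode to its permitted set of transform block sizes. The table below is a sketch for the 32×32 macroblock example; the mode names and set contents follow the text, but the data structure itself is an illustrative assumption:

```python
# Selectable transform block sizes per encoding mode (FIG. 5, 32x32 macroblock).
# mb_mode7 splits the macroblock into four 16x16 sub-blocks, each of which
# chooses its transform size independently from the listed set.
TRANSFORM_SIZE_SETS = {
    **{f"mb_mode{m}": [(16, 16), (8, 8)] for m in range(7)},
    "mb_mode7": [(8, 8), (4, 4)],  # chosen per 16x16 sub-block
}

def selectable_sizes(mode):
    """Return the predetermined transform block size set for an encoding mode."""
    return TRANSFORM_SIZE_SETS[mode]
```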
 FIG. 6 is a diagram illustrating another example of adapting the transform block size according to the encoding mode 7 shown in FIG. 2B. In the example of FIG. 6, when the mode designated by the encoding mode 7 is mb_mode0, 5, or 6 described above, in addition to 16×16 and 8×8 pixels, a transform block size corresponding to the shape of the sub-block that is the unit of motion compensation can be selected. For mb_mode0, the size can be adaptively selected from 16×16, 8×8, and 32×32 pixels; for mb_mode5, from 16×16, 8×8, and 16×32 pixels; and for mb_mode6, from 16×16, 8×8, and 32×16 pixels. Although not illustrated, for mb_mode7 the size can be adaptively selected from 16×16, 8×8, and 16×32 pixels, and for mb_mode1 to 4 the adaptation may be such that 16×16 or 8×8 pixels are selected for the non-rectangular region and 8×8 or 4×4 pixels for the rectangular region.
 The encoding control unit 3 takes the set of transform block sizes corresponding to the encoding mode 7, as exemplified in FIG. 5 and FIG. 6, as the compression parameter 20.
 Note that although in the examples of FIG. 5 and FIG. 6 the set of transform block sizes selectable according to the encoding mode 7 of the macroblock is predetermined so that selection can be made adaptively in macroblock or sub-block units, a set of selectable transform block sizes may likewise be predetermined according to the encoding mode 7 of the sub-blocks into which the macroblock is divided (sub_mb_mode1 to 8 in FIG. 2B and the like), so that selection can be made adaptively in sub-block units or in units of blocks into which the sub-block is further divided.
 Similarly, when the encoding mode 7 shown in FIG. 2A is used, the encoding control unit 3 need only predetermine a set of transform block sizes corresponding to that encoding mode 7 so that it can be selected adaptively.
 The transform/quantization unit 19, in cooperation with the encoding control unit 3, determines the optimal transform block size from among the transform block sizes for each macroblock unit designated by the macroblock size 4, or for each sub-block unit into which that macroblock unit is further divided according to the encoding mode 7. The detailed procedure is described below.
 FIG. 7 is a block diagram showing the internal configuration of the transform/quantization unit 19. The transform/quantization unit 19 shown in FIG. 7 includes a transform block size dividing unit 50, a transform unit 52, and a quantization unit 54. Its input data are the compression parameter 20 (transform block size, quantization step size, and the like) input from the encoding control unit 3 and the prediction difference signal 13 input from the encoding control unit 3.
 The transform block size dividing unit 50 divides the prediction difference signal 13 of each macroblock or sub-block for which the transform block size is to be determined into blocks corresponding to the transform block size of the compression parameter 20, and outputs them to the transform unit 52 as transform target blocks 51.
 Note that when the compression parameter 20 selects and designates a plurality of transform block sizes for one macroblock or sub-block, the transform target blocks 51 of each transform block size are output to the transform unit 52 in turn.
 The transform unit 52 performs a transform process on the input transform target block 51 in accordance with a transform scheme such as DCT, an integer transform in which the DCT transform coefficients are approximated by integers, or the Hadamard transform, and outputs the generated transform coefficients 53 to the quantization unit 54.
 The quantization unit 54 quantizes the input transform coefficients 53 according to the quantization step size of the compression parameter 20 instructed by the encoding control unit 3, and outputs the compressed data 21, which are the quantized transform coefficients, to the inverse quantization/inverse transform unit 22 and the encoding control unit 3.
 Note that when the compression parameter 20 selects and designates a plurality of transform block sizes for one macroblock or sub-block, the transform unit 52 and the quantization unit 54 perform the above transform/quantization processing for all of those transform block sizes and output the respective compressed data 21.
 The compressed data 21 output from the quantization unit 54 are input to the encoding control unit 3 and used to evaluate the encoding efficiency of the transform block size of the compression parameter 20. Using the compressed data 21 obtained for each of the transform block sizes selectable for each of the encoding modes included in the encoding mode 7, the encoding control unit 3 computes the encoding cost J2, for example from the following equation (3), and selects the transform block size that minimizes the encoding cost J2.
  J2 = D2 + λR2   (3)
 Here, D2 and R2 are used as evaluation values. As D2, the compressed data 21 obtained for the transform block size are input to the inverse quantization/inverse transform unit 22, and the sum of squared distortion or the like between the macro/sub-block image 5 and the local decoded image signal 26 obtained by adding the predicted image 17 to the local decoded prediction difference signal 24 resulting from inverse transform/inverse quantization of the compressed data 21 is used. As R2, the code amount (or estimated code amount) obtained by actually encoding, in the variable length encoding unit 23, the compressed data 21 obtained for the transform block size together with the encoding mode 7 and the prediction parameters 10 and 18 relating to the compressed data 21 is used.
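The selection by equation (3) can be sketched as follows; here D2 is taken as the sum of squared differences between the source block and its local decoded version, and R2 is supplied as a (possibly estimated) bit count, both as described in the text. The function and variable names are illustrative assumptions:

```python
def encoding_cost_j2(src, recon, rate_bits, lam):
    """J2 = D2 + lambda * R2 for one candidate transform block size.

    src, recon: flat lists of pixel values (source and local decoded block).
    rate_bits:  actual or estimated code amount R2.
    lam:        Lagrange multiplier coupling distortion and rate.
    """
    d2 = sum((s - r) ** 2 for s, r in zip(src, recon))  # squared distortion sum
    return d2 + lam * rate_bits

def pick_transform_size(candidates, lam):
    """candidates: list of (block_size, src, recon, rate_bits) tuples.

    Returns the block size whose candidate minimizes J2.
    """
    return min(candidates,
               key=lambda c: encoding_cost_j2(c[1], c[2], c[3], lam))[0]
```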
 After the optimal encoding mode 7a has been determined by "3. Encoding mode determination procedure" described later, the encoding control unit 3 selects the transform block size corresponding to the determined optimal encoding mode 7a, includes it in the optimal compression parameter 20a, and outputs it to the variable length encoding unit 23. The variable length encoding unit 23 entropy-encodes this optimal compression parameter 20a and multiplexes it into the bit stream 30.
 Here, since the transform block size is selected from the transform block size set (exemplified in FIG. 5 and FIG. 6) predefined according to the optimal encoding mode 7a of the macroblock or sub-block, it suffices to assign identification information such as an ID to each transform block size included in each transform block size set, entropy-encode that identification information as the transform block size information, and multiplex it into the bit stream 30. In this case, the identification information of the transform block size sets is also set on the decoding device side. However, when a transform block size set contains only one transform block size, the decoding device side can automatically determine the transform block size from the set, so the encoding device side need not multiplex the identification information of the transform block size into the bit stream 30.
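The signalling rule described here, in which the identifier is omitted whenever the set contains a single size, might be sketched as follows (illustrative helper functions, not the normative bitstream syntax):

```python
def encode_transform_size_id(size_set, chosen):
    """Return the index to multiplex into the bitstream, or None when the set
    has a single entry and the decoder can infer the size by itself."""
    if len(size_set) == 1:
        return None
    return size_set.index(chosen)

def decode_transform_size(size_set, size_id):
    """Recover the transform block size from the (possibly omitted) index."""
    return size_set[0] if size_id is None else size_set[size_id]
```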
3. Encoding mode determination procedure
 When the prediction parameters 10 and 18 and the compression parameter 20 have been determined for all of the encoding modes 7 instructed by the encoding control unit 3 through "1. Prediction parameter determination procedure" and "2. Compression parameter determination procedure" above, the encoding control unit 3 uses the compressed data 21 obtained by further transforming and quantizing the prediction difference signal 13 obtained with each encoding mode 7 and with the prediction parameters 10 and 18 and compression parameter 20 at that time, determines from the above equation (3) the encoding mode 7 that reduces the encoding cost J2, and selects that encoding mode 7 as the optimal encoding mode 7a of the macroblock.
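Procedure 3 then reduces to choosing, among all candidate encoding modes (each already paired with its best parameters), the one with the smallest cost J2. A minimal sketch, with the J2 values assumed to have been computed beforehand via equation (3):

```python
def pick_encoding_mode(candidates):
    """candidates: dict mapping mode name -> (j2_cost, params).

    Returns (best_mode, params) for the mode minimizing the encoding cost J2.
    """
    best_mode = min(candidates, key=lambda m: candidates[m][0])
    return best_mode, candidates[best_mode][1]
```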
 Note that the optimal encoding mode 7a may be determined from among all encoding modes obtained by adding a skip mode, as a macroblock or sub-block mode, to the encoding modes shown in FIG. 2A or FIG. 2B. The skip mode is a mode in which a predicted image motion-compensated on the encoding device side using the motion vectors of adjacent macroblocks or sub-blocks is used as the local decoded image signal; since no prediction parameters or compression parameters other than the encoding mode need be computed and multiplexed into the bit stream, encoding can be performed with a reduced code amount. On the decoding device side, a predicted image motion-compensated using the motion vectors of adjacent macroblocks or sub-blocks, by the same procedure as on the encoding device side, is output as the decoded image signal.
 Note that when the frame size of each frame of the input video signal 1 is not an integer multiple of the macroblock size and an extended frame is input in place of each frame of the input video signal 1, the encoding mode may be determined so as to suppress the code amount spent on the extension region, by controlling macroblocks or sub-blocks that include the extension region so that only the skip mode can be selected.
 The encoding control unit 3 outputs to the variable length encoding unit 23 the optimal encoding mode 7a yielding the optimal encoding efficiency determined by the above "1. Prediction parameter determination procedure", "2. Compression parameter determination procedure", and "3. Encoding mode determination procedure", selects the prediction parameters 10 and 18 corresponding to that optimal encoding mode 7a as the optimal prediction parameters 10a and 18a, likewise selects the compression parameter 20 corresponding to the optimal encoding mode 7a as the optimal compression parameter 20a, and outputs them to the variable length encoding unit 23. The variable length encoding unit 23 entropy-encodes the optimal encoding mode 7a, the optimal prediction parameters 10a and 18a, and the optimal compression parameter 20a, and multiplexes them into the bit stream 30.
 Further, as described above, the optimal prediction difference signal 13a obtained from the predicted images 11 and 17 based on the determined optimal encoding mode 7a, optimal prediction parameters 10a and 18a, and optimal compression parameter 20a is transformed and quantized in the transform/quantization unit 19 to become the compressed data 21; this compressed data 21 is entropy-encoded by the variable length encoding unit 23 and multiplexed into the bit stream 30. The compressed data 21 also becomes the local decoded image signal 26 through the inverse quantization/inverse transform unit 22 and the addition unit 25, and is input to the loop filter unit 27.
 Next, the moving picture decoding device according to Embodiment 1 will be described.
 FIG. 8 is a block diagram showing the configuration of the moving picture decoding device according to Embodiment 1 of the present invention. The moving picture decoding device shown in FIG. 8 includes: a variable length decoding unit 61 that entropy-decodes the optimal encoding mode 62 from the bit stream 60 in macroblock units, and entropy-decodes the optimal prediction parameter 63, compressed data 64, and optimal compression parameter 65 in units of the macroblocks or sub-blocks divided according to the decoded optimal encoding mode 62; an intra prediction unit 69 that, when the optimal prediction parameter 63 is input, generates a predicted image 71 using the intra prediction mode included in that optimal prediction parameter 63 and the decoded image 74a stored in the intra prediction memory 77; a motion compensation prediction unit 70 that, when the optimal prediction parameter 63 is input, generates a predicted image 72 by performing motion compensation prediction using the motion vector included in that optimal prediction parameter 63 and the reference image 76 in the motion compensated prediction frame memory 75 identified by the reference image index included in that optimal prediction parameter 63; a switching unit 68 that, according to the decoded optimal encoding mode 62, inputs the optimal prediction parameter 63 decoded by the variable length decoding unit 61 to either the intra prediction unit 69 or the motion compensation prediction unit 70; an inverse quantization/inverse transform unit 66 that performs inverse quantization and inverse transform processing on the compressed data 64 using the optimal compression parameter 65 to generate a prediction difference signal decoded value 67; an addition unit 73 that adds, to the prediction difference signal decoded value 67, the predicted image 71 or 72 output by either the intra prediction unit 69 or the motion compensation prediction unit 70 to generate a decoded image 74; an intra prediction memory 77 that stores the decoded image 74; a loop filter unit 78 that filters the decoded image 74 to generate a reproduced image 79; and a motion compensated prediction frame memory 75 that stores the reproduced image 79.
 When the moving picture decoding device according to Embodiment 1 receives the bit stream 60, the variable length decoding unit 61 entropy-decodes the bit stream 60 and decodes the macroblock size and frame size in units of sequences, each consisting of one or more frames of pictures, or in units of pictures. When the macroblock size is not multiplexed directly into the bit stream but is specified by a profile or the like, the macroblock size is determined on the basis of the identification information of the profile decoded from the bit stream in sequence units. On the basis of the decoded macroblock size and decoded frame size of each frame, the number of macroblocks included in each frame is determined, and the optimal encoding mode 62, optimal prediction parameter 63, compressed data 64 (that is, quantized transform coefficient data), optimal compression parameter 65 (transform block size information, quantization step size), and so on of each macroblock included in the frame are decoded.
 Note that the optimal encoding mode 62, optimal prediction parameter 63, compressed data 64, and optimal compression parameter 65 decoded on the decoding device side correspond to the optimal encoding mode 7a, optimal prediction parameters 10a and 18a, compressed data 21, and optimal compression parameter 20a encoded on the encoding device side.
 Here, the transform block size information of the optimal compression parameter 65 is identification information specifying the transform block size selected on the encoding device side, according to the encoding mode 7, from the transform block size set predefined in macroblock or sub-block units; the decoding device side specifies the transform block size of the macroblock or sub-block from the optimal encoding mode 62 and the transform block size information of the optimal compression parameter 65.
 The inverse quantization/inverse transform unit 66 uses the compressed data 64 and the optimal compression parameter 65 input from the variable length decoding unit 61 to perform inverse quantization/inverse transform processing in units of the blocks specified by the transform block size information, and computes the prediction difference signal decoded value 67.
 In addition, when decoding a motion vector, the variable length decoding unit 61 determines a prediction vector by the process shown in FIG. 4 with reference to the motion vectors of already decoded neighboring blocks, and obtains the decoded value of the motion vector by adding the prediction difference value decoded from the bit stream 60. The variable length decoding unit 61 includes this decoded value of the motion vector in the optimal prediction parameter 63 and outputs it to the switching unit 68.
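The reconstruction described here is a component-wise addition of the decoded prediction difference to a predictor derived from already decoded neighbouring blocks. In the sketch below the predictor is a simple component-wise median, a common choice used as an assumption, since the exact rule of FIG. 4 is not reproduced in this passage:

```python
def decode_motion_vector(neighbor_mvs, diff):
    """Reconstruct a motion vector from neighbour predictors and a decoded
    prediction difference. Predictor: component-wise median of the neighbours."""
    def median(vals):
        s = sorted(vals)
        return s[len(s) // 2]
    pred = (median([mv[0] for mv in neighbor_mvs]),
            median([mv[1] for mv in neighbor_mvs]))
    return (pred[0] + diff[0], pred[1] + diff[1])
```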
 The switching unit 68 is a switch that switches the input destination of the optimal prediction parameter 63 according to the optimal encoding mode 62. When the optimal encoding mode 62 input from the variable length decoding unit 61 indicates the intra-frame prediction mode, this switching unit 68 outputs the optimal prediction parameter 63 (intra prediction mode), likewise input from the variable length decoding unit 61, to the intra prediction unit 69; when the optimal encoding mode 62 indicates the inter-frame prediction mode, it outputs the optimal prediction parameter 63 (the motion vector, the identification number of the reference image that each motion vector points to (reference image index), and the like) to the motion compensation prediction unit 70.
 The intra prediction unit 69 refers to the decoded image 74a within the frame (the decoded image signal within the frame) stored in the intra prediction memory 77, and generates and outputs the predicted image 71 corresponding to the intra prediction mode indicated by the optimal prediction parameter 63.
 Note that the method of generating the predicted image 71 by the intra prediction unit 69 is the same as the operation of the intra prediction unit 8 on the encoding device side, but differs in that, whereas the intra prediction unit 8 generates predicted images 11 corresponding to all the intra prediction modes indicated by the encoding mode 7, this intra prediction unit 69 generates only the predicted image 71 corresponding to the intra prediction mode indicated by the optimal encoding mode 62.
 The motion compensation prediction unit 70 generates and outputs the predicted image 72 from the one or more reference images 76 stored in the motion compensated prediction frame memory 75, on the basis of the motion vector, reference image index, and the like indicated by the input optimal prediction parameter 63.
 Note that the method of generating the predicted image 72 by the motion compensation prediction unit 70 is the operation of the motion compensation prediction unit 9 on the encoding device side with the process of searching a plurality of reference images for a motion vector (corresponding to the operations of the motion detection unit 42 and the interpolated image generation unit 43 shown in FIG. 3) excluded; only the process of generating the predicted image 72 in accordance with the optimal prediction parameter 63 given from the variable length decoding unit 61 is performed. As with the encoding device, when a motion vector refers to pixels outside the frame defined by the reference frame size, the motion compensation prediction unit 70 generates the predicted image 72 by a method such as filling the pixels outside the frame with the pixels at the screen edge. The reference frame size may be defined by the size obtained by extending the decoded frame size to an integer multiple of the decoded macroblock size, or by the decoded frame size itself; the reference frame size is determined by the same procedure as in the encoding device.
 The addition unit 73 adds either the predicted image 71 or the predicted image 72 to the prediction difference signal decoded value 67 output from the inverse quantization/inverse transform unit 66 to generate the decoded image 74.
 This decoded image 74 is stored in the intra prediction memory 77 for use as a reference image (decoded image 74a) for generating intra predicted images of subsequent macroblocks, and is also input to the loop filter unit 78.
 The loop filter unit 78 performs the same operation as the loop filter unit 27 on the encoding device side to generate the reproduced image 79, which is output from this moving picture decoding device. The reproduced image 79 is also stored in the motion compensated prediction frame memory 75 for use as a reference image 76 for subsequent predicted image generation. Note that the size of the reproduced image obtained after all the macroblocks in a frame have been decoded is an integer multiple of the macroblock size. When the size of the reproduced image is larger than the decoded frame size corresponding to the frame size of each frame of the video signal input to the encoding device, the reproduced image includes an extension region in the horizontal or vertical direction; in this case, a decoded image from which the decoded image of the extension region portion has been removed from the reproduced image is output from the decoding device.
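Removing the extension region before output amounts to cropping the reproduced image back to the decoded frame size, as the following illustrative helper shows (names are assumptions for illustration):

```python
def crop_extension(recon, frame_w, frame_h):
    """Drop the right/bottom extension region from a reproduced image.

    recon: 2-D list of pixels whose dimensions are multiples of the
           macroblock size; frame_w/frame_h give the decoded frame size.
    """
    return [row[:frame_w] for row in recon[:frame_h]]
```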
 なお、参照フレームサイズが、復号フレームサイズで規定される場合には、動き補償予測フレームメモリ75に格納された再生画像の拡張領域部分の復号画像は以降の予測画像生成において参照されない。従って、再生画像から拡張領域部分の復号画像を取り除いた復号画像を動き補償予測フレームメモリ75に格納するようにしてもよい。 When the reference frame size is defined by the decoded frame size, the decoded image in the extended region portion of the reproduced image stored in the motion compensated prediction frame memory 75 is not referred to in the subsequent predicted image generation. Therefore, the decoded image obtained by removing the decoded image of the extended area portion from the reproduced image may be stored in the motion compensated prediction frame memory 75.
 以上より、実施の形態1に係る動画像符号化装置によれば、マクロブロックの符号化モード7に応じて分割したマクロ/サブブロック画像5に対して、マクロブロックまたはサブブロックのサイズに応じて複数の変換ブロックサイズを含む変換ブロックのセットを予め定めておき、符号化制御部3が、変換ブロックサイズのセットの中から、符号化効率が最適となる1つの変換ブロックサイズを最適圧縮パラメータ20aに含めて変換・量子化部19へ指示し、変換・量子化部19が、最適予測差分信号13aを、最適圧縮パラメータ20aに含まれる変換ブロックサイズのブロックに分割して変換および量子化処理を行い、圧縮データ21を生成するように構成したので、変換ブロックサイズのセットがマクロブロックまたはサブブロックのサイズに拘らず固定された従来の方法に比べ、同等の符号量で、符号化映像の品質を向上させることが可能になる。 As described above, according to the moving picture coding apparatus of Embodiment 1, a set of transform blocks containing multiple transform block sizes is defined in advance, depending on the size of the macroblock or subblock, for the macro/subblock image 5 divided according to the macroblock coding mode 7; the encoding control unit 3 selects from the set of transform block sizes the one transform block size that gives the best coding efficiency, includes it in the optimal compression parameters 20a, and so instructs the transform/quantization unit 19; and the transform/quantization unit 19 divides the optimal prediction difference signal 13a into blocks of the transform block size contained in the optimal compression parameters 20a and performs transform and quantization processing to generate the compressed data 21. Compared with the conventional method in which the set of transform block sizes is fixed regardless of the macroblock or subblock size, this makes it possible to improve the quality of the encoded video at an equivalent code amount.
 また、可変長符号化部23が、変換ブロックサイズのセットの中から符号化モード7に応じて適応的に選択された変換ブロックサイズをビットストリーム30に多重化するように構成したので、これに対応して、実施の形態1に係る動画像復号装置を、可変長復号部61が、マクロブロックまたはサブブロック単位にビットストリーム60から最適圧縮パラメータ65を復号し、逆量子化・逆変換部66が、この最適圧縮パラメータ65に含まれる変換ブロックサイズ情報に基づいて変換ブロックサイズを決定して、圧縮データ64を当該変換ブロックサイズのブロック単位に逆変換および逆量子化処理するように構成した。そのため、動画像復号装置が動画像符号化装置と同様に定義された変換ブロックサイズのセットの中から符号化装置側で用いた変換ブロックサイズを選択して圧縮データを復号することができるので、実施の形態1に係る動画像符号化装置にて符号化されたビットストリームを正しく復号することが可能になる。 In addition, since the variable length encoding unit 23 is configured to multiplex into the bitstream 30 the transform block size adaptively selected from the set of transform block sizes according to the coding mode 7, the video decoding apparatus of Embodiment 1 is correspondingly configured so that the variable length decoding unit 61 decodes the optimal compression parameters 65 from the bitstream 60 in units of macroblocks or subblocks, and the inverse quantization/inverse transform unit 66 determines the transform block size based on the transform block size information contained in the optimal compression parameters 65 and applies inverse transform and inverse quantization processing to the compressed data 64 in units of blocks of that transform block size. Because the video decoding apparatus can thus select, from a set of transform block sizes defined in the same way as in the video encoding apparatus, the transform block size used on the encoding device side and decode the compressed data, the bitstream encoded by the moving picture encoding apparatus of Embodiment 1 can be decoded correctly.
実施の形態2.
 本実施の形態2では、上記実施の形態1に係る動画像符号化装置の可変長符号化部23の変形例と、同じく上記実施の形態1に係る動画像復号装置の可変長復号部61の変形例を説明する。
Embodiment 2.
In Embodiment 2, a modification of the variable length encoding unit 23 of the video encoding apparatus according to Embodiment 1 and a modification of the variable length decoding unit 61 of the video decoding apparatus according to Embodiment 1 will be described.
 先ず、本実施の形態2に係る動画像符号化装置の可変長符号化部23を説明する。
 図9は、この発明の実施の形態2に係る動画像符号化装置の可変長符号化部23の内部構成を示すブロック図である。なお、図9において図1と同一または相当の部分については同一の符号を付し説明を省略する。また、本実施の形態2に係る動画像符号化装置の構成は上記実施の形態1と同じであり、可変長符号化部23を除く各構成要素の動作も上記実施の形態1と同じであるため、図1~図8を援用する。また、説明の便宜上、本実施の形態2では図2Aに示す符号化モードのセットを用いることを前提とした装置構成および処理方法にするが、図2Bに示す符号化モードのセットを用いることを前提とした装置構成および処理方法にも適用可能であることは言うまでもない。
First, the variable length coding unit 23 of the moving picture coding apparatus according to the second embodiment will be described.
FIG. 9 is a block diagram showing an internal configuration of the variable length encoding unit 23 of the moving picture coding apparatus according to Embodiment 2 of the present invention. In FIG. 9, the same or equivalent parts as in FIG. 1 are given the same reference numerals, and their description is omitted. In addition, the configuration of the video encoding apparatus according to Embodiment 2 is the same as that of Embodiment 1, and the operation of each component other than the variable length encoding unit 23 is also the same as in Embodiment 1, so FIGS. 1 to 8 are referred to. Further, for convenience of explanation, Embodiment 2 assumes an apparatus configuration and processing method that use the set of coding modes shown in FIG. 2A; needless to say, however, it is also applicable to an apparatus configuration and processing method that assume the set of coding modes shown in FIG. 2B.
 図9に示す可変長符号化部23は、符号化モード7(または最適予測パラメータ10a,18a、最適圧縮パラメータ20a)を表す多値信号のインデックス値と2値信号との対応関係を指定した2値化テーブルを格納する2値化テーブルメモリ105と、この2値化テーブルを用いて、符号化制御部3が選択した多値信号の最適符号化モード7a(または最適予測パラメータ10a,18a、最適圧縮パラメータ20a)の多値信号のインデックス値を2値信号103に変換する2値化部92と、コンテキスト生成部99の生成するコンテキスト識別情報102、コンテキスト情報メモリ96、確率テーブルメモリ97および状態遷移テーブルメモリ98を参照して2値化部92が変換した2値信号103を算術符号化して符号化ビット列111を出力し、当該符号化ビット列111をビットストリーム30へ多重化させる算術符号化処理演算部104と、最適符号化モード7a(または最適予測パラメータ10a,18a、最適圧縮パラメータ20a)の発生頻度をカウントして頻度情報94を生成する頻度情報生成部93と、頻度情報94に基づいて2値化テーブルメモリ105の2値化テーブルの多値信号と2値信号との対応関係を更新する2値化テーブル更新部95とを含む。 The variable length encoding unit 23 shown in FIG. 9 includes: a binarization table memory 105 that stores a binarization table specifying the correspondence between the index values of the multilevel signal representing the coding mode 7 (or the optimal prediction parameters 10a, 18a and the optimal compression parameters 20a) and binary signals; a binarization unit 92 that uses this binarization table to convert the multilevel-signal index value of the optimal coding mode 7a (or the optimal prediction parameters 10a, 18a and the optimal compression parameters 20a) selected by the encoding control unit 3 into a binary signal 103; an arithmetic coding processing operation unit 104 that arithmetically encodes the binary signal 103 converted by the binarization unit 92, with reference to the context identification information 102 generated by the context generation unit 99, the context information memory 96, the probability table memory 97 and the state transition table memory 98, outputs an encoded bit string 111, and multiplexes the encoded bit string 111 into the bitstream 30; a frequency information generation unit 93 that counts the occurrence frequency of the optimal coding mode 7a (or the optimal prediction parameters 10a, 18a and the optimal compression parameters 20a) to generate frequency information 94; and a binarization table update unit 95 that updates the correspondence between multilevel signals and binary signals in the binarization table of the binarization table memory 105 based on the frequency information 94.
 以下では、エントロピ符号化されるパラメータとして、符号化制御部3から出力されるマクロブロックの最適符号化モード7aを例に、可変長符号化部23の可変長符号化手順を説明する。同じく符号化対象のパラメータである最適予測パラメータ10a,18a、最適圧縮パラメータ20aについは、最適符号化モード7aと同様の手順で可変長符号化すればよいため説明を省略する。 Hereinafter, the variable-length coding procedure of the variable-length coding unit 23 will be described using the macroblock optimum coding mode 7a output from the coding control unit 3 as an example of the entropy-coded parameter. Similarly, the optimal prediction parameters 10a and 18a and the optimal compression parameter 20a, which are parameters to be encoded, may be variable-length encoded in the same procedure as in the optimal encoding mode 7a, and the description thereof is omitted.
 なお、本実施の形態2の符号化制御部3は、コンテキスト情報初期化フラグ91、種別信号100、周辺ブロック情報101、2値化テーブル更新フラグ113を出力するものとする。各情報の詳細は後述する。 Note that the encoding control unit 3 according to the second embodiment outputs a context information initialization flag 91, a type signal 100, peripheral block information 101, and a binarization table update flag 113. Details of each information will be described later.
 初期化部90は、符号化制御部3から指示されるコンテキスト情報初期化フラグ91に応じて、コンテキスト情報メモリ96に格納されているコンテキスト情報106の初期化を行って初期状態にする。初期化部90による初期化処理の詳細は後述する。 The initialization unit 90 initializes the context information 106 stored in the context information memory 96 in accordance with the context information initialization flag 91 instructed from the encoding control unit 3, and sets the initial state. Details of the initialization processing by the initialization unit 90 will be described later.
 2値化部92は、2値化テーブルメモリ105に格納されている2値化テーブルを参照して、符号化制御部3から入力される最適符号化モード7aの種類を表す多値信号のインデックス値を2値信号103へ変換し、算術符号化処理演算部104へ出力する。 The binarization unit 92 refers to the binarization table stored in the binarization table memory 105, converts the multilevel-signal index value representing the type of the optimal coding mode 7a input from the encoding control unit 3 into a binary signal 103, and outputs it to the arithmetic coding processing operation unit 104.
 図10は、2値化テーブルメモリ105が保持する2値化テーブルの一例を示す図である。図10に示す「符号化モード」は、図2Aに示した符号化モード(mb_mode0~3)にスキップモード(mb_skip:符号化装置側で隣接するマクロブロックの動きベクトルを使って動き補償された予測画像を復号装置側で復号画像に用いるモード)を加えた5種類の符号化モード7であり、各符号化モードに対応する「インデックス」値が格納されている。また、これら符号化モードのインデックス値はそれぞれ1~3ビットで2値化され、「2値信号」として格納されている。ここでは、2値信号の各ビットを「ビン」番号と呼ぶ。
 なお、詳細は後述するが、図10の例では、発生頻度の高い符号化モードに小さいインデックス値が割り当てられており、また、2値信号も1ビットと短く設定されている。
FIG. 10 is a diagram illustrating an example of the binarization table held in the binarization table memory 105. The "coding modes" shown in FIG. 10 are five coding modes 7 consisting of the coding modes shown in FIG. 2A (mb_mode0 to 3) plus a skip mode (mb_skip: a mode in which a predicted image, motion-compensated on the encoding device side using the motion vectors of adjacent macroblocks, is used as the decoded image on the decoding device side), and an "index" value corresponding to each coding mode is stored. The index values of these coding modes are each binarized into 1 to 3 bits and stored as a "binary signal". Here, each bit of the binary signal is called a "bin" number.
Although details will be described later, in the example of FIG. 10, a small index value is assigned to a coding mode having a high occurrence frequency, and the binary signal is also set to be as short as 1 bit.
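The index-to-binary-signal mapping described above can be sketched as a simple lookup. The exact codewords of Fig. 10 are not reproduced in the text, so the codewords below are illustrative assumptions; what matters is the stated principle that frequent modes receive small indices and short (1- to 3-bin) binary signals.

```python
# Illustrative binarization table in the spirit of Fig. 10: codeword values
# are assumptions, but the most frequent mode (smallest index) gets the
# shortest binary signal, and every codeword is 1-3 bins long.
BINARIZATION = {
    0: "0",    # e.g. mb_skip, most frequent -> 1 bin
    1: "100",
    2: "101",
    3: "110",
    4: "111",
}

def binarize(index):
    """Convert a multilevel-signal index value into its binary signal
    (a string of bins, each '0' or '1')."""
    return BINARIZATION[index]

assert binarize(0) == "0"
assert all(1 <= len(code) <= 3 for code in BINARIZATION.values())
```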
 符号化制御部3が出力する最適符号化モード7aは、2値化部92へ入力されると共に頻度情報生成部93へも入力される。 The optimal encoding mode 7 a output from the encoding control unit 3 is input to the binarization unit 92 and also to the frequency information generation unit 93.
 頻度情報生成部93は、この最適符号化モード7aに含まれる符号化モードのインデックス値の発生頻度(符号化制御部が選択する符号化モードの選択頻度)をカウントして頻度情報94を作成し、後述の2値化テーブル更新部95へ出力する。 The frequency information generation unit 93 counts the occurrence frequency of the coding-mode index values contained in the optimal coding mode 7a (the frequency with which each coding mode is selected by the encoding control unit), creates frequency information 94, and outputs it to the binarization table update unit 95 described later.
 確率テーブルメモリ97は、2値信号103に含まれる各ビンのシンボル値「0」または「1」のうち発生確率が高いいずれかのシンボル(MPS:Most Probable Symbol)とその発生確率の組み合わせを複数組格納したテーブルを保持するメモリである。 The probability table memory 97 is a memory holding a table that stores multiple combinations of, for each bin contained in the binary signal 103, whichever of the symbol values "0" or "1" has the higher occurrence probability (MPS: Most Probable Symbol) and that occurrence probability.
 図11は、確率テーブルメモリ97が保持する確率テーブルの一例を示す図である。図11では、0.5~1.0の間の離散的な確率値(「発生確率」)に対し、各々「確率テーブル番号」を割り当てている。 FIG. 11 is a diagram showing an example of a probability table held in the probability table memory 97. As shown in FIG. In FIG. 11, “probability table numbers” are assigned to discrete probability values (“occurrence probabilities”) between 0.5 and 1.0.
 状態遷移テーブルメモリ98は、確率テーブルメモリ97に格納された「確率テーブル番号」と、その確率テーブル番号が示す「0」または「1」のうちのMPSの符号化前の確率状態から符号化後の確率状態への状態遷移の組み合わせを複数組格納したテーブルを保持するメモリである。 The state transition table memory 98 is a memory holding a table that stores multiple combinations of a "probability table number" stored in the probability table memory 97 and the state transition, from the probability state before coding to the probability state after coding, of the MPS ("0" or "1") indicated by that probability table number.
 図12は、状態遷移テーブルメモリ98が保持する状態遷移テーブルの一例を示す図である。図12の「確率テーブル番号」、「LPS符号化後の確率遷移」、「MPS符号化後の確率遷移」はそれぞれ図11に示す確率テーブル番号に対応する。
 例えば、図12中に枠で囲った「確率テーブル番号1」の確率状態(図11よりMPSの発生確率0.527)のときに、「0」または「1」のうち発生確率が低いいずれかのシンボル(LPS:Least Probable Symbol)を符号化したことによって、確率状態は「LPS符号化後の確率遷移」より確率テーブル番号0(図11よりMPSの発生確率0.500)へ遷移することを表す。即ち、LPSが発生したことによって、MPSの発生確率は小さくなっている。
 逆に、MPSを符号化すると、確率状態は「MPS符号化後の確率遷移」より確率テーブル番号2(図11よりMPSの発生確率0.550)へ遷移することを表す。即ち、MPSが発生したことによって、MPSの発生確率は大きくなっている。
FIG. 12 is a diagram illustrating an example of a state transition table held in the state transition table memory 98. “Probability table number”, “Probability transition after LPS encoding”, and “Probability transition after MPS encoding” in FIG. 12 respectively correspond to the probability table numbers shown in FIG.
For example, in the probability state of "probability table number 1" framed in FIG. 12 (MPS occurrence probability 0.527 from FIG. 11), coding whichever of "0" or "1" has the lower occurrence probability (LPS: Least Probable Symbol) causes the probability state to transition, per "probability transition after LPS coding", to probability table number 0 (MPS occurrence probability 0.500 from FIG. 11). That is, the occurrence of an LPS reduces the occurrence probability of the MPS.
Conversely, when MPS is encoded, the probability state represents a transition from “probability transition after MPS encoding” to probability table number 2 (MPS occurrence probability 0.550 from FIG. 11). That is, the occurrence probability of MPS is increased due to the occurrence of MPS.
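The interaction of the probability table (Fig. 11) and the state transition table (Fig. 12) can be sketched with a toy state machine. Only the three states mentioned in the worked example above are filled in; the numeric probabilities and the neighbouring-state transitions beyond those are assumptions, since the full tables are in the figures rather than the text.

```python
# Toy probability table (cf. Fig. 11): probability-table number -> MPS probability.
MPS_PROBABILITY = {0: 0.500, 1: 0.527, 2: 0.550}

# Toy state transition table (cf. Fig. 12):
# state -> (next state after coding an LPS, next state after coding an MPS).
STATE_TRANSITION = {0: (0, 1), 1: (0, 2), 2: (1, 3)}

def next_state(state, coded_mps):
    """Return the probability-table number after coding one symbol."""
    after_lps, after_mps = STATE_TRANSITION[state]
    return after_mps if coded_mps else after_lps

# Coding an MPS in state 1 raises the MPS probability estimate (state 1 -> 2,
# 0.527 -> 0.550); coding an LPS lowers it (state 1 -> 0, 0.527 -> 0.500),
# matching the worked example in the text.
assert next_state(1, coded_mps=True) == 2
assert next_state(1, coded_mps=False) == 0
```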
 コンテキスト生成部99は、符号化制御部3から入力される符号化対象のパラメータ(最適符号化モード7a、最適予測パラメータ10a,18a、最適圧縮パラメータ20a)の種別を示す種別信号100と周辺ブロック情報101とを参照して、符号化対象のパラメータを2値化して得られる2値信号103のビン毎にコンテキスト識別情報102を生成する。この説明中では、種別信号100は、符号化対象マクロブロックの最適符号化モード7aである。また、周辺ブロック情報101は、符号化対象マクロブロックに隣接するマクロブロックの最適符号化モード7aである。
 以下、コンテキスト生成部99によるコンテキスト識別情報の生成手順を説明する。
The context generation unit 99 refers to the type signal 100 indicating the type of the parameter to be encoded (the optimal coding mode 7a, the optimal prediction parameters 10a, 18a, or the optimal compression parameters 20a) input from the encoding control unit 3, and to the peripheral block information 101, and generates context identification information 102 for each bin of the binary signal 103 obtained by binarizing the parameter to be encoded. In this description, the type signal 100 is the optimal coding mode 7a of the macroblock to be encoded, and the peripheral block information 101 is the optimal coding mode 7a of the macroblocks adjacent to the macroblock to be encoded.
Hereinafter, a procedure for generating context identification information by the context generation unit 99 will be described.
 図13(a)は、図10に示す2値化テーブルを二分木表現で表した図である。ここでは、図13(b)に示す太枠の符号化対象マクロブロックと、この符号化対象マクロブロックに隣接する周辺ブロックA,Bとを例に用いて説明する。
 図13(a)において、黒丸をノード、ノード間を結ぶ線をパスと呼ぶ。二分木の終端ノードには、2値化対象の多値信号のインデックスが割り当てられている。また、紙面上の上から下へ向って、二分木の深さがビン番号に対応し、ルートノードから終端ノードまでの各パスに割り当てられたシンボル(0または1)を結合したビット列が、各終端ノードに割り当てられた多値信号のインデックスに対応する2値信号103になる。二分木の各親ノード(終端ではないノード)に対し、周辺ブロックA,Bの情報に応じて1以上のコンテキスト識別情報が用意されている。
FIG. 13A is a diagram showing the binary table shown in FIG. 10 in binary tree representation. Here, a thick-framed encoding target macroblock shown in FIG. 13B and peripheral blocks A and B adjacent to the encoding target macroblock will be described as an example.
In FIG. 13A, a black dot is called a node, and a line connecting nodes is called a path. The indices of the multilevel signal to be binarized are assigned to the terminal nodes of the binary tree. Going from top to bottom of the page, the depth of the binary tree corresponds to the bin number, and the bit string obtained by concatenating the symbols (0 or 1) assigned to each path from the root node to a terminal node is the binary signal 103 corresponding to the multilevel-signal index assigned to that terminal node. For each parent node (non-terminal node) of the binary tree, one or more pieces of context identification information are prepared according to the information of the peripheral blocks A and B.
 例えば、図13(a)において、ルートノードに対してC0,C1,C2の3つのコンテキスト識別情報が用意されている場合に、コンテキスト生成部99は、隣接する周辺ブロックA,Bの周辺ブロック情報101を参照して、下式(4)よりC0,C1,C2の3つのコンテキスト識別情報のうちいずれか1つを選択する。コンテキスト生成部99は、選択したコンテキスト識別情報をコンテキスト識別情報102として出力する。
Figure JPOXMLDOC01-appb-I000001
For example, in FIG. 13A, when three pieces of context identification information C0, C1 and C2 are prepared for the root node, the context generation unit 99 refers to the peripheral block information 101 of the adjacent peripheral blocks A and B and selects one of the three pieces of context identification information C0, C1 and C2 by equation (4) below. The context generation unit 99 outputs the selected context identification information as the context identification information 102.
Figure JPOXMLDOC01-appb-I000001
 上式(4)は、周辺ブロックA,BをマクロブロックXとした場合に、周辺ブロックA,Bの符号化モードが“0”(mb_skip)ならば符号化対象マクロブロックの符号化モードも“0”(mb_skip)になる確率が高いという仮定のもとに用意された式である。よって、上式(4)より選択したコンテキスト識別情報102も同様の仮定に基づくものである。 Equation (4) above is prepared under the assumption that, with peripheral blocks A and B each regarded as a macroblock X, if the coding mode of peripheral blocks A and B is "0" (mb_skip), the probability that the coding mode of the macroblock to be encoded is also "0" (mb_skip) is high. The context identification information 102 selected by equation (4) is therefore based on the same assumption.
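Equation (4) itself is only present as an image placeholder above, so the following is a sketch of one plausible form consistent with the surrounding description: the root-node context is chosen from C0, C1, C2 by counting how many of the neighbours A and B are skip-coded. The counting formula and the constant `MB_SKIP` are assumptions for illustration, not the patent's actual equation.

```python
MB_SKIP = 0  # assumed index of the skip mode in the binarization table

def select_root_context(mode_a, mode_b):
    """Choose one of the root-node contexts C0/C1/C2 from the coding modes
    of neighbouring blocks A and B. Counting skip-coded neighbours matches
    the stated assumption that skip-coded neighbours make mb_skip more
    likely for the current macroblock; the exact equation (4) may differ."""
    count = int(mode_a == MB_SKIP) + int(mode_b == MB_SKIP)
    return f"C{count}"  # C0, C1 or C2

assert select_root_context(MB_SKIP, MB_SKIP) == "C2"  # both neighbours skipped
assert select_root_context(3, MB_SKIP) == "C1"        # one neighbour skipped
assert select_root_context(3, 2) == "C0"              # neither skipped
```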
 なお、ルートノード以外の親ノードには、それぞれ1つのコンテキスト識別情報(C3,C4,C5)が割り当てられている。 Note that one context identification information (C3, C4, C5) is assigned to each parent node other than the root node.
 コンテキスト識別情報102で識別されるコンテキスト情報には、MPSの値(0または1)と、その発生確率を近似する確率テーブル番号とが保持されており、今、初期状態にある。このコンテキスト情報はコンテキスト情報メモリ96が格納している。 The context information identified by the context identification information 102 holds an MPS value (0 or 1) and a probability table number that approximates the occurrence probability, and is in an initial state. This context information is stored in the context information memory 96.
 算術符号化処理演算部104は、2値化部92から入力される1~3ビットの2値信号103を、ビン毎に算術符号化して符号化ビット列111を生成し、ビットストリーム30に多重化させる。以下、コンテキスト情報に基づく算術符号化手順を説明する。 The arithmetic coding processing operation unit 104 arithmetically codes the 1 to 3 bit binary signal 103 input from the binarizing unit 92 for each bin to generate an encoded bit string 111 and multiplexes the encoded bit string 111 into the bit stream 30. Let Hereinafter, an arithmetic coding procedure based on the context information will be described.
 算術符号化処理演算部104は、先ず、コンテキスト情報メモリ96を参照して、2値信号103のビン0に対応するコンテキスト識別情報102に基づくコンテキスト情報106を得る。続いて、算術符号化処理演算部104は、確率テーブルメモリ97を参照して、コンテキスト情報106に保持されている確率テーブル番号107に対応するビン0のMPS発生確率108を特定する。 First, the arithmetic coding processing operation unit 104 refers to the context information memory 96 to obtain context information 106 based on the context identification information 102 corresponding to the bin 0 of the binary signal 103. Subsequently, the arithmetic coding processing calculation unit 104 refers to the probability table memory 97 and specifies the MPS occurrence probability 108 of bin 0 corresponding to the probability table number 107 held in the context information 106.
 続いて算術符号化処理演算部104は、コンテキスト情報106に保持されているMPSの値(0または1)と、特定されたMPS発生確率108とに基づいて、ビン0のシンボル値109(0または1)を算術符号化する。続いて、算術符号化処理演算部104は、状態遷移テーブルメモリ98を参照して、コンテキスト情報106に保持されている確率テーブル番号107と、先に算術符号化したビン0のシンボル値109とに基づいて、ビン0のシンボル符号化後の確率テーブル番号110を得る。 Subsequently, the arithmetic coding processing operation unit 104 arithmetically encodes the symbol value 109 (0 or 1) of bin 0 based on the MPS value (0 or 1) held in the context information 106 and the identified MPS occurrence probability 108. Subsequently, the arithmetic coding processing operation unit 104 refers to the state transition table memory 98 and, based on the probability table number 107 held in the context information 106 and the previously arithmetically encoded symbol value 109 of bin 0, obtains the probability table number 110 after symbol coding of bin 0.
 続いて算術符号化処理演算部104は、コンテキスト情報メモリ96に格納されているビン0のコンテキスト情報106の確率テーブル番号(即ち、確率テーブル番号107)の値を、状態遷移後の確率テーブル番号(即ち、先に状態遷移テーブルメモリ98から取得した、ビン0のシンボル符号化後の確率テーブル番号110)へ更新する。 Subsequently, the arithmetic coding processing operation unit 104 updates the value of the probability table number of the bin-0 context information 106 stored in the context information memory 96 (that is, the probability table number 107) to the probability table number after the state transition (that is, the probability table number 110 after symbol coding of bin 0, obtained earlier from the state transition table memory 98).
 算術符号化処理演算部104は、ビン1,2についてもビン0と同様に、各々のコンテキスト識別情報102で識別されるコンテキスト情報106に基づく算術符号化を行い、各ビンのシンボル符号化後にコンテキスト情報106の更新を行う。
 算術符号化処理演算部104は、すべてのビンのシンボルを算術符号化して得られる符号化ビット列111を出力し、可変長符号化部23がビットストリーム30に多重化する。
For bins 1 and 2 as well, the arithmetic coding processing operation unit 104 performs arithmetic coding based on the context information 106 identified by each piece of context identification information 102, in the same way as for bin 0, and updates the context information 106 after the symbol coding of each bin.
The arithmetic coding processing operation unit 104 outputs the encoded bit string 111 obtained by arithmetically encoding the symbols of all bins, and the variable length encoding unit 23 multiplexes it into the bitstream 30.
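The per-bin procedure above, look up the context, code the symbol against the context's MPS and probability state, then advance the state, can be sketched as follows. The interval-subdivision arithmetic itself is deliberately omitted (marked by a comment); the sketch shows only the context handling described in the text, and all names are illustrative.

```python
def code_binary_signal(bins, context_ids, contexts, state_transition):
    """Sketch of the per-bin loop of the arithmetic coding operation unit:
    for each bin, fetch the context identified for it, decide whether the
    bin equals the context's MPS, and update the context's probability
    state via the state transition table after coding the symbol."""
    for bin_value, ctx_id in zip(bins, context_ids):
        ctx = contexts[ctx_id]              # e.g. {'mps': 0, 'state': 1}
        is_mps = (bin_value == ctx['mps'])
        # ... arithmetically code `bin_value` using the probability that
        #     corresponds to ctx['state'] (interval arithmetic omitted) ...
        after_lps, after_mps = state_transition[ctx['state']]
        ctx['state'] = after_mps if is_mps else after_lps  # state update

contexts = {'C0': {'mps': 0, 'state': 1}}
code_binary_signal([0], ["C0"], contexts, {0: (0, 1), 1: (0, 2), 2: (1, 3)})
assert contexts['C0']['state'] == 2  # coding the MPS raised the state
```

As in the text, the same loop body serves every bin; only the context identification information selected per bin differs.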
 上述の通り、コンテキスト識別情報102で識別されるコンテキスト情報106は、シンボルを算術符号化する毎に更新される。即ち、それは各ノードの確率状態がシンボル符号化毎に遷移していくことを意味する。そして、コンテキスト情報106の初期化、即ち、確率状態のリセットは前述の初期化部90により行われる。
 初期化部90は、符号化制御部3のコンテキスト情報初期化フラグ91による指示に応じて初期化するが、この初期化はスライスの先頭等で行われる。各コンテキスト情報106の初期状態(MPSの値とその発生確率を近似する確率テーブル番号の初期値)については、予め複数のセットを用意しておき、いずれの初期状態を選択するかどうかを符号化制御部3がコンテキスト情報初期化フラグ91に含めて、初期化部90へ指示するようにしてもよい。
As described above, the context information 106 identified by the context identification information 102 is updated every time a symbol is arithmetically encoded. That is, it means that the probability state of each node transitions for each symbol encoding. The initialization of the context information 106, that is, the resetting of the probability state is performed by the initialization unit 90 described above.
The initialization unit 90 performs initialization in response to an instruction by the context information initialization flag 91 from the encoding control unit 3; this initialization is performed at the head of a slice or the like. Multiple sets of the initial state of each piece of context information 106 (the MPS value and the initial value of the probability table number approximating its occurrence probability) may be prepared in advance, and the encoding control unit 3 may include in the context information initialization flag 91 an indication of which initial state to select and so instruct the initialization unit 90.
 2値化テーブル更新部95は、符号化制御部3から指示される2値化テーブル更新フラグ113に基づき、頻度情報生成部93により生成された、符号化対象パラメータ(ここでは最適符号化モード7a)のインデックス値の発生頻度を表す頻度情報94を参照し、2値化テーブルメモリ105を更新する。以下、2値化テーブル更新部95による2値化テーブルを更新する手順を説明する。 Based on the binarization table update flag 113 instructed by the encoding control unit 3, the binarization table update unit 95 refers to the frequency information 94, generated by the frequency information generation unit 93, representing the occurrence frequency of the index values of the parameter to be encoded (here, the optimal coding mode 7a), and updates the binarization table memory 105. The procedure by which the binarization table update unit 95 updates the binarization table is described below.
 この例では、符号化対象パラメータである最適符号化モード7aが指定する符号化モードの発生頻度に応じて、発生頻度が最も高い符号化モードを短い符号語で2値化できるように2値化テーブルの符号化モードとインデックスの対応関係を更新し、符号量の低減を図る。 In this example, according to the occurrence frequencies of the coding modes specified by the optimal coding mode 7a, which is the parameter to be encoded, the correspondence between coding modes and indices in the binarization table is updated so that the most frequent coding mode can be binarized with a short codeword, thereby reducing the code amount.
 図14は、更新後の2値化テーブルの一例を示す図であり、更新前の2値化テーブルの状態が図10に示す状態であると仮定した場合の更新後状態である。2値化テーブル更新部95は、頻度情報94に従って、例えばmb_mode3の発生頻度が最も高い場合、そのmb_mode3に短い符号語の2値信号が割り当てられるように最も小さいインデックス値を割り当てる。 FIG. 14 is a diagram illustrating an example of the binarized table after the update, and is a post-update state when it is assumed that the state of the binarized table before the update is the state illustrated in FIG. For example, when the occurrence frequency of mb_mode3 is the highest, the binarization table update unit 95 assigns the smallest index value so that a binary signal of a short code word is assigned to the mb_mode3.
 また、2値化テーブル更新部95は、2値化テーブルを更新した場合に、更新した2値化テーブルを復号装置側で識別できるようにするための2値化テーブル更新識別情報112を生成して、ビットストリーム30に多重化させる必要がある。例えば、符号化対象パラメータ毎に複数の2値化テーブルがある場合、各符号化対象パラメータを識別できるIDを符号化装置側および復号装置側にそれぞれ予め付与しておき、2値化テーブル更新部95は、更新後の2値化テーブルのIDを2値化テーブル更新識別情報112として出力し、ビットストリーム30に多重化させるようにしてもよい。 In addition, when the binarization table update unit 95 has updated the binarization table, it needs to generate binarization table update identification information 112 that enables the decoding device side to identify the updated binarization table, and to multiplex it into the bitstream 30. For example, when there are multiple binarization tables for each parameter to be encoded, an ID identifying each parameter to be encoded may be assigned in advance on both the encoding device side and the decoding device side, and the binarization table update unit 95 may output the ID of the updated binarization table as the binarization table update identification information 112 and multiplex it into the bitstream 30.
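The Fig. 10 to Fig. 14 style update, reassigning the smallest index (and hence the shortest codeword) to the most frequently selected mode, can be sketched as a ranking step. The function name and the codeword list are illustrative assumptions.

```python
def update_binarization_table(frequencies, codewords_by_rank):
    """Rebuild the mode -> binary-signal mapping so that the most
    frequently selected coding mode receives the smallest index and
    therefore the shortest codeword, as in the Fig. 10 -> Fig. 14 update.
    `frequencies` maps mode name -> occurrence count; `codewords_by_rank`
    lists codewords from shortest to longest."""
    ranked = sorted(frequencies, key=frequencies.get, reverse=True)
    return {mode: codewords_by_rank[rank] for rank, mode in enumerate(ranked)}

freq = {"mb_skip": 10, "mb_mode0": 20, "mb_mode1": 5, "mb_mode2": 2, "mb_mode3": 40}
table = update_binarization_table(freq, ["0", "100", "101", "110", "111"])
assert table["mb_mode3"] == "0"    # most frequent mode -> 1-bin codeword
assert table["mb_mode2"] == "111"  # least frequent mode -> 3-bin codeword
```

Both sides must perform (or be signalled) the same reassignment, which is why the update is accompanied by the binarization table update identification information 112.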
 更新タイミングの制御は、符号化制御部3が、スライスの先頭で符号化対象パラメータの頻度情報94を参照して、符号化対象パラメータの発生頻度分布が所定の許容範囲以上に大きく変わったと判定した場合に、2値化テーブル更新フラグ113を出力して行う。可変長符号化部23は、2値化テーブル更新フラグ113をビットストリーム30のスライスヘッダに多重化すればよい。また、可変長符号化部23は、2値化テーブル更新フラグ113が「2値化テーブルの更新あり」を示している場合には、符号化モード、圧縮パラメータ、予測パラメータの2値化テーブルのうち、どの2値化テーブルを更新したかを示す2値化テーブル更新識別情報112をビットストリーム30へ多重化する。 The update timing is controlled as follows: at the head of a slice, the encoding control unit 3 refers to the frequency information 94 of the parameter to be encoded, and when it determines that the occurrence frequency distribution of that parameter has changed beyond a predetermined allowable range, it outputs the binarization table update flag 113. The variable length encoding unit 23 may multiplex the binarization table update flag 113 into the slice header of the bitstream 30. Further, when the binarization table update flag 113 indicates "binarization table updated", the variable length encoding unit 23 multiplexes into the bitstream 30 the binarization table update identification information 112 indicating which of the binarization tables for the coding mode, the compression parameters and the prediction parameters has been updated.
 また、符号化制御部3は、スライスの先頭以外のタイミングで2値化テーブルの更新を指示してもよく、例えば任意のマクロブロックの先頭で2値化テーブル更新フラグ113を出力して更新指示してもよい。この場合には、2値化テーブル更新部95が、2値化テーブルの更新を行ったマクロブロック位置を特定する情報を出力し、可変長符号化部23がその情報もビットストリーム30に多重化する必要がある。 The encoding control unit 3 may also instruct an update of the binarization table at a timing other than the head of a slice; for example, it may issue an update instruction by outputting the binarization table update flag 113 at the head of an arbitrary macroblock. In that case, the binarization table update unit 95 outputs information specifying the macroblock position at which the binarization table was updated, and the variable length encoding unit 23 needs to multiplex that information into the bitstream 30 as well.
 なお、符号化制御部3は、2値化テーブル更新部95へ2値化テーブル更新フラグ113を出力して2値化テーブルを更新させた場合には、初期化部90へコンテキスト情報初期化フラグ91を出力して、コンテキスト情報メモリ96の初期化を行う必要がある。 Note that, when the encoding control unit 3 has output the binarization table update flag 113 to the binarization table update unit 95 to have the binarization table updated, it needs to output the context information initialization flag 91 to the initialization unit 90 to initialize the context information memory 96.
 次に、本実施の形態2に係る動画像復号装置の可変長復号部61を説明する。
 図15は、この発明の実施の形態2に係る動画像復号装置の可変長復号部61の内部構成を示すブロック図である。なお、本実施の形態2に係る動画像復号装置の構成は上記実施の形態1と同じであり、可変長復号部61を除く各構成要素の動作も上記実施の形態1と同じであるため、図1~図8を援用する。
Next, the variable length decoding unit 61 of the video decoding apparatus according to the second embodiment will be described.
FIG. 15 is a block diagram showing an internal configuration of the variable length decoding unit 61 of the video decoding apparatus according to Embodiment 2 of the present invention. The configuration of the moving picture decoding apparatus according to the second embodiment is the same as that of the first embodiment, and the operation of each component other than the variable length decoding unit 61 is the same as that of the first embodiment. 1 to 8 are referred to.
 図15に示す可変長復号部61は、コンテキスト生成部122が生成するコンテキスト識別情報126、コンテキスト情報メモリ128、確率テーブルメモリ131、および状態遷移テーブルメモリ135を参照してビットストリーム60に多重化された最適符号化モード62(または最適予測パラメータ63、最適圧縮パラメータ65)を表す符号化ビット列133を算術復号して2値信号137を生成する算術復号処理演算部127と、2値信号で表された最適符号化モード62(または最適予測パラメータ63、最適圧縮パラメータ65)と多値信号との対応関係を指定した2値化テーブル139を格納する2値化テーブルメモリ143と、2値化テーブル139を用いて、算術復号処理演算部127が生成した2値信号137を多値信号の復号値140へ変換する逆2値化部138とを含む。 The variable length decoding unit 61 shown in FIG. 15 includes: an arithmetic decoding processing operation unit 127 that, with reference to the context identification information 126 generated by the context generation unit 122, the context information memory 128, the probability table memory 131 and the state transition table memory 135, arithmetically decodes the encoded bit string 133 representing the optimal coding mode 62 (or the optimal prediction parameters 63 and the optimal compression parameters 65) multiplexed into the bitstream 60 to generate a binary signal 137; a binarization table memory 143 that stores a binarization table 139 specifying the correspondence between the optimal coding mode 62 (or the optimal prediction parameters 63 and the optimal compression parameters 65) expressed as a binary signal and multilevel signals; and an inverse binarization unit 138 that uses the binarization table 139 to convert the binary signal 137 generated by the arithmetic decoding processing operation unit 127 into the decoded value 140 of the multilevel signal.
 以下では、エントロピ復号されるパラメータとして、ビットストリーム60に含まれるマクロブロックの最適符号化モード62を例に、可変長復号部61の可変長復号手順を説明する。同じく復号対象のパラメータである最適予測パラメータ63、最適圧縮パラメータ65については、最適符号化モード62と同様の手順で可変長復号すればよいため説明を省略する。 Hereinafter, the variable-length decoding procedure of the variable-length decoding unit 61 will be described taking the optimal encoding mode 62 of the macroblock included in the bitstream 60 as an example of the entropy-decoded parameter. Similarly, the optimal prediction parameter 63 and the optimal compression parameter 65 that are parameters to be decoded may be variable-length decoded in the same procedure as in the optimal encoding mode 62, and thus description thereof is omitted.
 なお、本実施の形態2のビットストリーム60には、符号化装置側にて多重化されたコンテキスト初期化情報121、符号化ビット列133、2値化テーブル更新フラグ142、2値化テーブル更新識別情報144が含まれている。各情報の詳細は後述する。 Note that the bitstream 60 of Embodiment 2 contains the context initialization information 121, the encoded bit string 133, the binarization table update flag 142 and the binarization table update identification information 144 multiplexed on the encoding device side. Details of each piece of information will be described later.
 初期化部120は、スライスの先頭等でコンテキスト情報メモリ128に格納されているコンテキスト情報の初期化を行う。あるいは、初期化部120に、コンテキスト情報の初期状態(MPSの値とその発生確率を近似する確率テーブル番号の初期値)について予め複数のセットを用意しておき、コンテキスト初期化情報121の復号値に対応する初期状態をセット中から選択するようにしてもよい。 The initialization unit 120 initializes the context information stored in the context information memory 128 at the head of a slice or the like. Alternatively, multiple sets of the initial state of the context information (the MPS value and the initial value of the probability table number approximating its occurrence probability) may be prepared in advance in the initialization unit 120, and the initial state corresponding to the decoded value of the context initialization information 121 may be selected from among the sets.
 The context generation unit 122 generates the context identification information 126 by referring to the type signal 123, which indicates the type of the parameter to be decoded (optimal encoding mode 62, optimal prediction parameters 63, or optimal compression parameters 65), and to the peripheral block information 124.
 The type signal 123 is a signal representing the type of the parameter to be decoded; which parameter is to be decoded is determined according to the syntax held in the variable length decoding unit 61. The encoding device side and the decoding device side must therefore hold the same syntax; here, the encoding control unit 3 on the encoding device side is assumed to hold that syntax. On the encoding device side, according to the syntax held by the encoding control unit 3, the type of the parameter to be encoded next and the value (index value) of that parameter, that is, the type signal 100, are sequentially output to the variable length encoding unit 23.
 The peripheral block information 124 is information, such as the encoding mode, obtained by decoding a macroblock or subblock. It is stored in a memory (not shown) in the variable length decoding unit 61 for use as peripheral block information 124 in decoding subsequent macroblocks or subblocks, and is output to the context generation unit 122 as necessary.
 The procedure by which the context generation unit 122 generates the context identification information 126 is the same as the operation of the context generation unit 99 on the encoding device side. The context generation unit 122 on the decoding device side likewise generates context identification information 126 for each bin of the binarization table 139 referred to by the inverse binarization unit 138.
 The context information of each bin holds, as probability information for arithmetically decoding that bin, the MPS value (0 or 1) and the probability table number specifying the occurrence probability of that MPS.
 The probability table memory 131 and the state transition table memory 135 store the same probability table (FIG. 11) and state transition table (FIG. 12) as the probability table memory 97 and the state transition table memory 98 on the encoding device side.
 The arithmetic decoding processing operation unit 127 arithmetically decodes, bin by bin, the encoded bit string 133 multiplexed into the bitstream 60 to generate the binary signal 137, and outputs it to the inverse binarization unit 138.
 First, the arithmetic decoding processing operation unit 127 refers to the context information memory 128 to obtain the context information 129 corresponding to the context identification information 126 of each bin of the encoded bit string 133. Next, the arithmetic decoding processing operation unit 127 refers to the probability table memory 131 to identify the MPS occurrence probability 132 of each bin corresponding to the probability table number 130 held in the context information 129.
 Next, based on the MPS value (0 or 1) held in the context information 129 and the identified MPS occurrence probability 132, the arithmetic decoding processing operation unit 127 arithmetically decodes the encoded bit string 133 input to it, obtaining the symbol value 134 (0 or 1) of each bin. After decoding the symbol value of each bin, the arithmetic decoding processing operation unit 127 refers to the state transition table memory 135 and, by the same procedure as the arithmetic encoding processing operation unit 104 on the encoding device side, obtains the probability table number 136 of each bin after symbol decoding (after the state transition), based on the decoded symbol value 134 of the bin and the probability table number 130 held in the context information 129.
 Next, the arithmetic decoding processing operation unit 127 updates the value of the probability table number of the context information 129 of each bin stored in the context information memory 128 (that is, the probability table number 130) to the probability table number after the state transition (that is, the probability table number 136 after symbol decoding of each bin, obtained earlier from the state transition table memory 135).
 The arithmetic decoding processing operation unit 127 outputs, to the inverse binarization unit 138, the binary signal 137 obtained by concatenating the symbols of the bins resulting from the arithmetic decoding.
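The per-bin flow just described — look up the context, map its probability table number to an MPS occurrence probability, decode the bin, then advance the probability table number via the state transition table — can be sketched as follows. The tables, the transition values, and the stubbed decoding core are simplified placeholders, not the actual arithmetic of this specification:

```python
# Simplified sketch of the per-bin decoding loop of units 127/128/131/135.
# PROB_TABLE maps a probability table number to an MPS occurrence probability;
# TRANS_MPS / TRANS_LPS give the next probability table number after decoding
# an MPS or LPS. All values are illustrative, and decode_decision() stands in
# for the real arithmetic-decoding core that reads the encoded bit string.

PROB_TABLE = {0: 0.50, 1: 0.60, 2: 0.75, 3: 0.90}
TRANS_MPS = {0: 1, 1: 2, 2: 3, 3: 3}   # confidence grows after an MPS
TRANS_LPS = {0: 0, 1: 0, 2: 1, 3: 2}   # confidence drops after an LPS

def decode_bins(ctx_mem, ctx_ids, decode_decision):
    """Decode one bin per context ID and update each context's state."""
    bins = []
    for ctx_id in ctx_ids:
        ctx = ctx_mem[ctx_id]
        p_mps = PROB_TABLE[ctx["prob_table_no"]]      # MPS occurrence probability
        symbol = decode_decision(ctx["mps"], p_mps)   # symbol value, 0 or 1
        is_mps = (symbol == ctx["mps"])
        table = TRANS_MPS if is_mps else TRANS_LPS
        ctx["prob_table_no"] = table[ctx["prob_table_no"]]  # state transition
        bins.append(symbol)
    return bins

# Usage with a stub decoder that always returns the MPS:
ctx_mem = {0: {"mps": 1, "prob_table_no": 1}}
out = decode_bins(ctx_mem, [0, 0], lambda mps, p: mps)
print(out, ctx_mem[0]["prob_table_no"])  # [1, 1] 3
```

The point of the sketch is the bookkeeping: the context state written back after each bin is exactly what makes the decoder's probability estimates track the encoder's.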
 The inverse binarization unit 138 selects and refers to the same binarization table 139 as used at encoding time, from among the binarization tables prepared for each type of decoding target parameter and stored in the binarization table memory 143, and outputs the decoded value 140 of the decoding target parameter from the binary signal 137 input from the arithmetic decoding processing operation unit 127.
 When the type of the decoding target parameter is the macroblock encoding mode (optimal encoding mode 62), the binarization table 139 is the same as the binarization table on the encoding device side shown in FIG. 10.
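Inverse binarization then amounts to a reverse lookup in the selected table. The bin strings and mode indices below are illustrative placeholders, not the actual contents of the table of FIG. 10:

```python
# Sketch of inverse binarization (unit 138): a binarization table maps each
# multilevel value (e.g., an encoding-mode index) to a bin string, and decoding
# matches the decoded bin string back to its multilevel value. The table
# contents are illustrative placeholders.

binarization_table = {
    0: "1",      # e.g., the most frequent mode gets the shortest bin string
    1: "01",
    2: "001",
    3: "000",
}

def inverse_binarize(table, bin_string):
    """Return the multilevel decoded value whose bin string matches."""
    reverse = {bins: value for value, bins in table.items()}
    return reverse[bin_string]

print(inverse_binarize(binarization_table, "01"))  # 1
```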
 The binarization table update unit 141 updates the binarization tables stored in the binarization table memory 143 based on the binarization table update flag 142 and the binarization table update identification information 144 decoded from the bitstream 60.
 The binarization table update flag 142 is information corresponding to the binarization table update flag 113 on the encoding device side; it is included in the header information of the bitstream 60 and the like, and indicates whether the binarization table has been updated. When the decoded value of the binarization table update flag 142 indicates that the binarization table has been updated, the binarization table update identification information 144 is additionally decoded from the bitstream 60.
 The binarization table update identification information 144 is information corresponding to the binarization table update identification information 112 on the encoding device side, and identifies the binarization table of the parameter updated on the encoding device side. For example, as described above, when a plurality of binarization tables are prepared in advance for each encoding target parameter, an ID identifying each encoding target parameter and an ID for each binarization table are assigned in advance on both the encoding device side and the decoding device side, and the binarization table update unit 141 updates the binarization table corresponding to the ID value in the binarization table update identification information 144 decoded from the bitstream 60. In this example, the two binarization tables of FIG. 10 and FIG. 14 and their IDs are prepared in advance in the binarization table memory 143. Assuming the state of the binarization table before the update is as shown in FIG. 10, when the binarization table update unit 141 performs the update process according to the binarization table update flag 142 and the binarization table update identification information 144, it selects the binarization table corresponding to the ID contained in the binarization table update identification information 144, so that the state of the binarization table after the update becomes that shown in FIG. 14, the same as the updated binarization table on the encoding device side.
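This update mechanism can be sketched as selecting, per parameter, one of several pre-shared candidate tables by ID. The parameter names, IDs, and table contents below are illustrative assumptions:

```python
# Sketch of the binarization-table update (unit 141). Encoder and decoder hold
# the same candidate tables per parameter, keyed by table ID, so decoding the
# (update flag, table ID) pair from the header keeps both sides in sync.
# Parameter names, IDs, and table contents are illustrative placeholders.

CANDIDATE_TABLES = {
    "encoding_mode": {
        0: {0: "1", 1: "01", 2: "001", 3: "000"},   # e.g., the table of FIG. 10
        1: {3: "1", 2: "01", 1: "001", 0: "000"},   # e.g., the table of FIG. 14
    },
}

active_tables = {"encoding_mode": CANDIDATE_TABLES["encoding_mode"][0]}

def apply_table_update(update_flag, update_id=None, param="encoding_mode"):
    """If the decoded flag signals an update, switch to the identified table."""
    if update_flag:
        active_tables[param] = CANDIDATE_TABLES[param][update_id]

apply_table_update(update_flag=1, update_id=1)
print(active_tables["encoding_mode"][3])  # "1"
```

After the update, mode 3 maps to the shortest bin string, mirroring the frequency-driven reordering performed on the encoding device side.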
 As described above, according to the moving image encoding device of Embodiment 2, the encoding control unit 3 selects and outputs the encoding target parameters for which the encoding efficiency is optimal, such as the optimal encoding mode 7a, the optimal prediction parameters 10a and 18a, and the optimal compression parameters 20a; the binarization unit 92 of the variable length encoding unit 23 converts the encoding target parameter, expressed as a multilevel signal, into the binary signal 103 using the binarization table of the binarization table memory 105; the arithmetic encoding processing operation unit 104 arithmetically encodes the binary signal 103 and outputs the encoded bit string 111; the frequency information generation unit 93 generates the frequency information 94 of the encoding target parameter; and the binarization table update unit 95 updates the correspondence between the multilevel signal and the binary signal in the binarization table based on the frequency information 94. As a result, compared with the conventional method in which the binarization table is always fixed, the code amount can be reduced at an equivalent encoded video quality.
 Furthermore, the binarization table update unit 95 is configured to multiplex into the bitstream 30 the binarization table update flag 113 indicating whether the binarization table has been updated and the binarization table update identification information 112 for identifying the updated binarization table. Correspondingly, the moving image decoding device according to Embodiment 2 is configured so that the arithmetic decoding processing operation unit 127 of the variable length decoding unit 61 arithmetically decodes the encoded bit string 133 multiplexed into the bitstream 60 to generate the binary signal 137; the inverse binarization unit 138 converts the binary signal 137 into a multilevel signal using the binarization table 139 of the binarization table memory 143 to obtain the decoded value 140; and the binarization table update unit 141 updates a predetermined binarization table in the binarization table memory 143 based on the binarization table update flag 142 and the binarization table update identification information 144 decoded from the header information multiplexed into the bitstream 60. The moving image decoding device can therefore update the binarization table by the same procedure as the moving image encoding device and inverse-binarize the encoding target parameter, so the bitstream encoded by the moving image encoding device according to Embodiment 2 can be decoded correctly.
Embodiment 3.
 In Embodiment 3, a modification of the predicted image generation process by motion compensated prediction of the motion compensated prediction unit 9 in the moving image encoding device and the moving image decoding device according to Embodiments 1 and 2 is described.
 First, the motion compensated prediction unit 9 of the moving image encoding device according to Embodiment 3 is described. The configuration of the moving image encoding device according to Embodiment 3 is the same as that of Embodiment 1 or Embodiment 2, and the operation of each component other than the motion compensated prediction unit 9 is also the same, so FIGS. 1 to 15 are incorporated by reference.
 The motion compensated prediction unit 9 according to Embodiment 3 has the same configuration and operation as in Embodiments 1 and 2, except for the configuration and operation related to the predicted image generation process at virtual sample accuracy. That is, in Embodiments 1 and 2, as shown in FIG. 3, when the interpolated image generation unit 43 of the motion compensated prediction unit 9 generates reference image data at virtual pixel accuracy, such as half-pixel or quarter-pixel accuracy, and generates the predicted image 45 based on that reference image data, the virtual pixels are created by, for example, an interpolation operation with a 6-tap filter using six integer pixels in the vertical or horizontal direction, as in the MPEG-4 AVC standard. In contrast, the motion compensated prediction unit 9 according to Embodiment 3 generates a reference image 207 at virtual pixel accuracy by enlarging, through super-resolution processing, the reference image 15 at integer pixel accuracy stored in the motion compensated prediction frame memory 14, and generates the predicted image based on this reference image 207 at virtual pixel accuracy.
 Next, the motion compensated prediction unit 9 according to Embodiment 3 is described with reference to FIG. 3.
 As in Embodiments 1 and 2, the interpolated image generation unit 43 of Embodiment 3 also designates one or more frames of reference images 15 in the motion compensated prediction frame memory 14, and the motion detection unit 42 detects a motion vector 44 within a predetermined motion search range on the designated reference image 15. The motion vector is detected at virtual pixel accuracy, as in the MPEG-4 AVC standard and the like. In this detection method, virtual samples (pixels) are created between the integer pixels of the pixel information the reference image holds (referred to as integer pixels) by interpolation, and are used as part of the reference image.
 To generate a reference image at virtual pixel accuracy, the reference image at integer pixel accuracy must be enlarged (given higher definition) to generate a sample plane consisting of virtual pixels. Therefore, when a motion search reference image at virtual pixel accuracy is needed, the interpolated image generation unit 43 of Embodiment 3 generates the reference image at virtual pixel accuracy using the super-resolution technique disclosed in W. T. Freeman, E. C. Pasztor and O. T. Carmichael, "Learning Low-Level Vision", International Journal of Computer Vision, vol. 40, no. 1, 2000. The following description covers a configuration in which the motion compensated prediction unit 9 generates, by super-resolution, the reference image 207 at virtual pixel accuracy from the reference image data stored in the motion compensated prediction frame memory 14, and the motion detection unit 42 uses it to perform the motion vector search process.
 FIG. 16 is a block diagram showing the internal configuration of the interpolated image generation unit 43 of the motion compensated prediction unit 9 of the moving image encoding device according to Embodiment 3 of the present invention. The interpolated image generation unit 43 shown in FIG. 16 includes: an image enlargement processing unit 205 that enlarges the reference image 15 in the motion compensated prediction frame memory 14; an image reduction processing unit 200 that reduces the reference image 15; a high-frequency feature extraction unit 201a that extracts a feature amount of high-frequency components from the output of the image reduction processing unit 200; a high-frequency feature extraction unit 201b that extracts a feature amount of high-frequency components from the reference image 15; a correlation calculation unit 202 that calculates a correlation value between the feature amounts; a high-frequency component estimation unit 203 that estimates high-frequency components from the correlation value and the pre-learned data in the high-frequency component pattern memory 204; and an addition unit 206 that corrects the high-frequency components of the enlarged image using the estimated high-frequency components to generate the reference image 207 at virtual pixel accuracy.
 In FIG. 16, when the reference image 15 in the range used for the motion search process, out of the reference image data stored in the motion compensated prediction frame memory 14, is input to the interpolated image generation unit 43, the reference image 15 is input to each of the image reduction processing unit 200, the high-frequency feature extraction unit 201b, and the image enlargement processing unit 205.
 The image reduction processing unit 200 generates, from the reference image 15, a reduced image of 1/N size vertically and horizontally (N is a power of two such as 2 or 4) and outputs it to the high-frequency feature extraction unit 201a. This reduction process is realized by a general image reduction filter.
 The high-frequency feature extraction unit 201a extracts, from the reduced image generated by the image reduction processing unit 200, a first feature amount relating to high-frequency components such as edge components. As the first feature amount, for example, a parameter indicating the distribution of DCT or wavelet transform coefficients in a local block can be used.
 The high-frequency feature extraction unit 201b performs the same kind of high-frequency feature extraction as the high-frequency feature extraction unit 201a, and extracts from the reference image 15 a second feature amount whose frequency component region differs from that of the first feature amount. The second feature amount is output to the correlation calculation unit 202 and also to the high-frequency component estimation unit 203.
 When the first feature amount is input from the high-frequency feature extraction unit 201a and the second feature amount is input from the high-frequency feature extraction unit 201b, the correlation calculation unit 202 calculates a feature-based correlation value of the high-frequency component region, in units of local blocks, between the reference image 15 and its reduced image. An example of this correlation value is the distance between the first feature amount and the second feature amount.
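As one way to read "distance between the first feature amount and the second feature amount", the correlation value could be computed as a Euclidean distance between coefficient-distribution vectors. The vector contents below are purely illustrative:

```python
# Sketch: feature-based correlation (unit 202) as a distance between the first
# feature amount (from the reduced image) and the second feature amount (from
# the reference image). Each feature is an illustrative vector of local
# transform-coefficient statistics; a smaller distance means higher correlation.
import math

def feature_distance(feat_a, feat_b):
    """Euclidean distance between two feature vectors of equal length."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(feat_a, feat_b)))

print(feature_distance([3.0, 0.0], [0.0, 4.0]))  # 5.0
```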
 Based on the second feature amount input from the high-frequency feature extraction unit 201b and the correlation value input from the correlation calculation unit 202, the high-frequency component estimation unit 203 identifies a pre-learned pattern of high-frequency components in the high-frequency component pattern memory 204, and estimates and generates the high-frequency components that the reference image 207 at virtual pixel accuracy should contain. The generated high-frequency components are output to the addition unit 206.
 The image enlargement processing unit 205 applies to the input reference image 15 an interpolation operation with a 6-tap filter using six integer pixels in the vertical or horizontal direction, as in the half-pixel accuracy sample generation process of the MPEG-4 AVC standard, or an enlargement filter process such as a bilinear filter, and generates an enlarged image in which the reference image 15 is enlarged N times vertically and horizontally.
 The addition unit 206 adds the high-frequency components input from the high-frequency component estimation unit 203 to the enlarged image input from the image enlargement processing unit 205, that is, corrects the high-frequency components of the enlarged image, and generates an enlarged reference image enlarged N times vertically and horizontally. The interpolated image generation unit 43 uses this enlarged reference image data as the reference image 207 at virtual pixel accuracy, in which one sample corresponds to 1/N pixel.
 Note that the interpolated image generation unit 43 may be configured to generate the reference image 207 at half-pixel (1/2-pixel) accuracy with N = 2, and then generate virtual samples (pixels) at 1/4-pixel accuracy by an interpolation operation using an averaging filter over adjacent 1/2 pixels or integer pixels.
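The quarter-pixel step can be sketched as averaging adjacent samples of the N = 2 half-pel grid. The one-dimensional layout and the integer rounding below are illustrative assumptions, not mandated by this specification:

```python
# Sketch: after super-resolution enlargement with N = 2 yields a half-pel grid,
# quarter-pel virtual samples are formed by averaging adjacent samples of that
# grid. The 1-D layout and rounded integer averaging are illustrative choices.

def quarter_pel_row(half_pel_row):
    """Interleave quarter-pel samples as averages of neighboring half-pel samples."""
    out = []
    for i, s in enumerate(half_pel_row):
        out.append(s)
        if i + 1 < len(half_pel_row):
            out.append((s + half_pel_row[i + 1] + 1) // 2)  # rounded average
    return out

# Half-pel samples (integer and half positions already interleaved by the
# N = 2 upscaling):
print(quarter_pel_row([10, 20, 30]))  # [10, 15, 20, 25, 30]
```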
 In addition to the configuration shown in FIG. 16, the interpolated image generation unit 43 may be configured to control the generation result of the reference image 207 at virtual pixel accuracy by switching whether the high-frequency components output by the high-frequency component estimation unit 203 are added to the enlarged image output by the image enlargement processing unit 205. This configuration has the effect of suppressing an adverse influence on encoding efficiency when the estimation accuracy of the high-frequency component estimation unit 203 is poor for some reason, such as the image pattern being atypical.
 When whether the addition unit 206 adds the high-frequency components output by the high-frequency component estimation unit 203 is determined selectively, predicted images 45 are generated for both cases, with and without the addition, motion compensated prediction is performed for each, the results are encoded, and the more efficient case is chosen. Information on the addition process, indicating whether the addition was performed, is then multiplexed into the bitstream 30 as control information.
 Alternatively, the interpolated image generation unit 43 may control the addition process of the addition unit 206 by determining it uniquely from other parameters multiplexed into the bitstream 30. As an example of determining it from other parameters, the type of the encoding mode 7 shown in FIG. 2A or FIG. 2B may be used. When an encoding mode indicating fine motion compensation region block division within a macroblock is selected, there is a high probability that the picture contains intense motion. The interpolated image generation unit 43 therefore regards the super-resolution effect as low and controls the addition unit 206 not to add the high-frequency components output by the high-frequency component estimation unit 203. Conversely, when an encoding mode indicating a large motion compensation region block size within a macroblock, or an intra prediction mode with a large block size, is selected, there is a high probability that the region is a relatively still image region. The interpolated image generation unit 43 therefore regards the super-resolution effect as high and controls the addition unit 206 to add the high-frequency components output by the high-frequency component estimation unit 203.
 Besides the encoding mode 7, other parameters such as the magnitude of the motion vector, or the variation of the motion vector field with the surrounding region taken into account, may be used. By having the interpolated image generation unit 43 of the motion compensated prediction unit 9 make this determination from parameter types shared with the decoding device side, the control information of the addition process need not be multiplexed directly into the bitstream 30, which improves compression efficiency.
 Alternatively, the reference image 15 may be converted into the reference image 207 at virtual pixel accuracy by the super-resolution processing described above before being stored in the motion compensated prediction frame memory 14, and then stored. In this configuration, the memory size required for the motion compensated prediction frame memory 14 increases, but the super-resolution processing no longer needs to be performed sequentially during the motion vector search and predicted image generation, so the processing load of the motion compensated prediction process itself can be reduced, and the frame encoding process and the generation process of the reference image 207 at virtual pixel accuracy can be executed in parallel, speeding up the processing.
 Hereinafter, an example of the motion vector detection procedure at virtual pixel accuracy using the reference image 207 at virtual pixel accuracy is described with reference to FIG. 3.
Motion vector detection procedure I'
 The interpolated image generation unit 43 generates a predicted image 45 for a motion vector 44 at integer pixel accuracy within the predetermined motion search range of the motion compensation region block image 41. The predicted image 45 (predicted image 17) generated at integer pixel accuracy is output to the subtraction unit 12, which subtracts it from the motion compensation region block image 41 (macro/subblock image 5) to obtain the prediction difference signal 13. The encoding control unit 3 evaluates the prediction efficiency for the prediction difference signal 13 and the motion vector 44 at integer pixel accuracy (prediction parameter 18). This prediction efficiency evaluation may be performed by the above equation (1) described in Embodiment 1, so its description is omitted.
Motion vector detection procedure II′
 The interpolated image generation unit 43 generates a predicted image 45 for each 1/2 pixel accuracy motion vector 44 located around the integer pixel accuracy motion vector determined in "motion vector detection procedure I′", using the virtual pixel accuracy reference image 207 generated inside the interpolated image generation unit 43 shown in FIG. 16. As in "motion vector detection procedure I′", the predicted image 45 (predicted image 17) generated with 1/2 pixel accuracy is subtracted by the subtraction unit 12 from the motion compensation region block image 41 (macro/sub-block image 5) to obtain the prediction difference signal 13. The encoding control unit 3 then evaluates the prediction efficiency of this prediction difference signal 13 and the 1/2 pixel accuracy motion vector 44 (prediction parameter 18), and determines, from among the one or more 1/2 pixel accuracy motion vectors located around the integer pixel accuracy motion vector, the 1/2 pixel accuracy motion vector 44 that minimizes the prediction cost J1.
Motion vector detection procedure III′
 Likewise for 1/4 pixel accuracy motion vectors, the encoding control unit 3 and the motion compensated prediction unit 9 determine, from among the one or more 1/4 pixel accuracy motion vectors located around the 1/2 pixel accuracy motion vector determined in "motion vector detection procedure II′", the 1/4 pixel accuracy motion vector 44 that minimizes the prediction cost J1.
Motion vector detection procedure IV′
 In the same manner, the encoding control unit 3 and the motion compensated prediction unit 9 continue detecting virtual pixel accuracy motion vectors until the predetermined accuracy is reached.
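The coarse-to-fine refinement of procedures I′ through IV′ can be sketched as follows. This is an illustrative sketch only: the cost here is a plain sum of absolute differences, whereas the patent's evaluation uses the rate-distortion cost J1 of equation (1) in Embodiment 1, and the function names and the representation of the interpolated reference are assumptions, not the patent's own.

```python
def sad(block, ref, top, left):
    # Sum of absolute differences between `block` and the window of `ref`
    # whose top-left corner is at (top, left).
    return sum(abs(block[r][c] - ref[top + r][left + c])
               for r in range(len(block)) for c in range(len(block[0])))

def coarse_to_fine_search(block, ref4, start, search_range=2):
    """Coarse-to-fine motion search on a reference plane `ref4` assumed to be
    already interpolated to 1/4 pixel accuracy, so integer-pel displacements
    are multiples of 4 on its grid. Returns the displacement (in quarter-pel
    units) and its cost."""
    # Procedure I': exhaustive integer-pel search (step 4 on the 1/4-pel grid).
    best, best_cost = (0, 0), None
    for dr in range(-4 * search_range, 4 * search_range + 1, 4):
        for dc in range(-4 * search_range, 4 * search_range + 1, 4):
            cost = sad(block, ref4, start[0] + dr, start[1] + dc)
            if best_cost is None or cost < best_cost:
                best, best_cost = (dr, dc), cost
    # Procedures II' and III': refine to 1/2-pel (step 2), then 1/4-pel
    # (step 1), searching only around the best vector of the previous level.
    for step in (2, 1):
        center = best
        for dr in (-step, 0, step):
            for dc in (-step, 0, step):
                cand = (center[0] + dr, center[1] + dc)
                cost = sad(block, ref4, start[0] + cand[0], start[1] + cand[1])
                if cost < best_cost:
                    best, best_cost = cand, cost
    return best, best_cost
```

Because each level searches only a small neighborhood of the previous level's winner, the number of cost evaluations grows logarithmically with the final accuracy rather than linearly with the search area.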
 In this way, for each motion compensation region block image 41 obtained by dividing the macro/sub-block image 5 into blocks serving as the unit of motion compensation indicated by the encoding mode 7, the motion compensated prediction unit 9 outputs, as the prediction parameters 18, the determined virtual pixel accuracy motion vector of the predetermined accuracy and the identification number of the reference image that the motion vector points to. The motion compensated prediction unit 9 also outputs the predicted image 45 (predicted image 17) generated from those prediction parameters 18 to the subtraction unit 12, which subtracts it from the macro/sub-block image 5 to obtain the prediction difference signal 13. The prediction difference signal 13 output from the subtraction unit 12 is passed to the transform/quantization unit 19. The subsequent processing is the same as that described in Embodiment 1 and is therefore omitted.
 Next, the moving picture decoding apparatus according to Embodiment 3 will be described.
 The configuration of the moving picture decoding apparatus according to Embodiment 3 is the same as that of the moving picture decoding apparatuses of Embodiments 1 and 2, except for the configuration and operation related to the virtual pixel accuracy predicted image generation processing in the motion compensated prediction unit 70; FIGS. 1 to 16 are therefore referred to as needed.
 In Embodiments 1 and 2, when the motion compensated prediction unit 70 generates a predicted image based on a reference image of virtual pixel accuracy such as half pixel or 1/4 pixel, virtual pixels are created by interpolation using a 6-tap filter over six integer pixels in the vertical or horizontal direction, as in the MPEG-4 AVC standard. In contrast, the motion compensated prediction unit 70 of Embodiment 3 generates a virtual pixel accuracy reference image by enlarging the integer pixel accuracy reference image 76 stored in the motion compensated prediction frame memory 75 by super-resolution processing.
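For contrast with the super-resolution approach of this embodiment, the conventional interpolation mentioned above can be sketched as follows. The 6-tap coefficients (1, −5, 20, 20, −5, 1) with rounding and a divide by 32 are those of the MPEG-4 AVC standard's half-pel luma interpolation; the function name and the 8-bit clipping range are illustrative assumptions.

```python
def halfpel_6tap(p):
    """Half-pel sample midway between p[2] and p[3], computed with the
    6-tap filter (1, -5, 20, 20, -5, 1) over six neighboring integer
    pixels, then rounded, normalized by 32, and clipped to 8 bits."""
    assert len(p) == 6
    acc = p[0] - 5 * p[1] + 20 * p[2] + 20 * p[3] - 5 * p[4] + p[5]
    return max(0, min(255, (acc + 16) >> 5))
```

Such a fixed filter can only reproduce frequencies already present in the decimated reference, which is the limitation the super-resolution processing of this embodiment is intended to address.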
 As in Embodiments 1 and 2, the motion compensated prediction unit 70 of Embodiment 3 generates and outputs a predicted image 72 from the reference image 76 stored in the motion compensated prediction frame memory 75, based on the motion vectors included in the input optimal prediction parameters 63, the identification number (reference image index) of the reference image that each motion vector points to, and so on.
 The addition unit 73 adds the predicted image 72 input from the motion compensated prediction unit 70 to the prediction difference signal decoded value 67 input from the inverse quantization/inverse transform unit 66 to generate a decoded image 74.
 Note that the method by which the motion compensated prediction unit 70 generates the predicted image 72 corresponds to the operation of the motion compensated prediction unit 9 on the encoding device side with the process of searching for motion vectors over a plurality of reference images (corresponding to the operation of the motion detection unit 42 and the interpolated image generation unit 43 shown in FIG. 3) removed; only the process of generating the predicted image 72 according to the optimal prediction parameters 63 supplied from the variable length decoding unit 61 is performed.
 Here, when the predicted image 72 is generated with virtual pixel accuracy, the motion compensated prediction unit 70 performs processing similar to that shown in FIG. 16 on the reference image 76 specified by the reference image identification number (reference image index) in the motion compensated prediction frame memory 75 to generate a virtual pixel accuracy reference image, and generates the predicted image 72 using the decoded motion vector. If, on the encoding device side, whether or not to add the high frequency component output by the high frequency component estimation unit 203 shown in FIG. 16 to the enlarged image was determined selectively, then on the decoding device side the control information indicating the presence or absence of the addition processing is either extracted from the bitstream 60 or determined uniquely from other parameters, and the addition processing inside the motion compensated prediction unit 70 is controlled accordingly. When it is determined from other parameters, the encoding mode 7, the magnitude of the motion vector, the variation of the motion vector field taking the surrounding area into account, and the like can be used, as on the encoding device side described above; by having the motion compensated prediction unit 70 share the choice of parameter with the encoding device side, the encoding device side need not multiplex the control information of the addition processing directly into the bitstream 30, and the compression efficiency can be improved.
 Note that the process of generating a virtual pixel accuracy reference image in the motion compensated prediction unit 70 may be performed only when the motion vector included in the optimal prediction parameters 18a output from the encoding device side (that is, the optimal prediction parameters 63 on the decoding device side) points to virtual pixel accuracy. In this configuration, the motion compensated prediction unit 9 switches, according to the motion vector, between using the reference image 15 of the motion compensated prediction frame memory 14 and generating and using the virtual pixel accuracy reference image 207 in the interpolated image generation unit 43, and generates the predicted image 17 from the reference image 15 or the virtual pixel accuracy reference image 207.
 Alternatively, the processing shown in FIG. 16 may be performed on a reference image before it is stored in the motion compensated prediction frame memory 75, so that the virtual pixel accuracy reference image resulting from the enlargement processing and high frequency component correction is what gets stored in the motion compensated prediction frame memory 75. In this configuration, the memory size to be provided as the motion compensated prediction frame memory 75 increases, but when motion vectors frequently point to pixels at the same virtual sample positions, the processing shown in FIG. 16 need not be performed repeatedly, so the amount of computation can be reduced. Also, if the range of displacement indicated by the motion vectors is known in advance on the decoding device side, the motion compensated prediction unit 70 may be configured to perform the processing shown in FIG. 16 only over that range. The range of displacement indicated by the motion vectors can be made known to the decoding device side by, for example, multiplexing a value range indicating it into the bitstream 60 for transmission, or by prior agreement between the encoding device side and the decoding device side as a matter of operation.
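The trade-off between on-demand generation and pre-stored virtual pixel accuracy reference images described above can be illustrated with a simple memoization scheme: the interpolated plane for each sub-pel phase is generated at most once and reused whenever a motion vector points at the same virtual sample positions. All names here are illustrative, and `interp` merely stands in for the enlargement and high frequency correction of FIG. 16.

```python
class VirtualPelCache:
    """Generates the interpolated plane for a given sub-pel phase at most
    once, then reuses it, trading memory for computation (illustrative
    sketch; `interp` stands in for the actual super-resolution step)."""

    def __init__(self, ref, interp):
        self.ref = ref            # integer pixel accuracy reference image
        self.interp = interp      # callable (ref, phase) -> interpolated plane
        self.planes = {}          # phase (dy, dx) -> cached plane
        self.calls = 0            # number of actual interpolation calls

    def plane(self, phase):
        if phase not in self.planes:
            self.calls += 1
            self.planes[phase] = self.interp(self.ref, phase)
        return self.planes[phase]
```

Precomputing all phases up front corresponds to the configuration that stores virtual pixel accuracy images in the frame memory; populating the cache lazily, as here, keeps memory proportional to the phases actually referenced.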
 As described above, according to the moving picture encoding apparatus of Embodiment 3, the motion compensated prediction unit 9 includes the interpolated image generation unit 43, which enlarges the reference image 15 in the motion compensated prediction frame memory 14 and corrects its high frequency components to generate the virtual pixel accuracy reference image 207, and generates the predicted image 17 by switching, according to the motion vector, between using the reference image 15 and generating and using the virtual pixel accuracy reference image 207. Consequently, even when an input video signal 1 containing many high frequency components such as fine edges is highly compressed, the predicted image 17 generated by motion compensated prediction can be produced from a reference image rich in high frequency components, enabling efficient compression encoding.
 Likewise, in the moving picture decoding apparatus according to Embodiment 3, the motion compensated prediction unit 70 includes an interpolated image generation unit that generates a virtual pixel accuracy reference image by the same procedure as the moving picture encoding apparatus, and generates the predicted image 72 by switching, according to the motion vector multiplexed in the bitstream 60, between using the reference image 76 of the motion compensated prediction frame memory 75 and generating and using a virtual pixel accuracy reference image. The bitstream encoded by the moving picture encoding apparatus according to Embodiment 3 can therefore be decoded correctly.
 In the interpolated image generation unit 43 of Embodiment 3 above, the virtual pixel accuracy reference image 207 is generated by super-resolution processing based on the technique disclosed in W. T. Freeman et al. (2000); however, the super-resolution processing itself is not limited to that technique, and any other super-resolution technique may be applied to generate the virtual pixel accuracy reference image 207.
 When the moving picture encoding apparatus according to Embodiments 1 to 3 is implemented on a computer, a moving picture encoding program describing the processing contents of the block division unit 2, the encoding control unit 3, the switching unit 6, the intra prediction unit 8, the motion compensated prediction unit 9, the motion compensated prediction frame memory 14, the transform/quantization unit 19, the inverse quantization/inverse transform unit 22, the variable length encoding unit 23, the loop filter unit 27, and the intra prediction memory 28 may be stored in the memory of the computer, and the CPU of the computer may execute the moving picture encoding program stored in the memory.
 Similarly, when the moving picture decoding apparatus according to Embodiments 1 to 3 is implemented on a computer, a moving picture decoding program describing the processing contents of the variable length decoding unit 61, the inverse quantization/inverse transform unit 66, the switching unit 68, the intra prediction unit 69, the motion compensated prediction unit 70, the motion compensated prediction frame memory 75, the intra prediction memory 77, and the loop filter unit 78 may be stored in the memory of the computer, and the CPU of the computer may execute the moving picture decoding program stored in the memory.
 The moving picture encoding apparatus and moving picture decoding apparatus according to the present invention can perform compression encoding efficiently while suppressing the code amount associated with overhead such as the encoding mode, even with a macroblock size set in advance regardless of the image content. They are therefore suitable as a moving picture encoding apparatus that divides a moving picture into predetermined regions and encodes it region by region, and as a moving picture decoding apparatus that decodes the encoded moving picture in units of the predetermined regions.
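The overhead reduction referred to here comes from the frequency-adaptive binarization table described in the claims: encoding modes that the encoding control unit selects often are remapped to shorter binary codewords before arithmetic coding. A minimal sketch of such an update follows, with illustrative mode names and an assumed table representation (mode → binary string); the actual table format is not prescribed here.

```python
def update_binarization_table(codewords, counts):
    """Reassigns the available binary codewords so that more frequently
    selected encoding modes receive shorter codewords. `codewords` maps
    each mode to its current binary string; `counts` maps each mode to
    its selection frequency (illustrative sketch)."""
    shortest_first = sorted(codewords.values(), key=len)
    most_frequent_first = sorted(counts, key=counts.get, reverse=True)
    return dict(zip(most_frequent_first, shortest_first))
```

A decoder that keeps the same frequency statistics, or that is signaled by a binarization table update flag as in claim 2, can rebuild the identical table, so the table itself need not be transmitted.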
 1 input video signal, 2 block division unit, 3 encoding control unit, 4 macroblock size, 5 macro/sub-block image, 6 switching unit, 7 encoding mode, 7a optimal encoding mode, 8 intra prediction unit, 9 motion compensated prediction unit, 10 prediction parameter, 10a optimal prediction parameter, 11 predicted image, 12 subtraction unit, 13 prediction difference signal, 13a optimal prediction difference signal, 14 motion compensated prediction frame memory, 15 reference image, 17 predicted image, 18 prediction parameter, 18a optimal prediction parameter, 19 transform/quantization unit, 20 compression parameter, 20a optimal compression parameter, 21 compressed data, 22 inverse quantization/inverse transform unit, 23 variable length encoding unit, 24 locally decoded prediction difference signal, 25 addition unit, 26 locally decoded image signal, 27 loop filter unit, 28 intra prediction memory, 29 locally decoded image, 30 bitstream, 40 motion compensation region division unit, 41 motion compensation region block image, 42 motion detection unit, 43 interpolated image generation unit, 44 motion vector, 45 predicted image, 50 transform block size division unit, 51 transform target block, 52 transform unit, 53 transform coefficient, 54 quantization unit, 60 bitstream, 61 variable length decoding unit, 62 optimal encoding mode, 63 optimal prediction parameter, 64 compressed data, 65 optimal compression parameter, 66 inverse quantization/inverse transform unit, 67 prediction difference signal decoded value, 68 switching unit, 69 intra prediction unit, 70 motion compensated prediction unit, 71 predicted image, 72 predicted image, 73 addition unit, 74, 74a decoded image, 75 motion compensated prediction frame memory, 76 reference image, 77 intra prediction memory, 78 loop filter unit, 79 reproduced image, 90 initialization unit, 91 context information initialization flag, 92 binarization unit, 93 frequency information generation unit, 94 frequency information, 95 binarization table update unit, 96 context information memory, 97 probability table memory, 98 state transition table memory, 99 context generation unit, 100 type signal, 101 peripheral block information, 102 context identification information, 103 binary signal, 104 arithmetic encoding processing operation unit, 105 binarization table memory, 106 context information, 107 probability table number, 108 MPS occurrence probability, 109 symbol value, 110 probability table number, 111 encoded bit string, 112 binarization table update identification information, 113 binarization table update flag, 120 initialization unit, 121 context initialization information, 122 context generation unit, 123 type signal, 124 peripheral block information, 126 context identification information, 127 arithmetic decoding processing operation unit, 128 context information memory, 129 context information, 130 probability table number, 131 probability table memory, 132 MPS occurrence probability, 133 encoded bit string, 134 symbol value, 135 state transition table memory, 136 probability table number, 137 binary signal, 138 inverse binarization unit, 139 binarization table, 140 decoded value, 141 binarization table update unit, 142 binarization table update flag, 143 binarization table memory, 144 binarization table update identification information, 200 image reduction processing unit, 201a, 201b high frequency feature extraction unit, 202 correlation calculation unit, 203 high frequency component estimation unit, 204 high frequency component pattern memory, 205 image enlargement processing unit, 206 addition unit, 207 virtual pixel accuracy reference image.

Claims (13)

  1.  A moving picture encoding apparatus comprising:
     an encoding control unit that outputs an encoding mode designating one or more block division types from among a plurality of block division types serving as processing units of motion compensated prediction or intra-frame prediction for an input image;
     a block division unit that outputs block images obtained by dividing a macroblock image, itself obtained by dividing the input image into a plurality of blocks of a predetermined size, into one or more blocks according to the encoding mode;
     an intra prediction unit that, when a block image is input, performs intra-frame prediction on the block image using an image signal within the frame to generate a predicted image;
     a motion compensated prediction unit that, when a block image is input, performs motion compensated prediction on the block image using one or more frames of reference images to generate a predicted image;
     a switching unit that inputs the block image to either the intra prediction unit or the motion compensated prediction unit according to the encoding mode of the block image output by the block division unit;
     a subtraction unit that subtracts the predicted image output by either the intra prediction unit or the motion compensated prediction unit from the block image output by the block division unit to generate a prediction difference signal;
     a transform/quantization unit that performs transform and quantization processing on the prediction difference signal to generate compressed data; and
     a variable length encoding unit that entropy encodes the encoding mode and the compressed data and multiplexes them into a bitstream,
     wherein the encoding control unit selects, from among the encoding modes, an encoding mode designating a predetermined block division type based on encoding efficiency, and outputs it as a multilevel signal, and
     the variable length encoding unit includes:
     a binarization unit that converts the encoding mode of the multilevel signal selected by the encoding control unit into a binary signal, using a binarization table specifying the correspondence between multilevel signals representing encoding modes and binary signals;
     an arithmetic encoding processing operation unit that arithmetically encodes the binary signal converted by the binarization unit, outputs an encoded bit string, and multiplexes the encoded bit string into the bitstream; and
     a binarization table update unit that updates the correspondence between multilevel signals and binary signals in the binarization table based on the frequency with which each encoding mode is selected by the encoding control unit.
  2.  The moving picture encoding apparatus according to claim 1, wherein, in the variable length encoding unit, the binarization table update unit outputs a binarization table update flag indicating the update timing of the binarization table, and header information including the binarization table update flag is multiplexed into the bitstream.
  3.  The moving picture encoding apparatus according to claim 1, wherein, in the variable length encoding unit,
     the binarization unit converts the compression parameters, represented by multilevel signals, that the transform/quantization unit used for the transform and quantization processing into binary signals, using a binarization table specifying the correspondence between multilevel signals representing compression parameters and binary signals,
     the binarization table update unit updates the correspondence between multilevel signals and binary signals in the binarization table based on the frequency with which each compression parameter is used by the transform/quantization unit, and outputs a binarization table update flag indicating the update timing, and
     the arithmetic encoding processing operation unit arithmetically encodes the binary signals converted by the binarization unit, outputs an encoded bit string, and multiplexes the encoded bit string into the bitstream together with the binarization table update flag.
  4.  The moving picture encoding apparatus according to claim 1, wherein, in the variable length encoding unit,
     the binarization unit converts the prediction parameters, represented by multilevel signals, that the intra prediction unit or the motion compensated prediction unit used for intra-frame prediction or motion compensated prediction into binary signals, using a binarization table specifying the correspondence between multilevel signals representing prediction parameters and binary signals,
     the binarization table update unit updates the correspondence between multilevel signals and binary signals in the binarization table based on the frequency with which each prediction parameter is used by the intra prediction unit or the motion compensated prediction unit, and outputs a binarization table update flag indicating the update timing, and
     the arithmetic encoding processing operation unit arithmetically encodes the binary signals converted by the binarization unit, outputs an encoded bit string, and multiplexes the encoded bit string into the bitstream together with the binarization table update flag.
  5.  The moving picture encoding apparatus according to claim 3, wherein, in the variable length encoding unit, when there are a plurality of types of binarization tables, the binarization table update unit outputs binarization table update identification information for identifying the updated binarization table, which is multiplexed into the bitstream.
  6.  The moving picture encoding apparatus according to claim 4, wherein, in the variable length encoding unit, when there are a plurality of types of binarization tables, the binarization table update unit outputs binarization table update identification information for identifying the updated binarization table, which is multiplexed into the bitstream.
  7.  A moving picture decoding apparatus comprising:
     a variable length decoding unit that receives a bitstream compression-encoded in units of macroblocks obtained by dividing an image into a plurality of blocks of a predetermined size, entropy-decodes an encoding mode from the bitstream in units of the macroblocks, and entropy-decodes a prediction parameter, a compression parameter, and compressed data in units of blocks divided according to the decoded encoding mode;
     an intra prediction unit that, when the prediction parameter is input, generates a prediction image using an intra prediction mode included in the prediction parameter and a decoded image signal within the frame;
     a motion compensation prediction unit that, when the prediction parameter is input, generates a prediction image by performing motion compensation prediction using a motion vector included in the prediction parameter and a reference image specified by a reference image index included in the prediction parameter;
     a switching unit that inputs the prediction parameter decoded by the variable length decoding unit to either the intra prediction unit or the motion compensation prediction unit according to the decoded encoding mode;
     an inverse quantization/inverse transform unit that performs inverse quantization and inverse transform processing on the compressed data using the compression parameter to generate a decoded prediction difference signal; and
     an addition unit that adds the prediction image output from either the intra prediction unit or the motion compensation prediction unit to the decoded prediction difference signal and outputs a decoded image signal,
     wherein the variable length decoding unit includes:
     an arithmetic decoding processing operation unit that arithmetically decodes an encoded bit string representing the encoding mode multiplexed into the bitstream to generate a binary signal; and
     an inverse binarization unit that converts the encoding mode represented by the binary signal generated by the arithmetic decoding processing operation unit into a multilevel signal, using a binarization table that specifies a correspondence between binary signals representing the encoding mode and multilevel signals.
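The decoding flow recited in claim 7 can be illustrated with a minimal sketch. All names below (decode_macroblock, the callback parameters, the "intra"/"inter" mode labels) are hypothetical and chosen for illustration only; the patent does not prescribe this interface.

```python
# Hypothetical sketch of the claim-7 decoding flow: the switching unit
# routes the prediction parameters to intra prediction or motion
# compensation according to the decoded coding mode, the inverse
# quantization/inverse transform unit reconstructs the residual, and the
# addition unit sums prediction and residual into the decoded signal.

def decode_macroblock(coding_mode, pred_params, comp_params, compressed,
                      intra_predict, motion_compensate,
                      inverse_quantize_transform):
    """Return a decoded block as prediction + decoded residual."""
    if coding_mode == "intra":
        # Intra prediction unit: intra mode + decoded pixels in the frame.
        prediction = intra_predict(pred_params)
    else:
        # Motion compensation unit: motion vector + reference image index.
        prediction = motion_compensate(pred_params)
    # Inverse quantization and inverse transform of the compressed data.
    residual = inverse_quantize_transform(compressed, comp_params)
    # Addition unit: decoded image signal = prediction + residual.
    return [p + r for p, r in zip(prediction, residual)]
```

With stub predictors the routing behaves as expected: an "intra" mode invokes the intra callback, any other mode the motion-compensation callback.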
  8.  The moving picture decoding apparatus according to claim 7, wherein, in the variable length decoding unit, the arithmetic decoding processing operation unit arithmetically decodes an encoded bit string of the compression parameter multiplexed into the bitstream to generate a binary signal, and the inverse binarization unit converts the compression parameter represented by the binary signal generated by the arithmetic decoding processing operation unit into a multilevel signal, using a binarization table that specifies a correspondence between binary signals representing the compression parameter and multilevel signals.
  9.  The moving picture decoding apparatus according to claim 7, wherein, in the variable length decoding unit, the arithmetic decoding processing operation unit arithmetically decodes an encoded bit string of the prediction parameter multiplexed into the bitstream to generate a binary signal, and the inverse binarization unit converts the prediction parameter represented by the binary signal generated by the arithmetic decoding processing operation unit into a multilevel signal, using a binarization table that specifies a correspondence between binary signals representing the prediction parameter and multilevel signals.
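The inverse binarization step in claims 7 through 9 can be sketched as a prefix-code lookup. The table contents below are invented for the example; the patent does not fix any particular code words or symbol values.

```python
# Illustrative inverse binarization: a binarization table maps binary
# code words (bit strings) to multilevel symbol values. Frequent symbols
# get shorter code words; the entries here are made up for the example.

BINARIZATION_TABLE = {
    "1": 0,     # most probable symbol -> shortest code word
    "01": 1,
    "001": 2,
    "0001": 3,
}

def inverse_binarize(bits):
    """Consume bits from the arithmetic decoder output until a complete
    code word from the table is matched; return the multilevel value
    and the number of bits consumed."""
    word = ""
    for i, b in enumerate(bits):
        word += b
        if word in BINARIZATION_TABLE:
            return BINARIZATION_TABLE[word], i + 1
    raise ValueError("no code word matched")
```

Because the code words form a prefix code, the match is unambiguous and the remaining bits can be fed back into decoding the next syntax element.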
  10.  The moving picture decoding apparatus according to claim 7, wherein the variable length decoding unit includes a binarization table update unit that updates the binarization table based on a binarization table update flag decoded from header information multiplexed into the bitstream.
  11.  The moving picture decoding apparatus according to claim 8, wherein, in the variable length decoding unit, when there are a plurality of types of binarization tables, the binarization table update unit updates a predetermined binarization table among the plurality of binarization tables based on binarization table update identification information decoded from header information multiplexed into the bitstream.
  12.  The moving picture decoding apparatus according to claim 9, wherein, in the variable length decoding unit, when there are a plurality of types of binarization tables, the binarization table update unit updates a predetermined binarization table among the plurality of binarization tables based on binarization table update identification information decoded from header information multiplexed into the bitstream.
  13.  The moving picture decoding apparatus according to claim 10, wherein, in the variable length decoding unit, when there are a plurality of types of binarization tables, the binarization table update unit updates a predetermined binarization table among the plurality of binarization tables based on binarization table update identification information decoded from header information multiplexed into the bitstream.
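The table-update signalling in claims 10 through 13 amounts to: an update flag decoded from header information triggers the update, and when several binarization tables exist, identification information selects which one to replace. The sketch below is a hypothetical reading of that mechanism; the function name, table ids, and table contents are all illustrative and not taken from the patent.

```python
# Hypothetical sketch of the claims 10-13 update mechanism: a registry of
# binarization tables keyed by an identification code, updated when the
# header-decoded flag is set. Contents are invented for the example.

tables = {
    0: {"1": 0, "01": 1, "001": 2},   # e.g. a coding-mode table
    1: {"1": 0, "01": 1},             # e.g. a compression-parameter table
}

def apply_header_update(update_flag, table_id, new_table):
    """Replace the identified binarization table when the update flag,
    decoded from the multiplexed header information, is set."""
    if not update_flag:
        return  # flag absent: all tables stay as they are
    if table_id not in tables:
        raise ValueError("unknown binarization table id")
    tables[table_id] = dict(new_table)
```

An encoder and decoder that apply the same update at the same header boundary stay synchronized on the binary-to-multilevel mapping.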
PCT/JP2011/001955 2010-04-09 2011-03-31 Video coding device and video decoding device WO2011125314A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW100111976A TW201143459A (en) 2010-04-09 2011-04-07 Apparatus for encoding dynamic image and apparatus for decoding dynamic image

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010090536A JP2013131786A (en) 2010-04-09 2010-04-09 Video encoder and video decoder
JP2010-090536 2010-04-09

Publications (1)

Publication Number Publication Date
WO2011125314A1 true WO2011125314A1 (en) 2011-10-13

Family

ID=44762285

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/001955 WO2011125314A1 (en) 2010-04-09 2011-03-31 Video coding device and video decoding device

Country Status (3)

Country Link
JP (1) JP2013131786A (en)
TW (1) TW201143459A (en)
WO (1) WO2011125314A1 (en)


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2513110A (en) * 2013-04-08 2014-10-22 Sony Corp Data encoding and decoding
JP6717562B2 (en) * 2015-02-06 2020-07-01 Panasonic Intellectual Property Corporation of America Image coding method, image decoding method, image coding device, and image decoding device
JP7352364B2 (en) * 2019-03-22 2023-09-28 日本放送協会 Video encoding device, video decoding device and program

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008042942A (en) * 2007-09-18 2008-02-21 Sony Corp Coding method
JP2008104205A (en) * 2007-10-29 2008-05-01 Sony Corp Encoding device and method
WO2008123254A1 (en) * 2007-03-29 2008-10-16 Kabushiki Kaisha Toshiba Image encoding method, device, image decoding method, and device
JP2009081728A (en) * 2007-09-26 2009-04-16 Canon Inc Moving image coding apparatus, method of controlling moving image coding apparatus, and computer program


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103339937A (en) * 2011-12-21 2013-10-02 松下电器产业株式会社 Method for encoding image, image-encoding device, method for decoding image, image-decoding device, and image encoding/decoding device
EP3182710A3 (en) * 2015-12-18 2017-06-28 BlackBerry Limited Adaptive binarizer selection for image and video coding
EP3182705A3 (en) * 2015-12-18 2017-06-28 BlackBerry Limited Binarizer selection for image and video coding
CN107018426A (en) * 2015-12-18 2017-08-04 黑莓有限公司 Binarizer for image and video coding is selected
US10142635B2 (en) 2015-12-18 2018-11-27 Blackberry Limited Adaptive binarizer selection for image and video coding

Also Published As

Publication number Publication date
JP2013131786A (en) 2013-07-04
TW201143459A (en) 2011-12-01

Similar Documents

Publication Publication Date Title
JP6605063B2 (en) Moving picture decoding apparatus, moving picture decoding method, moving picture encoding apparatus, and moving picture encoding method
JP6347860B2 (en) Image decoding apparatus, image decoding method, image encoding apparatus, and image encoding method
WO2011125256A1 (en) Image encoding method and image decoding method
WO2011125314A1 (en) Video coding device and video decoding device
JP2011223319A (en) Moving image encoding apparatus and moving image decoding apparatus
JP5367161B2 (en) Image encoding method, apparatus, and program
JP5649701B2 (en) Image decoding method, apparatus, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 11765216; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 11765216; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)