WO2012176387A1 - Video encoding device, video decoding device, video encoding method and video decoding method


Info

Publication number: WO2012176387A1
Application number: PCT/JP2012/003679
Authority: WIPO (PCT)
Prior art keywords: prediction, image, unit, coding, block
Other languages: French (fr), Japanese (ja)
Inventors: 杉本 和夫, 峯澤 彰, 伊谷 裕介, 服部 亮史, 関口 俊一
Original Assignee: 三菱電機株式会社 (application filed by 三菱電機株式会社)
Publication of WO2012176387A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10: using adaptive coding
    • H04N 19/169: adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/186: the unit being a colour or a chrominance component
    • H04N 19/50: using predictive coding
    • H04N 19/593: predictive coding involving spatial prediction techniques

Definitions

  • The present invention relates to a moving image encoding apparatus and a moving image encoding method for encoding a moving image with high efficiency, and to a moving image decoding apparatus and a moving image decoding method for decoding a moving image that has been encoded with high efficiency.
  • FIG. 18 is a configuration diagram illustrating a luminance correlation use color difference signal prediction unit that performs prediction processing in the Intra_LM mode.
  • the luminance correlation use color difference signal prediction unit in FIG. 18 includes a luminance reference pixel reduction unit 931, a correlation calculation unit 932, and a color difference prediction image generation unit 933.
  • For a YUV 4:2:0 signal, the luminance reference pixel reduction unit 931 generates the reduced luminance reference pixel Rec'_L from the decoded luminance signal (the decoded pixel values in the block corresponding to the prediction block of the color difference signal) and from the decoded luminance signal adjacent to the upper end and the left end of that block.
  • The reduced luminance reference pixel Rec'_L is obtained by subsampling the luminance reference pixels Rec_L so that they have the same phase as the pixels of the color difference signal of the YUV 4:2:0 signal: in the vertical direction the average of every two pixels is taken, while in the horizontal direction only the even columns are kept.
  • When the luminance reference pixel reduction unit 931 generates the reduced luminance reference pixel Rec'_L, the correlation calculation unit 932 calculates the correlation parameters α and β used for prediction, as shown in the following equations (1) and (2), using the reduced luminance reference pixel Rec'_L and the color difference reference pixel Rec_C, which is the decoded pixel value of the color difference signal adjacent to the upper end and the left end of the prediction block of the color difference signal. In equations (1) and (2), I is twice the number of pixels on one side of the prediction block of the color difference signal to be processed.
  • The color difference predicted image generation unit 933 then generates the color difference predicted image Pred_C using the correlation parameters α and β and the reduced luminance reference pixel Rec'_L, as shown in the following equation (3).
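  • The equations themselves are not reproduced in this extract. A reconstruction of the standard Intra_LM least-squares formulation, consistent with the surrounding description (the index i runs over the I reference-pixel pairs adjacent to the prediction block), is:

      \alpha = \frac{I \sum_i \mathrm{Rec}'_L(i)\,\mathrm{Rec}_C(i) - \sum_i \mathrm{Rec}'_L(i) \sum_i \mathrm{Rec}_C(i)}{I \sum_i \mathrm{Rec}'_L(i)^2 - \bigl( \sum_i \mathrm{Rec}'_L(i) \bigr)^2} \qquad (1)

      \beta = \frac{\sum_i \mathrm{Rec}_C(i) - \alpha \sum_i \mathrm{Rec}'_L(i)}{I} \qquad (2)

      \mathrm{Pred}_C(x, y) = \alpha \cdot \mathrm{Rec}'_L(x, y) + \beta \qquad (3)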
  • JCT-VC Joint Collaborative Team on Video Coding
  • Since the conventional moving image coding apparatus is configured as described above, a change in the color difference signal that cannot be predicted from the adjacent pixels can be predicted with high accuracy by using the correlation with the luminance signal.
  • However, while a strong low-pass filter is applied in the vertical direction of the screen by taking the average of every two pixels, the signal is simply subsampled every two pixels in the horizontal direction of the screen, so aliasing may occur in the horizontal direction of the reduced luminance reference pixel Rec'_L.
  • Since the color difference predicted image Pred_C is obtained by scalar multiplication of the reduced luminance reference pixel Rec'_L followed by the addition of an offset, the aliasing of the reduced luminance reference pixel Rec'_L is reflected in the color difference predicted image and the prediction error is amplified; as a result, there has been a problem that the improvement in coding efficiency is limited.
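  • For illustration, a minimal Python sketch of the conventional reduction described above follows (the array layout and the function name are illustrative assumptions, not the patent's notation); the unfiltered horizontal decimation is the step that can fold high-frequency content back as aliasing.

      import numpy as np

      def reduce_luma_conventional(rec_l: np.ndarray) -> np.ndarray:
          """Conventional Intra_LM reduction for YUV 4:2:0 (sketch).

          Vertically: average each pair of rows (a strong low-pass filter).
          Horizontally: keep only the even columns (plain subsampling, no
          filter), which is where aliasing can appear.
          """
          vert = (rec_l[0::2, :].astype(np.int32) + rec_l[1::2, :] + 1) >> 1
          return vert[:, 0::2].astype(rec_l.dtype)

      # Example: an 8x8 decoded luma block is reduced to the 4x4 chroma grid.
      luma = np.arange(64, dtype=np.uint8).reshape(8, 8)
      print(reduce_luma_conventional(luma).shape)  # (4, 4)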
  • The present invention has been made to solve the above-described problems, and an object thereof is to obtain a moving picture coding apparatus and a moving picture coding method capable of suppressing the occurrence of aliasing and improving the coding efficiency. Another object of the present invention is to obtain a moving picture decoding apparatus and a moving picture decoding method that can accurately decode a moving picture from encoded data whose encoding efficiency has been improved.
  • In the moving picture coding apparatus according to the present invention, predicted image generating means generates a predicted image by performing, on each hierarchically divided coding block, a prediction process corresponding to the coding mode assigned to that coding block.
  • The predicted image generating means includes a luminance component intra prediction unit that performs intra-frame prediction of the luminance component in the hierarchically divided coding block to generate a predicted image for the luminance component, and a color difference component intra prediction unit that smooths the luminance component of a plurality of pixels adjacent in the horizontal and vertical directions among the pixels constituting the coding block, calculates a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component, and generates a predicted image for the color difference component using the correlation parameter and the smoothed luminance component.
  • That is, since the predicted image generating means is composed of luminance component intra prediction means that performs intra-frame prediction of the luminance component in the coding block divided by the block dividing means to generate a predicted image for the luminance component, and a color difference component intra prediction unit that smooths the luminance component of a plurality of pixels adjacent in the horizontal and vertical directions among the pixels constituting the coding block, calculates a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component, and generates a predicted image for the color difference component using the calculated correlation parameter and the smoothed luminance component, the occurrence of aliasing in the smoothed luminance component is suppressed and the coding efficiency can be increased.
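  • A minimal Python sketch of this idea follows. It assumes a [1, 2, 1]/4 horizontal smoothing combined with a vertical two-pixel average before decimation; the exact filter taps are not given in this extract, so the kernel, the array layout and the names are illustrative assumptions.

      import numpy as np

      def reduce_luma_smoothed(rec_l: np.ndarray) -> np.ndarray:
          """Reduce a (2H x 2W) decoded luma block to the (H x W) chroma grid,
          smoothing both horizontally and vertically before decimation."""
          x = rec_l.astype(np.int32)
          # Horizontal [1, 2, 1]/4 smoothing (edge samples replicated).
          padded = np.pad(x, ((0, 0), (1, 1)), mode='edge')
          horiz = (padded[:, :-2] + 2 * padded[:, 1:-1] + padded[:, 2:] + 2) >> 2
          # Vertical two-pixel average, then keep every second column.
          vert = (horiz[0::2, :] + horiz[1::2, :] + 1) >> 1
          return vert[:, 0::2]

      luma = np.arange(64, dtype=np.uint8).reshape(8, 8)
      print(reduce_luma_smoothed(luma).shape)  # (4, 4), with no unfiltered decimation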
  • FIG. 1 is a block diagram showing a moving picture coding apparatus according to Embodiment 1 of the present invention.
  • an encoding control unit 1 determines the maximum size of an encoding block that is a processing unit when intra prediction processing (intraframe prediction processing) or motion compensation prediction processing (interframe prediction processing) is performed. Then, a process of determining the upper limit number of layers when the encoding block of the maximum size is hierarchically divided is performed.
  • The encoding control unit 1 also selects, for each hierarchically divided coding block, a suitable coding mode from one or more available coding modes (one or more intra coding modes and one or more inter coding modes).
  • Furthermore, the encoding control unit 1 determines, for each coding block, the quantization parameter and transform block size used when the difference image is compressed, and the intra prediction parameter or inter prediction parameter used when the prediction process is performed.
  • the quantization parameter and the transform block size are included in the prediction difference coding parameter, and are output to the transform / quantization unit 7, the inverse quantization / inverse transform unit 8, the variable length coding unit 13, and the like.
  • The encoding control unit 1 constitutes encoding control means.
  • The block dividing unit 2 divides the input image into coding blocks of the maximum size determined by the encoding control unit 1, and hierarchically divides those coding blocks until the upper limit number of layers determined by the encoding control unit 1 is reached.
  • The block dividing unit 2 constitutes block dividing means. If the coding mode selected by the encoding control unit 1 is an intra coding mode, the changeover switch 3 outputs the coding block divided by the block dividing unit 2 to the intra prediction unit 4; if the coding mode selected by the encoding control unit 1 is an inter coding mode, it outputs the coding block divided by the block dividing unit 2 to the motion compensated prediction unit 5.
  • When the intra prediction unit 4 receives a coding block divided by the block dividing unit 2 from the changeover switch 3, it generates a predicted image by performing intra prediction processing on the coding block, based on the intra prediction parameter output from the encoding control unit 1, using the decoded pixels adjacent to the coding block that are stored in the intra prediction memory 10. That is, the intra prediction unit 4 performs intra-frame prediction of the luminance component of the coding block divided by the block dividing unit 2 to generate a predicted image for the luminance component.
  • For the color difference component, either intra-frame prediction of the color difference component in the coding block divided by the block dividing unit 2 is performed to generate a predicted image for the color difference component, or the luminance component of a plurality of pixels adjacent in the horizontal and vertical directions among the pixels constituting the coding block is smoothed, a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component is calculated, and a predicted image for the color difference component is generated using the correlation parameter and the smoothed luminance component.
  • The motion compensated prediction unit 5 generates a predicted image by performing motion-compensated prediction processing on the coding block, based on the inter prediction parameter output from the encoding control unit 1, using reference images of one or more frames stored in the motion compensated prediction frame memory 12.
  • the changeover switch 3, the intra prediction unit 4, and the motion compensation prediction unit 5 constitute a predicted image generation unit.
  • The subtracting unit 6 performs a process of generating a difference image by subtracting the predicted image generated by the intra prediction unit 4 or the motion compensated prediction unit 5 from the coding block divided by the block dividing unit 2.
  • the subtracting unit 6 constitutes a difference image generating unit.
  • The transform/quantization unit 7 performs transform processing of the difference image generated by the subtracting unit 6 (for example, DCT (discrete cosine transform), DST (discrete sine transform), or an orthogonal transform such as a KL transform whose bases have been designed in advance for a specific learning sequence) in units of the transform block size included in the prediction difference encoding parameter output from the encoding control unit 1, quantizes the transform coefficients using the quantization parameter included in that prediction difference encoding parameter, and outputs the quantized transform coefficients as compressed data of the difference image.
  • the transform / quantization unit 7 constitutes an image compression unit.
  • The inverse quantization/inverse transform unit 8 inversely quantizes the compressed data output from the transform/quantization unit 7 using the quantization parameter included in the prediction difference encoding parameter output from the encoding control unit 1, performs inverse transform processing of the inversely quantized compressed data (for example, inverse DCT (inverse discrete cosine transform), inverse DST (inverse discrete sine transform), or an inverse transform such as an inverse KL transform) in units of the transform block size included in the prediction difference encoding parameter, and outputs the compressed data after the inverse transform processing as a locally decoded prediction difference signal.
  • the adding unit 9 adds the local decoded prediction difference signal output from the inverse quantization / inverse transform unit 8 and the prediction signal indicating the prediction image generated by the intra prediction unit 4 or the motion compensation prediction unit 5 to thereby perform local decoding. A process of generating a locally decoded image signal indicating an image is performed.
  • the intra prediction memory 10 is a recording medium such as a RAM that stores a local decoded image indicated by the local decoded image signal generated by the adding unit 9 as an image used in the next intra prediction process by the intra prediction unit 4.
  • The loop filter unit 11 compensates for the coding distortion included in the locally decoded image signal generated by the adding unit 9, and performs a process of outputting the locally decoded image indicated by the locally decoded image signal after coding distortion compensation to the motion compensated prediction frame memory 12 as a reference image.
  • the motion compensated prediction frame memory 12 is a recording medium such as a RAM that stores a locally decoded image after the filtering process by the loop filter unit 11 as a reference image used in the next motion compensated prediction process by the motion compensated prediction unit 5.
  • The variable length encoding unit 13 performs variable-length encoding of the compressed data output from the transform/quantization unit 7, the coding mode and prediction difference encoding parameter output from the encoding control unit 1, and the intra prediction parameter output from the intra prediction unit 4 or the inter prediction parameter output from the motion compensated prediction unit 5, and performs a process of generating a bitstream in which the encoded data of the compressed data, the coding mode, the prediction difference encoding parameter, and the intra prediction parameter / inter prediction parameter are multiplexed.
  • the variable length encoding unit 13 constitutes variable length encoding means.
  • It is assumed that the encoding control unit 1, block dividing unit 2, changeover switch 3, intra prediction unit 4, motion compensated prediction unit 5, subtracting unit 6, transform/quantization unit 7, inverse quantization/inverse transform unit 8, adding unit 9, loop filter unit 11, and variable length encoding unit 13, which are the components of the moving image encoding apparatus, are each configured with dedicated hardware (for example, a semiconductor integrated circuit on which a CPU is mounted, or a one-chip microcomputer).
  • FIG. 2 is a flowchart showing the processing contents of the moving picture coding apparatus according to Embodiment 1 of the present invention.
  • FIG. 3 is a block diagram showing the intra prediction unit 4 of the moving picture coding apparatus according to Embodiment 1 of the present invention.
  • The luminance signal intra prediction unit 21 performs intra-frame prediction of the luminance component in the coding block divided by the block dividing unit 2 and generates a predicted image for the luminance component. That is, the luminance signal intra prediction unit 21 refers to the decoded luminance reference pixels adjacent to the coding block that are stored in the intra prediction memory 10, and generates the predicted image for the luminance component by performing intra-frame prediction based on the intra prediction parameter output from the encoding control unit 1.
  • the luminance signal intra prediction unit 21 constitutes luminance component intra prediction means.
  • The changeover switch 22 selects the reference pixels used for prediction: if the parameter indicating the intra coding mode of the color difference signal indicates the directional prediction mode, it outputs the reference pixels used for prediction to the color difference signal directional intra prediction unit 23, and if the parameter indicates the smoothed luminance correlation utilization color difference signal prediction mode, it outputs the reference pixels used for prediction to the luminance correlation utilization color difference signal prediction unit 24.
  • the chrominance signal directivity intra prediction unit 23 refers to the decoded chrominance reference pixel adjacent to the encoded block received from the changeover switch 22 and determines the chrominance based on the intra prediction parameter output from the encoding control unit 1. By performing intra-frame prediction of components, a process of generating a predicted image for the color difference component is performed.
  • Using, among the decoded pixels received from the changeover switch 22, the decoded luminance reference pixels and color difference reference pixels adjacent to the coding block and the decoded luminance reference pixels within the coding block, the luminance correlation utilization color difference signal prediction unit 24 smooths the luminance component of a plurality of pixels adjacent in the horizontal and vertical directions, calculates a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component, and performs a process of generating a predicted image for the color difference component using the correlation parameter and the smoothed luminance component.
  • the changeover switch 22, the color difference signal directivity intra prediction unit 23, and the luminance correlation use color difference signal prediction unit 24 constitute a color difference component intra prediction unit.
  • FIG. 4 is a block diagram showing the luminance correlation utilizing color difference signal prediction unit 24 of the moving picture coding apparatus according to Embodiment 1 of the present invention.
  • The smoothed luminance reference pixel reducing unit 31 generates a reduced luminance reference pixel Rec'_L by smoothing a plurality of luminance reference pixels adjacent in the horizontal and vertical directions among the decoded luminance reference pixels constituting the coding block stored in the intra prediction memory 10.
  • The correlation calculation unit 32 performs a process of calculating the correlation parameters α and β indicating the correlation between the luminance component and the color difference component, using the color difference reference pixels stored in the intra prediction memory 10 and the reduced luminance reference pixel Rec'_L generated by the smoothed luminance reference pixel reducing unit 31.
  • The color difference predicted image generation unit 33 performs a process of generating a predicted image for the color difference component using the correlation parameters α and β calculated by the correlation calculation unit 32 and the reduced luminance reference pixel Rec'_L generated by the smoothed luminance reference pixel reducing unit 31.
  • FIG. 5 is a flowchart showing the processing contents of the intra prediction unit 4 of the moving picture coding apparatus according to Embodiment 1 of the present invention.
  • FIG. 6 is a flowchart showing the processing contents of the luminance correlation utilizing color difference signal prediction unit 24 of the moving picture coding apparatus according to Embodiment 1 of the present invention.
  • FIG. 7 is a block diagram showing a moving picture decoding apparatus according to Embodiment 1 of the present invention.
  • The variable length decoding unit 41 determines the maximum size of the coding block that serves as a processing unit when intra prediction processing or motion-compensated prediction processing is performed, and the upper limit number of layers when the coding block of the maximum size is hierarchically divided, and identifies, from the coded data multiplexed in the bitstream, the coded data related to the maximum-size coding block and to each hierarchically divided coding block.
  • variable length decoding unit 41 constitutes variable length decoding means.
  • The changeover switch 42 outputs the intra prediction parameter output from the variable length decoding unit 41 to the intra prediction unit 43 when the coding mode related to the coding block output from the variable length decoding unit 41 is an intra coding mode, and performs a process of outputting the inter prediction parameter output from the variable length decoding unit 41 to the motion compensation unit 44 when the coding mode is an inter coding mode.
  • The intra prediction unit 43 performs a process of generating a predicted image by performing intra-frame prediction processing on the coding block, based on the intra prediction parameter output from the changeover switch 42, using the decoded pixels adjacent to the coding block that are stored in the intra prediction memory 47. That is, the intra prediction unit 43 performs intra-frame prediction of the luminance component in the coding block output from the variable length decoding unit 41 to generate a predicted image for the luminance component.
  • When the coding mode output from the variable length decoding unit 41 is the directional prediction mode within the intra coding modes, intra-frame prediction of the color difference component in the coding block is performed to generate a predicted image for the color difference component.
  • When the coding mode output from the variable length decoding unit 41 is the smoothed luminance correlation utilization color difference signal prediction mode within the intra coding modes, the luminance component of a plurality of pixels adjacent in the horizontal and vertical directions among the pixels constituting the coding block is smoothed, a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component is calculated, and a predicted image for the color difference component is generated using the correlation parameter and the smoothed luminance component.
  • the motion compensation unit 44 performs a motion compensation prediction process on the encoded block based on the inter prediction parameter output from the changeover switch 42 using one or more reference images stored in the motion compensation prediction frame memory 49. Thus, a process for generating a predicted image is performed.
  • the changeover switch 42, the intra prediction unit 43, and the motion compensation unit 44 constitute a predicted image generation unit.
  • The inverse quantization/inverse transform unit 45 inversely quantizes the compressed data related to the coding block output from the variable length decoding unit 41 using the quantization parameter included in the prediction difference encoding parameter output from the variable length decoding unit 41, performs inverse transform processing (for example, inverse DCT (inverse discrete cosine transform), inverse DST (inverse discrete sine transform), or an inverse transform such as an inverse KL transform) on the inversely quantized compressed data in units of the transform block size included in the prediction difference encoding parameter, and performs a process of outputting the compressed data after the inverse transform processing as a decoded prediction difference signal (a signal indicating the difference image before compression).
  • the addition unit 46 adds the decoded prediction difference signal output from the inverse quantization / inverse conversion unit 45 to the prediction signal indicating the prediction image generated by the intra prediction unit 43 or the motion compensation unit 44, thereby indicating a decoded image. A process of generating a decoded image signal is performed.
  • the adding unit 46 constitutes a decoded image generating unit.
  • the intra prediction memory 47 is a recording medium such as a RAM that stores a decoded image indicated by the decoded image signal generated by the addition unit 46 as an image used in the next intra prediction process by the intra prediction unit 43.
  • the loop filter unit 48 compensates for the coding distortion included in the decoded image signal generated by the adding unit 46, and uses the decoded image indicated by the decoded image signal after the coding distortion compensation as a reference image as a motion compensated prediction frame memory 49. And a process of outputting the decoded image as a reproduced image to the outside.
  • the motion compensated prediction frame memory 49 is a recording medium such as a RAM that stores a decoded image after the filtering process by the loop filter unit 48 as a reference image to be used by the motion compensation unit 44 in the next motion compensation prediction process.
  • It is assumed that the variable length decoding unit 41, changeover switch 42, intra prediction unit 43, motion compensation unit 44, inverse quantization/inverse transform unit 45, adding unit 46, and loop filter unit 48, which are the components of the video decoding device, are each configured with dedicated hardware (for example, a semiconductor integrated circuit on which a CPU is mounted, or a one-chip microcomputer).
  • FIG. 8 is a flowchart showing the processing contents of the moving picture decoding apparatus according to Embodiment 1 of the present invention.
  • FIG. 9 is a block diagram showing the intra prediction unit 43 of the moving picture decoding apparatus according to Embodiment 1 of the present invention.
  • The luminance signal intra prediction unit 51 performs intra-frame prediction of the luminance component in the coding block output from the variable length decoding unit 41 and generates a predicted image for the luminance component. That is, the luminance signal intra prediction unit 51 refers to the decoded luminance reference pixels adjacent to the coding block that are stored in the intra prediction memory 47, and generates the predicted image for the luminance component by performing intra-frame prediction based on the intra prediction parameter output from the variable length decoding unit 41.
  • the luminance signal intra prediction unit 51 constitutes luminance component intra prediction means.
  • The changeover switch 52 selects the reference pixels used for prediction: if the parameter indicating the intra coding mode of the color difference signal indicates the directional prediction mode, it outputs the reference pixels used for prediction to the color difference signal directional intra prediction unit 53, and if the parameter indicates the smoothed luminance correlation utilization color difference signal prediction mode, it performs a process of outputting the reference pixels used for prediction to the luminance correlation utilization color difference signal prediction unit 54.
  • the chrominance signal directivity intra prediction unit 53 refers to the decoded chrominance reference pixel adjacent to the encoded block received from the changeover switch 52, and the chrominance based on the intra prediction parameter output from the variable length decoding unit 41 By performing intra-frame prediction of components, a process of generating a predicted image for the color difference component is performed.
  • Using the decoded luminance reference pixels and color difference reference pixels adjacent to the coding block and the decoded luminance reference pixels within the coding block, the luminance correlation utilization color difference signal prediction unit 54 smooths the luminance component of a plurality of pixels adjacent in the horizontal and vertical directions, calculates a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component, and performs a process of generating a predicted image for the color difference component using the correlation parameter and the smoothed luminance component.
  • the changeover switch 52, the color difference signal directivity intra prediction unit 53, and the luminance correlation utilization color difference signal prediction unit 54 constitute a color difference component intra prediction unit.
  • FIG. 10 is a block diagram showing the luminance correlation utilization color difference signal prediction unit 54 of the moving picture decoding apparatus according to Embodiment 1 of the present invention.
  • The smoothed luminance reference pixel reduction unit 61 generates a reduced luminance reference pixel Rec'_L by smoothing a plurality of luminance reference pixels adjacent in the horizontal and vertical directions among the decoded luminance reference pixels constituting the coding block stored in the intra prediction memory 47.
  • The correlation calculation unit 62 calculates the correlation parameters α and β indicating the correlation between the luminance component and the color difference component, using the color difference reference pixels stored in the intra prediction memory 47 and the reduced luminance reference pixel Rec'_L generated by the smoothed luminance reference pixel reduction unit 61.
  • The color difference predicted image generation unit 63 performs a process of generating a predicted image for the color difference component using the correlation parameters α and β calculated by the correlation calculation unit 62 and the reduced luminance reference pixel Rec'_L generated by the smoothed luminance reference pixel reduction unit 61.
  • The moving picture coding apparatus is characterized in that it adapts to local changes of the video signal in the spatial and temporal directions, divides the video signal into regions of various sizes, and performs intra-frame/inter-frame adaptive coding.
  • In general, a video signal has the characteristic that the complexity of the signal changes locally in space and time. Viewed spatially, a specific video frame may contain both regions with uniform signal characteristics over a wide image area, such as sky or a wall, and regions with complicated texture patterns in small image areas, such as a person or fine texture.
  • the encoding process generates a prediction difference signal with low signal power and entropy by temporal and spatial prediction, and performs a process of reducing the overall code amount.
  • If the prediction parameter used for the prediction process can be applied uniformly to as large an image region as possible, the code amount of the prediction parameter can be reduced. On the other hand, if the same prediction parameter is applied to a large image region of an image signal pattern with large temporal and spatial variation, the prediction error increases, so the code amount of the prediction difference signal cannot be reduced.
  • In order to perform encoding processing adapted to these general characteristics of a video signal, the moving image encoding apparatus according to the first embodiment hierarchically divides the video signal region starting from a predetermined maximum block size, and adopts a structure in which the prediction process and the encoding process of the prediction difference are adapted to each divided region.
  • The video signal to be processed by the moving image coding apparatus may be a color video signal in an arbitrary color space, such as a YUV signal composed of a luminance signal and two color difference signals or an RGB signal output from a digital image sensor, or an arbitrary video signal, such as a monochrome image signal or an infrared image signal, in which each video frame is composed of a horizontal and vertical two-dimensional sequence of digital samples (pixels).
  • the gradation of each pixel may be 8 bits, or may be gradation such as 10 bits or 12 bits.
  • the input video signal is a YUV signal unless otherwise specified.
  • The processing data unit corresponding to each frame of the video is referred to as a “picture”.
  • In the first embodiment, a “picture” is described as a progressively scanned video frame signal.
  • the “picture” may be a field image signal which is a unit constituting a video frame.
  • the encoding control unit 1 determines the maximum size of an encoding block that is a processing unit when intra prediction processing (intraframe prediction processing) or motion compensation prediction processing (interframe prediction processing) is performed, The upper limit number of hierarchies when the coding block of the maximum size is divided hierarchically is determined (step ST1 in FIG. 2).
  • As a method of determining the maximum size of the coding block, for example, a method of setting a size corresponding to the resolution of the input image uniformly for all pictures can be considered.
  • Alternatively, a method can be considered in which the difference in complexity of the local motion of the input image is quantified as a parameter, and the maximum size is set to a small value for pictures with intense motion and to a large value for pictures with little motion.
  • As for the upper limit on the number of layers, for example, a method is conceivable in which the number of layers is made deeper the more the input image moves, so that finer motion can be detected, and shallower the less the input image moves (a purely illustrative sketch of such a heuristic follows).
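  • The thresholds, the motion-activity measure and the function name below are assumptions for illustration only, not values taken from the patent.

      def choose_max_block_and_depth(motion_activity: float) -> tuple[int, int]:
          """Pick a maximum coding-block size and an upper limit on the number
          of hierarchy layers from a scalar motion-activity measure."""
          if motion_activity > 0.5:    # intense motion: small max size, deep hierarchy
              return 32, 4
          elif motion_activity > 0.2:  # moderate motion
              return 64, 3
          else:                        # little motion: large max size, shallow hierarchy
              return 128, 2

      max_size, max_depth = choose_max_block_and_depth(0.35)  # -> (64, 3)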
  • The encoding control unit 1 selects, for each hierarchically divided coding block, a coding mode from the available coding modes (M types of intra coding modes and N types of inter coding modes) (step ST2).
  • M types of intra coding modes prepared in advance will be described later.
  • Each coding block is further divided into partitions, as described later. Since the coding mode selection method used by the encoding control unit 1 is a known technique, a detailed description is omitted; for example, there is a method in which the encoding process is carried out for the coding block using each available coding mode, the coding efficiency is verified, and the coding mode with the best coding efficiency is selected from among the available coding modes (a sketch of such exhaustive selection follows).
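  • A minimal sketch of that exhaustive selection; the cost function and the names are illustrative assumptions (practical encoders typically use a rate-distortion cost).

      def select_coding_mode(coding_block, available_modes, encode_and_measure):
          """Try every available coding mode on the block and keep the one with
          the best (lowest) cost returned by encode_and_measure(block, mode)."""
          best_mode, best_cost = None, float('inf')
          for mode in available_modes:
              cost = encode_and_measure(coding_block, mode)  # e.g. distortion + lambda * bits
              if cost < best_cost:
                  best_mode, best_cost = mode, cost
          return best_mode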
  • The encoding control unit 1 also determines, for each partition included in each coding block, the quantization parameter and transform block size used when the difference image is compressed, and the intra prediction parameter or inter prediction parameter used when the prediction process is performed.
  • the encoding control unit 1 outputs the prediction difference encoding parameter including the quantization parameter and the transform block size to the transform / quantization unit 7, the inverse quantization / inverse transform unit 8, and the variable length encoding unit 13.
  • The prediction difference encoding parameter is also output to the intra prediction unit 4 as needed.
  • FIG. 11 is an explanatory diagram showing a state in which the maximum-size encoded block is hierarchically divided into a plurality of encoded blocks.
  • The coding block of the maximum size is the coding block B^0 of the 0th layer, and its luminance component has the size (L^0, M^0).
  • A coding block B^n is obtained by hierarchically dividing the maximum-size coding block B^0, taken as the starting point of a quadtree structure, down to a predetermined depth that is determined separately.
  • At depth n, the coding block B^n is an image area of size (L^n, M^n); in the following, the size of the coding block B^n is defined as the size (L^n, M^n) of its luminance component.
  • The coding mode m(B^n) may be configured so that an individual mode is used for each color component, but in the following description, unless otherwise specified, it is assumed to indicate the coding mode for the luminance component of a coding block of a YUV 4:2:0 format signal.
  • The coding mode m(B^n) includes one or more intra coding modes (collectively “INTRA”) and one or more inter coding modes (collectively “INTER”); as described above, the encoding control unit 1 selects, for the coding block B^n, the coding mode with the highest coding efficiency from all the coding modes available for the picture, or from a subset thereof.
  • The coding block B^n is further divided into one or more prediction processing units (partitions).
  • A partition belonging to the coding block B^n is denoted P_i^n (where i is the partition number in the nth layer).
  • FIG. 12 is an explanatory diagram showing the partitions P_i^n belonging to the coding block B^n. How the partitions P_i^n belonging to the coding block B^n are divided is included as information in the coding mode m(B^n). All partitions P_i^n are subjected to prediction processing according to the coding mode m(B^n), but an individual prediction parameter can be selected for each partition P_i^n (a sketch of the quadtree division follows).
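  • A minimal Python sketch of the quadtree division described above; the split decision is left to a caller-supplied predicate, and the names are illustrative assumptions.

      def split_quadtree(x, y, size, depth, max_depth, should_split, leaves):
          """Recursively divide a maximum-size coding block B^0 into coding
          blocks B^n; each level halves the block side, and the leaves are the
          coding blocks to which a coding mode m(B^n) is assigned."""
          if depth == max_depth or not should_split(x, y, size, depth):
              leaves.append((x, y, size, depth))
              return
          half = size // 2
          for dy in (0, half):
              for dx in (0, half):
                  split_quadtree(x + dx, y + dy, half, depth + 1,
                                 max_depth, should_split, leaves)

      leaves = []
      split_quadtree(0, 0, 64, 0, 3, lambda x, y, s, d: s > 16, leaves)
      # 16 leaf coding blocks of size 16x16 at depth 2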
  • the encoding control unit 1 generates a block division state as illustrated in FIG. 13 for the encoding block of the maximum size, and specifies the encoding block Bn .
  • the shaded area in FIG. 13 (a) shows the distribution of the partitions after the division
  • FIG. 13(b) shows, as a quadtree graph, the situation in which the coding mode m(B^n) is assigned to each partition after the hierarchical division.
  • nodes surrounded by squares indicate nodes (encoded blocks B n ) to which the encoding mode m (B n ) is assigned.
  • When the encoding control unit 1 selects an intra coding mode (m(B^n) ∈ INTRA), the changeover switch 3 outputs the partition P_i^n belonging to the coding block B^n divided by the block dividing unit 2 to the intra prediction unit 4.
  • When the encoding control unit 1 selects an inter coding mode (m(B^n) ∈ INTER), the partition P_i^n belonging to the coding block B^n is output to the motion compensated prediction unit 5.
  • The intra prediction unit 4 generates an intra predicted image (P_i^n) by performing intra prediction processing for each partition P_i^n based on the intra prediction parameter output from the encoding control unit 1 (step ST5).
  • Here, P_i^n denotes a partition, and (P_i^n) denotes the predicted image of the partition P_i^n.
  • The intra prediction parameters used to generate the intra predicted image (P_i^n) are output to the variable length coding unit 13 and multiplexed into the bitstream, because the moving picture decoding apparatus also needs to generate exactly the same intra predicted image (P_i^n).
  • The number of intra prediction directions that can be selected as the intra prediction parameter may be configured to differ depending on the size of the block to be processed. Since the efficiency of intra prediction decreases for large partitions, the number of selectable intra prediction directions can be reduced for them and increased for small partitions; for example, 34 directions may be used for a 4×4 or 8×8 pixel partition, 17 directions for a 16×16 pixel partition, and 9 directions for a 32×32 pixel partition (see the lookup sketch below).
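  • The same example counts expressed as a lookup table (the mapping is only the example given in the text, not a fixed rule):

      # Example numbers from the text: more selectable directions for small partitions.
      INTRA_DIRECTIONS_BY_SIZE = {
          4: 34,    # 4x4 partition
          8: 34,    # 8x8 partition
          16: 17,   # 16x16 partition
          32: 9,    # 32x32 partition
      }

      def selectable_directions(partition_size: int) -> int:
          return INTRA_DIRECTIONS_BY_SIZE.get(partition_size, 9)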
  • When the motion compensated prediction unit 5 receives a partition P_i^n from the changeover switch 3, it generates an inter predicted image (P_i^n) by performing inter prediction processing for each partition P_i^n based on the inter prediction parameter determined by the encoding control unit 1 (step ST6). That is, the motion compensated prediction unit 5 uses reference images of one or more frames stored in the motion compensated prediction frame memory 12 and, based on the inter prediction parameter output from the encoding control unit 1, generates the inter predicted image (P_i^n) by performing motion-compensated prediction processing for the partition P_i^n.
  • The inter prediction parameters used to generate the inter predicted image (P_i^n) are output to the variable length coding unit 13 and multiplexed into the bitstream, because the moving picture decoding apparatus also needs to generate exactly the same inter predicted image (P_i^n).
  • When the subtracting unit 6 receives the predicted image (P_i^n) from the intra prediction unit 4 or the motion compensated prediction unit 5, it generates a prediction difference signal e_i^n indicating the difference image by subtracting the predicted image (P_i^n) from the partition P_i^n belonging to the coding block B^n divided by the block dividing unit 2 (step ST7).
  • When the transform/quantization unit 7 receives the prediction difference signal e_i^n from the subtracting unit 6, it performs transform processing of the signal e_i^n (for example, DCT (discrete cosine transform), DST (discrete sine transform), or an orthogonal transform such as a KL transform whose bases have been designed in advance for a specific learning sequence) in units of the transform block size included in the prediction difference encoding parameter output from the encoding control unit 1, quantizes the transform coefficients using the quantization parameter included in the prediction difference encoding parameter, and outputs the quantized transform coefficients, which are the compressed data of the difference image, to the inverse quantization/inverse transform unit 8 and the variable length coding unit 13 (step ST8).
  • When the inverse quantization/inverse transform unit 8 receives the compressed data from the transform/quantization unit 7, it inversely quantizes the compressed data using the quantization parameter included in the prediction difference encoding parameter output from the encoding control unit 1, performs inverse transform processing (for example, inverse DCT (inverse discrete cosine transform), inverse DST (inverse discrete sine transform), or an inverse transform such as an inverse KL transform) on the inversely quantized compressed data in units of the transform block size included in the prediction difference encoding parameter, and outputs the result as a locally decoded prediction difference signal.
  • Upon receiving the locally decoded prediction difference signal from the inverse quantization/inverse transform unit 8, the adding unit 9 adds it to the prediction signal indicating the predicted image (P_i^n) generated by the intra prediction unit 4 or the motion compensated prediction unit 5, thereby generating a locally decoded image signal indicating a locally decoded partition image, or a locally decoded coding block image as a collection of such partition images (hereinafter referred to as a “locally decoded image”).
  • The locally decoded image signal is output to the loop filter unit 11 (step ST10).
  • the intra prediction memory 10 stores the local decoded image for use in intra prediction.
  • When the loop filter unit 11 receives the locally decoded image signal from the adding unit 9, it compensates for the encoding distortion included in the locally decoded image signal, and stores the locally decoded image indicated by the locally decoded image signal after encoding distortion compensation in the motion compensated prediction frame memory 12 as a reference image (step ST11).
  • The filtering process by the loop filter unit 11 may be performed for each maximum coding block or each individual coding block of the input locally decoded image signal, or it may be performed for one screen at a time after the locally decoded image signals corresponding to the macroblocks of one screen have been input.
  • steps ST4 to ST10 are repeated until the processes for the partitions P i n belonging to all the encoded blocks B n divided by the block dividing unit 2 are completed (step ST12).
  • The variable length coding unit 13 performs variable-length coding of the compressed data output from the transform/quantization unit 7, the coding mode and prediction difference coding parameter output from the encoding control unit 1, and the intra prediction parameter output from the intra prediction unit 4 or the inter prediction parameter output from the motion compensated prediction unit 5, and generates a bitstream in which the coded data of the compressed data, the coding mode, the prediction difference coding parameter, and the intra prediction parameter / inter prediction parameter are multiplexed (step ST13).
  • FIG. 14 is an explanatory diagram showing an example of intra prediction parameters (intra prediction mode) that can be selected in each partition P i n belonging to the coding block B n .
  • FIG. 14 shows the prediction direction vectors corresponding to the intra prediction modes; the relative angle between the prediction direction vectors is designed to become smaller as the number of selectable intra prediction modes increases.
  • the luminance signal intra prediction unit 21 of the intra prediction unit 4 performs intra-frame prediction of the luminance component in the encoded block divided by the block dividing unit 2, and generates a prediction image for the luminance component (FIG. 5). Step ST21).
  • the processing content of the luminance signal intra prediction unit 21 will be specifically described.
  • The intra processing by which the luminance signal intra prediction unit 21 of the intra prediction unit 4 generates an intra prediction signal of the luminance signal for a partition P_i^n, based on the intra prediction parameter (intra prediction mode) for the luminance signal, is described below.
  • The size of the partition P_i^n is l_i^n × m_i^n pixels.
  • The pixels of the already-coded partition adjacent above the partition P_i^n ((2 × l_i^n + 1) pixels) and the pixels of the partition adjacent to its left ((2 × m_i^n) pixels) are used as reference pixels for prediction, although the number of pixels used for prediction may be more or less than that shown in FIG. 15.
  • Also, although one row or one column of adjacent pixels is used for prediction in FIG. 15, two or more rows or columns of pixels may be used for prediction.
  • In the directional prediction, the reference pixel used for a prediction target pixel lies on the line obtained by adding k times the prediction direction vector to the position of the prediction target pixel, where k is a positive scalar value.
  • When the reference pixel is at an integer pixel position, that integer pixel is used as the predicted value of the prediction target pixel; when the reference pixel is not at an integer pixel position, an interpolated pixel generated from the integer pixels adjacent to the reference pixel is used as the predicted value. In the example of FIG. 15, since the reference pixel is not located at an integer pixel position, the average of the two pixels adjacent to the reference pixel is used as the predicted value. Note that an interpolated pixel may be generated not only from two adjacent pixels but also from more than two adjacent pixels and used as the predicted value.
  • the luminance signal intra prediction unit 21 generates prediction pixels for all the pixels of the luminance signal in the partition P i n in the same procedure, and outputs the generated intra prediction image (P i n ). As described above, the intra-prediction parameters used for generating the intra-predicted image (P i n ) are output to the variable-length encoding unit 13 for multiplexing into the bitstream.
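  • A minimal sketch of the per-pixel procedure just described, using only the top reference row for brevity; the data layout, the direction convention and the names are illustrative assumptions.

      def predict_pixel(top_ref, x, y, dxy):
          """Predict one pixel from the top reference row along the prediction
          direction vector dxy = (dx, dy) with dy < 0 (top references only).

          The intersection with the reference row lies at horizontal position
          x + k*dx, where k = (y + 1) / (-dy) is the positive scalar.  If that
          position is an integer, the reference pixel itself is the predicted
          value; otherwise the two neighbouring reference pixels are averaged.
          """
          dx, dy = dxy
          pos = x + (y + 1) * dx / (-dy)
          left = int(pos // 1)
          if pos == left:                      # reference at an integer position
              return top_ref[left]
          return (top_ref[left] + top_ref[left + 1] + 1) // 2   # two-pixel average

      # Pixel (x, y) = (2, 1) of a 4x4 partition, with 2*4 + 1 = 9 top references.
      top_ref = [100, 102, 104, 106, 108, 110, 112, 114, 116]
      print(predict_pixel(top_ref, 2, 1, (1, -2)))  # intersection at 3.0 -> 106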
  • The changeover switch 22 of the intra prediction unit 4 determines whether the parameter indicating the intra coding mode of the color difference signal, among the intra prediction parameters output from the encoding control unit 1, indicates the directional prediction mode or the smoothed luminance correlation utilization color difference signal prediction mode (step ST22). If the parameter indicates the directional prediction mode, the changeover switch 22 gives the reference pixels used for prediction to the color difference signal directional intra prediction unit 23; if it indicates the smoothed luminance correlation utilization color difference signal prediction mode, it gives the reference pixels used for prediction to the luminance correlation utilization color difference signal prediction unit 24.
  • FIG. 16 is an explanatory diagram showing a correspondence example between the intra prediction parameters of the color difference signal and the color difference intra prediction modes.
  • In the example of FIG. 16, when the intra prediction parameter of the color difference signal is “34”, the reference pixels used for prediction are given to the luminance correlation utilization color difference signal prediction unit 24, and when the intra prediction parameter of the color difference signal is other than “34”, the reference pixels used for prediction are given to the color difference signal directional intra prediction unit 23.
  • When the color difference signal directional intra prediction unit 23 receives the reference pixels used for prediction from the changeover switch 22, it refers to the decoded color difference reference pixels adjacent to the partition P_i^n and generates a predicted image for the color difference component by performing intra-frame prediction of the color difference component based on the intra prediction parameter output from the encoding control unit 1 (step ST23).
  • The intra prediction target of the color difference signal directional intra prediction unit 23 is the color difference signal, which differs from the luminance signal handled by the luminance signal intra prediction unit 21, but the processing content of the intra prediction itself is the same as that of the luminance signal intra prediction unit 21; an intra predicted image of the color difference signal is therefore generated by performing directional prediction, horizontal prediction, vertical prediction, DC prediction, and so on.
  • When the luminance correlation utilization color difference signal prediction unit 24 receives the reference pixels used for prediction from the changeover switch 22, it uses the decoded luminance reference pixels and color difference reference pixels adjacent to the partition P_i^n, which is a coding block, and the decoded luminance reference pixels within the partition P_i^n (the luminance reference pixels in the locally decoded image obtained from the intra predicted image (P_i^n) of the partition P_i^n previously generated by the luminance signal intra prediction unit 21), smooths the luminance component of a plurality of pixels adjacent in the horizontal and vertical directions among the pixels constituting the coding block, calculates a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component, and generates a predicted image for the color difference component using the correlation parameter and the smoothed luminance component (step ST24).
  • the processing content of the luminance correlation utilization color difference signal prediction unit 24 will be specifically described.
  • The correlation calculation unit 32 of the luminance correlation utilization color difference signal prediction unit 24 calculates the correlation parameters α and β, as shown in the following equations (4) and (5), using the reduced luminance reference pixel Rec'_L and the color difference reference pixel Rec_C, which is the decoded pixel value of the color difference signal adjacent to the upper end and the left end of the prediction block of the color difference signal (step ST32).
  • In equations (4) and (5), I is twice the number of pixels on one side of the prediction block of the color difference signal to be processed.
  • The color difference predicted image generation unit 33 then generates the color difference predicted image Pred_C using the correlation parameters α and β and the reduced luminance reference pixel Rec'_L, as shown in the following equation (6) (step ST33).
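  • Equations (4) to (6) are not reproduced in this extract; they take the same least-squares form as equations (1) to (3), now applied to the smoothed reduced luminance reference pixel Rec'_L. A minimal Python sketch (names are illustrative assumptions):

      import numpy as np

      def lm_parameters(rec_l_ref: np.ndarray, rec_c_ref: np.ndarray):
          """Equations (4)/(5): least-squares alpha, beta from the I pairs of
          reduced/smoothed luma references and chroma references."""
          l = rec_l_ref.astype(np.float64).ravel()
          c = rec_c_ref.astype(np.float64).ravel()
          i = l.size                   # I = twice the chroma block side
          denom = i * np.sum(l * l) - np.sum(l) ** 2
          alpha = (i * np.sum(l * c) - np.sum(l) * np.sum(c)) / denom if denom else 0.0
          beta = (np.sum(c) - alpha * np.sum(l)) / i
          return alpha, beta

      def lm_predict(alpha: float, beta: float, rec_l_block: np.ndarray) -> np.ndarray:
          """Equation (6): Pred_C(x, y) = alpha * Rec'_L(x, y) + beta."""
          return alpha * rec_l_block.astype(np.float64) + beta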
  • Intra prediction is a means of predicting an unknown region in the screen from a known region; the textures of the luminance signal and the color difference signal are correlated with each other, and in the spatial direction the pixel values of neighboring pixels change in a similar manner.
  • Therefore, the prediction efficiency can be improved by calculating the correlation parameter between the luminance signal and the color difference signal using the decoded luminance signal and color difference signal adjacent to the prediction block, and predicting the color difference signal from the luminance signal and the correlation parameter. In this case, since the resolutions of the luminance signal and the color difference signal differ in a YUV 4:2:0 signal, the luminance signal must be subsampled; however, the occurrence of aliasing can be suppressed by applying a low-pass filter, and the prediction efficiency can be improved.
  • variable length coding unit 13 performs variable length coding on the intra prediction parameter output from the intra prediction unit 4 and multiplexes the codeword of the intra prediction parameter into the bitstream.
  • When the intra prediction parameter is encoded, a representative prediction direction vector (prediction direction representative vector) may be selected from the prediction direction vectors of the plural directional predictions, the intra prediction parameter may be expressed as an index of the prediction direction representative vector (prediction direction representative index) and an index representing the difference from the prediction direction representative vector (prediction direction difference index), and entropy coding such as arithmetic coding according to a probability model may be performed for each index, so that the parameter is encoded with a reduced code amount (a sketch of this representation follows).
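  • A minimal sketch of the representative-index/difference-index representation; the representative set and the names are illustrative assumptions, and the entropy-coding stage itself is omitted.

      def encode_direction(mode: int, representatives=(0, 8, 16, 24, 32)):
          """Express an intra prediction direction as (representative index,
          signed difference from that representative); each index would then be
          entropy coded (e.g. arithmetic coding) with its own probability model."""
          rep_idx = min(range(len(representatives)),
                        key=lambda i: abs(mode - representatives[i]))
          return rep_idx, mode - representatives[rep_idx]

      def decode_direction(rep_idx: int, diff: int, representatives=(0, 8, 16, 24, 32)):
          return representatives[rep_idx] + diff

      rep_idx, diff = encode_direction(19)          # -> (2, 3)
      assert decode_direction(rep_idx, diff) == 19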
  • When the variable length decoding unit 41 receives the bitstream generated by the moving picture encoding device of FIG. 1, it performs variable length decoding processing on the bitstream (step ST41 in FIG. 8) and decodes the frame size in units of sequences, each composed of one or more frames of pictures, or in units of pictures.
  • The variable length decoding unit 41 determines the maximum coding block size (the maximum size of the coding block that serves as a processing unit when intra prediction processing or motion-compensated prediction processing is performed) and the upper limit on the number of division layers, which were determined by the moving image encoding apparatus of FIG. 1, in the same procedure as the video encoding device (step ST42).
  • For example, the maximum size of the coding block is determined by the same procedure as the moving image encoding apparatus of FIG. 1, based on the previously decoded frame size.
  • When the maximum size of the coding block and the number of layers of the coding block have been multiplexed into the bitstream by the moving image encoding device, the maximum size of the coding block and the number of layers of the coding block are decoded from the bitstream.
  • When the variable length decoding unit 41 determines the maximum size of the coding block and the number of layers of the coding block, it grasps the hierarchical division state of each coding block, starting from the maximum coding block, identifies, among the coded data multiplexed in the bitstream, the coded data related to each coding block, and decodes the coding mode assigned to each coding block from that coded data. The variable length decoding unit 41 then refers to the partition information of the partitions P_i^n belonging to the coding block B^n included in the coding mode, and identifies, among the coded data multiplexed in the bitstream, the coded data related to each partition P_i^n (step ST43).
  • The variable length decoding unit 41 variable-length-decodes the compressed data, the prediction differential encoding parameters, and the intra prediction parameters / inter prediction parameters from the encoded data related to each partition P i n, outputs the compressed data and the prediction differential encoding parameters to the inverse quantization / inverse transform unit 45, and outputs the coding mode and the intra prediction parameters / inter prediction parameters to the changeover switch 42 (step ST44).
  • When the prediction direction representative index and the prediction direction difference index are multiplexed in the bitstream, the prediction direction representative index and the prediction direction difference index are entropy-decoded by arithmetic decoding or the like according to their respective probability models, and the intra prediction parameter is identified from the prediction direction representative index and the prediction direction difference index. Thereby, even when the code amount of the intra prediction parameter has been reduced on the moving image encoding device side, the intra prediction parameter can be correctly decoded.
  • When the coding mode of the partition P i n belonging to the coding block B n output from the variable length decoding unit 41 is the intra coding mode, the changeover switch 42 outputs the intra prediction parameter output from the variable length decoding unit 41 to the intra prediction unit 43; when the coding mode is the inter coding mode, it outputs the inter prediction parameter output from the variable length decoding unit 41 to the motion compensation unit 44.
  • When the intra prediction unit 43 receives the intra prediction parameter from the variable length decoding unit 41 (step ST45), it performs the intra prediction process for each partition P i n based on the intra prediction parameter, in the same way as the intra prediction unit 4 in FIG. 1, thereby generating an intra prediction image (P i n) (step ST46).
  • The processing content of the intra prediction unit 43 will now be described concretely.
  • Like the changeover switch 22 of the moving image encoding device, the changeover switch 52 of the intra prediction unit 43 determines whether the parameter indicating the intra coding mode of the color difference signal, among the intra prediction parameters output from the variable length decoding unit 41, indicates the directional prediction mode or the smoothed luminance correlation use color difference signal prediction mode.
  • If the parameter indicating the intra coding mode indicates the directional prediction mode, the changeover switch 52 provides the reference pixels used for prediction to the color difference signal directional intra prediction unit 53; if the parameter indicates the smoothed luminance correlation use color difference signal prediction mode, it provides the reference pixels used for prediction to the luminance correlation use color difference signal prediction unit 54.
  • When the color difference signal directional intra prediction unit 53 receives the reference pixels used for prediction from the changeover switch 52, it refers to the decoded color difference reference pixels adjacent to the partition P i n and performs intra-frame prediction of the color difference component based on the intra prediction parameter output from the variable length decoding unit 41, in the same way as the color difference signal directional intra prediction unit 23 of the moving image encoding device, thereby generating a prediction image for the color difference component.
  • The intra prediction target of the color difference signal directional intra prediction unit 53 is the color difference signal, which differs from the luminance signal intra prediction unit 51 whose target is the luminance signal, but the processing content of the intra prediction itself is the same as that of the luminance signal intra prediction unit 51. Therefore, an intra prediction image of the color difference signal is generated by performing directional prediction, horizontal prediction, vertical prediction, DC prediction, and so on.
  • When the luminance correlation use color difference signal prediction unit 54 receives the reference pixels used for prediction from the changeover switch 52, it uses, in the same way as the luminance correlation use color difference signal prediction unit 24 of the moving image encoding device, the decoded luminance reference pixels and color difference reference pixels adjacent to the partition P i n and the decoded luminance reference pixels inside the partition P i n (the luminance reference pixels in the decoded image obtained from the intra prediction image (P i n) of the partition P i n previously generated by the luminance signal intra prediction unit 51), smoothes the luminance components of a plurality of pixels adjacent in the horizontal direction and the vertical direction among the pixels constituting the coding block, and generates a prediction image for the color difference component.
  • That is, the smoothed luminance reference pixel reducing unit 61 of the luminance correlation use color difference signal prediction unit 54 generates the reduced luminance reference pixels Rec′L by performing a smoothing process or the like on a plurality of luminance reference pixels adjacent in the horizontal direction and the vertical direction among the decoded luminance reference pixels constituting the partition P i n stored in the intra prediction memory 47 (the luminance reference pixels in the decoded image obtained from the intra prediction image (P i n) of the partition P i n previously generated by the luminance signal intra prediction unit 51).
  • As shown in FIG. 19, the smoothed luminance reference pixel reducing unit 61 generates the reduced luminance reference pixels Rec′L using the decoded luminance signal, which consists of the decoded pixel values in the block corresponding to the prediction block of the color difference signal in the partition P i n (in the figure, the prediction block is the N × N block on the left and the corresponding luminance block is the 2N × 2N block on the right), and the decoded luminance signal adjacent to the upper end and the left end of that decoded luminance signal.
  • Here, the reduced luminance reference pixels Rec′L are obtained by applying a 1:2:1 low-pass filter in the horizontal direction and a 1:1 low-pass filter in the vertical direction to the luminance reference pixels RecL so that they have the same phase as the pixels of the color difference signal in the YUV 4:2:0 signal, and then subsampling only the even columns and rows (see the sketch after this item).
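The precise filter taps, rounding, and edge handling are given by the equations and FIG. 17 referenced in the specification and are not reproduced here; the sketch below only illustrates the described combination of a 1:2:1 horizontal low-pass filter, a 1:1 vertical low-pass filter, and 2:1 subsampling, assuming 8-bit samples and edge replication at the block boundary.

```python
def smooth_and_reduce(rec_l):
    """Generate reduced luminance reference pixels Rec'L from a 2N x 2N array
    of reconstructed luminance samples (list of lists).

    Horizontal 1:2:1 low-pass, vertical 1:1 (two-tap average), then keep only
    every other column/row so the result is phase-aligned with the 4:2:0 chroma grid."""
    h, w = len(rec_l), len(rec_l[0])

    # 1:2:1 horizontal low-pass with edge replication
    horiz = [[(rec_l[y][max(x - 1, 0)] + 2 * rec_l[y][x]
               + rec_l[y][min(x + 1, w - 1)] + 2) >> 2
              for x in range(w)] for y in range(h)]

    # 1:1 vertical low-pass (average of each vertical pair), combined with
    # keeping only the even columns and rows
    return [[(horiz[2 * cy][2 * cx] + horiz[2 * cy + 1][2 * cx] + 1) >> 1
             for cx in range(w // 2)] for cy in range(h // 2)]
```

Because the horizontal filter is applied before the horizontal subsampling, high-frequency horizontal detail that would otherwise fold back as aliasing is attenuated instead.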
  • When the smoothed luminance reference pixel reducing unit 61 generates the reduced luminance reference pixels Rec′L, the correlation calculation unit 62 calculates the correlation parameters α and β used for prediction, as shown in the above equations (4) and (5), using the reduced luminance reference pixels Rec′L and the color difference reference pixels RecC, which are the decoded pixel values of the color difference signal adjacent to the upper end and the left end of the prediction block of the color difference signal.
  • When the correlation parameters α and β have been calculated, the color difference prediction image generation unit 63, like the color difference prediction image generation unit 33 of the moving image encoding device, generates the color difference prediction image PredC using the correlation parameters α and β and the reduced luminance reference pixels Rec′L, as shown in the above equation (6). A hedged sketch of this correlation calculation and prediction is given after this item.
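Equations (4) to (6) are given as images in the publication and are not reproduced here; the sketch below assumes the standard least-squares linear-model form that the surrounding text describes, with the reference samples passed in as flat lists. The floating-point arithmetic is illustrative; a normative implementation would use the integer formulation of the specification.

```python
def lm_parameters(rec_c, rec_l_reduced):
    """Correlation parameters (alpha, beta) between the reduced luminance
    reference samples Rec'L and the chroma reference samples RecC
    (assumed least-squares form of equations (4) and (5))."""
    i = len(rec_c)
    assert i == len(rec_l_reduced) and i > 0
    sum_c, sum_l = sum(rec_c), sum(rec_l_reduced)
    sum_cl = sum(c * l for c, l in zip(rec_c, rec_l_reduced))
    sum_ll = sum(l * l for l in rec_l_reduced)
    denom = i * sum_ll - sum_l * sum_l
    alpha = (i * sum_cl - sum_c * sum_l) / denom if denom else 0.0
    beta = (sum_c - alpha * sum_l) / i
    return alpha, beta

def predict_chroma(rec_l_reduced_block, alpha, beta):
    """PredC[x, y] = alpha * Rec'L[x, y] + beta (form of equation (6))."""
    return [[alpha * v + beta for v in row] for row in rec_l_reduced_block]
```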
  • When the motion compensation unit 44 receives the inter prediction parameter from the changeover switch 42, it performs the inter prediction process for each partition P i n based on the inter prediction parameter, in the same way as the motion compensation prediction unit 5 of the moving image encoding device, thereby generating an inter prediction image (P i n) (step ST47). That is, the motion compensation unit 44 generates the inter prediction image (P i n) by performing the motion compensation prediction process on the partition P i n based on the inter prediction parameter, using one or more frames of reference images stored in the motion compensated prediction frame memory 49.
  • When the inverse quantization / inverse transform unit 45 receives the prediction differential encoding parameters from the variable length decoding unit 41, it inverse-quantizes the compressed data related to the coding block output from the variable length decoding unit 41 using the quantization parameter included in the prediction differential encoding parameters, and performs an inverse transform process on the inverse-quantized compressed data (for example, an inverse DCT (inverse discrete cosine transform), an inverse DST (inverse discrete sine transform), or an inverse KL transform) in units of the transform block size included in the prediction differential encoding parameters, thereby outputting the result as a decoded prediction difference signal. A hedged sketch of this step is given after this item.
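The normative inverse scaling and inverse transform depend on the quantization parameter and the transform block size; the following is only a minimal sketch, assuming a single uniform quantization step and an orthonormal inverse DCT, to show the order of the two operations.

```python
import math

def dequantize(levels, qstep):
    """Uniform inverse quantization with a single step size (simplification)."""
    return [[lv * qstep for lv in row] for row in levels]

def idct_1d(coeffs):
    """Inverse of an orthonormal 1-D DCT-II (illustrative, O(N^2))."""
    n = len(coeffs)
    return [sum((math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)) * cu
                * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
                for u, cu in enumerate(coeffs))
            for x in range(n)]

def inverse_transform(block):
    """Separable 2-D inverse DCT applied to one transform block."""
    rows = [idct_1d(r) for r in block]
    cols = [idct_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def decode_residual(levels, qstep):
    """Dequantize, then inverse-transform one transform block, yielding the
    decoded prediction difference signal for that block."""
    return inverse_transform(dequantize(levels, qstep))
```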
  • The addition unit 46 adds the decoded prediction difference signal output from the inverse quantization / inverse transform unit 45 and the prediction signal indicating the prediction image (P i n) generated by the intra prediction unit 43 or the motion compensation unit 44, thereby generating a decoded image signal indicating a decoded partition image, or a decoded image as a collection of such partition images, and outputs the decoded image signal to the loop filter unit 48 (step ST49).
  • The intra prediction memory 47 stores the decoded image for use in subsequent intra prediction.
  • The loop filter unit 48 compensates for the coding distortion included in the decoded image signal, stores the decoded image indicated by the decoded image signal after coding distortion compensation in the motion compensated prediction frame memory 49 as a reference image, and outputs the decoded image as a reproduced image (step ST50).
  • The filtering process by the loop filter unit 48 may be performed for each maximum coding block or each individual coding block of the input decoded image signal, or may be performed for one screen at a time after decoded image signals corresponding to one screen's worth of macroblocks have been input.
  • The processes of steps ST43 to ST49 are repeated until the processing of the partitions P i n belonging to all the coding blocks B n is completed (step ST51).
  • As described above, according to Embodiment 1, the intra prediction unit 4 of the moving image encoding device smoothes the luminance components of a plurality of pixels adjacent in the horizontal direction and the vertical direction among the pixels constituting the coding block divided by the block division unit 2, calculates a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component, and generates a prediction image for the color difference component using the correlation parameter and the smoothed luminance component; therefore, the occurrence of aliasing in the smoothed luminance component can be suppressed and the coding efficiency can be increased.
  • That is, when the coding mode selected by the encoding control unit 1 is the intra prediction mode and the parameter indicating the intra prediction mode of the color difference signal indicates the smoothed luminance correlation use color difference signal prediction mode, reduced luminance reference pixels are generated by smoothing the luminance reference pixels in the horizontal and vertical directions and subsampling them, and an intra prediction image of the color difference signal is generated using the correlation between the luminance signal and the color difference signal. Therefore, a prediction image is obtained in which the amplification of the prediction error due to aliasing, which occurred conventionally, is suppressed and the prediction efficiency is improved; as a result, the coding efficiency can be increased.
  • Similarly, according to Embodiment 1, the intra prediction unit 43 of the video decoding device smoothes the luminance components of a plurality of pixels adjacent in the horizontal direction and the vertical direction among the pixels constituting the coding block output from the variable length decoding unit 41, calculates a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component, and generates a prediction image for the color difference component using the correlation parameter and the smoothed luminance component; therefore, a moving image can be accurately decoded from encoded data whose coding efficiency has been improved. That is, when the coding mode variable-length-decoded by the variable length decoding unit 41 is the intra prediction mode and the parameter indicating the intra prediction mode of the color difference signal indicates the smoothed luminance correlation use color difference signal prediction mode, the luminance reference pixels are smoothed in the horizontal and vertical directions and subsampled to generate reduced luminance reference pixels, and an intra prediction image of the color difference signal is generated using the correlation between the luminance signal and the color difference signal; therefore, a moving image can be accurately decoded from encoded data whose coding efficiency has been improved.
  • In this Embodiment 1, an example in which a 1:2:1 smoothing filter is applied in the horizontal direction has been described, but the filter coefficients are not limited to this; the same effect can be obtained with a filter such as 3:2:3, 7:2:7, or 1:0:1, with a smoothing filter having a larger number of taps, or with a 1:1 filter. It is also not necessary to use a common smoothing filter for the luminance reference pixels of the block to be predicted and the adjacent luminance reference pixels; for example, the same effect can be obtained when a 1:0:1 filter is applied to the luminance reference pixels of the block to be predicted and a 1:2:1 filter is applied to the adjacent luminance reference pixels. If a 1:0:1 filter or a 1:1 filter is applied, the amount of calculation can be reduced, whereas if a smoothing filter with a larger number of taps is applied, an improvement in coding efficiency can be obtained. Further, the smoothing in the horizontal direction and the vertical direction may be performed simultaneously; for example, the average value may be calculated after weighted addition of six target pixels according to the filter coefficients. A hedged sketch of some of these filter variants is given after this item.
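A sketch of the filter variants mentioned above, under the assumption that each n:m:n kernel is normalized by the sum of its taps and that the combined horizontal/vertical smoothing uses a 1:2:1 weighting over two adjacent rows (six samples in total); the actual weights and rounding are a design choice of the encoder and decoder.

```python
# 3-tap horizontal kernels named in the text; '1:1' (a 2-tap average) is omitted.
KERNELS = {"1:2:1": (1, 2, 1), "3:2:3": (3, 2, 3), "7:2:7": (7, 2, 7), "1:0:1": (1, 0, 1)}

def smooth_row(row, kernel="1:2:1"):
    """Horizontal smoothing of one row with a selectable 3-tap kernel."""
    a, b, c = KERNELS[kernel]
    s, w = a + b + c, len(row)
    return [(a * row[max(x - 1, 0)] + b * row[x] + c * row[min(x + 1, w - 1)] + s // 2) // s
            for x in range(w)]

def smooth_and_reduce_2d(rec_l):
    """Horizontal and vertical smoothing performed simultaneously: each reduced
    sample is a weighted average of six luminance samples (1:2:1 on two rows)."""
    h, w = len(rec_l), len(rec_l[0])
    out = []
    for cy in range(h // 2):
        row = []
        for cx in range(w // 2):
            x, y = 2 * cx, 2 * cy
            acc = sum(rec_l[yy][max(x - 1, 0)] + 2 * rec_l[yy][x]
                      + rec_l[yy][min(x + 1, w - 1)] for yy in (y, y + 1))
            row.append((acc + 4) >> 3)  # total weight 2 * (1 + 2 + 1) = 8
        out.append(row)
    return out
```

Using fewer non-zero taps (1:0:1, 1:1) trades a little smoothing quality for fewer multiplications and additions, while longer kernels trade computation for coding efficiency, which matches the discussion above.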
  • Within the scope of the present invention, any component of the embodiment may be modified, or any component of the embodiment may be omitted.
  • The moving picture encoding device, moving picture decoding device, moving picture encoding method, and moving picture decoding method according to the present invention are configured so that the prediction image generation means comprises luminance component intra prediction means for performing intra-frame prediction of the luminance component in the coding block divided by the block division means to generate a prediction image for the luminance component, and color difference component intra prediction means for smoothing the luminance components of a plurality of pixels adjacent in the horizontal direction and the vertical direction among the pixels constituting the coding block, calculating a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component, and generating a prediction image for the color difference component using the correlation parameter and the smoothed luminance component; since the occurrence of aliasing in the smoothed luminance component can be suppressed and the coding efficiency can be improved, they are suitable for use in moving picture encoding devices and moving picture decoding devices.
  • 1 encoding control unit (encoding control means), 2 block division unit (block division means), 3 changeover switch (prediction image generation means), 4 intra prediction unit (prediction image generation means), 5 motion compensation prediction unit (prediction image generation means), 6 subtraction unit (difference image generation means), 7 transform/quantization unit (image compression means), 8 inverse quantization/inverse transform unit, 9 addition unit, 10 intra prediction memory, 11 loop filter unit, 12 motion compensated prediction frame memory, 13 variable length coding unit (variable length coding means), 21 luminance signal intra prediction unit (luminance component intra prediction means), 22 changeover switch (color difference component intra prediction means), 23 color difference signal directional intra prediction unit (color difference component intra prediction means), 24 luminance correlation use color difference signal prediction unit (color difference component intra prediction means), 31 smoothed luminance reference pixel reduction unit, 32 correlation calculation unit, 33 color difference prediction image generation unit, 41 variable length decoding unit (variable length decoding means), 42 changeover switch (prediction image generation means), 43 intra prediction unit (prediction image generation means), 44 motion compensation unit (prediction image generation means)

Abstract

In the present invention, an intra-prediction unit (4) of a video encoding device smoothes luminance components of a plurality of pixels that constitute encoded blocks divided by a block dividing unit (2), such pixels being adjacent in horizontal and vertical directions; calculates a correlation parameter that indicates the correlation between the smoothed luminance components and color-difference components; and generates prediction images for color-difference components using such correlation parameter and the smoothed luminance components.

Description

Moving picture encoding apparatus, moving picture decoding apparatus, moving picture encoding method, and moving picture decoding method
The present invention relates to a moving picture encoding device and a moving picture encoding method for encoding a moving picture with high efficiency, and to a moving picture decoding device and a moving picture decoding method for decoding a moving picture that has been encoded with high efficiency.
For example, the moving image encoding apparatus described in Non-Patent Document 1 below employs, as prediction means for performing intra-frame prediction of the color difference signal, an Intra_LM mode that predicts the color difference signal by applying a numerical operation to a reduced luminance signal, using the correlation between the luminance signal and the color difference signal.
FIG. 18 is a configuration diagram illustrating a luminance correlation use color difference signal prediction unit that performs the prediction process of the Intra_LM mode.
The luminance correlation use color difference signal prediction unit in FIG. 18 includes a luminance reference pixel reduction unit 931, a correlation calculation unit 932, and a color difference prediction image generation unit 933.
As shown in FIG. 19, the luminance reference pixel reduction unit 931 generates the reduced luminance reference pixels Rec′L for a YUV 4:2:0 signal by using the decoded luminance signal, which consists of the decoded pixel values in the block corresponding to the prediction block of the color difference signal, and the decoded luminance signal adjacent to the upper end and the left end of that decoded luminance signal.
Here, as shown in FIG. 20, the reduced luminance reference pixels Rec′L are obtained by applying a 1:1 low-pass filter in the vertical direction to the luminance reference pixels RecL so that they have the same phase as the pixels of the color difference signal in the YUV 4:2:0 signal, and then subsampling only the even columns in the vertical and horizontal directions.
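For comparison with the embodiment described elsewhere in this document, the following is a minimal sketch of this conventional reduction under simple assumptions (8-bit samples, list-of-lists input): a 1:1 vertical average with no horizontal filtering, followed by keeping every other column.

```python
def reduce_conventional(rec_l):
    """Conventional Intra_LM reduction: average each vertical pair of luminance
    samples (1:1 vertical low-pass) and keep only the even columns.
    No horizontal filter is applied, so horizontal aliasing can remain."""
    h, w = len(rec_l), len(rec_l[0])
    return [[(rec_l[2 * cy][2 * cx] + rec_l[2 * cy + 1][2 * cx] + 1) >> 1
             for cx in range(w // 2)] for cy in range(h // 2)]
```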
When the luminance reference pixel reduction unit 931 generates the reduced luminance reference pixels Rec′L, the correlation calculation unit 932 calculates the correlation parameters α and β used for prediction, as shown in the following equations (1) and (2), using the reduced luminance reference pixels Rec′L and the color difference reference pixels RecC, which are the decoded pixel values of the color difference signal adjacent to the upper end and the left end of the prediction block of the color difference signal.
[Equation (1)]
[Equation (2)]
In equations (1) and (2), I is a value equal to twice the number of pixels on one side of the prediction block of the color difference signal to be processed.
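The equation images are not reproduced in this text. A least-squares formulation consistent with the description (stated here as an assumption, not as the normative equations) is:

```latex
\alpha = \frac{I\sum_{i=1}^{I}\mathrm{Rec}_C(i)\,\mathrm{Rec}'_L(i)
              - \sum_{i=1}^{I}\mathrm{Rec}_C(i)\sum_{i=1}^{I}\mathrm{Rec}'_L(i)}
             {I\sum_{i=1}^{I}\mathrm{Rec}'_L(i)^2
              - \left(\sum_{i=1}^{I}\mathrm{Rec}'_L(i)\right)^{2}}
\qquad (1)

\beta = \frac{\sum_{i=1}^{I}\mathrm{Rec}_C(i) - \alpha\sum_{i=1}^{I}\mathrm{Rec}'_L(i)}{I}
\qquad (2)
```

where the sums run over the I reference sample pairs adjacent to the prediction block.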
When the correlation calculation unit 932 has calculated the correlation parameters α and β, the color difference prediction image generation unit 933 generates the color difference prediction image PredC using the correlation parameters α and β and the reduced luminance reference pixels Rec′L, as shown in the following equation (3).
[Equation (3)]
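The equation image is not reproduced; the linear prediction it expresses, per the description, has the form (again an assumption as to the exact notation):

```latex
\mathrm{Pred}_C(x,y) = \alpha \cdot \mathrm{Rec}'_L(x,y) + \beta \qquad (3)
```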
Since the conventional moving image encoding device is configured as described above, changes in the color difference signal that cannot be predicted from adjacent pixels can be appropriately predicted using the correlation with the luminance signal, enabling highly accurate prediction. However, when the reduced luminance reference pixels Rec′L are generated, a strong low-pass filter is effectively applied in the vertical direction of the screen by averaging every two pixels, whereas in the horizontal direction of the screen the signal is simply subsampled every other pixel, so aliasing may occur in the horizontal direction of the reduced luminance reference pixels Rec′L. Because the color difference prediction image PredC is obtained by multiplying the reduced luminance reference pixels Rec′L by a scalar and adding an offset, the aliasing in the reduced luminance reference pixels Rec′L is reflected in the color difference prediction image and the prediction error is amplified, so that the improvement in coding efficiency is limited.
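A small worked example (illustrative sample values only) of this aliasing problem: when an alternating luminance row is subsampled without a horizontal filter, the alternation disappears entirely, whereas a 1:2:1 pre-filter replaces it with values near the local mean.

```python
row = [10, 200, 10, 200, 10, 200, 10, 200]   # high-frequency horizontal detail

# Conventional reduction: keep every other sample, no horizontal filter.
subsampled = row[0::2]                        # [10, 10, 10, 10] -> aliasing

# 1:2:1 horizontal low-pass (edge replication) before subsampling.
w = len(row)
filtered = [(row[max(x - 1, 0)] + 2 * row[x] + row[min(x + 1, w - 1)] + 2) >> 2
            for x in range(w)]
filtered_sub = filtered[0::2]                 # values close to the local mean

print(subsampled, filtered_sub)
```

Because PredC is obtained by scaling Rec′L and adding an offset, such aliasing is carried directly into the color difference prediction image, which is the prediction-error amplification described above.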
The present invention has been made to solve the above-described problems, and an object of the present invention is to provide a moving picture encoding device and a moving picture encoding method capable of suppressing the occurrence of aliasing and increasing the coding efficiency.
It is another object of the present invention to obtain a moving picture decoding apparatus and a moving picture decoding method that can accurately decode a moving picture from encoded data whose encoding efficiency is improved.
The moving picture encoding device according to the present invention includes prediction image generation means for generating a prediction image by performing, on hierarchically divided coding blocks, a prediction process corresponding to the coding mode associated with each coding block, and variable length encoding means for variable-length-encoding the coding mode and generating a bitstream in which the encoded data of the coding mode is multiplexed, wherein the prediction image generation means comprises luminance component intra prediction means for performing intra-frame prediction of the luminance component in the hierarchically divided coding block to generate a prediction image for the luminance component, and color difference component intra prediction means for smoothing the luminance components of a plurality of pixels adjacent in the horizontal direction and the vertical direction among the pixels constituting the coding block, calculating a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component, and generating a prediction image for the color difference component using the correlation parameter and the smoothed luminance component.
According to the present invention, the prediction image generation means comprises luminance component intra prediction means for performing intra-frame prediction of the luminance component in the coding block divided by the block division means to generate a prediction image for the luminance component, and color difference component intra prediction means for smoothing the luminance components of a plurality of pixels adjacent in the horizontal direction and the vertical direction among the pixels constituting the coding block, calculating a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component, and generating a prediction image for the color difference component using the correlation parameter and the smoothed luminance component; this has the effect of suppressing the occurrence of aliasing in the smoothed luminance component and increasing the coding efficiency.
FIG. 1 is a block diagram showing a moving picture encoding device according to Embodiment 1 of the present invention.
FIG. 2 is a flowchart showing the processing contents of the moving picture encoding device according to Embodiment 1.
FIG. 3 is a block diagram showing the intra prediction unit 4 of the moving picture encoding device according to Embodiment 1.
FIG. 4 is a block diagram showing the luminance correlation use color difference signal prediction unit 24 of the moving picture encoding device according to Embodiment 1.
FIG. 5 is a flowchart showing the processing contents of the intra prediction unit 4 of the moving picture encoding device according to Embodiment 1.
FIG. 6 is a flowchart showing the processing contents of the luminance correlation use color difference signal prediction unit 24 of the moving picture encoding device according to Embodiment 1.
FIG. 7 is a block diagram showing a moving picture decoding device according to Embodiment 1.
FIG. 8 is a flowchart showing the processing contents of the moving picture decoding device according to Embodiment 1.
FIG. 9 is a block diagram showing the intra prediction unit 43 of the moving picture decoding device according to Embodiment 1.
FIG. 10 is a block diagram showing the luminance correlation use color difference signal prediction unit 54 of the moving picture decoding device according to Embodiment 1.
FIG. 11 is an explanatory drawing showing how a maximum-size coding block is hierarchically divided into a plurality of coding blocks.
FIG. 12 is an explanatory drawing showing partitions P i n belonging to a coding block B n.
FIG. 13 is an explanatory drawing showing, as a quadtree graph, the distribution of partitions after division and the situation in which coding modes m(B n) are assigned to the partitions after hierarchical division.
FIG. 14 is an explanatory drawing showing an example of intra prediction parameters (intra prediction modes) selectable for each partition P i n belonging to a coding block B n.
FIG. 15 is an explanatory drawing showing an example of pixels used when generating predicted values of the pixels in a partition P i n in the case of l i n = m i n = 4.
FIG. 16 is an explanatory drawing showing an example of correspondence between intra prediction parameters of the color difference signal and color difference intra prediction modes.
FIG. 17 is an explanatory drawing showing a method of generating reduced luminance reference pixels Rec′L.
FIG. 18 is a block diagram showing a luminance correlation use color difference signal prediction unit that performs the prediction process of the Intra_LM mode.
FIG. 19 is an explanatory drawing showing a prediction block of the color difference signal, a decoded luminance signal, and the like.
FIG. 20 is an explanatory drawing showing a method of generating reduced luminance reference pixels Rec′L.
Hereinafter, in order to explain the present invention in more detail, modes for carrying out the present invention will be described with reference to the accompanying drawings.
Embodiment 1.
FIG. 1 is a block diagram showing a moving picture encoding device according to Embodiment 1 of the present invention.
In FIG. 1, the encoding control unit 1 determines the maximum size of the coding blocks that serve as processing units when intra prediction processing (intra-frame prediction processing) or motion compensation prediction processing (inter-frame prediction processing) is performed, and also determines the upper limit on the number of layers into which a maximum-size coding block is hierarchically divided.
In addition, the encoding control unit 1 selects, from one or more available coding modes (one or more intra coding modes and one or more inter coding modes), a coding mode suitable for each hierarchically divided coding block.
The encoding control unit 1 also determines, for each coding block, the quantization parameter and the transform block size used when the difference image is compressed, and the intra prediction parameter or inter prediction parameter used when the prediction process is performed. The quantization parameter and the transform block size are included in the prediction differential encoding parameters and are output to the transform/quantization unit 7, the inverse quantization/inverse transform unit 8, the variable length encoding unit 13, and so on.
The encoding control unit 1 constitutes encoding control means.
When a video signal representing an input image (current picture) is input, the block division unit 2 divides the input image into coding blocks of the maximum size determined by the encoding control unit 1, and hierarchically divides those coding blocks until the upper limit number of layers determined by the encoding control unit 1 is reached. The block division unit 2 constitutes block division means.
If the coding mode selected by the encoding control unit 1 is an intra coding mode, the changeover switch 3 outputs the coding block divided by the block division unit 2 to the intra prediction unit 4; if the coding mode selected by the encoding control unit 1 is an inter coding mode, it outputs the coding block divided by the block division unit 2 to the motion compensation prediction unit 5.
When the intra prediction unit 4 receives the coding block divided by the block division unit 2 from the changeover switch 3, it generates a prediction image for the coding block by performing an intra-frame prediction process based on the intra prediction parameter output from the encoding control unit 1, using the decoded pixels adjacent to the coding block that are stored in the intra prediction memory 10.
That is, the intra prediction unit 4 performs intra-frame prediction of the luminance component of the coding block divided by the block division unit 2 to generate a prediction image for the luminance component.
On the other hand, for the color difference component of the coding block divided by the block division unit 2, if the coding mode selected by the encoding control unit 1 is the directional prediction mode of the intra coding mode, intra-frame prediction of the color difference component in the coding block is performed to generate a prediction image for the color difference component.
If the coding mode selected by the encoding control unit 1 is the smoothed luminance correlation use color difference signal prediction mode of the intra coding mode, the luminance components of a plurality of pixels adjacent in the horizontal direction and the vertical direction among the pixels constituting the coding block are smoothed, a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component is calculated, and a prediction image for the color difference component is generated using the correlation parameter and the smoothed luminance component.
When the inter coding mode is selected by the encoding control unit 1 as the coding mode corresponding to the coding block divided by the block division unit 2, the motion compensation prediction unit 5 generates a prediction image by performing a motion compensation prediction process on the coding block based on the inter prediction parameter output from the encoding control unit 1, using one or more frames of reference images stored in the motion compensated prediction frame memory 12.
The changeover switch 3, the intra prediction unit 4, and the motion compensation prediction unit 5 constitute prediction image generation means.
The subtraction unit 6 generates a difference image (= coding block − prediction image) by subtracting the prediction image generated by the intra prediction unit 4 or the motion compensation prediction unit 5 from the coding block divided by the block division unit 2. The subtraction unit 6 constitutes difference image generation means.
The transform/quantization unit 7 performs a transform process on the difference image generated by the subtraction unit 6 (for example, an orthogonal transform process such as a DCT (discrete cosine transform), a DST (discrete sine transform), or a KL transform whose basis has been designed in advance for a specific training sequence) in units of the transform block size included in the prediction differential encoding parameters output from the encoding control unit 1, and quantizes the transform coefficients of the difference image using the quantization parameter included in the prediction differential encoding parameters, thereby outputting the quantized transform coefficients as compressed data of the difference image. The transform/quantization unit 7 constitutes image compression means. A hedged sketch of this transform and quantization step is given below.
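A minimal sketch of the transform-and-quantize step, assuming an orthonormal 2-D DCT and a single uniform quantization step size; the normative transform (DCT, DST, or KL transform selected per block) and the mapping from the quantization parameter to the scaling are not reproduced here.

```python
import math

def dct_1d(samples):
    """Orthonormal 1-D DCT-II (illustrative, O(N^2))."""
    n = len(samples)
    return [(math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n))
            * sum(s * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
                  for x, s in enumerate(samples))
            for u in range(n)]

def transform_block(block):
    """Separable 2-D DCT of one transform block (list of lists)."""
    rows = [dct_1d(r) for r in block]
    cols = [dct_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def quantize(coeffs, qstep):
    """Uniform scalar quantization; the codec-specific step derivation is omitted."""
    return [[int(round(c / qstep)) for c in row] for row in coeffs]

def compress_difference_block(diff_block, qstep):
    """Transform, then quantize the difference image of one transform block,
    producing the compressed data that is passed to the variable length encoder."""
    return quantize(transform_block(diff_block), qstep)
```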
The inverse quantization/inverse transform unit 8 inverse-quantizes the compressed data output from the transform/quantization unit 7 using the quantization parameter included in the prediction differential encoding parameters output from the encoding control unit 1, and performs an inverse transform process on the inverse-quantized compressed data (for example, an inverse DCT (inverse discrete cosine transform), an inverse DST (inverse discrete sine transform), or an inverse KL transform) in units of the transform block size included in the prediction differential encoding parameters, thereby outputting the compressed data after the inverse transform process as a local decoded prediction difference signal.
The addition unit 9 adds the local decoded prediction difference signal output from the inverse quantization/inverse transform unit 8 and the prediction signal indicating the prediction image generated by the intra prediction unit 4 or the motion compensation prediction unit 5, thereby generating a local decoded image signal indicating a local decoded image.
The intra prediction memory 10 is a recording medium such as a RAM that stores a local decoded image indicated by the local decoded image signal generated by the adding unit 9 as an image used in the next intra prediction process by the intra prediction unit 4.
The loop filter unit 11 compensates for the coding distortion included in the local decoded image signal generated by the addition unit 9, and outputs the local decoded image indicated by the local decoded image signal after coding distortion compensation to the motion compensated prediction frame memory 12 as a reference image.
The motion compensated prediction frame memory 12 is a recording medium such as a RAM that stores a locally decoded image after the filtering process by the loop filter unit 11 as a reference image used in the next motion compensated prediction process by the motion compensated prediction unit 5.
The variable length encoding unit 13 variable-length-encodes the compressed data output from the transform/quantization unit 7, the coding mode and prediction differential encoding parameters output from the encoding control unit 1, and the intra prediction parameter output from the intra prediction unit 4 or the inter prediction parameter output from the motion compensation prediction unit 5, thereby generating a bitstream in which the encoded data of the compressed data, the coding mode, the prediction differential encoding parameters, and the intra prediction parameters / inter prediction parameters are multiplexed. The variable length encoding unit 13 constitutes variable length encoding means.
In FIG. 1, it is assumed that the encoding control unit 1, the block division unit 2, the changeover switch 3, the intra prediction unit 4, the motion compensation prediction unit 5, the subtraction unit 6, the transform/quantization unit 7, the inverse quantization/inverse transform unit 8, the addition unit 9, the loop filter unit 11, and the variable length encoding unit 13, which are the components of the moving picture encoding device, each consist of dedicated hardware (for example, a semiconductor integrated circuit on which a CPU is mounted, or a one-chip microcomputer). However, when the moving picture encoding device is implemented by a computer or the like, a program describing the processing contents of these components may be stored, in whole or in part, in the memory of the computer, and the CPU of the computer may execute the program stored in the memory.
FIG. 2 is a flowchart showing the processing contents of the moving picture coding apparatus according to Embodiment 1 of the present invention.
FIG. 3 is a block diagram showing the intra prediction unit 4 of the moving picture coding apparatus according to Embodiment 1 of the present invention.
In FIG. 3, the luminance signal intra prediction unit 21 performs intra-frame prediction of the luminance component in the encoded block divided by the block dividing unit 2 and performs a process of generating a prediction image for the luminance component.
That is, the luminance signal intra prediction unit 21 generates a prediction image for the luminance component by performing intra-frame prediction of the luminance component based on the intra prediction parameter output from the encoding control unit 1, referring to the decoded luminance reference pixels adjacent to the coding block that are stored in the intra prediction memory 10. The luminance signal intra prediction unit 21 constitutes luminance component intra prediction means.
If the parameter indicating the intra coding mode of the color difference signal, among the intra prediction parameters output from the encoding control unit 1, indicates the directional prediction mode, the changeover switch 22 provides the reference pixels used for prediction to the color difference signal directional intra prediction unit 23; if the parameter indicating the intra coding mode of the color difference signal indicates the smoothed luminance correlation use color difference signal prediction mode, it outputs the reference pixels used for prediction to the luminance correlation use color difference signal prediction unit 24.
The color difference signal directional intra prediction unit 23 generates a prediction image for the color difference component by referring to the decoded color difference reference pixels adjacent to the coding block, received from the changeover switch 22, and performing intra-frame prediction of the color difference component based on the intra prediction parameter output from the encoding control unit 1.
The luminance correlation use color difference signal prediction unit 24 uses, among the decoded pixels received from the changeover switch 22, the decoded luminance reference pixels and color difference reference pixels adjacent to the coding block and the decoded luminance reference pixels inside the coding block, smoothes the luminance components of a plurality of pixels adjacent in the horizontal direction and the vertical direction among the pixels constituting the coding block, calculates a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component, and generates a prediction image for the color difference component using the correlation parameter and the smoothed luminance component.
The changeover switch 22, the color difference signal directional intra prediction unit 23, and the luminance correlation use color difference signal prediction unit 24 constitute color difference component intra prediction means.
FIG. 4 is a block diagram showing the luminance correlation utilizing color difference signal prediction unit 24 of the moving picture coding apparatus according to Embodiment 1 of the present invention.
In FIG. 4, the smoothed luminance reference pixel reduction unit 31 generates the reduced luminance reference pixels Rec′L by performing a smoothing process or the like on a plurality of luminance reference pixels adjacent in the horizontal direction and the vertical direction among the decoded luminance reference pixels constituting the coding block stored in the intra prediction memory 10.
The correlation calculation unit 32 calculates the correlation parameters α and β, which indicate the correlation between the luminance component and the color difference component, using the color difference reference pixels stored in the intra prediction memory 10 and the reduced luminance reference pixels Rec′L generated by the smoothed luminance reference pixel reduction unit 31.
The color difference prediction image generation unit 33 generates a prediction image for the color difference component using the correlation parameters α and β calculated by the correlation calculation unit 32 and the reduced luminance reference pixels Rec′L generated by the smoothed luminance reference pixel reduction unit 31.
FIG. 5 is a flowchart showing the processing contents of the intra prediction unit 4 of the moving picture coding apparatus according to Embodiment 1 of the present invention.
FIG. 6 is a flowchart showing the processing contents of the luminance correlation utilizing color difference signal prediction unit 24 of the moving picture coding apparatus according to Embodiment 1 of the present invention.
FIG. 7 is a block diagram showing a moving picture decoding apparatus according to Embodiment 1 of the present invention.
In FIG. 7, the variable length decoding unit 41 specifies the maximum size of the coding blocks that serve as processing units when the intra prediction process or the motion compensation prediction process is performed and the number of layers into which the maximum-size coding blocks are hierarchically divided, thereby identifying, among the encoded data multiplexed in the bitstream, the encoded data related to the maximum-size coding blocks and the hierarchically divided coding blocks; it variable-length-decodes, from each piece of encoded data, the compressed data, the coding mode, the prediction differential encoding parameters, and the intra prediction parameters / inter prediction parameters related to the coding block, outputs the compressed data and the prediction differential encoding parameters to the inverse quantization/inverse transform unit 45, and outputs the coding mode and the intra prediction parameters / inter prediction parameters to the changeover switch 42. The variable length decoding unit 41 constitutes variable length decoding means.
When the coding mode related to the coding block output from the variable length decoding unit 41 is an intra coding mode, the changeover switch 42 outputs the intra prediction parameter output from the variable length decoding unit 41 to the intra prediction unit 43; when the coding mode is an inter coding mode, it outputs the inter prediction parameter output from the variable length decoding unit 41 to the motion compensation unit 44.
The intra prediction unit 43 generates a prediction image by performing an intra-frame prediction process on the coding block based on the intra prediction parameter output from the changeover switch 42, using the decoded pixels adjacent to the coding block that are stored in the intra prediction memory 47.
That is, for the luminance component in the encoded block output from the variable length decoding unit 41, the intra prediction unit 43 performs intra-frame prediction of the luminance component to generate a prediction image for the luminance component.
On the other hand, for the color difference component of the coding block output from the variable length decoding unit 41, if the coding mode output from the variable length decoding unit 41 is the directional prediction mode of the intra coding mode, intra-frame prediction of the color difference component in the coding block is performed to generate a prediction image for the color difference component.
Further, if the coding mode output from the variable length decoding unit 41 is the smoothed luminance correlation use color difference signal prediction mode of the intra coding mode, the luminance components of a plurality of pixels adjacent in the horizontal direction and the vertical direction among the pixels constituting the coding block are smoothed, a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component is calculated, and a prediction image for the color difference component is generated using the correlation parameter and the smoothed luminance component.
The motion compensation unit 44 generates a prediction image by performing a motion compensation prediction process on the coding block based on the inter prediction parameter output from the changeover switch 42, using one or more frames of reference images stored in the motion compensated prediction frame memory 49.
The changeover switch 42, the intra prediction unit 43, and the motion compensation unit 44 constitute prediction image generation means.
The inverse quantization/inverse transform unit 45 inverse-quantizes the compressed data related to the coding block output from the variable length decoding unit 41 using the quantization parameter included in the prediction differential encoding parameters output from the variable length decoding unit 41, and performs an inverse transform process on the inverse-quantized compressed data (for example, an inverse DCT (inverse discrete cosine transform), an inverse DST (inverse discrete sine transform), or an inverse KL transform) in units of the transform block size included in the prediction differential encoding parameters, thereby outputting the compressed data after the inverse transform process as a decoded prediction difference signal (a signal indicating the difference image before compression). The inverse quantization/inverse transform unit 45 constitutes difference image generation means.
The addition unit 46 adds the decoded prediction difference signal output from the inverse quantization / inverse transform unit 45 and the prediction signal indicating the predicted image generated by the intra prediction unit 43 or the motion compensation unit 44, thereby generating a decoded image signal indicating a decoded image. Note that the addition unit 46 constitutes a decoded image generation means.
The intra prediction memory 47 is a recording medium, such as a RAM, that stores the decoded image indicated by the decoded image signal generated by the addition unit 46 as an image to be used by the intra prediction unit 43 in the next intra prediction process.
The loop filter unit 48 compensates for the coding distortion included in the decoded image signal generated by the addition unit 46, outputs the decoded image indicated by the decoded image signal after the coding distortion compensation to the motion-compensated prediction frame memory 49 as a reference image, and also outputs the decoded image to the outside as a reproduced image.
The motion-compensated prediction frame memory 49 is a recording medium, such as a RAM, that stores the decoded image after the filtering process by the loop filter unit 48 as a reference image to be used by the motion compensation unit 44 in the next motion-compensated prediction process.
In FIG. 7, it is assumed that the variable length decoding unit 41, the changeover switch 42, the intra prediction unit 43, the motion compensation unit 44, the inverse quantization / inverse transform unit 45, the addition unit 46, and the loop filter unit 48, which are the components of the moving image decoding device, are each configured by dedicated hardware (for example, a semiconductor integrated circuit on which a CPU is mounted, or a one-chip microcomputer). However, when the moving image decoding device is configured by a computer or the like, a program describing all or part of the processing contents of the variable length decoding unit 41, the changeover switch 42, the intra prediction unit 43, the motion compensation unit 44, the inverse quantization / inverse transform unit 45, the addition unit 46, and the loop filter unit 48 may be stored in the memory of the computer, and the CPU of the computer may execute the program stored in the memory.
FIG. 8 is a flowchart showing the processing contents of the moving image decoding device according to Embodiment 1 of the present invention.
FIG. 9 is a block diagram showing the intra prediction unit 43 of the moving image decoding device according to Embodiment 1 of the present invention.
In FIG. 9, the luminance signal intra prediction unit 51 performs intra-frame prediction of the luminance component in the coding block output from the variable length decoding unit 41, thereby generating a predicted image for the luminance component.
That is, the luminance signal intra prediction unit 51 refers to the decoded luminance reference pixels adjacent to the coding block that are stored in the intra prediction memory 47, and performs intra-frame prediction of the luminance component based on the intra prediction parameter output from the variable length decoding unit 41, thereby generating a predicted image for the luminance component. Note that the luminance signal intra prediction unit 51 constitutes a luminance component intra prediction means.
If the parameter indicating the intra coding mode of the color difference signal among the intra prediction parameters output from the variable length decoding unit 41 indicates the directional prediction mode, the changeover switch 52 supplies the reference pixels used for prediction to the color difference signal directional intra prediction unit 53; if the parameter indicating the intra coding mode of the color difference signal indicates the smoothed luminance correlation utilization color difference signal prediction mode, the changeover switch 52 outputs the reference pixels used for prediction to the luminance correlation utilization color difference signal prediction unit 54.
The color difference signal directional intra prediction unit 53 refers to the decoded color difference reference pixels adjacent to the coding block that are received from the changeover switch 52, and performs intra-frame prediction of the color difference component based on the intra prediction parameter output from the variable length decoding unit 41, thereby generating a predicted image for the color difference component.
Among the decoded pixels received from the changeover switch 52, the luminance correlation utilization color difference signal prediction unit 54 uses the decoded luminance reference pixels and color difference reference pixels adjacent to the coding block and the decoded luminance reference pixels within the coding block to smooth the luminance components of a plurality of pixels adjacent in the horizontal and vertical directions among the pixels constituting the coding block, calculates a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component, and generates a predicted image for the color difference component by using the correlation parameter and the smoothed luminance component.
Note that the changeover switch 52, the color difference signal directional intra prediction unit 53, and the luminance correlation utilization color difference signal prediction unit 54 constitute a color difference component intra prediction means.
FIG. 10 is a block diagram showing the luminance correlation utilization color difference signal prediction unit 54 of the moving image decoding device according to Embodiment 1 of the present invention.
In FIG. 10, the smoothed luminance reference pixel reduction unit 61 generates reduced luminance reference pixels Rec'_L by performing a smoothing process and the like on a plurality of luminance reference pixels adjacent in the horizontal and vertical directions among the decoded luminance reference pixels constituting the coding block that are stored in the intra prediction memory 47.
The correlation calculation unit 62 calculates correlation parameters α and β indicating the correlation between the luminance component and the color difference component, using the color difference reference pixels stored in the intra prediction memory 47 and the reduced luminance reference pixels Rec'_L generated by the smoothed luminance reference pixel reduction unit 61.
The color difference predicted image generation unit 63 generates a predicted image for the color difference component, using the correlation parameters α and β calculated by the correlation calculation unit 62 and the reduced luminance reference pixels Rec'_L generated by the smoothed luminance reference pixel reduction unit 61.
The moving image encoding device according to Embodiment 1 is characterized in that it adapts to local changes in the spatial and temporal directions of the video signal, divides the video signal into regions of various sizes, and performs intra-frame / inter-frame adaptive encoding.
In general, a video signal has the characteristic that the complexity of the signal changes locally in space and time. Spatially, on a certain video frame, patterns having uniform signal characteristics over a relatively wide image area, such as the sky or a wall, may coexist with patterns having complicated texture within a small image area, such as a person or a picture containing fine texture.
Also in terms of time, the sky and a wall have small local changes of the pattern in the time direction, whereas a moving person or object has large temporal changes because its outline undergoes rigid or non-rigid motion over time.
The encoding process generates, by temporal and spatial prediction, a prediction difference signal having small signal power and entropy, thereby reducing the overall amount of code. If the prediction parameters used for the prediction process can be applied uniformly to as large an image signal region as possible, the amount of code for the prediction parameters can be reduced.
On the other hand, if the same prediction parameter is applied to a large image region for an image signal pattern having large temporal and spatial changes, prediction errors increase, and the amount of code for the prediction difference signal cannot be reduced.
Therefore, in a region having large temporal and spatial changes, it is preferable to reduce the power and entropy of the prediction difference signal, even if this means making the region to be predicted smaller and increasing the amount of data of the prediction parameters used for the prediction process.
In order to perform encoding processing adapted to these general characteristics of a video signal, the moving image encoding device according to Embodiment 1 adopts a configuration in which the region of the video signal is divided hierarchically starting from a predetermined maximum block size, and the prediction process and the encoding process of the prediction difference are adapted for each divided region.
The video signal to be processed by the moving image encoding device according to Embodiment 1 is an arbitrary video signal in which each video frame is composed of a horizontal and vertical two-dimensional sequence of digital samples (pixels), including a color video signal in an arbitrary color space such as a YUV signal composed of a luminance signal and two color difference signals or an RGB signal output from a digital image sensor, as well as a monochrome image signal or an infrared image signal.
The gradation of each pixel may be 8 bits, or may be a gradation such as 10 bits or 12 bits.
However, in the following description, unless otherwise specified, it is assumed that the input video signal is a YUV signal, and that the two color difference components U and V are signals in the 4:2:0 format subsampled with respect to the luminance component Y.
Note that the processing data unit corresponding to each frame of the video is referred to as a "picture", and in Embodiment 1 a "picture" is described as the signal of a progressively scanned video frame. However, when the video signal is an interlaced signal, a "picture" may be a field image signal, which is a unit constituting a video frame.
Next, the operation will be described.
First, the processing contents of the moving image encoding device in FIG. 1 will be described.
First, the encoding control unit 1 determines the maximum size of the coding block, which serves as the processing unit when the intra prediction process (intra-frame prediction process) or the motion-compensated prediction process (inter-frame prediction process) is performed, and also determines the upper limit on the number of hierarchies into which the coding block of the maximum size is divided hierarchically (step ST1 in FIG. 2).
As a method of determining the maximum size of the coding block, for example, a method of determining, for all pictures, a size corresponding to the resolution of the input image is conceivable.
Alternatively, a method is conceivable in which the difference in the complexity of local motion of the input image is quantified in advance as a parameter, and the maximum size is set to a small value for a picture with intense motion and to a large value for a picture with little motion.
As for the upper limit on the number of hierarchies, for example, a method is conceivable in which the number of hierarchies is made deeper as the motion of the input image becomes more intense, so that finer motion can be detected, and the number of hierarchies is kept small when the motion of the input image is small. A sketch of such heuristics is given after this paragraph.
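As one possible way to make the preceding heuristics concrete, the following minimal sketch picks a maximum coding block size from the input resolution and a division-depth limit from a quantified motion-activity parameter. The thresholds, block sizes, and the `motion_activity` scale are assumptions made for illustration, not values taken from the embodiment.

```python
def choose_max_block_size(width, height):
    # Assumed mapping: larger resolutions get larger maximum coding blocks.
    if width * height >= 1920 * 1080:
        return 64
    if width * height >= 1280 * 720:
        return 32
    return 16

def choose_max_hierarchy_depth(motion_activity):
    # motion_activity: assumed scalar in [0, 1] quantifying local motion complexity.
    # More motion -> allow deeper splitting so that finer motion can be captured.
    if motion_activity > 0.66:
        return 4
    if motion_activity > 0.33:
        return 3
    return 2

# Example: a 1080p picture with moderate motion.
print(choose_max_block_size(1920, 1080), choose_max_hierarchy_depth(0.5))
```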
In addition, the encoding control unit 1 selects a coding mode corresponding to each hierarchically divided coding block from among one or more available coding modes (M types of intra coding modes and N types of inter coding modes) (step ST2). The M types of intra coding modes prepared in advance will be described later.
However, when each coding block hierarchically divided by the block division unit 2 described later is further divided into partitions, a coding mode corresponding to each partition can be selected.
In the following description of Embodiment 1, it is assumed that each coding block is further divided into partitions.
Since the method of selecting a coding mode by the encoding control unit 1 is a known technique, a detailed description thereof is omitted. For example, there is a method of carrying out the encoding process on the coding block with each available coding mode to verify its coding efficiency, and then selecting the coding mode with the best coding efficiency from among the plurality of available coding modes, as sketched below.
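A minimal sketch of that selection idea: try every available coding mode on the block, measure a cost, and keep the best one. The `evaluate_mode` callback (for example, returning distortion plus a rate term) is an assumed placeholder; the embodiment only states that the mode with the best coding efficiency is chosen.

```python
def select_coding_mode(coding_block, available_modes, evaluate_mode):
    """Return the mode with the lowest coding cost.

    evaluate_mode(coding_block, mode) is an assumed callback that runs the
    encoding process for the block with the given mode and returns a cost.
    """
    best_mode, best_cost = None, float("inf")
    for mode in available_modes:
        cost = evaluate_mode(coding_block, mode)
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode

# Hypothetical usage with three candidate modes and fixed example costs.
costs = {"intra_0": 9.1, "intra_1": 7.4, "inter_0": 8.0}
print(select_coding_mode("block", list(costs), lambda blk, mode: costs[mode]))
```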
Further, the encoding control unit 1 determines, for each partition included in each coding block, the quantization parameter and the transform block size used when the difference image is compressed, and also determines the intra prediction parameter or inter prediction parameter used when the prediction process is performed.
The encoding control unit 1 outputs the prediction difference encoding parameter, which includes the quantization parameter and the transform block size, to the transform / quantization unit 7, the inverse quantization / inverse transform unit 8, and the variable length encoding unit 13. It also outputs the prediction difference encoding parameter to the intra prediction unit 4 as necessary.
When a video signal indicating the input image is input, the block division unit 2 divides the input image into coding blocks of the maximum size determined by the encoding control unit 1, and hierarchically divides each coding block until the upper limit on the number of hierarchies determined by the encoding control unit 1 is reached. It also divides each coding block into partitions (step ST3).
Here, FIG. 11 is an explanatory diagram showing how a coding block of the maximum size is hierarchically divided into a plurality of coding blocks.
In the example of FIG. 11, the coding block of the maximum size is the coding block B^0 of the 0th hierarchy, and it has a size of (L^0, M^0) in the luminance component.
Also in the example of FIG. 11, the coding blocks B^n are obtained by performing hierarchical division to a separately determined predetermined depth in a quadtree structure, with the coding block B^0 of the maximum size as the starting point.
At depth n, the coding block B^n is an image region of size (L^n, M^n).
L^n and M^n may be the same or different, but the example of FIG. 11 shows the case of L^n = M^n.
Hereinafter, the size of the coding block B^n is defined as the size (L^n, M^n) in the luminance component of the coding block B^n.
Since the block division unit 2 performs quadtree division, (L^(n+1), M^(n+1)) = (L^n / 2, M^n / 2) always holds.
However, whereas in a color video signal in which all color components have the same number of samples, such as an RGB signal (the 4:4:4 format), the size of all color components is (L^n, M^n), when the 4:2:0 format is handled, the size of the coding block of the corresponding color difference component is (L^n / 2, M^n / 2). A sketch of these size relations is shown after this paragraph.
Hereinafter, the coding mode that can be selected for the coding block B^n of the n-th hierarchy is denoted as m(B^n).
In the case of a color video signal composed of a plurality of color components, the coding mode m(B^n) may be configured to use an individual mode for each color component; however, hereinafter, unless otherwise specified, the description assumes that m(B^n) refers to the coding mode for the luminance component of the coding block of a YUV signal in the 4:2:0 format.
The coding modes m(B^n) include one or more intra coding modes (collectively referred to as "INTRA") and one or more inter coding modes (collectively referred to as "INTER"), and, as described above, the encoding control unit 1 selects, from among all the coding modes available for the picture or a subset thereof, the coding mode with the best coding efficiency for the coding block B^n.
As shown in FIG. 11, the coding block B^n is further divided into one or more prediction processing units (partitions).
Hereinafter, a partition belonging to the coding block B^n is denoted as P_i^n (i: the partition number in the n-th hierarchy). FIG. 12 is an explanatory diagram showing the partitions P_i^n belonging to the coding block B^n.
How the partitions P_i^n belonging to the coding block B^n are divided is included as information in the coding mode m(B^n).
All partitions P_i^n are subjected to the prediction process in accordance with the coding mode m(B^n), but an individual prediction parameter can be selected for each partition P_i^n.
For the coding block of the maximum size, the encoding control unit 1 generates a block division state as shown, for example, in FIG. 13, and specifies the coding blocks B^n.
The shaded portions in FIG. 13(a) show the distribution of the partitions after division, and FIG. 13(b) shows, as a quadtree graph, the situation in which the coding modes m(B^n) are assigned to the partitions after the hierarchical division.
In FIG. 13(b), the nodes enclosed by squares are the nodes (coding blocks B^n) to which a coding mode m(B^n) is assigned.
When the encoding control unit 1 selects an intra coding mode (m(B^n) ∈ INTRA), the changeover switch 3 outputs the partition P_i^n belonging to the coding block B^n divided by the block division unit 2 to the intra prediction unit 4, and when the encoding control unit 1 selects an inter coding mode (m(B^n) ∈ INTER), it outputs the partition P_i^n belonging to the coding block B^n to the motion-compensated prediction unit 5.
When the intra prediction unit 4 receives a partition P_i^n belonging to the coding block B^n from the changeover switch 3 (step ST4), it generates an intra predicted image (P_i^n) by carrying out the intra prediction process on each partition P_i^n based on the intra prediction parameter determined by the encoding control unit 1 (step ST5); the specific processing contents will be described later.
Hereinafter, in this specification, P_i^n denotes a partition and (P_i^n) denotes a predicted image of the partition P_i^n.
Since the moving image decoding device side also needs to generate exactly the same intra predicted image (P_i^n), the intra prediction parameter used for generating the intra predicted image (P_i^n) is multiplexed into the bitstream by the variable length encoding unit 13.
Note that the number of intra prediction directions that can be selected as the intra prediction parameter may be configured to differ depending on the size of the block to be processed.
Since the efficiency of intra prediction decreases for a partition of a large size, the configuration can be such that the number of selectable intra prediction directions is reduced for a large partition and increased for a small partition.
For example, the configuration may be 34 directions for a 4x4 pixel partition or an 8x8 pixel partition, 17 directions for a 16x16 pixel partition, 9 directions for a 32x32 pixel partition, and so on.
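A sketch of the example mapping just given from partition size to the number of selectable intra prediction directions. The dictionary values follow the figures in the text; the fallback value for unlisted sizes is an assumption.

```python
# Example mapping from the text: 4x4 and 8x8 -> 34, 16x16 -> 17, 32x32 -> 9.
DIRECTIONS_PER_SIZE = {4: 34, 8: 34, 16: 17, 32: 9}

def num_intra_directions(partition_size):
    # Assumed fallback of 9 directions for sizes not listed in the example.
    return DIRECTIONS_PER_SIZE.get(partition_size, 9)

print([num_intra_directions(s) for s in (4, 8, 16, 32, 64)])
```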
When the motion-compensated prediction unit 5 receives a partition P_i^n belonging to the coding block B^n from the changeover switch 3 (step ST4), it generates an inter predicted image (P_i^n) by carrying out the inter prediction process on each partition P_i^n based on the inter prediction parameter determined by the encoding control unit 1 (step ST6).
That is, the motion-compensated prediction unit 5 generates the inter predicted image (P_i^n) by carrying out the motion-compensated prediction process on the coding block based on the inter prediction parameter output from the encoding control unit 1, using one or more frames of reference images stored in the motion-compensated prediction frame memory 12.
Since the moving image decoding device side also needs to generate exactly the same inter predicted image (P_i^n), the inter prediction parameter used for generating the inter predicted image (P_i^n) is multiplexed into the bitstream by the variable length encoding unit 13.
When the subtraction unit 6 receives the predicted image (P_i^n) from the intra prediction unit 4 or the motion-compensated prediction unit 5, it generates a prediction difference signal e_i^n indicating a difference image by subtracting the predicted image (P_i^n) from the partition P_i^n belonging to the coding block B^n divided by the block division unit 2 (step ST7).
When the subtraction unit 6 generates the prediction difference signal e_i^n, the transform / quantization unit 7 carries out a transform process (for example, an orthogonal transform process such as a DCT (discrete cosine transform), a DST (discrete sine transform), or a KL transform for which a basis has been designed in advance for a specific training sequence) on the prediction difference signal e_i^n in units of the transform block size included in the prediction difference encoding parameter output from the encoding control unit 1, and quantizes the transform coefficients of the prediction difference signal e_i^n using the quantization parameter included in the prediction difference encoding parameter, thereby outputting the compressed data of the difference image, which are the quantized transform coefficients, to the inverse quantization / inverse transform unit 8 and the variable length encoding unit 13 (step ST8).
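As an illustration of that transform-and-quantize step, the sketch below applies a 2-D DCT to one transform block of the prediction difference signal and quantizes the coefficients with a single step size. The flat QP-to-step mapping is an assumption made for the example and is not the quantization rule of the embodiment.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix of size n x n.
    k = np.arange(n).reshape(-1, 1)
    i = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def transform_and_quantize(pred_diff_block, qp):
    """2-D DCT of one square transform block followed by uniform quantization."""
    n = pred_diff_block.shape[0]
    c = dct_matrix(n)
    coeffs = c @ pred_diff_block @ c.T          # separable 2-D DCT
    step = 2.0 ** (qp / 6.0)                    # assumed QP-to-step mapping
    return np.round(coeffs / step).astype(int)  # quantized transform coefficients

block = np.random.randint(-32, 32, size=(8, 8)).astype(float)
print(transform_and_quantize(block, qp=24))
```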
When the inverse quantization / inverse transform unit 8 receives the compressed data from the transform / quantization unit 7, it inverse-quantizes the compressed data using the quantization parameter included in the prediction difference encoding parameter output from the encoding control unit 1, and performs an inverse transform process (for example, an inverse DCT (inverse discrete cosine transform), an inverse DST (inverse discrete sine transform), or an inverse KL transform) on the inverse-quantized compressed data in units of the transform block size included in the prediction difference encoding parameter, thereby outputting the compressed data after the inverse transform process to the addition unit 9 as a local decoded prediction difference signal (step ST9).
When the addition unit 9 receives the local decoded prediction difference signal from the inverse quantization / inverse transform unit 8, it adds the local decoded prediction difference signal and the prediction signal indicating the predicted image (P_i^n) generated by the intra prediction unit 4 or the motion-compensated prediction unit 5, thereby generating a local decoded image signal indicating a local decoded partition image, or a local decoded coding block image as a collection of such images (hereinafter referred to as a "local decoded image"), and outputs the local decoded image signal to the loop filter unit 11 (step ST10).
In addition, the local decoded image is stored in the intra prediction memory 10 for use in intra prediction.
When the loop filter unit 11 receives the local decoded image signal from the addition unit 9, it compensates for the coding distortion included in the local decoded image signal and stores the local decoded image indicated by the local decoded image signal after the coding distortion compensation in the motion-compensated prediction frame memory 12 as a reference image (step ST11).
Note that the filtering process by the loop filter unit 11 may be performed on the maximum coding block of the input local decoded image signal or on each individual coding block, or may be performed collectively for one screen after the local decoded image signals corresponding to one screen's worth of macroblocks have been input.
The processes of steps ST4 to ST10 are repeated until the processing for the partitions P_i^n belonging to all the coding blocks B^n divided by the block division unit 2 is completed (step ST12).
The variable length encoding unit 13 performs variable length encoding of the compressed data output from the transform / quantization unit 7, the coding mode and the prediction difference encoding parameter output from the encoding control unit 1, and the intra prediction parameter output from the intra prediction unit 4 or the inter prediction parameter output from the motion-compensated prediction unit 5, thereby generating a bitstream in which the encoded data of the compressed data, the coding mode, the prediction difference encoding parameter, and the intra prediction parameter / inter prediction parameter are multiplexed (step ST13).
Next, the processing contents of the intra prediction unit 4 will be described in detail.
FIG. 14 is an explanatory diagram showing an example of the intra prediction parameters (intra prediction modes) that can be selected for each partition P_i^n belonging to the coding block B^n.
The example of FIG. 14 shows the prediction direction vectors corresponding to the intra prediction modes, and the design is such that the relative angle between the prediction direction vectors becomes smaller as the number of selectable intra prediction modes increases.
First, the luminance signal intra prediction unit 21 of the intra prediction unit 4 performs intra-frame prediction of the luminance component in the coding block divided by the block division unit 2, thereby generating a predicted image for the luminance component (step ST21 in FIG. 5).
Hereinafter, the processing contents of the luminance signal intra prediction unit 21 will be described in detail.
Here, the intra process by which the luminance signal intra prediction unit 21 of the intra prediction unit 4 generates the intra prediction signal of the luminance signal of the partition P_i^n, based on the intra prediction parameter (intra prediction mode) for that luminance signal, will be described.
For convenience of explanation, the size of the partition P_i^n is assumed to be l_i^n x m_i^n pixels.
FIG. 15 is an explanatory diagram showing an example of the pixels used for generating the predicted values of the pixels in the partition P_i^n in the case of l_i^n = m_i^n = 4.
In the example of FIG. 15, the pixels of the already-encoded upper partition adjacent to the partition P_i^n ((2 x l_i^n + 1) pixels) and the pixels of the left partition ((2 x m_i^n) pixels) are used as the reference pixels for prediction; however, the number of pixels used for prediction may be larger or smaller than that shown in FIG. 15.
Also, in the example of FIG. 15, one adjacent row or column of pixels is used for prediction, but two rows or columns of pixels, or even more pixels, may be used for prediction.
For example, when the index value of the intra prediction mode for the partition P_i^n is 2 (average value prediction), the luminance signal intra prediction unit 21 generates the predicted image by using the average value of the adjacent pixels of the upper partition and the adjacent pixels of the left partition as the predicted value of the pixels in the partition P_i^n.
When the index value of the intra prediction mode is other than 2 (average value prediction), the predicted values of the pixels in the partition P_i^n are generated on the basis of the prediction direction vector v_p = (dx, dy) indicated by the index value.
When the relative coordinates of the pixel for which a predicted value is generated (the prediction target pixel) within the partition P_i^n are (x, y) (with the upper left pixel of the partition as the origin), the position of the reference pixel used for prediction is the intersection of the line L shown below and the adjacent pixels.
L = (x, y) + k · v_p
Here, k is a positive scalar value.
When the reference pixel is at an integer pixel position, that integer pixel is used as the predicted value of the prediction target pixel. When the reference pixel is not at an integer pixel position, an interpolated pixel generated from the integer pixels adjacent to the reference pixel is used as the predicted value.
In the example of FIG. 15, since the reference pixel is not at an integer pixel position, the average value of the two pixels adjacent to the reference pixel is used as the predicted value.
Note that an interpolated pixel may be generated not only from the two adjacent pixels but also from two or more adjacent pixels and used as the predicted value.
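The sketch below illustrates the directional prediction just described for the simplified case in which the prediction direction vector v_p = (dx, dy) points toward the row of reference pixels above the partition (dy < 0): for each target pixel, the line (x, y) + k·v_p is intersected with that row, and the value is linearly interpolated from the two nearest reference pixels. The reference-array layout and the boundary clamping are assumptions made for the example.

```python
import numpy as np

def directional_predict_from_top(top_ref, block_size, dx, dy):
    """Predict a block_size x block_size block from the reference row above it.

    top_ref: 1-D array of decoded reference pixels on the row just above the
             partition, index 0 sitting above its left-most column (assumed
             layout). dy must be negative, i.e. the direction points upward.
    """
    assert dy < 0
    pred = np.zeros((block_size, block_size))
    for y in range(block_size):
        for x in range(block_size):
            # Intersection of the line (x, y) + k * (dx, dy) with the
            # reference row, which lies (y + 1) pixels above the target.
            k = (y + 1) / float(-dy)
            ref_x = np.clip(x + k * dx, 0.0, len(top_ref) - 1.0)  # assumed clamping
            x0 = min(int(np.floor(ref_x)), len(top_ref) - 2)
            w = ref_x - x0
            # Linear interpolation of the two adjacent reference pixels.
            pred[y, x] = (1.0 - w) * top_ref[x0] + w * top_ref[x0 + 1]
    return pred

top = 10.0 * np.arange(9)   # 2 * l + 1 reference pixels for a 4x4 partition
print(directional_predict_from_top(top, 4, dx=1, dy=-2))
```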
The luminance signal intra prediction unit 21 generates predicted pixels for all the pixels of the luminance signal in the partition P_i^n by the same procedure, and outputs the generated intra predicted image (P_i^n).
As described above, the intra prediction parameter used for generating the intra predicted image (P_i^n) is output to the variable length encoding unit 13 in order to be multiplexed into the bitstream.
The changeover switch 22 of the intra prediction unit 4 determines whether the parameter indicating the intra coding mode of the color difference signal among the intra prediction parameters output from the encoding control unit 1 indicates the directional prediction mode or the smoothed luminance correlation utilization color difference signal prediction mode (step ST22).
If the parameter indicating the intra coding mode of the color difference signal indicates the directional prediction mode, the changeover switch 22 supplies the reference pixels used for prediction to the color difference signal directional intra prediction unit 23; if the parameter indicating the intra coding mode of the color difference signal indicates the smoothed luminance correlation utilization color difference signal prediction mode, it supplies the reference pixels used for prediction to the luminance correlation utilization color difference signal prediction unit 24.
Here, FIG. 16 is an explanatory diagram showing an example of the correspondence between the intra prediction parameters of the color difference signal and the color difference intra prediction modes.
In the example of FIG. 16, when the color difference signal intra prediction parameter is "34", the reference pixels used for prediction are supplied to the luminance correlation utilization color difference signal prediction unit 24, and when the color difference signal intra prediction parameter is other than "34", the reference pixels used for prediction are supplied to the color difference signal directional intra prediction unit 23.
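A minimal sketch of the dispatch implied by FIG. 16: the parameter value 34 selects the luminance-correlation-based prediction and every other value selects directional prediction. The two prediction callables are assumed placeholders, not interfaces defined by the embodiment.

```python
LM_MODE_INDEX = 34  # value given in the example of FIG. 16

def predict_chroma(intra_chroma_param, ref_pixels,
                   directional_predict, luminance_correlation_predict):
    """Route the reference pixels to the prediction process selected by the
    chroma intra prediction parameter (both predictors are assumed callbacks)."""
    if intra_chroma_param == LM_MODE_INDEX:
        return luminance_correlation_predict(ref_pixels)
    return directional_predict(ref_pixels, intra_chroma_param)
```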
When the color difference signal directional intra prediction unit 23 receives the reference pixels used for prediction from the changeover switch 22, it refers to the decoded color difference reference pixels adjacent to the partition P_i^n and performs intra-frame prediction of the color difference component based on the intra prediction parameter output from the encoding control unit 1, thereby generating a predicted image for the color difference component (step ST23).
The color difference signal directional intra prediction unit 23 differs from the luminance signal intra prediction unit 21 in that the target of intra prediction is the color difference signal rather than the luminance signal, but the processing contents of the intra prediction itself are the same as those of the luminance signal intra prediction unit 21. Accordingly, an intra predicted image of the color difference signal is generated by performing directional prediction, horizontal prediction, vertical prediction, DC prediction, or the like.
When the luminance correlation utilization color difference signal prediction unit 24 receives the reference pixels used for prediction from the changeover switch 22, it uses the decoded luminance reference pixels and color difference reference pixels adjacent to the partition P_i^n, which is the coding block, and the decoded luminance reference pixels within the partition P_i^n (the luminance reference pixels in the local decoded image obtained from the intra predicted image (P_i^n) of the partition P_i^n previously generated by the luminance signal intra prediction unit 21) to smooth the luminance components of a plurality of pixels adjacent in the horizontal and vertical directions among the pixels constituting the coding block, calculates a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component, and generates a predicted image for the color difference component by using the correlation parameter and the smoothed luminance component (step ST24).
Hereinafter, the processing contents of the luminance correlation utilization color difference signal prediction unit 24 will be described in detail.
The smoothed luminance reference pixel reduction unit 31 of the luminance correlation utilization color difference signal prediction unit 24 generates reduced luminance reference pixels Rec'_L by performing a smoothing process and the like on a plurality of luminance reference pixels adjacent in the horizontal and vertical directions among the decoded luminance reference pixels constituting the partition P_i^n stored in the intra prediction memory 10 (the luminance reference pixels in the local decoded image obtained from the intra predicted image (P_i^n) of the partition P_i^n previously generated by the luminance signal intra prediction unit 21) (step ST31 in FIG. 6).
That is, as shown in FIG. 19, the smoothed luminance reference pixel reduction unit 31 generates the reduced luminance reference pixels Rec'_L by using the decoded luminance signal, which consists of the decoded pixel values in the block (the 2N x 2N block on the right side of the figure) corresponding to the prediction block of the color difference signal in the partition P_i^n (the N x N block on the left side of the figure), and the decoded luminance signal adjacent to the upper end and the left end of that decoded luminance signal.
Here, as shown in FIG. 17, the reduced luminance reference pixels Rec'_L are obtained by applying a 1:2:1 low-pass filter in the horizontal direction and a 1:1 low-pass filter in the vertical direction to the luminance reference pixels Rec_L, so that they are in phase with the color difference signal pixels of the YUV 4:2:0 signal, and then subsampling only the even-numbered positions in the horizontal and vertical directions.
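The following sketch reproduces that downsampling on a 2N x 2N array of decoded luminance samples: a horizontal 1:2:1 low-pass filter, a vertical 1:1 low-pass filter, then 2:1 subsampling so that the result is in phase with the 4:2:0 chrominance grid. Edge replication at the array border is an assumption of the example (the embodiment uses the decoded pixels adjacent to the block where available).

```python
import numpy as np

def reduce_luma_reference(rec_l):
    """Downsample a 2N x 2N decoded luminance block to N x N (Rec'_L)."""
    padded = np.pad(rec_l.astype(float), ((0, 1), (1, 1)), mode="edge")  # assumed edge handling
    # Horizontal 1:2:1 low-pass filter centred on each even column.
    h = (padded[:, :-2] + 2.0 * padded[:, 1:-1] + padded[:, 2:]) / 4.0
    # Vertical 1:1 low-pass filter (average of each row and the row below).
    v = (h[:-1, :] + h[1:, :]) / 2.0
    # Keep only the even-numbered columns and rows.
    return v[::2, ::2]

rec_l = np.arange(64, dtype=float).reshape(8, 8)   # 2N x 2N with N = 4
print(reduce_luma_reference(rec_l))                # N x N reduced reference
```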
When the smoothed luminance reference pixel reduction unit 31 has generated the reduced luminance reference pixels Rec'_L, the correlation calculation unit 32 of the luminance correlation utilization color difference signal prediction unit 24 calculates the correlation parameters α and β used for prediction, as shown in the following equations (4) and (5), using the reduced luminance reference pixels Rec'_L and the color difference reference pixels Rec_C, which are the decoded pixel values of the color difference signal adjacent to the upper end and the left end of the prediction block of the color difference signal (step ST32).
α = ( I · Σ Rec_C(i) · Rec'_L(i) − Σ Rec_C(i) · Σ Rec'_L(i) ) / ( I · Σ Rec'_L(i) · Rec'_L(i) − ( Σ Rec'_L(i) )^2 )   (4)

β = ( Σ Rec_C(i) − α · Σ Rec'_L(i) ) / I   (5)

(each Σ being taken over i = 0, ..., I−1, i.e., over the reference pixels adjacent to the upper end and the left end of the prediction block)
In equations (4) and (5), I is a value equal to twice the number of pixels on one side of the prediction block of the color difference signal to be processed.
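A sketch of equations (4) and (5): the reduced luminance reference samples and the color difference reference samples adjacent to the top and left of the prediction block are gathered into two vectors of length I, and α and β are computed as the corresponding least-squares fit. The order in which the boundary samples are gathered, and the fallback for a flat luminance border, are assumptions of the example.

```python
import numpy as np

def correlation_parameters(rec_l_ref, rec_c_ref):
    """Compute alpha and beta according to equations (4) and (5).

    rec_l_ref: reduced luminance reference samples Rec'_L(i), i = 0 .. I-1
    rec_c_ref: color difference reference samples Rec_C(i), i = 0 .. I-1
    (the decoded samples adjacent to the upper end and left end of the
    color difference prediction block, gathered in an assumed order)
    """
    x = np.asarray(rec_l_ref, dtype=float)
    y = np.asarray(rec_c_ref, dtype=float)
    i_count = len(x)                        # I = 2 * (one side of the block)
    denom = i_count * np.sum(x * x) - np.sum(x) ** 2
    if denom == 0:                          # flat luminance border: assumed fallback
        return 0.0, float(np.mean(y))
    alpha = (i_count * np.sum(x * y) - np.sum(x) * np.sum(y)) / denom
    beta = (np.sum(y) - alpha * np.sum(x)) / i_count
    return alpha, beta

# Example with I = 8 boundary samples (a hypothetical 4x4 chroma block).
print(correlation_parameters([100, 104, 108, 112, 116, 120, 124, 128],
                             [60, 62, 64, 66, 68, 70, 72, 74]))
```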
When the correlation calculation unit 32 has calculated the correlation parameters α and β, the color difference predicted image generation unit 33 generates the color difference predicted image Pred_C, as shown in the following equation (6), using the correlation parameters α and β and the reduced luminance reference pixels Rec'_L (step ST33).
Pred_C[x, y] = α · Rec'_L[x, y] + β   (6)
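Building on the two previous sketches, equation (6) simply applies the linear model to the reduced luminance samples inside the block to obtain the color difference predicted image; the block contents below are hypothetical values used only to show the call.

```python
import numpy as np

def predict_chroma_block(alpha, beta, reduced_luma_block):
    """Equation (6): Pred_C[x, y] = alpha * Rec'_L[x, y] + beta."""
    return alpha * np.asarray(reduced_luma_block, dtype=float) + beta

reduced_luma = np.full((4, 4), 110.0)     # hypothetical N x N reduced luminance block
print(predict_chroma_block(0.5, 8.0, reduced_luma))
```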
Intra prediction is a means for predicting an unknown region in a picture from a known region. Since there is a correlation between the textures of the luminance signal and the color difference signal, and since, in the spatial direction, the change in pixel value between neighboring pixels is small, the prediction efficiency can be improved by calculating the correlation parameters between the luminance signal and the color difference signal using the decoded luminance signal and color difference signal adjacent to the prediction block, and then predicting the color difference signal from the luminance signal and the correlation parameters.
In this case, since the resolutions of the luminance signal and the color difference signal differ in the YUV 4:2:0 signal, the luminance signal needs to be subsampled; by applying a low-pass filter, the occurrence of aliasing can be suppressed and the prediction efficiency can be improved.
As described above, the variable length encoding unit 13 performs variable length encoding of the intra prediction parameter output from the intra prediction unit 4 and multiplexes the code word of the intra prediction parameter into the bitstream. When encoding the intra prediction parameter, the configuration may be such that a representative prediction direction vector (prediction direction representative vector) is selected from among the prediction direction vectors of the plurality of directional predictions, the intra prediction parameter is represented by an index of the prediction direction representative vector (prediction direction representative index) and an index representing the difference from the prediction direction representative vector (prediction direction difference index), and Huffman coding such as arithmetic coding corresponding to a probability model is performed for each index, whereby the intra prediction parameter is encoded with a reduced amount of code.
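A sketch of that index split: an intra prediction mode is mapped to the nearest of an assumed set of representative prediction directions plus a signed difference, and is recovered from that pair on the decoding side. The representative set and the subsequent per-index arithmetic coding are assumptions made for illustration.

```python
# Assumed representative prediction modes (a subset of the directional modes).
REPRESENTATIVE_MODES = [0, 4, 8, 12, 16, 20, 24, 28, 32]

def split_intra_mode(mode):
    """Return (prediction direction representative index, difference index)."""
    rep_idx = min(range(len(REPRESENTATIVE_MODES)),
                  key=lambda i: abs(mode - REPRESENTATIVE_MODES[i]))
    return rep_idx, mode - REPRESENTATIVE_MODES[rep_idx]

def merge_intra_mode(rep_idx, diff_idx):
    """Inverse mapping used on the decoding side."""
    return REPRESENTATIVE_MODES[rep_idx] + diff_idx

rep, diff = split_intra_mode(14)
assert merge_intra_mode(rep, diff) == 14
print(rep, diff)   # each index would then be entropy-coded with its own probability model
```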
Next, the processing contents of the moving image decoding device in FIG. 7 will be described.
When the variable length decoding unit 41 receives the bitstream generated by the moving image encoding device in FIG. 1, it carries out a variable length decoding process on the bitstream (step ST41 in FIG. 8) and decodes the frame size in units of sequences, each composed of one or more frames of pictures, or in units of pictures.
Having decoded the frame size, the variable length decoding unit 41 determines the maximum coding block size determined by the moving image encoding device in FIG. 1 (the maximum size of the coding block that serves as the processing unit when the intra prediction process or the motion-compensated prediction process is performed) and the upper limit on the number of division hierarchies (the number of hierarchies of the coding blocks divided hierarchically from the coding block of the maximum size) by the same procedure as the moving image encoding device (step ST42).
For example, when the maximum size of the coding block has been determined for all pictures as a size corresponding to the resolution of the input image, the maximum size of the coding block is determined on the basis of the previously decoded frame size by the same procedure as the moving image encoding device in FIG. 1.
When the maximum size of the coding block and the number of hierarchies of the coding block have been multiplexed into the bitstream by the moving image encoding device, the maximum size of the coding block and the number of hierarchies of the coding block are decoded from the bitstream.
Having determined the maximum size of the coding block and the number of hierarchies of the coding block, the variable length decoding unit 41 grasps the hierarchical division state of each coding block, starting from the maximum coding block, thereby specifying the encoded data relating to each coding block among the encoded data multiplexed in the bitstream, and decodes the coding mode assigned to each coding block from that encoded data.
Then, the variable length decoding unit 41 refers to the division information of the partitions P_i^n belonging to the coding block B^n included in the coding mode, and specifies the encoded data relating to each partition P_i^n among the encoded data multiplexed in the bitstream (step ST43).
The variable length decoding unit 41 variable-length-decodes the compressed data, the prediction difference encoding parameter, and the intra prediction parameter / inter prediction parameter from the encoded data relating to each partition P_i^n, outputs the compressed data and the prediction difference encoding parameter to the inverse quantization / inverse transform unit 45, and outputs the coding mode and the intra prediction parameter / inter prediction parameter to the changeover switch 42 (step ST44).
For example, when the prediction direction representative index and the prediction direction difference index have been multiplexed into the bitstream, the prediction direction representative index and the prediction direction difference index are entropy-decoded by arithmetic decoding or the like corresponding to their respective probability models, and the intra prediction parameter is specified from the prediction direction representative index and the prediction direction difference index.
Thereby, even when the amount of code of the intra prediction parameter has been reduced on the moving image encoding device side, the intra prediction parameter can be correctly decoded.
When the coding mode of the partition P_i^n belonging to the coding block B^n output from the variable length decoding unit 41 is an intra coding mode, the changeover switch 42 outputs the intra prediction parameter output from the variable length decoding unit 41 to the intra prediction unit 43, and when that coding mode is an inter coding mode, it outputs the inter prediction parameter output from the variable length decoding unit 41 to the motion compensation unit 44.
When the intra prediction unit 43 receives the intra prediction parameter from the variable length decoding unit 41 (step ST45), it generates an intra predicted image (P_i^n) by carrying out the intra prediction process on each partition P_i^n based on the intra prediction parameter, in the same manner as the intra prediction unit 4 in FIG. 1 (step ST46).
Hereinafter, the processing contents of the intra prediction unit 43 will be described in detail.
When the luminance signal intra prediction unit 51 of the intra prediction unit 43 receives the intra prediction parameter from the variable length decoding unit 41, it operates in the same manner as the luminance signal intra prediction unit 21 of the moving image encoding device: for example, when the index value of the intra prediction mode for the partition P_i^n is 2 (average value prediction), it generates the predicted image by using the average value of the adjacent pixels of the upper partition and the adjacent pixels of the left partition as the predicted value of the pixels in the partition P_i^n.
When the index value of the intra prediction mode is other than 2 (average value prediction), the predicted values of the pixels in the partition P_i^n are generated on the basis of the prediction direction vector v_p = (dx, dy) indicated by the index value.
The luminance signal intra prediction unit 51 generates predicted pixels for all the pixels of the luminance signal in the partition P_i^n by the same procedure, and outputs the generated intra predicted image (P_i^n).
Luminance signal intra prediction unit 51 of the intra prediction unit 43 receives the intra prediction parameter from the variable length decoding unit 41, similarly to the luminance signal intra prediction unit 21 of the video encoding apparatus, for example, intra against the partition P i n If the index value of the prediction mode is 2 (average prediction), and generates a prediction image the mean value of neighboring pixels of the adjacent pixel and the left partition of the upper partition as the predicted value of the pixel in the partition P i n.
If the index value of the intra prediction mode is other than 2 (average prediction), the prediction direction vector index value indicates v p = (dx, dy) on the basis of, generating a prediction value of the pixel in the partition P i n To do.
The luminance signal intra prediction unit 51 generates prediction pixels for all the pixels of the luminance signal in the partition P i n in the same procedure, and outputs the generated intra prediction image (P i n ).
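The following Python sketch illustrates the two branches described above for one square partition. The reference-pixel layout, the nearest-neighbour sampling along v p = (dx, dy), and the boundary clipping are simplifying assumptions rather than the exact interpolation defined in the description.

```python
import numpy as np


def intra_predict_luma(top, left, mode_index, v, size):
    """Simplified sketch of the luma intra prediction described above.

    top/left hold the decoded reference pixels above and to the left of the
    partition.  Mode index 2 is average-value (DC) prediction; any other mode
    uses the prediction direction vector v = (dx, dy).  Only nearest-neighbour
    sampling from the top reference row is shown here.
    """
    pred = np.empty((size, size), dtype=np.float64)
    if mode_index == 2:                       # average-value prediction
        pred[:] = (np.sum(top[:size]) + np.sum(left[:size])) / (2 * size)
        return pred
    dx, dy = v                                # direction vector (dy assumed non-zero)
    for y in range(size):
        for x in range(size):
            # Walk (y + 1) steps along the direction to reach the reference row
            # above the partition, then sample the nearest reference pixel.
            ref_x = x + round((y + 1) * dx / dy)
            ref_x = int(np.clip(ref_x, 0, len(top) - 1))
            pred[y, x] = top[ref_x]
    return pred
```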
Like the changeover switch 22 of the moving image encoding device, the changeover switch 52 of the intra prediction unit 43 determines whether the parameter indicating the intra coding mode of the color difference signal, among the intra prediction parameters output from the variable length decoding unit 41, indicates the directional prediction mode or the smoothed luminance correlation use color difference signal prediction mode.
If the parameter indicating the intra coding mode of the color difference signal indicates the directional prediction mode, the changeover switch 52 supplies the reference pixels used for prediction to the color difference signal directional intra prediction unit 53; if the parameter indicates the smoothed luminance correlation use color difference signal prediction mode, it supplies the reference pixels used for prediction to the luminance correlation use color difference signal prediction unit 54.
On receiving the reference pixels used for prediction from the changeover switch 52, the color difference signal directional intra prediction unit 53, like the color difference signal directional intra prediction unit 23 of the moving image encoding device, refers to the decoded color difference reference pixels adjacent to the partition P i n and performs intra-frame prediction of the color difference components based on the intra prediction parameter output from the variable length decoding unit 41, thereby generating a prediction image for the color difference components.
The color difference signal directional intra prediction unit 53 differs from the luminance signal intra prediction unit 51 in that the target of its intra prediction is the color difference signal rather than the luminance signal, but the intra prediction processing itself is the same as in the luminance signal intra prediction unit 51. Accordingly, an intra prediction image of the color difference signal is generated by carrying out directional prediction, horizontal prediction, vertical prediction, DC prediction, and the like.
On receiving the reference pixels used for prediction from the changeover switch 52, the luminance correlation use color difference signal prediction unit 54, like the luminance correlation use color difference signal prediction unit 24 of the moving image encoding device, uses the decoded luminance reference pixels and color difference reference pixels adjacent to the partition P i n and the decoded luminance reference pixels in the partition P i n (the luminance reference pixels in the decoded image obtained from the intra prediction image (P i n) of the partition P i n generated earlier by the luminance signal intra prediction unit 51). Among the pixels constituting the coding block, it smooths the luminance components of a plurality of pixels adjacent in the horizontal and vertical directions, calculates a correlation parameter indicating the correlation between the smoothed luminance components and the color difference components, and generates a prediction image for the color difference components by using the correlation parameter and the smoothed luminance components.
The processing of the luminance correlation use color difference signal prediction unit 54 is described concretely below.
Like the smoothed luminance reference pixel reduction unit 31 of the moving image encoding device, the smoothed luminance reference pixel reduction unit 61 of the luminance correlation use color difference signal prediction unit 54 generates reduced luminance reference pixels Rec'L by, among other processing, smoothing a plurality of horizontally and vertically adjacent pixels of the decoded luminance reference pixels constituting the partition P i n stored in the intra prediction memory 47 (the luminance reference pixels in the decoded image obtained from the intra prediction image (P i n) of the partition P i n generated earlier by the luminance signal intra prediction unit 51).
That is, as shown in FIG. 19, the smoothed luminance reference pixel reduction unit 61 generates the reduced luminance reference pixels Rec'L by using the decoded luminance signal, which consists of the decoded pixel values in the block (the 2N×2N block on the right in the figure) corresponding to the prediction block of the color difference signal in the partition P i n (the N×N block on the left in the figure), and the decoded luminance signals adjacent to the upper edge and the left edge of that decoded luminance signal.
Here, as shown in FIG. 17, the reduced luminance reference pixels Rec'L are obtained by applying a 1:2:1 low-pass filter in the horizontal direction and a 1:1 low-pass filter in the vertical direction to the luminance reference pixels RecL, so that they come into phase with the color difference signal pixels of the YUV 4:2:0 signal, and then subsampling only the even-numbered rows and columns.
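A minimal Python sketch of this reduction step is shown below. The edge handling (border replication) and the use of floating-point arithmetic are assumptions; the 1:2:1 horizontal filter, the 1:1 vertical filter, and the even-row/even-column subsampling follow the description above.

```python
import numpy as np


def reduce_luma_reference(rec_l: np.ndarray) -> np.ndarray:
    """Sketch of the generation of the reduced luminance reference pixels Rec'L.

    rec_l is the 2N x 2N block of decoded luma samples.  A 1:2:1 low-pass
    filter is applied horizontally and a 1:1 low-pass filter vertically, after
    which only the even-numbered rows and columns are kept, matching the phase
    of the 4:2:0 chroma samples.  Border samples are replicated at the edges.
    """
    h, w = rec_l.shape
    padded = np.pad(rec_l.astype(np.float64), ((0, 1), (1, 1)), mode="edge")
    # Horizontal 1:2:1 filter.
    horiz = (padded[:, :-2] + 2.0 * padded[:, 1:-1] + padded[:, 2:]) / 4.0
    # Vertical 1:1 filter (average of the current row and the row below).
    vert = (horiz[:-1, :] + horiz[1:, :]) / 2.0
    # Keep only the even-numbered rows and columns (subsampling).
    return vert[0:h:2, 0:w:2]
```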
When the smoothed luminance reference pixel reduction unit 61 has generated the reduced luminance reference pixels Rec'L, the correlation calculation unit 62, like the correlation calculation unit 32 of the moving image encoding device, calculates the correlation parameters α and β used for prediction, as shown in equations (4) and (5) above, by using the reduced luminance reference pixels Rec'L and the color difference reference pixels RecC, which are the decoded pixel values of the color difference signal adjacent to the upper edge and the left edge of the prediction block of the color difference signal.
When the correlation calculation unit 62 has calculated the correlation parameters α and β, the color difference prediction image generation unit 63, like the color difference prediction image generation unit 33 of the moving image encoding device, generates the color difference prediction image PredC by using the correlation parameters α and β and the reduced luminance reference pixels Rec'L, as shown in equation (6) above.
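Because equations (4) to (6) appear earlier in the description, the sketch below assumes the usual least-squares formulation of the correlation parameters α and β and the linear prediction PredC = α·Rec'L + β; the function and argument names are illustrative only.

```python
import numpy as np


def lm_chroma_prediction(rec_l_neighbors: np.ndarray,
                         rec_c_neighbors: np.ndarray,
                         rec_l_block: np.ndarray) -> np.ndarray:
    """Hedged sketch of the correlation-based chroma prediction.

    rec_l_neighbors / rec_c_neighbors are the reduced luma and decoded chroma
    reference samples adjacent to the prediction block, and rec_l_block is the
    reduced luma block Rec'L inside the partition.
    """
    l = rec_l_neighbors.astype(np.float64).ravel()
    c = rec_c_neighbors.astype(np.float64).ravel()
    n = l.size
    # Correlation parameters alpha, beta: least-squares fit of c ~ alpha*l + beta.
    denom = n * np.sum(l * l) - np.sum(l) ** 2
    alpha = (n * np.sum(l * c) - np.sum(l) * np.sum(c)) / denom if denom else 0.0
    beta = (np.sum(c) - alpha * np.sum(l)) / n
    # Chroma prediction image: Pred_C = alpha * Rec'_L + beta.
    return alpha * rec_l_block.astype(np.float64) + beta
```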
On receiving the inter prediction parameter from the changeover switch 42, the motion compensation unit 44, like the motion compensated prediction unit 5 of the moving image encoding device, carries out an inter prediction process on each partition P i n based on that inter prediction parameter, thereby generating an inter prediction image (P i n) (step ST47).
That is, the motion compensation unit 44 generates the inter prediction image (P i n) by performing a motion compensated prediction process on the partition P i n based on the inter prediction parameter, using one or more frames of reference images stored in the motion compensated prediction frame memory 49.
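A minimal sketch of the motion-compensated prediction for one partition, restricted to integer-pixel motion vectors and a single reference frame, might look as follows; sub-pixel interpolation and reference picture selection are omitted.

```python
import numpy as np


def motion_compensate(reference: np.ndarray, pos: tuple, mv: tuple, size: tuple) -> np.ndarray:
    """Fetch the prediction block indicated by an integer motion vector.

    pos is the (row, col) of the partition in the current picture, mv the
    motion vector, and size the (height, width) of the partition.  Samples
    outside the reference picture are clamped to its border.
    """
    y0, x0 = pos
    dy, dx = mv
    h, w = size
    ref_h, ref_w = reference.shape
    ys = np.clip(np.arange(y0 + dy, y0 + dy + h), 0, ref_h - 1)
    xs = np.clip(np.arange(x0 + dx, x0 + dx + w), 0, ref_w - 1)
    return reference[np.ix_(ys, xs)]
```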
On receiving the prediction difference coding parameters from the variable length decoding unit 41, the inverse quantization / inverse transform unit 45 inverse-quantizes the compressed data of the coding block output from the variable length decoding unit 41 by using the quantization parameter included in the prediction difference coding parameters, carries out an inverse transform process (for example, an inverse DCT (inverse discrete cosine transform), inverse DST (inverse discrete sine transform), or inverse KL transform) on the inverse-quantized compressed data in units of the transform block size included in the prediction difference coding parameters, and outputs the inverse-transformed compressed data to the addition unit 46 as a decoded prediction difference signal (a signal representing the difference image before compression) (step ST48).
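The following sketch illustrates the inverse quantization and inverse transform for one transform block; a flat quantization step and the inverse DCT are assumed stand-ins for whatever scaling and inverse transform the prediction difference coding parameters actually select.

```python
import numpy as np
from scipy.fft import idctn


def decode_residual(coeff_block: np.ndarray, qp_step: float) -> np.ndarray:
    """Sketch of the inverse quantization / inverse transform of one block."""
    dequantized = coeff_block.astype(np.float64) * qp_step   # inverse quantization
    return idctn(dequantized, norm="ortho")                  # inverse transform
```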
The addition unit 46 adds the decoded prediction difference signal output from the inverse quantization / inverse transform unit 45 and the prediction signal representing the prediction image (P i n) generated by the intra prediction unit 43 or the motion compensation unit 44, thereby generating a decoded image signal representing a decoded partition image, or a decoded image as a collection of such partition images, and outputs the decoded image signal to the loop filter unit 48 (step ST49).
The decoded image is also stored in the intra prediction memory 47 for use in intra prediction.
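A minimal sketch of this addition step is shown below; the rounding and the clipping range derived from the sample bit depth are assumptions.

```python
import numpy as np


def reconstruct_partition(residual: np.ndarray, prediction: np.ndarray,
                          bit_depth: int = 8) -> np.ndarray:
    """Decoded partition = decoded prediction difference + prediction image.

    The result would be stored in the intra prediction memory and, after loop
    filtering, in the motion compensated prediction frame memory.
    """
    decoded = residual.astype(np.float64) + prediction.astype(np.float64)
    return np.clip(np.rint(decoded), 0, (1 << bit_depth) - 1).astype(np.uint16)
```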
On receiving the decoded image signal from the addition unit 46, the loop filter unit 48 compensates for the coding distortion contained in the decoded image signal, stores the decoded image represented by the decoded image signal after coding distortion compensation in the motion compensated prediction frame memory 49 as a reference image, and outputs the decoded image as a reproduced image (step ST50).
The filtering process by the loop filter unit 48 may be carried out for each largest coding block or for each individual coding block of the input decoded image signal, or may be carried out for one screen at a time after the decoded image signals corresponding to the macroblocks of one screen have been input.
The processes in steps ST43 to ST49 are repeated until the processing of the partitions P i n belonging to all the coding blocks B n has been completed (step ST51).
As is apparent from the above, according to the first embodiment, the intra prediction unit 4 of the moving image encoding device is configured to smooth, among the pixels constituting a coding block divided by the block dividing unit 2, the luminance components of a plurality of pixels adjacent in the horizontal and vertical directions, calculate a correlation parameter indicating the correlation between the smoothed luminance components and the color difference components, and generate a prediction image for the color difference components by using the correlation parameter and the smoothed luminance components. This suppresses the occurrence of aliasing in the smoothed luminance components and improves the coding efficiency.
That is, when the coding mode selected by the encoding control unit 1 is an intra prediction mode and the parameter indicating the intra prediction mode of the color difference signal indicates the smoothed luminance correlation use color difference signal prediction mode, the reduced luminance reference pixels are generated by smoothing the luminance reference pixels in the horizontal and vertical directions and then subsampling them, and the intra prediction image of the color difference signal is generated by using the correlation between the luminance signal and the color difference signal. A prediction image is therefore generated in which the amplification of prediction error due to aliasing, which occurred conventionally, is suppressed, so the prediction efficiency is improved and, as a result, the coding efficiency can be increased.
Also, according to the first embodiment, the intra prediction unit 43 of the moving image decoding device is configured to smooth, among the pixels constituting a coding block output from the variable length decoding unit 41, the luminance components of a plurality of pixels adjacent in the horizontal and vertical directions, calculate a correlation parameter indicating the correlation between the smoothed luminance components and the color difference components, and generate a prediction image for the color difference components by using the correlation parameter and the smoothed luminance components. A moving image can therefore be decoded correctly from encoded data whose coding efficiency has been improved.
That is, when the coding mode variable-length-decoded by the variable length decoding unit 41 is an intra prediction mode and the parameter indicating the intra prediction mode of the color difference signal indicates the smoothed luminance correlation use color difference signal prediction mode, the reduced luminance reference pixels are generated by smoothing the luminance reference pixels in the horizontal and vertical directions and then subsampling them, and the intra prediction image of the color difference signal is generated by using the correlation between the luminance signal and the color difference signal, so that a moving image can be decoded correctly from encoded data whose coding efficiency has been improved.
In the first embodiment, a 1:2:1 smoothing filter is applied in the horizontal direction when the reduced luminance reference pixels are generated, but the filter coefficients are not limited to these; a similar effect can be obtained with, for example, a 3:2:3, 7:2:7, or 1:0:1 filter, a smoothing filter with a larger number of taps, or a 1:1 filter.
Also, in the first embodiment, the same smoothing filter is used for the luminance reference pixels of the block to be predicted and the adjacent luminance reference pixels when the reduced luminance reference pixels are generated, but a similar effect can also be obtained by applying different filters, for example by applying a 1:0:1 filter to the luminance reference pixels of the block to be predicted and a 1:2:1 filter to the adjacent luminance reference pixels.
Applying a 1:0:1 filter or a 1:1 filter also reduces the amount of computation, while applying a smoothing filter with a larger number of taps improves the coding efficiency.
The smoothing in the horizontal direction and the vertical direction may also be carried out simultaneously; for example, the six target pixels may be weighted and added according to the above filter coefficients and then averaged.
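The sketch below illustrates how the reduction step can be parameterized by such kernels, with the horizontal and vertical smoothing carried out simultaneously as a single weighted sum. The 3-tap horizontal / 2-tap vertical indexing and the border replication are assumptions.

```python
import numpy as np


def reduce_with_kernels(rec_l, h_kernel=(1, 2, 1), v_kernel=(1, 1)):
    """Sketch of the reduction step with interchangeable smoothing kernels.

    The indexing assumes a 3-tap horizontal kernel and a 2-tap vertical
    kernel, so the alternatives named above map to e.g. (3, 2, 3), (7, 2, 7),
    (1, 0, 1), or (0, 1, 1) for the 1:1 case.  The six pixels in each 2 x 3
    neighbourhood are weighted by the outer product of the kernels, summed,
    and normalized, then only every second row and column is kept.
    """
    h = np.asarray(h_kernel, dtype=np.float64)
    v = np.asarray(v_kernel, dtype=np.float64)
    weights = np.outer(v, h) / (h.sum() * v.sum())   # combined 2-D kernel
    rows, cols = rec_l.shape
    padded = np.pad(rec_l.astype(np.float64), ((0, 1), (1, 1)), mode="edge")
    out = np.zeros((rows // 2, cols // 2))
    for y in range(0, rows, 2):
        for x in range(0, cols, 2):
            window = padded[y:y + 2, x:x + 3]   # 2 x 3 neighbourhood around (y, x)
            out[y // 2, x // 2] = np.sum(window * weights)
    return out
```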
Within the scope of the invention, any component of the embodiment may be modified, or any component of the embodiment may be omitted.
In the moving image encoding device, moving image decoding device, moving image encoding method, and moving image decoding method according to the present invention, the prediction image generation means includes luminance component intra prediction means that performs intra-frame prediction of the luminance components of a coding block divided by the block dividing means to generate a prediction image for the luminance components, and color difference component intra prediction means that smooths, among the pixels constituting the coding block, the luminance components of a plurality of pixels adjacent in the horizontal and vertical directions, calculates a correlation parameter indicating the correlation between the smoothed luminance components and the color difference components, and generates a prediction image for the color difference components by using the correlation parameter and the smoothed luminance components. Since this suppresses the occurrence of aliasing in the smoothed luminance components and improves the coding efficiency, the invention is suitable for use in moving image encoding devices and moving image decoding devices.
1 encoding control unit (encoding control means), 2 block dividing unit (block dividing means), 3 changeover switch (prediction image generation means), 4 intra prediction unit (prediction image generation means), 5 motion compensated prediction unit (prediction image generation means), 6 subtraction unit (difference image generation means), 7 transform/quantization unit (image compression means), 8 inverse quantization/inverse transform unit, 9 addition unit, 10 intra prediction memory, 11 loop filter unit, 12 motion compensated prediction frame memory, 13 variable length encoding unit (variable length encoding means), 21 luminance signal intra prediction unit (luminance component intra prediction means), 22 changeover switch (color difference component intra prediction means), 23 color difference signal directional intra prediction unit (color difference component intra prediction means), 24 luminance correlation use color difference signal prediction unit (color difference component intra prediction means), 31 smoothed luminance reference pixel reduction unit, 32 correlation calculation unit, 33 color difference prediction image generation unit, 41 variable length decoding unit (variable length decoding means), 42 changeover switch (prediction image generation means), 43 intra prediction unit (prediction image generation means), 44 motion compensation unit (prediction image generation means), 45 inverse quantization/inverse transform unit (difference image generation means), 46 addition unit (decoded image generation means), 47 intra prediction memory, 48 loop filter unit, 49 motion compensated prediction frame memory, 51 luminance signal intra prediction unit (luminance component intra prediction means), 52 changeover switch (color difference component intra prediction means), 53 color difference signal directional intra prediction unit (color difference component intra prediction means), 54 luminance correlation use color difference signal prediction unit (color difference component intra prediction means), 61 smoothed luminance reference pixel reduction unit, 62 correlation calculation unit, 63 color difference prediction image generation unit, 931 luminance reference pixel reduction unit, 932 correlation calculation unit, 933 color difference prediction image generation unit.

Claims (8)

  1.  A moving image encoding device comprising: prediction image generation means for generating a prediction image by carrying out, on a hierarchically divided coding block, a prediction process corresponding to the coding mode corresponding to the coding block; and variable length encoding means for variable-length-encoding the coding mode and generating a bitstream into which the encoded data of the coding mode are multiplexed;
     wherein the prediction image generation means comprises: luminance component intra prediction means for performing intra-frame prediction of the luminance components of the hierarchically divided coding block to generate a prediction image for the luminance components; and color difference component intra prediction means for smoothing, among the pixels constituting the coding block, the luminance components of a plurality of pixels adjacent in the horizontal and vertical directions, calculating a correlation parameter indicating the correlation between the smoothed luminance components and the color difference components, and generating a prediction image for the color difference components by using the correlation parameter and the smoothed luminance components.
  2.  The moving image encoding device according to claim 1, comprising: encoding control means for determining a maximum size of the coding blocks serving as units of the prediction process, determining an upper limit on the number of hierarchy layers into which a coding block of the maximum size is hierarchically divided, and selecting, from one or more available coding modes, a coding mode corresponding to each hierarchically divided coding block; block dividing means for dividing an input image into coding blocks of the maximum size determined by the encoding control means and hierarchically dividing the coding blocks up to the upper limit on the number of hierarchy layers determined by the encoding control means; prediction image generation means for generating a prediction image by carrying out, on a coding block divided by the block dividing means, the prediction process corresponding to the coding mode selected by the encoding control means; difference image generation means for generating a difference image between the coding block divided by the block dividing means and the prediction image generated by the prediction image generation means; image compression means for compressing the difference image generated by the difference image generation means and outputting compressed data of the difference image; and variable length encoding means for variable-length-encoding the compressed data output from the image compression means and the coding mode selected by the encoding control means, and generating a bitstream into which the compressed data and the encoded data of the coding mode are multiplexed.
  3.  The moving image encoding device according to claim 2, wherein the color difference component intra prediction means performs intra-frame prediction of the color difference components of the coding block divided by the block dividing means to generate a prediction image for the color difference components if the coding mode selected by the encoding control means is a directional prediction mode in an intra coding mode, and
     smooths, among the pixels constituting the coding block, the luminance components of a plurality of pixels adjacent in the horizontal and vertical directions, calculates a correlation parameter indicating the correlation between the smoothed luminance components and the color difference components, and generates a prediction image for the color difference components by using the correlation parameter and the smoothed luminance components if the coding mode selected by the encoding control means is a smoothed luminance correlation use color difference signal prediction mode in the intra coding mode.
  4.  A moving image decoding device comprising: variable length decoding means for variable-length-decoding the coding mode of each hierarchically divided coding block from encoded data multiplexed in a bitstream; prediction image generation means for generating a prediction image by carrying out, on a coding block variable-length-decoded by the variable length decoding means, a prediction process corresponding to the coding mode of the coding block; and decoded image generation means for generating a decoded image by adding the difference image before compression, generated from data obtained by decoding the encoded data of the coding block, and the prediction image generated by the prediction image generation means;
     wherein the prediction image generation means comprises: luminance component intra prediction means for performing intra-frame prediction of the luminance components of the coding block variable-length-decoded by the variable length decoding means to generate a prediction image for the luminance components; and color difference component intra prediction means for smoothing, among the pixels constituting the coding block, the luminance components of a plurality of pixels adjacent in the horizontal and vertical directions, calculating a correlation parameter indicating the correlation between the smoothed luminance components and the color difference components, and generating a prediction image for the color difference components by using the correlation parameter and the smoothed luminance components.
  5.  The moving image decoding device according to claim 4, comprising: the variable length decoding means for variable-length-decoding the compressed data and the coding mode of each hierarchically divided coding block from the encoded data multiplexed in the bitstream; the prediction image generation means for generating a prediction image by carrying out, on a coding block variable-length-decoded by the variable length decoding means, the prediction process corresponding to the coding mode of the coding block; difference image generation means for generating the difference image before compression from the compressed data of the coding block variable-length-decoded by the variable length decoding means; and the decoded image generation means for generating a decoded image by adding the difference image generated by the difference image generation means and the prediction image generated by the prediction image generation means.
  6.  The moving image decoding device according to claim 5, wherein the color difference component intra prediction means performs intra-frame prediction of the color difference components of the coding block variable-length-decoded by the variable length decoding means to generate a prediction image for the color difference components if the coding mode variable-length-decoded by the variable length decoding means is a directional prediction mode in an intra coding mode, and
     smooths, among the pixels constituting the coding block, the luminance components of a plurality of pixels adjacent in the horizontal and vertical directions, calculates a correlation parameter indicating the correlation between the smoothed luminance components and the color difference components, and generates a prediction image for the color difference components by using the correlation parameter and the smoothed luminance components if the coding mode variable-length-decoded by the variable length decoding means is a smoothed luminance correlation use color difference signal prediction mode in the intra coding mode.
  7.  A moving image encoding method comprising: a prediction image generation processing step in which prediction image generation means generates a prediction image by carrying out, on a hierarchically divided coding block, a prediction process corresponding to the coding mode corresponding to the coding block; and a variable length encoding processing step in which variable length encoding means variable-length-encodes the coding mode and generates a bitstream into which the encoded data of the coding mode are multiplexed;
     wherein the prediction image generation processing step includes: a luminance component intra prediction processing step of performing intra-frame prediction of the luminance components of the hierarchically divided coding block to generate a prediction image for the luminance components; and a color difference component intra prediction processing step of smoothing, among the pixels constituting the coding block, the luminance components of a plurality of pixels adjacent in the horizontal and vertical directions, calculating a correlation parameter indicating the correlation between the smoothed luminance components and the color difference components, and generating a prediction image for the color difference components by using the correlation parameter and the smoothed luminance components.
  8.  A moving image decoding method comprising: a variable length decoding processing step in which variable length decoding means variable-length-decodes the coding mode of each hierarchically divided coding block from encoded data multiplexed in a bitstream; a prediction image generation processing step in which prediction image generation means generates a prediction image by carrying out, on a coding block variable-length-decoded in the variable length decoding processing step, a prediction process corresponding to the coding mode of the coding block; and a decoded image generation processing step in which decoded image generation means generates a decoded image by adding the difference image before compression, generated from data obtained by decoding the encoded data of the coding block, and the prediction image generated in the prediction image generation processing step;
     wherein the prediction image generation processing step includes: a luminance component intra prediction processing step of performing intra-frame prediction of the luminance components of the coding block variable-length-decoded in the variable length decoding processing step to generate a prediction image for the luminance components; and a color difference component intra prediction processing step of smoothing, among the pixels constituting the coding block, the luminance components of a plurality of pixels adjacent in the horizontal and vertical directions, calculating a correlation parameter indicating the correlation between the smoothed luminance components and the color difference components, and generating a prediction image for the color difference components by using the correlation parameter and the smoothed luminance components.
PCT/JP2012/003679 2011-06-24 2012-06-05 Video encoding device, video decoding device, video encoding method and video decoding method WO2012176387A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011-140637 2011-06-24
JP2011140637A JP2014168107A (en) 2011-06-24 2011-06-24 Video encoding device, video decoding device, video encoding method and video decoding method

Publications (1)

Publication Number Publication Date
WO2012176387A1 true WO2012176387A1 (en) 2012-12-27

Family

ID=47422251

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/003679 WO2012176387A1 (en) 2011-06-24 2012-06-05 Video encoding device, video decoding device, video encoding method and video decoding method

Country Status (2)

Country Link
JP (1) JP2014168107A (en)
WO (1) WO2012176387A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6497014B2 (en) * 2014-09-24 2019-04-10 富士ゼロックス株式会社 Image processing apparatus and image processing program
JP2018074491A (en) 2016-11-02 2018-05-10 富士通株式会社 Dynamic image encoding device, dynamic image encoding method, and dynamic image encoding program
CN108777794B (en) * 2018-06-25 2022-02-08 腾讯科技(深圳)有限公司 Image encoding method and apparatus, storage medium, and electronic apparatus

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009534876A (en) * 2006-03-23 2009-09-24 サムスン エレクトロニクス カンパニー リミテッド Image encoding method and apparatus, decoding method and apparatus
JP2010531609A (en) * 2007-06-27 2010-09-24 サムスン エレクトロニクス カンパニー リミテッド Video data encoding and / or decoding method, recording medium, and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIANLE CHEN ET AL.: "CE6.a.4: Chroma intra prediction by reconstructed luma samples", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11, 5TH MEETING, GENEVA, 16 March 2011 (2011-03-16) *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015076781A (en) * 2013-10-10 2015-04-20 三菱電機株式会社 Image encoding device, image decoding device, image encoding method, and image decoding method

Also Published As

Publication number Publication date
JP2014168107A (en) 2014-09-11

Similar Documents

Publication Publication Date Title
JP6005087B2 (en) Image decoding apparatus, image decoding method, image encoding apparatus, image encoding method, and data structure of encoded data
JP6381724B2 (en) Image decoding device
JP5782169B2 (en) Moving picture coding apparatus and moving picture coding method
JP7012809B2 (en) Image coding device, moving image decoding device, moving image coding data and recording medium
WO2013065402A1 (en) Moving picture encoding device, moving picture decoding device, moving picture encoding method, and moving picture decoding method
KR20140007074A (en) Moving image encoding apparatus, moving image decoding apparatus, moving image encoding method and moving image decoding method
WO2013114992A1 (en) Color video encoding device, color video decoding device, color video encoding method, and color video decoding method
WO2012176387A1 (en) Video encoding device, video decoding device, video encoding method and video decoding method
WO2013065678A1 (en) Dynamic image encoding device, dynamic image decoding device, method for encoding dynamic image and method for decoding dynamic image
JP2014007643A (en) Moving picture encoder, moving picture decoder, moving picture encoding method, and moving picture decoding method
JP2013168913A (en) Video encoder, video decoder, video encoding method and video decoding method
JP2013126145A (en) Color moving image encoding device, color moving image decoding device, color moving image encoding method, and color moving image decoding method
JP2013098713A (en) Video encoding device, video decoding device, video encoding method, and video decoding method
JP2012023609A (en) Moving image encoding device, moving image decoding device, moving image encoding method and moving image decoding method
WO2014051080A1 (en) Color moving image coding device, color moving image decoding device, color moving image coding method and color moving image decoding method
JP2013102269A (en) Color video encoding device, color video decoding device, color video encoding method, and color video decoding method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12802680

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12802680

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP