WO2012176387A1 - Video encoding device, video decoding device, video encoding method and video decoding method


Info

Publication number
WO2012176387A1
Authority
WO
WIPO (PCT)
Prior art keywords
prediction
image
unit
coding
block
Application number
PCT/JP2012/003679
Other languages
English (en)
Japanese (ja)
Inventor
杉本 和夫
彰 峯澤
裕介 伊谷
亮史 服部
関口 俊一
Original Assignee
三菱電機株式会社
Application filed by 三菱電機株式会社
Publication of WO2012176387A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N 19/186 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/593 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial prediction techniques

Definitions

  • The present invention relates to a moving image encoding apparatus and moving image encoding method for encoding a moving image with high efficiency, and to a moving image decoding apparatus and moving image decoding method for decoding a moving image that has been encoded with high efficiency.
  • FIG. 18 is a configuration diagram illustrating a luminance correlation use color difference signal prediction unit that performs prediction processing in the Intra_LM mode.
  • the luminance correlation use color difference signal prediction unit in FIG. 18 includes a luminance reference pixel reduction unit 931, a correlation calculation unit 932, and a color difference prediction image generation unit 933.
  • For a YUV 4:2:0 signal, the luminance reference pixel reduction unit 931 generates the reduced luminance reference pixel Rec′L from the decoded luminance signal, which consists of the decoded pixel values in the block corresponding to the prediction block of the color difference signal, and the decoded luminance signal adjacent to the upper end and the left end of that block.
  • The reduced luminance reference pixel Rec′L is obtained from the luminance reference pixel RecL by averaging every two pixels in the vertical direction, so that the samples have the same phase as the pixels of the color difference signal of the YUV 4:2:0 signal, and by subsampling only the even columns in the horizontal direction.
  • When the luminance reference pixel reduction unit 931 generates the reduced luminance reference pixel Rec′L, the correlation calculation unit 932 calculates the correlation parameters α and β used for prediction, as shown in the following equations (1) and (2), using the reduced luminance reference pixel Rec′L and the color difference reference pixel RecC, which is the decoded pixel value of the color difference signal adjacent to the upper end and the left end of the prediction block of the color difference signal. In equations (1) and (2), I is a value equal to twice the number of pixels on one side of the prediction block of the color difference signal to be processed.
  • When the correlation calculation unit 932 calculates the correlation parameters α and β, the color difference predicted image generation unit 933 generates the color difference prediction image PredC using the correlation parameters α and β and the reduced luminance reference pixel Rec′L, as shown in the following equation (3).
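Equations (1) to (3) themselves are not reproduced in this text. As a minimal sketch, assuming they take the usual least-squares linear-model form for this kind of luminance-based chrominance prediction, the conventional processing described above can be written as follows (function names are illustrative; rec_l is the co-located decoded luminance block and is assumed to have even dimensions):

```python
import numpy as np

def reduce_luma_conventional(rec_l):
    """Conventional reduction for a YUV 4:2:0 signal: average every two pixels
    vertically (phase alignment with the chroma samples) and keep only the even
    columns horizontally (simple decimation, which is what causes the aliasing
    discussed below)."""
    vert_avg = (rec_l[0::2, :] + rec_l[1::2, :]) / 2.0
    return vert_avg[:, 0::2]

def lm_parameters(ref_luma, ref_chroma):
    """Correlation parameters (alpha, beta) from the reduced luminance reference
    samples and the chroma reference samples adjacent to the top and left of the
    prediction block; I is twice the number of pixels on one side of the block."""
    i = ref_luma.size
    sum_l, sum_c = ref_luma.sum(), ref_chroma.sum()
    alpha = (i * (ref_luma * ref_chroma).sum() - sum_l * sum_c) / \
            (i * (ref_luma * ref_luma).sum() - sum_l * sum_l)
    beta = (sum_c - alpha * sum_l) / i
    return alpha, beta

def predict_chroma(alpha, beta, reduced_luma_block):
    """PredC = alpha * Rec'L + beta, applied to the reduced luminance samples
    co-located with the chroma prediction block."""
    return alpha * reduced_luma_block + beta
```

The horizontal decimation step in reduce_luma_conventional is the part of the conventional scheme that the passage below identifies as the source of aliasing.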
  • JCT-VC Joint Collaborative Team on Video Coding
  • Since the conventional moving image coding apparatus is configured as described above, a change in the color difference signal that cannot be predicted from the adjacent pixels alone can be predicted with high accuracy by using the correlation with the luminance signal.
  • However, while a strong low-pass filter is in effect applied in the vertical direction of the screen by taking the average value of every two pixels, the luminance signal is simply subsampled every two pixels in the horizontal direction of the screen, so aliasing may occur in the horizontal direction of the reduced luminance reference pixel Rec′L.
  • Since the color difference prediction image PredC is obtained from the reduced luminance reference pixel Rec′L by scalar multiplication and offset addition, the aliasing of the reduced luminance reference pixel Rec′L is reflected in the color difference prediction image and the prediction error is amplified. As a result, there has been a problem that the improvement in coding efficiency is limited.
  • The present invention has been made to solve the above-described problems, and an object of the present invention is to obtain a moving picture coding apparatus and a moving picture coding method capable of suppressing the occurrence of aliasing and improving the coding efficiency. It is another object of the present invention to obtain a moving picture decoding apparatus and a moving picture decoding method that can accurately decode a moving picture from encoded data whose encoding efficiency has been improved.
  • The moving picture coding apparatus according to the present invention includes predicted image generating means that generates a prediction image by performing, on a hierarchically divided coding block, a prediction process corresponding to the coding mode assigned to that coding block.
  • The predicted image generating means is composed of a luminance component intra prediction unit that performs intra-frame prediction of the luminance component in the hierarchically divided coding block to generate a prediction image for the luminance component, and a color difference component intra prediction unit that smoothes the luminance components of a plurality of pixels adjacent in the horizontal and vertical directions among the pixels constituting the coding block, calculates a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component, and generates a predicted image for the color difference component using the correlation parameter and the smoothed luminance component.
  • According to the present invention, the predicted image generating means is composed of a luminance component intra prediction unit that performs intra-frame prediction of the luminance component in the coding block divided by the block dividing means to generate a predicted image for the luminance component, and a color difference component intra prediction unit that smoothes the luminance components of a plurality of pixels adjacent in the horizontal and vertical directions among the pixels constituting the coding block, calculates a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component, and generates a predicted image for the color difference component using the calculated correlation parameter and the smoothed luminance component.
  • Since this configuration suppresses the occurrence of aliasing in the smoothed luminance component, the encoding efficiency can be increased.
  • FIG. 1 is a block diagram showing a moving picture coding apparatus according to Embodiment 1 of the present invention.
  • an encoding control unit 1 determines the maximum size of an encoding block that is a processing unit when intra prediction processing (intraframe prediction processing) or motion compensation prediction processing (interframe prediction processing) is performed. Then, a process of determining the upper limit number of layers when the encoding block of the maximum size is hierarchically divided is performed.
  • The encoding control unit 1 also performs a process of selecting, for each hierarchically divided encoding block, a suitable encoding mode from one or more available encoding modes (one or more intra encoding modes and one or more inter encoding modes).
  • the encoding control unit 1 determines the quantization parameter and transform block size used when the difference image is compressed for each encoding block, and intra prediction used when the prediction process is performed. A process of determining a parameter or an inter prediction parameter is performed.
  • the quantization parameter and the transform block size are included in the prediction difference coding parameter, and are output to the transform / quantization unit 7, the inverse quantization / inverse transform unit 8, the variable length coding unit 13, and the like.
  • the encoding control unit 1 constitutes an encoding control unit.
  • When the block dividing unit 2 receives the input image, it divides the input image into encoded blocks of the maximum size determined by the encoding control unit 1, and performs a process of hierarchically dividing each encoded block until the upper limit number of hierarchies determined by the encoding control unit 1 is reached.
  • The block dividing unit 2 constitutes block dividing means. If the coding mode selected by the coding control unit 1 is the intra coding mode, the changeover switch 3 outputs the coding block divided by the block dividing unit 2 to the intra prediction unit 4, and if the coding mode selected by the coding control unit 1 is the inter coding mode, it performs a process of outputting the coding block divided by the block dividing unit 2 to the motion compensation prediction unit 5.
  • When the intra prediction unit 4 receives the encoded block divided by the block dividing unit 2 from the changeover switch 3, it performs a process of generating a prediction image by carrying out intra prediction processing on the encoded block, based on the intra prediction parameter output from the encoding control unit 1, using the decoded pixels adjacent to the encoded block stored in the intra prediction memory 10.
  • That is, the intra prediction unit 4 performs intra-frame prediction of the luminance component of the encoded block divided by the block dividing unit 2 to generate a prediction image for the luminance component.
  • When the intra prediction parameter for the color difference signal indicates the directional prediction mode, the intra prediction unit 4 performs intra-frame prediction of the color difference component in the encoded block divided by the block dividing unit 2 and generates a predicted image for the color difference component.
  • When the intra prediction parameter for the color difference signal indicates the smoothed luminance correlation utilization color difference signal prediction mode, the intra prediction unit 4 smoothes the luminance components of a plurality of pixels adjacent in the horizontal and vertical directions among the pixels constituting the encoded block, calculates a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component, and generates a predicted image for the color difference component using the correlation parameter and the smoothed luminance component.
  • The motion compensated prediction unit 5 performs a process of generating a predicted image by carrying out motion compensation prediction processing on the encoded block, based on the inter prediction parameter output from the encoding control unit 1, using one or more frames of reference images stored in the motion compensated prediction frame memory 12.
  • the changeover switch 3, the intra prediction unit 4, and the motion compensation prediction unit 5 constitute a predicted image generation unit.
  • The subtracting unit 6 performs a process of generating a difference image by subtracting the predicted image generated by the intra prediction unit 4 or the motion compensation prediction unit 5 from the encoded block divided by the block dividing unit 2.
  • The subtracting unit 6 constitutes a difference image generating unit.
  • The transform/quantization unit 7 performs transform processing of the difference image generated by the subtraction unit 6 (for example, DCT (discrete cosine transform), DST (discrete sine transform), or an orthogonal transform such as a KL transform whose bases have been designed in advance for a specific learning sequence) in units of the transform block size included in the prediction difference encoding parameter output from the encoding control unit 1, quantizes the transform coefficients using the quantization parameter included in that parameter, and performs a process of outputting the quantized transform coefficients as compressed data of the difference image.
  • the transform / quantization unit 7 constitutes an image compression unit.
  • The inverse quantization/inverse transform unit 8 inversely quantizes the compressed data output from the transform/quantization unit 7 using the quantization parameter included in the prediction difference encoding parameter output from the encoding control unit 1, performs inverse transform processing of the inversely quantized compressed data (for example, inverse DCT (inverse discrete cosine transform), inverse DST (inverse discrete sine transform), or an inverse transform such as an inverse KL transform) in units of the transform block size included in the prediction differential encoding parameter, and performs a process of outputting the compressed data after the inverse transform processing as a local decoded prediction difference signal.
  • the adding unit 9 adds the local decoded prediction difference signal output from the inverse quantization / inverse transform unit 8 and the prediction signal indicating the prediction image generated by the intra prediction unit 4 or the motion compensation prediction unit 5 to thereby perform local decoding. A process of generating a locally decoded image signal indicating an image is performed.
  • the intra prediction memory 10 is a recording medium such as a RAM that stores a local decoded image indicated by the local decoded image signal generated by the adding unit 9 as an image used in the next intra prediction process by the intra prediction unit 4.
  • The loop filter unit 11 performs a process of compensating for the coding distortion included in the locally decoded image signal generated by the adding unit 9 and outputting the locally decoded image indicated by the locally decoded image signal after the coding distortion compensation to the motion compensated prediction frame memory 12 as a reference image used for motion compensation prediction.
  • the motion compensated prediction frame memory 12 is a recording medium such as a RAM that stores a locally decoded image after the filtering process by the loop filter unit 11 as a reference image used in the next motion compensated prediction process by the motion compensated prediction unit 5.
  • The variable length encoding unit 13 variable-length encodes the compressed data output from the transform/quantization unit 7, the encoding mode and prediction differential encoding parameter output from the encoding control unit 1, and the intra prediction parameter output from the intra prediction unit 4 or the inter prediction parameter output from the motion compensation prediction unit 5, and performs a process of generating a bitstream in which the encoded data of the compressed data, the encoding mode, the prediction differential encoding parameter, and the intra prediction parameter or inter prediction parameter are multiplexed.
  • the variable length encoding unit 13 constitutes variable length encoding means.
  • The encoding control unit 1, the block division unit 2, the changeover switch 3, the intra prediction unit 4, the motion compensation prediction unit 5, the subtraction unit 6, the transform/quantization unit 7, the inverse quantization/inverse transform unit 8, the adder unit 9, the loop filter unit 11, and the variable length coding unit 13, which are the components of the moving image coding apparatus, are each assumed to be composed of dedicated hardware (for example, a semiconductor integrated circuit on which a CPU is mounted, or a one-chip microcomputer).
  • FIG. 2 is a flowchart showing the processing contents of the moving picture coding apparatus according to Embodiment 1 of the present invention.
  • FIG. 3 is a block diagram showing the intra prediction unit 4 of the moving picture coding apparatus according to Embodiment 1 of the present invention.
  • The luminance signal intra prediction unit 21 performs intra-frame prediction of the luminance component in the encoded block divided by the block dividing unit 2 and performs a process of generating a prediction image for the luminance component. That is, the luminance signal intra prediction unit 21 generates the prediction image for the luminance component by referring to the decoded luminance reference pixels adjacent to the encoded block stored in the intra prediction memory 10 and performing intra-frame prediction based on the intra prediction parameter output from the encoding control unit 1.
  • the luminance signal intra prediction unit 21 constitutes luminance component intra prediction means.
  • The changeover switch 22 selects the reference pixels used for prediction. If the parameter indicating the intra coding mode of the color difference signal indicates the directional prediction mode, the changeover switch 22 outputs the reference pixels used for prediction to the color difference signal directivity intra prediction unit 23; if the parameter indicates the smoothed luminance correlation use color difference signal prediction mode, it performs a process of outputting the reference pixels used for prediction to the luminance correlation use color difference signal prediction unit 24.
  • the chrominance signal directivity intra prediction unit 23 refers to the decoded chrominance reference pixel adjacent to the encoded block received from the changeover switch 22 and determines the chrominance based on the intra prediction parameter output from the encoding control unit 1. By performing intra-frame prediction of components, a process of generating a predicted image for the color difference component is performed.
  • Among the decoded pixels received from the changeover switch 22, the luminance correlation utilization color difference signal prediction unit 24 uses the decoded luminance reference pixels and color difference reference pixels adjacent to the coding block and the decoded luminance reference pixels within the coding block to smooth the luminance components of a plurality of pixels adjacent in the horizontal and vertical directions among the pixels constituting the coding block,
  • calculates a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component, and performs a process of generating a predicted image for the color difference component using the correlation parameter and the smoothed luminance component.
  • the changeover switch 22, the color difference signal directivity intra prediction unit 23, and the luminance correlation use color difference signal prediction unit 24 constitute a color difference component intra prediction unit.
  • FIG. 4 is a block diagram showing the luminance correlation utilizing color difference signal prediction unit 24 of the moving picture coding apparatus according to Embodiment 1 of the present invention.
  • The smoothed luminance reference pixel reducing unit 31 generates a reduced luminance reference pixel Rec′L by performing smoothing processing and the like on a plurality of luminance reference pixels adjacent in the horizontal and vertical directions among the decoded luminance reference pixels constituting the encoded block stored in the intra prediction memory 10.
  • The correlation calculation unit 32 performs a process of calculating the correlation parameters α and β, which indicate the correlation between the luminance component and the color difference component, using the color difference reference pixels stored in the intra prediction memory 10 and the reduced luminance reference pixel Rec′L generated by the smoothed luminance reference pixel reduction unit 31.
  • The color difference predicted image generation unit 33 performs a process of generating a predicted image for the color difference component using the correlation parameters α and β calculated by the correlation calculation unit 32 and the reduced luminance reference pixel Rec′L generated by the smoothed luminance reference pixel reduction unit 31.
  • FIG. 5 is a flowchart showing the processing contents of the intra prediction unit 4 of the moving picture coding apparatus according to Embodiment 1 of the present invention.
  • FIG. 6 is a flowchart showing the processing contents of the luminance correlation utilizing color difference signal prediction unit 24 of the moving picture coding apparatus according to Embodiment 1 of the present invention.
  • FIG. 7 is a block diagram showing a moving picture decoding apparatus according to Embodiment 1 of the present invention.
  • The variable length decoding unit 41 determines the maximum size of the coding block, which is the processing unit when the intra prediction process or the motion compensation prediction process is performed, and the upper limit number of layers when the coding block of the maximum size is hierarchically divided,
  • and performs a process of identifying, from the encoded data multiplexed in the bitstream, the encoded data related to the maximum-size coding block and to each hierarchically divided coding block.
  • variable length decoding unit 41 constitutes variable length decoding means.
  • When the coding mode related to the coding block output from the variable length decoding unit 41 is the intra coding mode, the changeover switch 42 outputs the intra prediction parameter output from the variable length decoding unit 41 to the intra prediction unit 43, and when the coding mode is the inter coding mode, it performs a process of outputting the inter prediction parameter output from the variable length decoding unit 41 to the motion compensation unit 44.
  • The intra prediction unit 43 performs a process of generating a predicted image by carrying out intra-frame prediction processing on the coding block, based on the intra prediction parameter output from the changeover switch 42, using the decoded pixels adjacent to the coding block stored in the intra prediction memory 47.
  • That is, the intra prediction unit 43 performs intra-frame prediction of the luminance component in the coding block output from the variable length decoding unit 41 to generate a prediction image for the luminance component.
  • When the coding mode output from the variable length decoding unit 41 is the intra coding mode and the directional prediction mode is indicated,
  • intra-frame prediction of the color difference component in the coding block is performed to generate a predicted image for the color difference component.
  • When the coding mode output from the variable length decoding unit 41 is the intra coding mode and the smoothed luminance correlation utilization color difference signal prediction mode is indicated, the intra prediction unit 43 smoothes the luminance components of a plurality of pixels adjacent in the horizontal and vertical directions among the pixels constituting the coding block, calculates a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component,
  • and generates a predicted image for the color difference component using the correlation parameter and the smoothed luminance component.
  • the motion compensation unit 44 performs a motion compensation prediction process on the encoded block based on the inter prediction parameter output from the changeover switch 42 using one or more reference images stored in the motion compensation prediction frame memory 49. Thus, a process for generating a predicted image is performed.
  • the changeover switch 42, the intra prediction unit 43, and the motion compensation unit 44 constitute a predicted image generation unit.
  • The inverse quantization/inverse transform unit 45 inversely quantizes the compressed data related to the coding block output from the variable length decoding unit 41, using the quantization parameter included in the prediction difference encoding parameter output from the variable length decoding unit 41, and performs inverse transform processing of the inversely quantized compressed data (for example, inverse DCT (inverse discrete cosine transform), inverse DST (inverse discrete sine transform), or an inverse transform such as an inverse KL transform) in units of the transform block size included in the prediction differential encoding parameter.
  • It then performs a process of outputting the compressed data after the inverse transform processing as a decoded prediction difference signal (a signal indicating the difference image before compression).
  • The addition unit 46 performs a process of generating a decoded image signal indicating a decoded image by adding the decoded prediction difference signal output from the inverse quantization/inverse conversion unit 45 to the prediction signal indicating the prediction image generated by the intra prediction unit 43 or the motion compensation unit 44.
  • the adding unit 46 constitutes a decoded image generating unit.
  • the intra prediction memory 47 is a recording medium such as a RAM that stores a decoded image indicated by the decoded image signal generated by the addition unit 46 as an image used in the next intra prediction process by the intra prediction unit 43.
  • the loop filter unit 48 compensates for the coding distortion included in the decoded image signal generated by the adding unit 46, and uses the decoded image indicated by the decoded image signal after the coding distortion compensation as a reference image as a motion compensated prediction frame memory 49. And a process of outputting the decoded image as a reproduced image to the outside.
  • the motion compensated prediction frame memory 49 is a recording medium such as a RAM that stores a decoded image after the filtering process by the loop filter unit 48 as a reference image to be used by the motion compensation unit 44 in the next motion compensation prediction process.
  • The variable length decoding unit 41, the changeover switch 42, the intra prediction unit 43, the motion compensation unit 44, the inverse quantization/inverse transformation unit 45, the addition unit 46, and the loop filter unit 48, which are the components of the video decoding device, are each assumed to be composed of dedicated hardware (for example, a semiconductor integrated circuit on which a CPU is mounted, or a one-chip microcomputer).
  • FIG. 8 is a flowchart showing the processing contents of the moving picture decoding apparatus according to Embodiment 1 of the present invention.
  • FIG. 9 is a block diagram showing the intra prediction unit 43 of the moving picture decoding apparatus according to Embodiment 1 of the present invention.
  • The luminance signal intra prediction unit 51 performs intra-frame prediction of the luminance component in the encoded block output from the variable length decoding unit 41 and performs a process of generating a prediction image for the luminance component. That is, the luminance signal intra prediction unit 51 generates the prediction image for the luminance component by referring to the decoded luminance reference pixels adjacent to the encoded block stored in the intra prediction memory 47 and performing intra-frame prediction based on the intra prediction parameter output from the variable length decoding unit 41.
  • the luminance signal intra prediction unit 51 constitutes luminance component intra prediction means.
  • The changeover switch 52 selects the reference pixels used for prediction. If the parameter indicating the intra coding mode of the color difference signal indicates the directional prediction mode, the changeover switch 52 outputs the reference pixels used for prediction to the color difference signal directional intra prediction unit 53; if the parameter indicates the smoothed luminance correlation utilization color difference signal prediction mode, it performs a process of outputting the reference pixels used for prediction to the luminance correlation utilization color difference signal prediction unit 54.
  • the chrominance signal directivity intra prediction unit 53 refers to the decoded chrominance reference pixel adjacent to the encoded block received from the changeover switch 52, and the chrominance based on the intra prediction parameter output from the variable length decoding unit 41 By performing intra-frame prediction of components, a process of generating a predicted image for the color difference component is performed.
  • Among the decoded pixels received from the changeover switch 52, the luminance correlation use color difference signal prediction unit 54 uses the decoded luminance reference pixels and color difference reference pixels adjacent to the encoded block and the decoded luminance reference pixels within the encoded block to smooth the luminance components of a plurality of pixels adjacent in the horizontal and vertical directions among the pixels constituting the encoded block,
  • calculates a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component, and performs a process of generating a predicted image for the color difference component using the correlation parameter and the smoothed luminance component.
  • the changeover switch 52, the color difference signal directivity intra prediction unit 53, and the luminance correlation utilization color difference signal prediction unit 54 constitute a color difference component intra prediction unit.
  • FIG. 10 is a block diagram showing the luminance correlation utilization color difference signal prediction unit 54 of the moving picture decoding apparatus according to Embodiment 1 of the present invention.
  • The smoothed luminance reference pixel reduction unit 61 generates a reduced luminance reference pixel Rec′L by performing smoothing processing and the like on a plurality of luminance reference pixels adjacent in the horizontal and vertical directions among the decoded luminance reference pixels constituting the encoded block stored in the intra prediction memory 47.
  • The correlation calculation unit 62 performs a process of calculating the correlation parameters α and β, which indicate the correlation between the luminance component and the color difference component, using the color difference reference pixels stored in the intra prediction memory 47 and the reduced luminance reference pixel Rec′L generated by the smoothed luminance reference pixel reduction unit 61.
  • The color difference predicted image generation unit 63 performs a process of generating a predicted image for the color difference component using the correlation parameters α and β calculated by the correlation calculation unit 62 and the reduced luminance reference pixel Rec′L generated by the smoothed luminance reference pixel reduction unit 61.
  • the moving picture coding apparatus adapts to local changes in the spatial and temporal directions of a video signal, divides the video signal into regions of various sizes, and performs intraframe / interframe adaptive coding. It is characterized by performing.
  • A video signal has the characteristic that the complexity of the signal changes locally in space and time. Viewed spatially, a specific video frame may contain, in the same picture, both patterns having uniform signal characteristics over a relatively wide image area, such as the sky or a wall,
  • and patterns having a complicated texture over a small image area, such as a person or a picture containing fine texture.
  • the encoding process generates a prediction difference signal with low signal power and entropy by temporal and spatial prediction, and performs a process of reducing the overall code amount.
  • If the prediction parameter used for the prediction process can be applied uniformly to as large an image region as possible, the code amount of the prediction parameter can be reduced. On the other hand, if the same prediction parameter is applied to a large image region for an image signal pattern having large temporal and spatial variation, the prediction error increases, so the code amount of the prediction difference signal cannot be reduced.
  • In order to perform encoding processing adapted to such general characteristics of a video signal, the moving image encoding apparatus according to the first embodiment hierarchically divides the video signal region starting from a predetermined maximum block size, and adopts a structure that adapts the prediction process and the encoding process of the prediction difference to each divided region.
  • The video signal to be processed by the moving image coding apparatus may be a color signal in an arbitrary color space, such as a YUV signal composed of a luminance signal and two color difference signals or an RGB signal output from a digital image sensor,
  • or an arbitrary video signal, such as a monochrome image signal or an infrared image signal, in which a video frame is composed of a horizontal and vertical two-dimensional digital sample (pixel) array.
  • the gradation of each pixel may be 8 bits, or may be gradation such as 10 bits or 12 bits.
  • the input video signal is a YUV signal unless otherwise specified.
  • The processing data unit corresponding to each frame of the video is referred to as a "picture".
  • In the first embodiment, a "picture" is described as a signal of a sequentially scanned (progressive-scan) video frame.
  • However, a "picture" may also be a field image signal, which is a unit constituting a video frame.
  • the encoding control unit 1 determines the maximum size of an encoding block that is a processing unit when intra prediction processing (intraframe prediction processing) or motion compensation prediction processing (interframe prediction processing) is performed, The upper limit number of hierarchies when the coding block of the maximum size is divided hierarchically is determined (step ST1 in FIG. 2).
  • As a method of determining the maximum size of the encoded block, for example, a method of determining a size corresponding to the resolution of the input image uniformly for all pictures can be considered.
  • Alternatively, a method can be considered in which the difference in the complexity of local motion in the input image is quantified as a parameter, and the maximum size is set to a small value for pictures with intense motion and to a large value for pictures with little motion.
  • As for the upper limit of the number of hierarchies, for example, a method is conceivable in which the number of hierarchies is set deeper as the motion of the input image becomes more intense, so that finer motion can be detected, and shallower as the motion of the input image becomes smaller.
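A purely illustrative sketch of such a decision rule is shown below; the activity measure, the thresholds, and the returned sizes and depths are assumptions for illustration, not values taken from the patent.

```python
def choose_block_structure(motion_activity, width, height):
    """Hypothetical heuristic: smaller maximum coding block size and deeper
    hierarchy for pictures with intense motion, larger size and shallower
    hierarchy for nearly static pictures."""
    base = 64 if width * height >= 1920 * 1080 else 32  # size tied to resolution
    if motion_activity > 0.5:      # intense motion
        return base // 2, 4        # (maximum block size, upper limit of layers)
    if motion_activity > 0.1:      # moderate motion
        return base, 3
    return base, 2                 # little motion
```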
  • Next, the encoding control unit 1 selects, for each hierarchically divided encoding block, an encoding mode from the available encoding modes (M types of intra encoding modes and N types of inter encoding modes) (step ST2).
  • M types of intra coding modes prepared in advance will be described later.
  • As described later, each encoded block is further divided into partitions. Since the encoding mode selection method used by the encoding control unit 1 is a known technique, a detailed description is omitted; for example, there is a method in which the encoding process is carried out for the encoding block using each available encoding mode, the coding efficiency is verified, and the encoding mode with the best coding efficiency is selected from among the available encoding modes.
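One common way to make "best coding efficiency" concrete is a rate-distortion cost. The sketch below assumes such a cost and a hypothetical encode_fn callback returning (distortion, rate) for a block and a mode; neither the cost function nor the callback is specified by the patent.

```python
def select_coding_mode(coding_block, candidate_modes, encode_fn, lam=0.85):
    """Try every available coding mode and keep the one with the lowest
    rate-distortion cost D + lambda * R."""
    best_mode, best_cost = None, float("inf")
    for mode in candidate_modes:
        distortion, rate = encode_fn(coding_block, mode)
        cost = distortion + lam * rate
        if cost < best_cost:
            best_mode, best_cost = mode, cost
    return best_mode
```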
  • the encoding control unit 1 determines a quantization parameter and a transform block size used when the difference image is compressed for each partition included in each encoding block, and a prediction process is performed. Intra prediction parameters or inter prediction parameters used in the determination are determined.
  • the encoding control unit 1 outputs the prediction difference encoding parameter including the quantization parameter and the transform block size to the transform / quantization unit 7, the inverse quantization / inverse transform unit 8, and the variable length encoding unit 13.
  • The prediction difference encoding parameter is also output to the intra prediction unit 4 as necessary.
  • FIG. 11 is an explanatory diagram showing a state in which the maximum-size encoded block is hierarchically divided into a plurality of encoded blocks.
  • the coding block of the maximum size is the coding block B 0 of the 0th layer, and has a size of (L 0 , M 0 ) as a luminance component.
  • The encoding block Bn is obtained by hierarchically dividing the encoding block B0 of the maximum size, taken as a starting point, in a quadtree structure down to a predetermined depth that is determined separately.
  • the coding block B n is an image area of size (L n , M n ).
  • the size of the encoded block B n is defined as the size of the luminance component of the encoded block B n (L n, M n ).
  • The encoding mode m(Bn) may be configured so that an individual mode is used for each color component; however, unless otherwise specified, the following description assumes that m(Bn) indicates the coding mode for the luminance component of a coding block of the YUV 4:2:0 format signal.
  • The coding mode m(Bn) includes one or more intra coding modes (collectively "INTRA") and one or more inter coding modes (collectively "INTER"). As described above, the encoding control unit 1 selects the encoding mode having the highest encoding efficiency for the encoding block Bn from all the encoding modes available for the picture, or from a subset thereof.
  • the encoded block Bn is further divided into one or more prediction processing units (partitions).
  • the partition belonging to the coding block B n is denoted as P i n (i: the partition number in the nth layer).
  • FIG. 12 is an explanatory diagram showing partitions P i n belonging to the coding block B n . How the partition P i n belonging to the coding block B n is divided is included as information in the coding mode m (B n ). All partitions P i n are subjected to prediction processing according to the coding mode m (B n ), but individual prediction parameters can be selected for each partition P i n .
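The hierarchical division into coding blocks Bn can be sketched as the following recursion. split_decision is a hypothetical callback standing in for the encoding control unit's decision; the function simply enumerates the resulting coding blocks with their position, size, and layer depth.

```python
def quadtree_split(x, y, size, depth, max_depth, split_decision):
    """Divide the maximum-size coding block B0 into coding blocks Bn along a
    quadtree, as in FIG. 11.  Returns a list of (x, y, size, depth) leaves."""
    if depth == max_depth or not split_decision(x, y, size, depth):
        return [(x, y, size, depth)]          # leaf: one coding block Bn
    half = size // 2
    blocks = []
    for dy in (0, half):
        for dx in (0, half):
            blocks += quadtree_split(x + dx, y + dy, half,
                                     depth + 1, max_depth, split_decision)
    return blocks

# Example: split a 64x64 maximum-size block down to 16x16 everywhere.
leaves = quadtree_split(0, 0, 64, 0, 2, lambda x, y, s, d: True)
```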
  • the encoding control unit 1 generates a block division state as illustrated in FIG. 13 for the encoding block of the maximum size, and specifies the encoding block Bn .
  • The shaded area in FIG. 13(a) shows the distribution of the partitions after the division,
  • and FIG. 13(b) shows, as a quadtree graph, how the encoding modes m(Bn) are assigned to the partitions after the hierarchical division.
  • In FIG. 13(b), the nodes surrounded by squares are the nodes (encoded blocks Bn) to which an encoding mode m(Bn) is assigned.
  • If the encoding mode selected by the encoding control unit 1 for a coding block Bn divided by the block dividing unit 2 is an intra encoding mode (m(Bn) ∈ INTRA), the changeover switch 3 outputs the partition Pi n belonging to that coding block Bn to the intra prediction unit 4.
  • If the encoding control unit 1 selects an inter encoding mode (m(Bn) ∈ INTER),
  • the partition Pi n belonging to the coding block Bn is output to the motion compensation prediction unit 5.
  • The intra prediction unit 4 generates an intra prediction image (Pi n) by performing intra prediction processing for each partition Pi n (step ST5).
  • Here, Pi n denotes a partition,
  • and (Pi n) denotes the predicted image of the partition Pi n.
  • Since the moving picture decoding apparatus also needs to generate exactly the same intra prediction image (Pi n), the intra prediction parameters used to generate the intra prediction image (Pi n) are output to the variable length coding unit 13 and multiplexed into the bitstream.
  • the number of intra prediction directions that can be selected as the intra prediction parameter may be configured to differ depending on the size of the block to be processed. Since the efficiency of intra prediction decreases in a large size partition, the number of intra prediction directions that can be selected can be reduced, and the number of intra prediction directions that can be selected in a small size partition can be increased. For example, a 4 ⁇ 4 pixel partition or an 8 ⁇ 8 pixel partition may be configured in 34 directions, a 16 ⁇ 16 pixel partition in 17 directions, a 32 ⁇ 32 pixel partition in 9 directions, or the like.
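The size-dependent direction counts in this example can be captured by a small lookup; sizes other than those listed are not specified in the text, so the fallback value below is an assumption.

```python
def selectable_intra_directions(partition_size):
    """34 directions for 4x4 and 8x8 partitions, 17 for 16x16, 9 for 32x32,
    following the example above; other sizes fall back to the coarsest set."""
    return {4: 34, 8: 34, 16: 17, 32: 9}.get(partition_size, 9)
```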
  • When the motion compensation prediction unit 5 receives a partition Pi n from the changeover switch 3, it generates an inter prediction image (Pi n) by performing inter prediction processing for each partition Pi n based on the inter prediction parameter determined by the coding control unit 1 (step ST6).
  • That is, the motion compensation prediction unit 5 generates the inter prediction image (Pi n) by performing motion compensation prediction processing for the partition Pi n, based on the inter prediction parameter output from the encoding control unit 1, using one or more frames of reference images stored in the motion compensation prediction frame memory 12.
  • Since the moving picture decoding apparatus also needs to generate exactly the same inter prediction image (Pi n), the inter prediction parameters used to generate the inter prediction image (Pi n) are output to the variable length coding unit 13 and multiplexed into the bitstream.
  • When the subtraction unit 6 receives the predicted image (Pi n) from the intra prediction unit 4 or the motion compensation prediction unit 5, it generates a prediction difference signal ei n indicating the difference image by subtracting the predicted image (Pi n) from the partition Pi n belonging to the encoded block Bn divided by the block division unit 2 (step ST7).
  • When the transform/quantization unit 7 receives the prediction difference signal ei n from the subtraction unit 6, it performs transform processing of the prediction difference signal ei n (for example, DCT (discrete cosine transform), DST (discrete sine transform), or an orthogonal transform such as a KL transform whose bases have been designed in advance for a specific learning sequence) in units of the transform block size included in the prediction difference encoding parameter output from the encoding control unit 1, and quantizes the transform coefficients using the quantization parameter included in the prediction difference encoding parameter.
  • The quantized transform coefficients, which are the compressed data of the difference image, are output to the inverse quantization/inverse transform unit 8 and the variable length coding unit 13 (step ST8).
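A minimal numpy/scipy sketch of this step, assuming a 2-D DCT with uniform scalar quantization; qstep stands in for the quantization parameter, and the rounding rule is illustrative rather than the one defined by the apparatus.

```python
import numpy as np
from scipy.fft import dctn, idctn

def transform_quantize(residual_block, qstep):
    """Transform the prediction difference signal and quantize the coefficients;
    the quantized coefficients are the compressed data of the difference image."""
    coeffs = dctn(residual_block, norm="ortho")
    return np.round(coeffs / qstep).astype(np.int32)

def dequantize_inverse_transform(levels, qstep):
    """Mirror of the inverse quantization / inverse transform unit: rescale the
    quantized coefficients and apply the inverse transform to obtain the local
    decoded prediction difference signal."""
    return idctn(levels.astype(np.float64) * qstep, norm="ortho")
```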
  • When the inverse quantization/inverse transform unit 8 receives the compressed data from the transform/quantization unit 7, it inversely quantizes the compressed data using the quantization parameter included in the prediction difference coding parameter output from the coding control unit 1, performs inverse transform processing of the inversely quantized compressed data (for example, inverse DCT (inverse discrete cosine transform), inverse DST (inverse discrete sine transform), or an inverse transform such as an inverse KL transform) in units of the transform block size included in the prediction differential encoding parameter, and outputs the result as a local decoded prediction difference signal.
  • Upon receiving the local decoded prediction difference signal from the inverse quantization/inverse transform unit 8, the adder 9 adds it to the prediction signal indicating the predicted image (Pi n) generated by the intra prediction unit 4 or the motion compensated prediction unit 5, thereby generating a locally decoded image signal indicating a locally decoded partition image or a locally decoded encoded block image that is a collection of such partition images (hereinafter referred to as a "locally decoded image").
  • the locally decoded image signal is output to the loop filter unit 11 (step ST10).
  • the intra prediction memory 10 stores the local decoded image for use in intra prediction.
  • When the loop filter unit 11 receives the local decoded image signal from the adder unit 9, it compensates for the encoding distortion included in the local decoded image signal, and stores the local decoded image indicated by the local decoded image signal after the encoding distortion compensation in the motion compensated prediction frame memory 12 as a reference image (step ST11).
  • The filtering process by the loop filter unit 11 may be performed in units of the maximum encoded block or of individual encoded blocks of the input local decoded image signal, or it may be performed for one screen at a time after the local decoded image signals corresponding to one screen of macroblocks have been input.
  • steps ST4 to ST10 are repeated until the processes for the partitions P i n belonging to all the encoded blocks B n divided by the block dividing unit 2 are completed (step ST12).
  • The variable length coding unit 13 variable-length codes the compressed data output from the transform/quantization unit 7, the coding mode and prediction differential coding parameter output from the coding control unit 1, and the intra prediction parameter output from the intra prediction unit 4 or the inter prediction parameter output from the motion compensated prediction unit 5, and generates a bitstream in which the encoded data of the compressed data, the coding mode, the prediction differential coding parameter, and the intra prediction parameter or inter prediction parameter are multiplexed (step ST13).
  • FIG. 14 is an explanatory diagram showing an example of intra prediction parameters (intra prediction mode) that can be selected in each partition P i n belonging to the coding block B n .
  • FIG. 14 shows the prediction direction vectors corresponding to the intra prediction modes; the relative angle between the prediction direction vectors is designed to become smaller as the number of selectable intra prediction modes increases.
  • the luminance signal intra prediction unit 21 of the intra prediction unit 4 performs intra-frame prediction of the luminance component in the encoded block divided by the block dividing unit 2, and generates a prediction image for the luminance component (FIG. 5). Step ST21).
  • the processing content of the luminance signal intra prediction unit 21 will be specifically described.
  • The intra processing by which the luminance signal intra prediction unit 21 of the intra prediction unit 4 generates an intra prediction signal of the luminance signal, based on the intra prediction parameter (intra prediction mode) for the luminance signal of the partition Pi n, is described below.
  • Let the size of the partition Pi n be li n × mi n pixels.
  • The (2 × li n + 1) already-encoded pixels adjacent above the partition Pi n and the (2 × mi n) already-encoded pixels adjacent to its left are used as reference pixels for prediction, although the number of pixels used for prediction may be larger or smaller than the number shown in the figure.
  • Also, although one row or one column of adjacent pixels is used for prediction here, two or more rows or columns of pixels may be used for prediction.
  • k is a positive scalar value.
  • When the reference pixel is at an integer pixel position, that integer pixel is set as the prediction value of the prediction target pixel. When the reference pixel is not located at an integer pixel position, an interpolation pixel generated from the integer pixels adjacent to the reference pixel is set as the prediction value. In the example of FIG. 15, since the reference pixel is not located at an integer pixel position, the average value of the two pixels adjacent to the reference pixel is used as the prediction value. Note that an interpolation pixel may be generated not only from two adjacent pixels but also from two or more adjacent pixels and used as the prediction value.
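As an illustration of this rule, the following sketch predicts a square block from the reference row above it along an assumed direction vector with a per-row horizontal displacement dx; the two-pixel averaging for fractional positions follows the description above, while the clamping and the restriction to the top reference row are simplifications.

```python
import numpy as np

def directional_predict_from_top(top_ref, block_size, dx):
    """Simplified directional prediction using only the reference row above the
    partition.  dx is the assumed horizontal displacement of the prediction
    direction vector per row; fractional reference positions are filled by
    averaging the two neighbouring integer pixels, as described above."""
    pred = np.zeros((block_size, block_size))
    for y in range(block_size):
        for x in range(block_size):
            pos = x + dx * (y + 1)       # where the direction hits the reference row
            left = int(np.floor(pos))
            frac = pos - left
            left = min(max(left, 0), len(top_ref) - 2)   # clamp to the valid range
            if frac == 0.0:
                pred[y, x] = top_ref[left]                # integer position: copy
            else:
                pred[y, x] = (top_ref[left] + top_ref[left + 1]) / 2.0  # 2-pixel average
    return pred
```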
  • the luminance signal intra prediction unit 21 generates prediction pixels for all the pixels of the luminance signal in the partition P i n in the same procedure, and outputs the generated intra prediction image (P i n ). As described above, the intra-prediction parameters used for generating the intra-predicted image (P i n ) are output to the variable-length encoding unit 13 for multiplexing into the bitstream.
  • The changeover switch 22 of the intra prediction unit 4 determines whether the parameter indicating the intra coding mode of the color difference signal, among the intra prediction parameters output from the coding control unit 1, indicates the directional prediction mode or the smoothed luminance correlation utilization color difference signal prediction mode (step ST22). If the parameter indicating the intra coding mode of the chrominance signal indicates the directional prediction mode, the changeover switch 22 gives the reference pixels used for prediction to the chrominance signal directional intra prediction unit 23, and if the parameter indicates the smoothed luminance correlation utilization color difference signal prediction mode, it gives the reference pixels used for prediction to the luminance correlation utilization color difference signal prediction unit 24.
  • FIG. 16 is an explanatory diagram showing a correspondence example between the intra prediction parameters of the color difference signal and the color difference intra prediction modes.
  • In the example of FIG. 16, when the color difference signal intra prediction parameter is "34", the reference pixels used for prediction are given to the luminance correlation utilization color difference signal prediction unit 24, and when the color difference signal intra prediction parameter is other than "34", the reference pixels used for prediction are given to the color difference signal directional intra prediction unit 23.
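The dispatch implied by this correspondence example can be written as the following small helper; the string labels are placeholders for the respective prediction units.

```python
def route_chroma_prediction(chroma_intra_param):
    """Dispatch mirroring the correspondence example of FIG. 16: parameter value 34
    selects the smoothed luminance correlation utilization color difference signal
    prediction (unit 24 / 54); any other value selects directional color difference
    intra prediction (unit 23 / 53)."""
    if chroma_intra_param == 34:
        return "luminance_correlation_prediction"
    return "directional_intra_prediction"
```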
  • When the color difference signal directional intra prediction unit 23 receives the reference pixels used for prediction from the changeover switch 22, it generates a predicted image for the color difference component by referring to the decoded color difference reference pixels adjacent to the partition Pi n and performing intra-frame prediction of the color difference component based on the intra prediction parameter output from the coding control unit 1 (step ST23).
  • The intra prediction target of the color difference signal directional intra prediction unit 23 is the color difference signal and thus differs from that of the luminance signal intra prediction unit 21, which is the luminance signal, but the processing content of the intra prediction itself is the same as that of the luminance signal intra prediction unit 21. Therefore, an intra prediction image of the color difference signal is generated by performing directional prediction, horizontal prediction, vertical prediction, DC prediction, and the like.
  • When the luminance correlation utilization color difference signal prediction unit 24 receives the reference pixels used for prediction from the changeover switch 22, it uses the decoded luminance reference pixels and color difference reference pixels adjacent to the partition Pi n, which is a coding block, and the decoded luminance reference pixels within the partition Pi n (luminance reference pixels in the locally decoded image obtained from the intra prediction image (Pi n) of the partition Pi n previously generated by the luminance signal intra prediction unit 21) to smooth the luminance components of a plurality of pixels adjacent in the horizontal and vertical directions among the pixels constituting the coding block, calculates a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component, and generates a predicted image for the color difference component using the correlation parameter and the smoothed luminance component (step ST24).
  • the processing content of the luminance correlation utilization color difference signal prediction unit 24 will be specifically described.
  • The correlation calculation unit 32 of the luminance correlation utilization color difference signal prediction unit 24 calculates the correlation parameters α and β, as shown in the following equations (4) and (5), using the reduced luminance reference pixel Rec′L and the color difference reference pixel RecC, which is the decoded pixel value of the color difference signal adjacent to the upper end and the left end of the prediction block of the color difference signal (step ST32).
  • I is a value twice the number of pixels on one side of the prediction block of the color difference signal to be processed.
  • The color difference predicted image generation unit 33 generates the color difference prediction image PredC using the correlation parameters α and β and the reduced luminance reference pixel Rec′L, as shown in the following equation (6) (step ST33).
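Equations (4) to (6) are not reproduced in this text; the sketch below assumes they have the same least-squares linear-model form as equations (1) to (3) above, and uses a 2x2 average as one possible smoothing filter acting in both the horizontal and vertical directions (the patent text does not fix the filter coefficients).

```python
import numpy as np

def reduce_luma_smoothed(rec_l):
    """Reduction in the smoothed luminance reference pixel reduction unit:
    luminance samples adjacent in both the horizontal and vertical directions
    are smoothed (here by a 2x2 average) before decimation, which suppresses
    the horizontal aliasing of the conventional scheme."""
    return (rec_l[0::2, 0::2] + rec_l[1::2, 0::2] +
            rec_l[0::2, 1::2] + rec_l[1::2, 1::2]) / 4.0

def smoothed_lm_chroma_prediction(ref_luma_s, ref_chroma, luma_block_s):
    """Correlation parameters alpha and beta from the smoothed reduced luminance
    reference samples, then PredC = alpha * Rec'L + beta for the block."""
    i = ref_luma_s.size
    sum_l, sum_c = ref_luma_s.sum(), ref_chroma.sum()
    alpha = (i * (ref_luma_s * ref_chroma).sum() - sum_l * sum_c) / \
            (i * (ref_luma_s ** 2).sum() - sum_l * sum_l)
    beta = (sum_c - alpha * sum_l) / i
    return alpha * luma_block_s + beta
```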
  • Intra prediction is a means of predicting an unknown area in the screen from a known area. The textures of the luminance signal and the color difference signal are correlated, and in the spatial direction the pixel values of neighboring pixels change in a correlated way.
  • Therefore, prediction efficiency can be improved by calculating the correlation parameters between the luminance signal and the color difference signal using the decoded luminance signal and color difference signal adjacent to the prediction block, and predicting the color difference signal from the luminance signal and the correlation parameters. In this case, since the resolutions of the luminance signal and the color difference signal differ in the YUV 4:2:0 signal, the luminance signal must be subsampled; however, by applying a low-pass filter at that time, the occurrence of aliasing can be suppressed and the prediction efficiency can be improved.
  • variable length coding unit 13 performs variable length coding on the intra prediction parameter output from the intra prediction unit 4 and multiplexes the codeword of the intra prediction parameter into the bitstream.
  • Alternatively, a representative prediction direction vector (prediction direction representative vector) may be selected from the prediction direction vectors of the plurality of directional predictions, the intra prediction parameter may be expressed as an index of the prediction direction representative vector (prediction direction representative index) and an index representing the difference from the prediction direction representative vector (prediction direction difference index), and entropy coding such as arithmetic coding according to a probability model may be performed for each index, so that the intra prediction parameter is encoded with a reduced code amount.
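As a hypothetical illustration of this index split (the actual set of representative vectors used by the patent is not given in this text), the intra prediction parameter could be expressed and recovered as follows:

```python
REPRESENTATIVES = (0, 8, 16, 24, 32)   # assumed set of prediction direction representatives

def split_intra_direction(mode_index):
    """Express a mode index as (representative index, signed difference index)."""
    rep_pos = min(range(len(REPRESENTATIVES)),
                  key=lambda k: abs(REPRESENTATIVES[k] - mode_index))
    return rep_pos, mode_index - REPRESENTATIVES[rep_pos]

def merge_intra_direction(rep_pos, diff):
    """Decoder-side reconstruction of the intra prediction parameter."""
    return REPRESENTATIVES[rep_pos] + diff
```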
  • When the variable length decoding unit 41 receives the bitstream generated by the moving picture encoding device in FIG. 1, it performs variable length decoding processing on the bitstream (step ST41 in FIG. 8) and decodes the frame size in units of sequences, each consisting of one or more frames of pictures, or in units of pictures.
  • The variable length decoding unit 41 determines the maximum coding block size determined by the moving image coding apparatus in FIG. 1 (the size of the coding block that is the processing unit when the intra prediction process or the motion compensation prediction process is performed) and the upper limit of the number of division layers, in the same procedure as the video encoding device (step ST42).
  • For example, when the maximum size of the encoded block has been determined according to the resolution of the input image, the maximum size of the encoded block is determined based on the previously decoded frame size, in the same procedure as the moving image encoding apparatus shown in FIG. 1.
  • When the maximum size of the encoded block and the number of layers of the encoded block have been multiplexed into the bitstream by the moving image encoding device, the maximum size of the encoded block and the number of layers of the encoded block are decoded from the bitstream.
  • The variable length decoding unit 41 determines the maximum size of the encoded block and the number of layers of the encoded block, grasps the hierarchical division state of each encoded block starting from the maximum encoded block, specifies the encoded data related to each encoded block among the encoded data multiplexed in the bitstream, and decodes the encoding mode assigned to each encoded block from that encoded data. The variable length decoding unit 41 then refers to the division information of the partitions Pi n belonging to the encoding block Bn included in the encoding mode and identifies, among the encoded data multiplexed in the bitstream, the encoded data related to each partition Pi n (step ST43).
  • The variable length decoding unit 41 variable-length decodes the compressed data, the prediction difference encoding parameters, and the intra prediction parameters / inter prediction parameters from the encoded data related to each partition P i n, outputs the compressed data and the prediction difference encoding parameters to the inverse quantization / inverse transform unit 45, and outputs the coding mode and the intra prediction parameters / inter prediction parameters to the changeover switch 42 (step ST44).
  • For example, when the prediction direction representative index and the prediction direction difference index are multiplexed in the bitstream, the prediction direction representative index and the prediction direction difference index are entropy-decoded by arithmetic decoding or the like according to their respective probability models, and the intra prediction parameter is specified from the prediction direction representative index and the prediction direction difference index. Thereby, even when the code amount of the intra prediction parameter has been reduced on the moving image encoding device side, the intra prediction parameter can be correctly decoded.
  • When the coding mode of the partition P i n belonging to the coding block B n output from the variable length decoding unit 41 is the intra coding mode, the changeover switch 42 outputs the intra prediction parameters output from the variable length decoding unit 41 to the intra prediction unit 43; when the coding mode is the inter coding mode, the changeover switch 42 outputs the inter prediction parameters output from the variable length decoding unit 41 to the motion compensation unit 44.
  • When the intra prediction unit 43 receives the intra prediction parameters output from the variable length decoding unit 41 (step ST45), it performs the intra prediction process for each partition P i n on the basis of those intra prediction parameters, like the intra prediction unit 4 in FIG. 1, thereby generating an intra prediction image (P i n ) (step ST46).
  • Hereinafter, the processing content of the intra prediction unit 43 is described concretely.
  • Like the changeover switch 22 of the video encoding device, the changeover switch 52 of the intra prediction unit 43 determines whether the parameter indicating the intra coding mode of the color difference signal, among the intra prediction parameters output from the variable length decoding unit 41, indicates the directional prediction mode or the smoothed luminance correlation utilization color difference signal prediction mode.
  • If the parameter indicating the intra coding mode indicates the directional prediction mode, the changeover switch 52 gives the reference pixels used for prediction to the color difference signal directional intra prediction unit 53; if the parameter indicating the intra coding mode indicates the smoothed luminance correlation utilization color difference signal prediction mode, the reference pixels used for prediction are given to the luminance correlation utilization color difference signal prediction unit 54.
  • When the color difference signal directional intra prediction unit 53 receives the reference pixels used for prediction from the changeover switch 52, it refers to the decoded color difference reference pixels adjacent to the partition P i n and performs intra-frame prediction of the color difference component on the basis of the intra prediction parameter output from the variable length decoding unit 41, like the color difference signal directional intra prediction unit 23 of the moving picture coding apparatus, thereby generating a prediction image for the color difference component.
  • The intra prediction target of the color difference signal directional intra prediction unit 53 is the color difference signal, and thus differs from that of the luminance signal intra prediction unit 51, whose target is the luminance signal, but the processing content of the intra prediction itself is the same as in the luminance signal intra prediction unit 51. Therefore, an intra prediction image of the color difference signal is generated by performing directional prediction, horizontal prediction, vertical prediction, DC prediction, and the like.
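A minimal sketch of the three simplest modes named above (vertical, horizontal, and DC prediction) for one N x N partition is shown below; 'top' and 'left' stand for the decoded reference samples adjacent to the partition, angular modes are omitted, and all names are illustrative assumptions.

```python
import numpy as np

def simple_intra_prediction(top, left, mode):
    """top, left: 1-D arrays of N decoded reference samples; returns an N x N prediction."""
    n = top.size
    if mode == 'vertical':      # copy the row of samples above the partition downwards
        return np.tile(top, (n, 1))
    if mode == 'horizontal':    # copy the column of samples left of the partition to the right
        return np.tile(left.reshape(n, 1), (1, n))
    dc = (top.sum() + left.sum() + n) // (2 * n)   # DC prediction: mean of the reference samples
    return np.full((n, n), dc)
```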
  • When the luminance correlation utilization color difference signal prediction unit 54 receives the reference pixels used for prediction from the changeover switch 52, it uses, like the luminance correlation utilization color difference signal prediction unit 24 of the moving image encoding device, the decoded luminance reference pixels and color difference reference pixels adjacent to the partition P i n and the decoded luminance reference pixels within the partition P i n (the luminance reference pixels in the decoded image obtained from the intra prediction image (P i n ) of the partition P i n previously generated by the luminance signal intra prediction unit 51), and smooths the luminance components of a plurality of pixels adjacent in the horizontal direction and the vertical direction among the pixels constituting the coding block.
  • Specifically, the smoothed luminance reference pixel reduction unit 61 of the luminance correlation utilization color difference signal prediction unit 54 generates reduced luminance reference pixels Rec′ L by performing a smoothing process and the like on a plurality of luminance reference pixels adjacent in the horizontal direction and the vertical direction among the decoded luminance reference pixels constituting the partition P i n stored in the intra prediction memory 47 (the luminance reference pixels in the decoded image obtained from the intra prediction image (P i n ) of the partition P i n previously generated by the luminance signal intra prediction unit 51).
  • That is, as shown in FIG. 19, the smoothed luminance reference pixel reduction unit 61 generates the reduced luminance reference pixels Rec′ L using the decoded luminance signal, which consists of the decoded pixel values in the block corresponding to the prediction block of the color difference signal in the partition P i n (in the figure, the N × N block on the left is the prediction block of the color difference signal and the 2N × 2N block on the right is the corresponding block), and the decoded luminance signal adjacent to the upper end and the left end of that decoded luminance signal.
  • The reduced luminance reference pixels Rec′ L are generated by applying a 1:2:1 low-pass filter to the luminance reference pixels Rec L in the horizontal direction and a 1:1 low-pass filter in the vertical direction, so that the reduced pixels have the same phase as the color difference signal pixels of the YUV 4:2:0 signal, and then sub-sampling only the even-numbered positions in the vertical and horizontal directions.
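A minimal sketch of this reduction is given below, assuming edge replication at the block border: the 1:2:1 low-pass filter is applied horizontally, the 1:1 averaging filter vertically, and the result is sub-sampled to every second position so that the reduced luma grid matches the 4:2:0 chroma phase.

```python
import numpy as np

def reduce_luma_reference(rec_luma):
    """rec_luma: 2N x 2N decoded luma block; returns the N x N reduced reference Rec'_L."""
    padded = np.pad(rec_luma, ((0, 0), (1, 1)), mode='edge')
    # horizontal 1:2:1 smoothing with rounding
    smoothed = (padded[:, :-2] + 2 * padded[:, 1:-1] + padded[:, 2:] + 2) // 4
    # vertical 1:1 averaging of each pair of rows
    averaged = (smoothed[0::2, :] + smoothed[1::2, :] + 1) // 2
    # keep only the even-numbered columns
    return averaged[:, 0::2]
```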
  • Using the reduced luminance reference pixels Rec′ L generated from the luminance reference pixels Rec L by the smoothed luminance reference pixel reduction unit 61, and the color difference reference pixels Rec C, which are the decoded pixel values of the color difference signal adjacent to the upper end and the left end of the prediction block of the color difference signal, the correlation parameters α and β used in the prediction are calculated as shown in the above equations (4) and (5).
  • The color difference prediction image generation unit 63, like the color difference prediction image generation unit 33 of the moving image encoding device, generates a color difference prediction image Pred C as shown in the above equation (6), using the correlation parameters α and β and the reduced luminance reference pixels Rec′ L.
  • When the motion compensation unit 44 receives the inter prediction parameters from the changeover switch 42, it performs the inter prediction process for each partition P i n on the basis of the inter prediction parameters, like the motion compensation prediction unit 5 of the moving image encoding device, thereby generating an inter prediction image (P i n ) (step ST47). That is, the motion compensation unit 44 generates the inter prediction image (P i n ) by performing a motion compensation prediction process on the partition P i n on the basis of the inter prediction parameters, using one or more frames of reference images stored in the motion compensated prediction frame memory 49.
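A minimal sketch of the motion compensation step for one partition is given below, restricted to integer-pel motion vectors for brevity; fractional-pel interpolation, reference picture selection, and clipping of the displaced position to the picture boundary are omitted, and all names are illustrative assumptions.

```python
def motion_compensate(reference_frame, x, y, mv_x, mv_y, width, height):
    """reference_frame: decoded reference picture as a 2-D NumPy array;
    (x, y): top-left position of the partition; (mv_x, mv_y): integer-pel motion vector."""
    top, left = y + mv_y, x + mv_x
    return reference_frame[top:top + height, left:left + width].copy()
```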
  • When the inverse quantization / inverse transform unit 45 receives the prediction difference encoding parameters from the variable length decoding unit 41, it inverse-quantizes the compressed data related to the coding block output from the variable length decoding unit 41 using the quantization parameter included in the prediction difference encoding parameters, and performs an inverse transform process on the compressed data after inverse quantization (for example, an inverse DCT (inverse discrete cosine transform), an inverse DST (inverse discrete sine transform), an inverse KL transform, or the like) in units of the transform block size included in the prediction difference encoding parameters, thereby calculating the decoded prediction difference signal (step ST48).
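A sketch of this step for one transform block follows; the flat quantization step and SciPy's orthonormal inverse DCT are stand-ins chosen for illustration, not the normative inverse quantization and inverse transform of this specification.

```python
import numpy as np
from scipy.fft import idctn

def decode_prediction_difference(coeff_levels, quant_step):
    """coeff_levels: quantized transform coefficients of one transform block (2-D array)."""
    dequantized = coeff_levels.astype(np.float64) * quant_step   # inverse quantization
    return idctn(dequantized, norm='ortho')                      # inverse DCT -> residual block
```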
  • The addition unit 46 adds the decoded prediction difference signal output from the inverse quantization / inverse transform unit 45 and the prediction signal indicating the prediction image (P i n ) generated by the intra prediction unit 43 or the motion compensation unit 44, thereby generating a decoded image signal indicating a decoded partition image, or a decoded image as a collection of such partition images, and outputs the decoded image signal to the loop filter unit 48 (step ST49).
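This addition step amounts to adding the residual to the prediction and clipping to the valid sample range, as in the sketch below; 8-bit samples are assumed.

```python
import numpy as np

def reconstruct_partition(prediction, residual):
    """Add the decoded prediction difference signal to the prediction image (8-bit samples assumed)."""
    return np.clip(prediction.astype(np.int32) + np.rint(residual).astype(np.int32),
                   0, 255).astype(np.uint8)
```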
  • the intra prediction memory 47 stores the decoded image for use in intra prediction.
  • The loop filter unit 48 compensates for the coding distortion included in the decoded image signal, stores the decoded image indicated by the decoded image signal after the coding distortion compensation in the motion compensated prediction frame memory 49 as a reference image, and outputs the decoded image as a reproduced image (step ST50).
  • The filtering process by the loop filter unit 48 may be performed for each maximum coding block or for each individual coding block of the input decoded image signal, or it may be performed for one screen at a time after the decoded image signals corresponding to the macroblocks of one screen have been input.
  • the processes in steps ST43 to ST49 are repeatedly performed until the processes for the partitions P i n belonging to all the coding blocks B n are completed (step ST51).
  • As described above, the intra prediction unit 4 of the moving image encoding device smooths the luminance components of a plurality of pixels adjacent in the horizontal direction and the vertical direction among the pixels constituting the coding block divided by the block dividing unit 2, calculates a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component, and generates the prediction image for the color difference component using the correlation parameter and the smoothed luminance component; therefore, the occurrence of aliasing in the smoothed luminance component can be suppressed and the coding efficiency can be increased.
  • That is, when the coding mode selected by the encoding control unit 1 is the intra prediction mode and the parameter indicating the intra prediction mode of the color difference signal indicates that the color difference signal prediction mode is the smoothed luminance correlation utilization mode, reduced luminance reference pixels are generated by smoothing the luminance reference pixels in the horizontal and vertical directions and sub-sampling them, and an intra prediction image of the color difference signal is generated using the correlation between the luminance signal and the color difference signal. Therefore, a prediction image is obtained in which the amplification of the prediction error due to aliasing, which occurred conventionally, is suppressed and the prediction efficiency is improved; as a result, the coding efficiency can be increased.
  • Similarly, the intra prediction unit 43 of the video decoding device smooths the luminance components of a plurality of pixels adjacent in the horizontal direction and the vertical direction among the pixels constituting the coding block output from the variable length decoding unit 41, calculates a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component, and generates the prediction image for the color difference component using the correlation parameter and the smoothed luminance component; therefore, a moving image can be accurately decoded from encoded data whose coding efficiency has been improved. That is, when the coding mode variable-length decoded by the variable length decoding unit 41 is the intra prediction mode and the parameter indicating the intra prediction mode of the color difference signal indicates that the color difference signal prediction mode is the smoothed luminance correlation utilization mode, the luminance reference pixels are smoothed in the horizontal direction and the vertical direction and sub-sampled to generate reduced luminance reference pixels, and the intra prediction image of the color difference signal is generated using the correlation between the luminance signal and the color difference signal; therefore, a moving image can be accurately decoded from encoded data whose coding efficiency has been improved.
  • In the example described above, a 1:2:1 smoothing filter is applied in the horizontal direction, but the filter coefficients are not limited to this; a filter such as 3:2:3, 7:2:7 or 1:0:1, a smoothing filter with a larger number of taps, or a 1:1 filter can achieve the same effect.
  • In the example described above, a common smoothing filter is used for the luminance reference pixels of the block to be predicted and the luminance reference pixels adjacent to it, but the same effect can also be obtained when different filters are used, for example, when a 1:0:1 filter is applied to the luminance reference pixels of the block to be predicted and a 1:2:1 filter is applied to the adjacent luminance reference pixels. If a 1:0:1 filter or a 1:1 filter is applied, the amount of calculation can be reduced; if a smoothing filter with a larger number of taps is applied, an effect of improving the coding efficiency can be obtained.
  • Further, the smoothing in the horizontal direction and the vertical direction may be performed simultaneously; for example, the average value may be calculated after weighted addition of six target pixels according to the filter coefficients.
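As one way of performing the horizontal and vertical smoothing simultaneously, the sketch below weights six neighbouring luma pixels with the separable product of the 1:2:1 horizontal filter and the 1:1 vertical filter; this 2 x 3 weight matrix is an assumption about how the six target pixels could be combined, not a coefficient set fixed by this specification.

```python
import numpy as np

WEIGHTS = np.array([[1, 2, 1],
                    [1, 2, 1]])   # vertical 1:1 times horizontal 1:2:1

def reduce_luma_combined(rec_luma):
    """rec_luma: 2N x 2N decoded luma block; returns the N x N reduced block."""
    padded = np.pad(rec_luma, ((0, 0), (1, 1)), mode='edge')
    n_rows, n_cols = rec_luma.shape[0] // 2, rec_luma.shape[1] // 2
    out = np.empty((n_rows, n_cols), dtype=rec_luma.dtype)
    for i in range(n_rows):
        for j in range(n_cols):
            window = padded[2 * i:2 * i + 2, 2 * j:2 * j + 3]   # the six target pixels
            out[i, j] = (window * WEIGHTS).sum() // WEIGHTS.sum()
    return out
```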
  • any component of the embodiment can be modified or any component of the embodiment can be omitted within the scope of the invention.
  • As described above, in the moving picture coding apparatus, the moving picture decoding apparatus, the moving picture coding method, and the moving picture decoding method according to the present invention, the prediction image generating means includes luminance component intra prediction means for generating a prediction image for the luminance component by performing intra-frame prediction of the luminance component in the coding block divided by the block dividing means, and also smooths the luminance components of a plurality of pixels adjacent in the horizontal direction and the vertical direction among the pixels constituting the coding block, calculates a correlation parameter indicating the correlation between the smoothed luminance component and the color difference component, and generates a prediction image for the color difference component using the correlation parameter and the smoothed luminance component; since the occurrence of aliasing in the smoothed luminance component can thereby be suppressed and the coding efficiency can be improved, the invention is suitable for use in video encoding apparatuses and video decoding apparatuses.
  • 1 encoding control unit (encoding control means), 2 block division unit (block division means), 3 changeover switch (prediction image generation means), 4 intra prediction unit (prediction image generation means), 5 motion compensation prediction unit (prediction image generation means), 6 subtraction unit, 7 transform / quantization unit (image compression means), 8 inverse quantization / inverse transform unit, 9 addition unit, 10 intra prediction memory, 11 loop filter unit, 12 motion compensated prediction frame memory, 13 variable length coding unit (variable length coding means), 21 luminance signal intra prediction unit (luminance component intra prediction means), 22 changeover switch (color difference component intra prediction means), 23 color difference signal directional intra prediction unit (color difference component intra prediction means), 24 luminance correlation utilization color difference signal prediction unit (color difference component intra prediction means), 31 smoothed luminance reference pixel reduction unit, 32 correlation calculation unit, 33 color difference prediction image generation unit, 41 variable length decoding unit (variable length decoding means), 42 changeover switch (prediction image generation means), 43 intra prediction unit (prediction image generation means), 44 motion compensation unit (prediction image generation means)

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

According to the invention, an intra prediction unit (4) of a video encoding device smooths the luminance components of a plurality of pixels constituting coding blocks divided by a block division unit (2), these pixels being adjacent in the horizontal and vertical directions; the intra prediction unit (4) calculates a correlation parameter indicating the correlation between the smoothed luminance components and color difference components; and the intra prediction unit (4) generates prediction images for the color difference components from the correlation parameter and the smoothed luminance components.
PCT/JP2012/003679 2011-06-24 2012-06-05 Video encoding device, video decoding device, video encoding method and video decoding method WO2012176387A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011140637A JP2014168107A (ja) 2011-06-24 2011-06-24 動画像符号化装置、動画像復号装置、動画像符号化方法及び動画像復号方法
JP2011-140637 2011-06-24

Publications (1)

Publication Number Publication Date
WO2012176387A1 true WO2012176387A1 (fr) 2012-12-27

Family

ID=47422251

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/003679 WO2012176387A1 (fr) 2011-06-24 2012-06-05 Dispositif de codage vidéo, dispositif de décodage vidéo, procédé de codage vidéo et procédé de décodage vidéo

Country Status (2)

Country Link
JP (1) JP2014168107A (fr)
WO (1) WO2012176387A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015076781A (ja) * 2013-10-10 2015-04-20 Mitsubishi Electric Corporation Image encoding device, image decoding device, image encoding method, and image decoding method

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6497014B2 (ja) * 2014-09-24 2019-04-10 Fuji Xerox Co., Ltd. Image processing apparatus and image processing program
JP2018074491A (ja) 2016-11-02 2018-05-10 Fujitsu Limited Moving image encoding device, moving image encoding method, and moving image encoding program
CN108777794B (zh) * 2018-06-25 2022-02-08 Tencent Technology (Shenzhen) Co., Ltd. Image encoding method and apparatus, storage medium, and electronic device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009534876A (ja) * 2006-03-23 2009-09-24 Samsung Electronics Co., Ltd. Image encoding method and apparatus, and image decoding method and apparatus
JP2010531609A (ja) * 2007-06-27 2010-09-24 Samsung Electronics Co., Ltd. Method, recording medium, and apparatus for encoding and/or decoding video data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
JIANLE CHEN ET AL.: "CE6.a.4: Chroma intra prediction by reconstructed luma samples", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT- VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/ WG11 5TH MEETING, 16 March 2011 (2011-03-16), GENEVA *

Also Published As

Publication number Publication date
JP2014168107A (ja) 2014-09-11

Similar Documents

Publication Publication Date Title
JP6005087B2 (ja) Image decoding device, image decoding method, image encoding device, image encoding method, and data structure of encoded data
JP6381724B2 (ja) Image decoding device
JP5782169B2 (ja) Moving image encoding device and moving image encoding method
JP7012809B2 (ja) Image encoding device, moving image decoding device, moving image encoded data, and recording medium
WO2013065402A1 (fr) Moving image encoding device, moving image decoding device, moving image encoding method, and moving image decoding method
KR20140007074A (ko) Image encoding device, image decoding device, image encoding method, image decoding method, and image prediction device
WO2013114992A1 (fr) Color video encoding device, color video decoding device, color video encoding method, and color video decoding method
WO2012176387A1 (fr) Video encoding device, video decoding device, video encoding method, and video decoding method
WO2013065678A1 (fr) Moving image encoding device, moving image decoding device, method for encoding a moving image, and method for decoding a moving image
JP2014007643A (ja) Moving image encoding device, moving image decoding device, moving image encoding method, and moving image decoding method
JP2013168913A (ja) Moving image encoding device, moving image decoding device, moving image encoding method, and moving image decoding method
JP2013126145A (ja) Color moving image encoding device, color moving image decoding device, color moving image encoding method, and color moving image decoding method
JP2013098713A (ja) Moving image encoding device, moving image decoding device, moving image encoding method, and moving image decoding method
JP2012023609A (ja) Moving image encoding device, moving image decoding device, moving image encoding method, and moving image decoding method
WO2014051080A1 (fr) Color moving image encoding device, color moving image decoding device, color moving image encoding method, and color moving image decoding method
JP2013102269A (ja) Color moving image encoding device, color moving image decoding device, color moving image encoding method, and color moving image decoding method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12802680

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12802680

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP