WO2012121352A1 - Video decoding device, video encoding device, and data structure - Google Patents


Info

Publication number
WO2012121352A1
WO2012121352A1 (application PCT/JP2012/056029)
Authority
WO
WIPO (PCT)
Prior art keywords
filter
information
flag
component
unit
Prior art date
Application number
PCT/JP2012/056029
Other languages
English (en)
Japanese (ja)
Inventor
隆紀 山崎
知宏 猪飼
Original Assignee
シャープ株式会社
Priority date
Filing date
Publication date
Application filed by シャープ株式会社
Publication of WO2012121352A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80: Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82: Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation, involving filtering within a prediction loop
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Methods or arrangements using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117: Filters, e.g. for pre-processing or post-processing
    • H04N19/169: Methods or arrangements using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/186: Methods or arrangements using adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/46: Embedding additional information in the video signal during the compression process
    • H04N19/60: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61: Methods or arrangements using transform coding in combination with predictive coding
    • H04N19/70: Methods or arrangements characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • the present invention relates to an encoding device including an image filter device that performs image filtering, and a decoding device.
  • the present invention also relates to a data structure of encoded data decoded by such a decoding device.
  • a video encoding device that generates encoded data by encoding a moving image, and a video decoding device (decoding device) that generates a decoded image by decoding the encoded data, are used.
  • as a specific moving picture encoding method, for example, there is H.264/MPEG-4 AVC.
  • an input image is divided into one luminance component (luminance component Y) and two color difference components (color difference component Cr and color difference component Cb, respectively), and encoding / decoding processing is performed.
  • a predicted image is generated based on a locally decoded image obtained by encoding / decoding an input image, and difference data between the predicted image and the input image is encoded.
  • as methods for generating a predicted image, methods called inter-screen prediction (inter prediction) and intra-screen prediction (intra prediction) are known.
  • noise is reduced by performing filter processing using adaptively set filter coefficient groups on the decoded image or an image obtained by performing block noise reduction processing on the decoded image.
  • an adaptive loop filter (ALF) is an adaptive filter that generates a filtered decoded image.
  • the filter coefficient group used for the adaptive filter is adaptively determined so as to minimize the square error between the encoding target image and the filtered decoded image obtained by applying the adaptive filter to the decoded image.
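The least-squares determination described above can be sketched in miniature. The following pure-Python example (all names are ours; a 1-D, 3-tap filter stands in for the patent's 2-D filter, so this only illustrates the criterion, not the actual procedure) derives the coefficient group by solving the normal equations:

```python
# Hypothetical 1-D illustration: derive a 3-tap filter coefficient group by
# minimizing the squared error between a target (encoding-target) signal
# and the filtered decoded signal, via the 3x3 normal equations.

def derive_filter_coeffs(decoded, target):
    """Return 3-tap coefficients h minimizing sum((target - h*decoded)^2)."""
    n = len(decoded)
    A = [[0.0] * 3 for _ in range(3)]   # autocorrelation matrix
    b = [0.0] * 3                       # cross-correlation vector
    for i in range(1, n - 1):
        taps = [decoded[i - 1], decoded[i], decoded[i + 1]]
        for r in range(3):
            b[r] += taps[r] * target[i]
            for c in range(3):
                A[r][c] += taps[r] * taps[c]
    return solve3(A, b)

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

# If the decoded signal already equals the target, the identity filter wins.
coeffs = derive_filter_coeffs([1.0, 4.0, 2.0, 8.0, 5.0, 7.0],
                              [1.0, 4.0, 2.0, 8.0, 5.0, 7.0])
```

When the decoded signal equals the target, the minimizer is the identity filter [0, 1, 0], which makes the criterion easy to sanity-check.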
  • the presence/absence of filter processing and the filter coefficient group are determined independently for the luminance component and the color difference components; which color difference component is subjected to filtering is specified by a flag.
  • the flag can specify that the color difference filter is turned off, that only the color difference component Cb or only the color difference component Cr is filtered, or that both are filtered.
  • when both are filtered, the color difference component Cb and the color difference component Cr are processed with the same filter coefficient group.
  • the encoding device and decoding device provided with such an adaptive filter can improve prediction accuracy and encoding efficiency by generating a prediction image with reference to the filtered decoded image.
  • JCT-VC (Joint Collaborative Team on Video Coding)
  • conventionally, the same filter coefficient group has been used for the adaptive filter processing applied to the color difference component Cr and the color difference component Cb of the encoding target image. It is therefore difficult to perform effective filter processing on each of the color difference components Cr and Cb, and the encoding efficiency is not improved.
  • the present invention has been made, based on the knowledge obtained by the inventors, in view of the above problems, and an object of the present invention is to realize an image filter device that improves the encoding efficiency even when there are a plurality of color components, as well as a moving picture decoding apparatus and a moving picture encoding apparatus provided with such an image filter device.
  • a video decoding device according to the present invention is a video decoding device that decodes an image composed of a plurality of color components, and includes filter information decoding means for decoding color component designation information that specifies the color component to be filtered and a filter coefficient group, and filter means for performing filter processing on each color component to be processed using the filter coefficient group and the color component designation information decoded by the filter information decoding means.
  • since the filter processing is performed on each color component to be processed using the filter coefficient group and the color component designation information, appropriate filter processing can be performed on each color component even when there are a plurality of color components. Therefore, according to the moving image decoding apparatus, the encoding efficiency is improved.
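The decoding-side control flow just described can be illustrated with a small sketch (the function and variable names are ours, not from the patent, and a trivial one-tap scaling stands in for the real filter):

```python
# Illustrative sketch: apply a decoded filter coefficient group to each
# color component that the color component designation information marks
# as a filtering target; other components pass through unchanged.

def filter_component(plane, coeffs):
    # Placeholder 1-tap "filter": scale each pixel (a real ALF uses a 2-D window).
    return [round(p * coeffs[0]) for p in plane]

def adaptive_filter(planes, designation, coeff_groups):
    """planes: {'Y': [...], 'Cb': [...], 'Cr': [...]} pixel lists.
    designation: set of component names to filter (decoded from the stream).
    coeff_groups: per-component filter coefficient groups."""
    out = {}
    for name, plane in planes.items():
        if name in designation:
            out[name] = filter_component(plane, coeff_groups[name])
        else:
            out[name] = plane  # component passed through unfiltered
    return out

planes = {'Y': [10, 20], 'Cb': [30, 40], 'Cr': [50, 60]}
result = adaptive_filter(planes, {'Cb'}, {'Cb': [2.0]})
```

Here only Cb is designated, so Y and Cr are returned untouched, which mirrors the per-component switching the claim describes.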
  • the moving picture decoding apparatus according to the present invention is a moving picture decoding apparatus that decodes an image composed of a plurality of color components, and includes filter information decoding means for decoding filter on/off information, including one or more on/off flags that designate whether or not to perform filter processing in each of a plurality of unit areas constituting the image, and a filter coefficient group, and filter means for performing filter processing on the color components for each unit region with reference to the filter on/off information.
  • the image to be decoded may include a region where the encoding efficiency is improved without performing the filtering process.
  • since the filter processing is performed on the color components for each unit area with reference to the filter on/off information, the encoding efficiency is improved.
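The per-unit-region on/off behavior can be sketched as follows (our illustrative names; a trivial increment stands in for the real filter):

```python
# Illustrative sketch: consult a per-unit-region on/off flag before
# filtering each region of a component plane.

def filter_region(region):
    return [p + 1 for p in region]  # stand-in for the real adaptive filter

def filter_with_onoff(regions, onoff_flags):
    """regions: list of pixel lists (the unit areas of the image).
    onoff_flags: one flag per region, decoded from the filter on/off
    information; regions whose flag is 0 are left unfiltered."""
    return [filter_region(r) if flag else r
            for r, flag in zip(regions, onoff_flags)]

out = filter_with_onoff([[1, 2], [3, 4]], [1, 0])
```

Regions whose flag is off are copied through, which is exactly how skipping the filter can improve efficiency in regions where filtering does not help.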
  • the moving image encoding apparatus according to the present invention is an encoding apparatus that encodes an image composed of a plurality of color components, and includes filter information determination means that determines, for each color component, the filter information to be encoded as a filter parameter, namely the color component designation information designating the color component to be filtered and the filter coefficient group used for the filter processing. Each color component to be processed is filtered using the filter coefficient group determined by the filter information determination means, and the filter information is encoded; therefore, the encoding efficiency is improved.
  • the moving image encoding device according to the present invention is a moving image encoding device that encodes an input image composed of a plurality of color components, and includes filter information determining means that determines, for each color component, the filter information to be encoded, namely the filter on/off information specifying whether to perform filter processing in each of the plurality of unit regions constituting the input image, and the filter coefficient group used for the filter processing.
  • the filter on/off information is included for each color component.
  • the image to be encoded may include a region where the encoding efficiency is improved when the filter processing is not performed.
  • the filtering process is performed for each unit region with reference to the filter on / off information and the filter coefficient group, so that the encoding efficiency is improved.
  • the data structure of the encoded data according to the present invention is a data structure of encoded data referred to by the video decoding device, and includes color component designation information for specifying the color component to be filtered, and a filter coefficient group. The moving picture decoding apparatus decodes the color component designation information and the filter coefficient group, and performs the filter processing on each color component to be processed using the decoded filter coefficient group and color component designation information.
  • the encoded data configured as described above includes color component specifying information for specifying a color component to be subjected to filter processing, and a filter coefficient group.
  • a moving image decoding apparatus for decoding the encoded data decodes the color component designation information and the filter coefficient group and performs the filter processing on each color component to be processed using them, so that appropriate filter processing can be performed on each color component even when there are a plurality of color components. Therefore, according to the encoded data, the encoding efficiency is improved.
  • the data structure of the encoded data according to the present invention is a data structure of encoded data referred to by the moving image decoding apparatus, and includes filter on/off information, including one or more on/off flags that designate whether or not the filter processing is performed in each of a plurality of unit areas constituting the target image, and a filter coefficient group. The video decoding device decodes the filter on/off information and the filter coefficient group and, with reference to the filter on/off information, performs color component filter processing for each unit area.
  • the decoding target image may include a region where the encoding efficiency is improved when the filtering process is not performed.
  • the encoded data configured as described above includes filter on / off information including one or more on / off flags for designating whether or not to perform filter processing in each of a plurality of unit regions constituting the target image, and a filter coefficient group.
  • the moving picture decoding apparatus that decodes the encoded data decodes the filter on / off information and the filter coefficient group, and refers to the filter on / off information to perform color component filtering for each unit region. Encoding efficiency is improved.
  • the moving picture decoding apparatus according to the present invention is a moving picture decoding apparatus that decodes encoded data obtained by encoding a target picture composed of a plurality of color components, and includes filter information decoding means for decoding filter on/off information including one or more on/off flags that specify whether or not to perform filter processing in each of the plurality of unit areas constituting the target picture, and filter means for performing filter processing on the decoded image in each unit area using the decoded filter on/off information, wherein the filter means performs the filter processing independently for each of the color components.
  • the moving image decoding apparatus configured as described above performs independent filter processing for each color component using the filter on/off information, which includes one or more on/off flags designating whether or not to perform filter processing for each unit region. Therefore, even when there are a plurality of color components, the filter processing can be appropriately switched on and off for each color component, and the encoding efficiency is improved.
  • the moving picture decoding apparatus according to the present invention is a moving picture decoding apparatus that decodes an image composed of a plurality of color components, and includes filter information decoding means for decoding color component designation information that designates the color component to be filtered and a filter coefficient group, and filter means for performing filter processing on each color component to be processed using the decoded filter coefficient group and the color component designation information.
  • since the filter processing is performed on each color component to be processed using the filter coefficient group and the color component designation information, appropriate filter processing can be performed on each color component even when there are a plurality of color components. Therefore, according to the moving image decoding apparatus, the encoding efficiency is improved.
  • FIG. 1 shows the data structure of the encoded data:
  • (a) shows the configuration of the picture layer of the encoded data
  • (b) shows the configuration of the slice layer included in the picture layer
  • (c) shows the LCU layer included in the slice layer
  • (d) shows the configuration of the leaf CU included in the CU layer
  • (e) shows the configuration of the inter prediction information for the leaf CU
  • (f) shows the configuration of the intra prediction information for the leaf CU
  • (g) shows the configuration of the filter parameter included in the slice header.
  • (ii) shows the configuration of the color difference filter information when the color difference filter mode alf_chroma_idc is 4 and the first color difference component is specified by sw_flag. Also referenced: a diagram showing each syntax contained in the color difference filter information of the encoded data according to the first embodiment, for the case where alf_chroma_idc is 4 and the first color difference component is designated in advance; a block diagram showing the configuration of the moving image decoding apparatus according to the first embodiment; and a block diagram showing the configuration of the adaptive filter provided in that moving image decoding apparatus.
  • (A) shows a process of deriving the filter coefficient group HCr and the filter coefficient group HCb from the filter coefficient group coeff_Cb and the filter coefficient group coeff_Cr included in the filter coefficient information when the color difference filter mode alf_chroma_idc is 1 or 2.
  • (B) shows the configuration and processing for deriving the filter coefficient group HCr and the filter coefficient group HCb from the filter coefficient group coeff_chroma included in the filter coefficient information when the color difference filter mode alf_chroma_idc is 3.
  • (A) is a block diagram of a color difference adaptive filter coefficient deriving unit when the color difference filter mode alf_chroma_idc is 4 and the first color difference component is designated in advance as the color difference component Cr.
  • (B) shows the configuration and processing of the color difference adaptive filter coefficient deriving unit when the color difference filter mode alf_chroma_idc is 4 and the first color difference component is designated in advance as the color difference component Cb. This is a diagram for explaining the color difference adaptive filter coefficient deriving unit provided in the adaptive filter according to the first embodiment.
  • the configuration and processing of the color difference adaptive filter coefficient deriving unit when the color difference filter mode alf_chroma_idc is 4 and the first color difference component C1 is specified by the assignment flag sw_flag are also shown, together with a table explaining the switch unit provided in the color difference adaptive filter unit according to the first embodiment.
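The coefficient-derivation behavior across the chroma filter modes can be sketched as follows. The mode semantics encoded here are our reading of the figure descriptions above (modes 1/2: component-specific groups; mode 3: one shared group; mode 4: one group assigned to the component selected by sw_flag) and are not normative:

```python
# Sketch of deriving the chroma filter coefficient groups HCr / HCb from the
# decoded filter coefficient information, depending on alf_chroma_idc.
# The mode semantics are assumed for illustration, not taken from the claims.

def derive_chroma_coeffs(alf_chroma_idc, info, sw_flag=None):
    """Return (H_Cr, H_Cb); None means that component is not filtered.
    'info' is a dict of coefficient groups decoded from the stream."""
    if alf_chroma_idc == 0:
        return None, None                      # chroma filtering off
    if alf_chroma_idc in (1, 2):
        # component-specific groups coeff_Cr / coeff_Cb
        return info.get('coeff_Cr'), info.get('coeff_Cb')
    if alf_chroma_idc == 3:
        shared = info['coeff_chroma']          # one group shared by Cr and Cb
        return shared, shared
    if alf_chroma_idc == 4:
        # one decoded group, assigned to the component selected by sw_flag
        g = info['coeff_chroma']
        return (g, None) if sw_flag == 'Cr' else (None, g)
    raise ValueError('unknown alf_chroma_idc')

hcr, hcb = derive_chroma_coeffs(3, {'coeff_chroma': [1, 2, 1]})
```

In mode 3 both outputs alias the same shared group, matching the conventional shared-coefficient behavior the invention generalizes.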
  • (A) shows a target unit region UR and, for each filter target pixel, the filter reference region R, i.e. the set of pixels referred to in order to calculate the pixel value of that filter target pixel in the target unit region UR.
  • (B) is a diagram illustrating the filter coefficients assigned to each pixel included in the filter reference region R. Also referenced: a block diagram showing the configuration of the moving image encoder according to the first embodiment, and a block diagram showing the configuration of the adaptive filter provided in that encoder.
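The weighted sum over the filter reference region R can be sketched as follows (our names; a 3x3 window and border clamping are illustrative assumptions, since the patent leaves the window shape to the coefficient group):

```python
# Illustrative 2-D filtering of one filter-target pixel: a weighted sum of
# the pixels in its reference region R (here a 3x3 window), with pixel
# positions clamped at the picture border.

def filter_pixel(plane, y, x, coeffs):
    h, w = len(plane), len(plane[0])
    acc = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            yy = min(max(y + dy, 0), h - 1)   # clamp at picture border
            xx = min(max(x + dx, 0), w - 1)
            acc += coeffs[dy + 1][dx + 1] * plane[yy][xx]
    return acc

# A box filter (all coefficients 1/9) over a constant plane leaves it unchanged,
# even at the corner where the window is clamped.
plane = [[8.0] * 4 for _ in range(4)]
coeffs = [[1 / 9.0] * 3 for _ in range(3)]
out = filter_pixel(plane, 0, 0, coeffs)
```

The constant-plane check is a convenient invariant: any coefficient group summing to 1 must reproduce a flat input.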
  • a block diagram shows the configuration of the color difference filter information determination unit provided in the adaptive filter according to the first embodiment. Another figure shows the data structure of the filter on/off information of the encoded data according to the second embodiment: (a) to (d) show the configuration of the filter on/off information for control patterns 1 to 4. Further figures show each syntax contained in the filter parameter of control patterns 1 and 2 of the encoded data according to the second embodiment.
  • a block diagram shows the configuration of the filter parameter determination unit provided in the adaptive filter according to the second embodiment. Another figure shows each syntax contained in the color difference filter information of the encoded data according to the first embodiment, for the case where the color difference filter mode alf_chroma_idc is 4 and the first color difference component is designated by the allocation flag sw_flag. A further figure explains the adaptive filter processing when applying the per-region on/off flags according to the second embodiment.
  • (A) is a diagram showing an example of the division, and (b) shows the target area.
  • (A) is a diagram showing the on/off state of the filter processing of the color difference component Cr and the color difference component Cb designated by the first filter mode alf_chroma_idc1.
  • (B) is a diagram showing the filter coefficient groups used in the filter processing of the color difference component Cr and the color difference component Cb designated by the second filter mode alf_chroma_idc2. Further figures show the configuration of a transmitter equipped with the moving image encoder according to the embodiment, and of a receiver equipped with the moving image decoder according to the embodiment.
  • (A) shows a transmitting apparatus equipped with a moving picture coding apparatus, and (b) shows a receiving apparatus equipped with a moving picture decoding apparatus; likewise, figures show the configuration of a recording apparatus carrying the moving image encoder according to the embodiment and of a reproducing apparatus carrying the moving image decoder according to the embodiment.
  • (A) shows a recording apparatus equipped with a moving picture coding apparatus, and (b) shows a reproduction apparatus equipped with a moving picture decoding apparatus.
  • the decoding apparatus according to the present embodiment decodes a moving image from encoded data; hereinafter, it is referred to as the "moving image decoding apparatus".
  • the encoding device according to the present embodiment generates encoded data by encoding a moving image; hereinafter, it is referred to as the "video encoding device".
  • the scope of application of the first embodiment of the present invention is not limited to this. That is, as will be apparent from the following description, the features of the present invention can be realized without assuming a plurality of frames. That is, the present invention can be applied to a general decoding apparatus and a general encoding apparatus regardless of whether the target is a moving image or a still image.
  • prior to the description of the video decoding device 1 according to the present embodiment, the configuration of the encoded data #1, which is generated by the video encoding device 2 according to the present embodiment and decoded by the video decoding device 1, will be described with reference to FIG. 1.
  • the encoded data # 1 has a hierarchical structure including a sequence layer, a GOP (Group Of Pictures) layer, a picture layer, a slice layer, and a maximum coding unit (LCU) layer.
  • GOP Group Of Pictures
  • FIG. 1 shows a hierarchical structure below the picture layer in the encoded data # 1.
  • FIGS. 1A to 1F show the structures of the picture layer P, the slice layer S, the LCU layer LCU, the leaf CU included in the LCU (denoted as CUL in FIG. 1D), the inter prediction information PI_Inter, which is the prediction information PI about an inter prediction (inter-screen prediction) partition, and the intra prediction information PI_Intra, which is the prediction information PI about an intra prediction (intra-screen prediction) partition.
  • the picture layer P is a set of data that is referenced by the video decoding device 1 in order to decode a target picture that is a processing target picture. As shown in FIG. 1A, the picture layer P includes a picture header PH and slice layers S1 to SNs (Ns is the total number of slice layers included in the picture layer P).
  • the picture header PH includes a coding parameter group referred to by the video decoding device 1 in order to determine a decoding method of the target picture.
  • the encoding mode information (entropy_coding_mode_flag) indicating the variable length encoding mode used in encoding by the moving image encoding device 2 is an example of an encoding parameter included in the picture header PH.
  • Each slice layer S included in the picture layer P is a set of data referred to by the video decoding device 1 in order to decode a target slice that is a slice to be processed.
  • the slice layer S includes a slice header SH and LCU layers LCU1 to LCUNc (Nc is the total number of LCUs included in the slice S).
  • the slice header SH includes a coding parameter group that the moving image decoding apparatus 1 refers to in order to determine a decoding method of the target slice.
  • Slice type designation information (slice_type) for designating a slice type is an example of an encoding parameter included in the slice header SH.
  • (1) an I slice that uses only intra prediction at the time of encoding, (2) a P slice that uses unidirectional prediction or intra prediction at the time of encoding, or (3) a B slice that uses unidirectional prediction, bidirectional prediction, or intra prediction at the time of encoding may be used.
  • the slice header SH includes a filter parameter FP that is referred to by an adaptive filter included in the video decoding device 1.
  • the configuration of the filter parameter FP will be described later and will not be described here.
  • Each LCU layer LCU included in the slice layer S is a set of data that the video decoding device 1 refers to in order to decode the target LCU that is the processing target LCU.
  • the LCU layer LCU is composed of a plurality of coding units (CU: Coding Units) obtained by hierarchically dividing the LCU into a quadtree.
  • the LCU layer LCU is a coding unit corresponding to the highest level in a hierarchical structure that recursively includes a plurality of CUs.
  • each CU included in the LCU layer LCU has a hierarchical structure that recursively includes a CU header CUH and a plurality of CUs obtained by dividing the CU into quadtrees.
  • a node of a CU having a recursive structure may be referred to as a coding tree.
  • each CU excluding the LCU is half the width and height of the CU to which it directly belongs (that is, the CU one layer higher), and the sizes that each CU can take are specified in encoded data #1.
  • a CU that is not further divided is called a leaf CU.
  • the CU header CUH includes a coding parameter referred to by the video decoding device 1 in order to determine a decoding method of the target CU. Specifically, as shown in FIG. 1C, a CU division flag SP_CU for specifying whether or not the target CU is further divided into four subordinate CUs is included. When the CU division flag SP_CU is 0, that is, when the CU is not further divided, the CU is a leaf CU.
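The recursive quadtree division driven by the SP_CU flags can be sketched as follows (our simplified bitstream model: the split flags are read one by one from a flat list in decoding order; the minimum CU size of 8 is an illustrative assumption):

```python
# Sketch of recursive quadtree CU parsing driven by SP_CU split flags.
# A leaf CU is represented by its size; a split CU by a list of four
# sub-CUs, each half the parent's width and height.

def parse_cu(flags, pos, size, min_size=8):
    """Return (tree, next_pos) for the CU starting at flags[pos]."""
    if size > min_size and flags[pos] == 1:      # SP_CU = 1: split further
        pos += 1
        children = []
        for _ in range(4):
            child, pos = parse_cu(flags, pos, size // 2, min_size)
            children.append(child)
        return children, pos
    if size > min_size:
        pos += 1                                 # consume SP_CU = 0
    return size, pos                             # leaf CU

# LCU of 64: split once; the first 32x32 sub-CU splits again, the rest are leaves.
tree, _ = parse_cu([1, 1, 0, 0, 0, 0, 0, 0, 0], 0, 64)
```

The resulting tree is [[16, 16, 16, 16], 32, 32, 32]: four 16x16 leaves in the first quadrant and three 32x32 leaves, exactly the "coding tree" nesting the text describes.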
  • a CU that is not further divided (a leaf CU) is handled as a prediction unit (PU: Prediction Unit) and a transform unit (TU: Transform Unit).
  • PU Prediction Unit
  • TU Transform Unit
  • the leaf CU (denoted as CUL in FIG. 1 (d)) includes (1) PU information PUI that is referred to when the moving image decoding apparatus 1 generates a predicted image, and (2) The TU information TUI that is referred to when the residual data is decoded by the moving picture decoding apparatus 1 is included.
  • the skip flag SKIP is a flag indicating whether or not the skip mode is applied to the target PU.
  • when the value of the skip flag SKIP is 1, that is, when the skip mode is applied to the target leaf CU, the PU information PUI and TU information TUI in that leaf CU are omitted. Note that the skip flag SKIP is omitted for I slices.
  • the PU information PUI includes a skip flag SKIP, prediction type information PT, and prediction information PI as shown in FIG.
  • the prediction type information PT is information that specifies whether intra prediction or inter prediction is used as a predicted image generation method for the target leaf CU (target PU).
  • the prediction information PI includes intra prediction information PI_Intra or inter prediction information PI_Inter depending on which prediction method is specified by the prediction type information PT.
  • a PU to which intra prediction is applied is also referred to as an intra PU
  • a PU to which inter prediction is applied is also referred to as an inter PU.
  • the PU information PUI includes information specifying the shape and size of each partition included in the target PU and the position in the target PU.
  • the partition is one or a plurality of non-overlapping areas constituting the target leaf CU, and the generation of the predicted image is performed in units of partitions.
  • the TU information TUI includes a quantization parameter difference Δqp (tu_qp_delta) that specifies the magnitude of the quantization step, TU partition information SP_TU that specifies a division pattern of the target leaf CU (target TU) into blocks, and quantized prediction residuals QD1 to QDNT.
  • the quantization parameter difference ⁇ qp is a difference qp ⁇ qp ′ between the quantization parameter qp in the target TU and the quantization parameter qp ′ in the TU encoded immediately before the TU.
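Since Δqp = qp − qp′, the decoder reconstructs each TU's quantization parameter by accumulating the decoded differences onto the previous TU's value. A minimal sketch (names are ours):

```python
# Reconstruct the per-TU quantization parameters from decoded tu_qp_delta
# values: delta = qp - qp_prev, so qp = qp_prev + delta.

def reconstruct_qps(qp_init, deltas):
    qps, qp = [], qp_init
    for d in deltas:
        qp += d
        qps.append(qp)
    return qps

qps = reconstruct_qps(26, [0, 2, -1])
```

Starting from qp = 26, deltas [0, 2, -1] yield the sequence 26, 28, 27: transmitting only the difference keeps the syntax cheap when qp changes rarely.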
  • TU partition information SP_TU is information that specifies the shape and size of each block included in the target TU and the position in the target TU.
  • Each TU can be, for example, a size from 64 ⁇ 64 pixels to 2 ⁇ 2 pixels.
  • the block is one or a plurality of non-overlapping areas constituting the target leaf CU, and the encoding and decoding of the prediction residual is performed in units of blocks.
  • Each quantized prediction residual QD is encoded data generated by the moving image encoding apparatus 2 performing the following processes 1 to 3 on a target block that is a processing target block.
  • Process 1: DCT (Discrete Cosine Transform) is applied to the prediction residual obtained by subtracting the prediction image from the encoding target image.
  • Process 2: The DCT coefficients obtained in Process 1 are quantized.
  • Process 3: The DCT coefficients quantized in Process 2 are variable-length encoded.
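Processes 1 and 2 can be illustrated with a toy 1-D encoder stage (our sketch: a real codec applies a 2-D integer transform, and the variable-length coding of Process 3 is reduced here to emitting the integer levels):

```python
import math

# Toy version of processes 1-2: DCT-transform the prediction residual,
# then quantize the coefficients to integer levels.

def dct_1d(x):
    """Orthonormal DCT-II of a length-N signal."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(scale * s)
    return out

def encode_block(target, prediction, qstep):
    residual = [t - p for t, p in zip(target, prediction)]   # input to process 1
    coeffs = dct_1d(residual)                                # process 1
    levels = [round(c / qstep) for c in coeffs]              # process 2
    return levels                                            # fed to process 3

levels = encode_block([10, 12, 14, 16], [10, 10, 10, 10], qstep=1.0)
```

The linear ramp residual [0, 2, 4, 6] compacts into two significant coefficients, which is the energy-compaction property that makes the transform-plus-quantize pipeline efficient.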
  • the inter prediction information PI_Inter includes coding parameters referred to when the video decoding device 1 generates an inter prediction image by inter prediction. As shown in FIG. 1(e), the inter prediction information PI_Inter includes inter PU partition information SP_Inter that specifies a partition pattern of the target PU, and inter prediction parameters PP_Inter1 to PP_InterNe for the respective partitions (Ne is the total number of inter prediction partitions included in the target PU).
  • the inter-PU partition information SP_Inter is information for designating the shape and size of each inter prediction partition included in the target PU (inter PU) and the position in the target PU.
  • an inter PU can be divided by the four symmetric splittings of 2N × 2N pixels, 2N × N pixels, N × 2N pixels, and N × N pixels, and the four asymmetric splittings of 2N × nU pixels, 2N × nD pixels, nL × 2N pixels, and nR × 2N pixels, into a total of eight kinds of partitions.
  • the specific value of N is defined by the size of the CU to which the PU belongs, and the specific values of nU, nD, nL, and nR are determined according to the value of N.
  • for example, an inter PU of 128 × 128 pixels can be divided into inter prediction partitions of 128 × 128 pixels, 128 × 64 pixels, 64 × 128 pixels, 64 × 64 pixels, 128 × 32 pixels, 128 × 96 pixels, 32 × 128 pixels, and 96 × 128 pixels.
  • the inter prediction parameter PP_Inter includes a reference image index RI, an estimated motion vector index PMVI, and a motion vector residual MVD.
  • the motion vector residual MVD is encoded data generated by the moving image encoding device 2 executing the following processes 4 to 6.
  • Process 4 An encoded/decoded locally decoded image (more precisely, an image obtained by applying deblocking processing and adaptive filtering to the encoded/decoded locally decoded image) is selected, and the motion vector mv for the target partition is derived with reference to the selected image (hereinafter also referred to as the "reference image").
  • Process 5 An estimation method is selected, and an estimated value (hereinafter also referred to as “estimated motion vector”) pmv of the motion vector mv assigned to the target partition is derived using the selected estimation method.
  • Process 6 The motion vector residual MVD obtained by subtracting the estimated motion vector pmv derived in Process 5 from the motion vector mv derived in Process 4 is encoded.
  • the reference image index RI designates the encoded/decoded locally decoded image (reference image) selected in Process 4.
  • the estimated motion vector index PMVI described above specifies the estimation method selected in Process 5.
  • the estimation methods that can be selected in Process 5 include: (1) in the locally decoded image being encoded/decoded (more precisely, in the region of that locally decoded image that has already been decoded), taking the median of the motion vectors allocated to partitions adjacent to the target partition (hereinafter also referred to as "adjacent partitions") as the estimated motion vector pmv; and (2) taking the motion vector assigned to the partition occupying the same position as the target partition (often referred to as the "collocated partition") as the estimated motion vector pmv.
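Estimation method (1) and Processes 5 and 6 can be sketched as follows. The three-neighbor median and the tuple representation of motion vectors are illustrative assumptions; the encoder/decoder relationship MVD = mv - pmv and mv = pmv + MVD follows the processes described above.

```python
def median3(a, b, c):
    return sorted((a, b, c))[1]

def estimate_pmv(neighbor_mvs):
    """Component-wise median of three adjacent partitions' motion vectors."""
    (x0, y0), (x1, y1), (x2, y2) = neighbor_mvs
    return (median3(x0, x1, x2), median3(y0, y1, y2))

def encode_mvd(mv, neighbor_mvs):
    """Processes 5/6: derive pmv, then encode the residual MVD = mv - pmv."""
    pmv = estimate_pmv(neighbor_mvs)
    return (mv[0] - pmv[0], mv[1] - pmv[1])

def decode_mv(mvd, neighbor_mvs):
    """Decoder side (cf. motion vector restoration unit 12): mv = pmv + MVD."""
    pmv = estimate_pmv(neighbor_mvs)
    return (pmv[0] + mvd[0], pmv[1] + mvd[1])

neighbors = [(4, -2), (6, 0), (5, 1)]   # illustrative adjacent-partition mvs
mv = (7, -1)
mvd = encode_mvd(mv, neighbors)         # pmv is (5, 0), so mvd is (2, -1)
```

Since both sides derive the same pmv from already-decoded data, transmitting only MVD suffices.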
  • the prediction parameters PP for a partition on which unidirectional prediction is performed include one reference image index RI, one estimated motion vector index PMVI, and one motion vector residual MVD.
  • the prediction parameters PP for a partition on which bidirectional prediction is performed include two reference image indexes RI1 and RI2, two estimated motion vector indexes PMVI1 and PMVI2, and two motion vector residuals MVD1 and MVD2.
  • the intra prediction information PI_Intra includes a coding parameter referred to when the video decoding device 1 generates an intra predicted image by intra prediction.
  • the intra prediction information PI_Intra includes intra PU partition information SP_Intra that specifies a partition pattern of the target PU (intra PU) into partitions, and intra prediction parameters PP_Intra1 to PP_IntraNa for each partition (Na is the total number of intra prediction partitions included in the target PU).
  • the intra-PU partition information SP_Intra is information that specifies the shape and size of each intra-predicted partition included in the target PU, and the position in the target PU.
  • the intra PU split information SP_Intra includes an intra split flag (intra_split_flag) that specifies whether or not the target PU is split into partitions. If the intra split flag is 1, the target PU is divided symmetrically into four partitions; if the intra split flag is 0, the target PU is not divided, and the target PU itself is treated as one partition.
  • here, N = 2^n, where n is an arbitrary integer of 1 or more.
  • a 128 ⁇ 128 pixel intra PU can be divided into 128 ⁇ 128 pixel and 64 ⁇ 64 pixel intra prediction partitions.
  • the intra prediction parameter PP_Intra includes an estimation flag MPM and a residual prediction mode index RIPM as shown in FIG.
  • the intra prediction parameter PP_Intra is a parameter for designating an intra prediction method (prediction mode) for each partition.
  • the estimation flag MPM is a flag indicating whether or not the prediction mode estimated based on the prediction modes assigned to partitions around the target partition to be processed is the same as the prediction mode for the target partition.
  • examples of partitions around the target partition include a partition adjacent to the upper side of the target partition and a partition adjacent to the left side of the target partition.
  • the residual prediction mode index RIPM is an index included in the intra prediction parameter PP_Intra when the estimated prediction mode differs from the prediction mode for the target partition, and it designates the prediction mode assigned to the target partition.
  • the slice header SH includes the filter parameter FP that is referred to by the adaptive filter included in the video decoding device 1.
  • FIG. 1G shows the data structure of the filter parameter FP.
  • the filter parameter FP includes filter on / off information, luminance filter information, and color difference filter information.
  • the filter on / off information includes (1) area designation information for designating a divided area included in the target slice, and (2) on / off information for designating on / off of the filter process for each divided area.
  • the luminance filter information includes luminance filter coefficient information
  • the chrominance filter information includes chrominance filter mode alf_chroma_idc and chrominance filter coefficient information for switching whether or not to perform filter processing on the chrominance component.
  • the color difference filter coefficient information includes the number of color difference component taps alf_length_chroma_minus5_div2 and a color difference filter coefficient group.
  • the filter coefficient group includes (1) filter coefficients a_0 to a_(NT-1) (NT is the total number of filter coefficients included in the filter coefficient group), and (2) an offset o.
  • the luminance filter information and the color difference filter information may alternatively have the following configuration.
  • Luminance filter information includes color component designation information, filter on / off information for luminance components, and filter coefficient groups for luminance components.
  • Color difference filter information includes color component designation information, filter on / off information for color difference components, and filter coefficient groups for color difference components.
  • the filter information may include independent filter on / off information for the luminance component and the color difference component.
  • the luminance component can be encoded in units of CU
  • the color difference component can be encoded in units of LCU.
  • FIG. 2 shows a data structure of color difference filter coefficient information for each alf_chroma_idc.
  • When alf_chroma_idc is 0, the color difference filter coefficient information is omitted, as shown in FIG. 2A.
  • When alf_chroma_idc is 1, as shown in FIG. 2B, the color difference filter coefficient information includes alf_length_chroma_minus5_div2 and a filter coefficient group alf_coeff_Cr for the color difference component Cr.
  • When alf_chroma_idc is 2, as shown in FIG. 2C, the color difference filter coefficient information includes alf_length_chroma_minus5_div2 and a filter coefficient group alf_coeff_Cb for the color difference component Cb.
  • When alf_chroma_idc is 3, the color difference filter coefficient information includes alf_length_chroma_minus5_div2 and a single filter coefficient group alf_coeff_chroma used for both color difference components.
  • When alf_chroma_idc is 4, the chrominance filter coefficient information includes alf_length_chroma_minus5_div2, an independent filter coefficient group alf_coeff_C1, and a dependent filter coefficient group alf_coeff_C2 for the chrominance components.
  • one of the two color difference components is expressed as the first color difference component C1, and the other as the second color difference component C2.
  • the filter coefficient group alf_coeff_C1 is a filter coefficient group for deriving the filter coefficients of the color difference component C1, and the filter coefficient group alf_coeff_C2 is a difference filter coefficient group for deriving the filter coefficients of the color difference component C2.
  • the color difference filter mode alf_chroma_idc has a role as color component designation information for designating a color component to be filtered as described above.
  • FIG. 2 (e) shows, as another configuration when alf_chroma_idc is 4, a configuration in which the chrominance filter coefficient information includes a flag sw_flag that specifies a chrominance component corresponding to the first chrominance component C1.
  • FIG. 3 is a diagram showing syntax included in the color difference filter information of the encoded data # 1 according to the present embodiment.
  • the color difference filter information of the encoded data # 1 according to the present embodiment includes (1) syntax alf_chroma_idc (the color difference filter mode) that specifies whether or not to perform filter processing on each color difference component, (2) syntax alf_length_chroma_minus5_div2 that specifies the filter length of the color difference adaptive filter, and (3) syntax that specifies the filter coefficient group.
  • the syntax specifying the filter coefficient group is alf_coeff_Cr[i] when alf_chroma_idc is 1, alf_coeff_Cb[i] when alf_chroma_idc is 2, alf_coeff_chroma[i] when alf_chroma_idc is 3, and alf_coeff_C1[i] and alf_coeff_C2[i] when alf_chroma_idc is 4.
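The syntax elements carried by the chrominance filter coefficient information for each alf_chroma_idc can be sketched as below. The function name and the list representation are illustrative; the sw_flag branch corresponds to the variant configuration of FIG. 25.

```python
def chroma_filter_syntax(alf_chroma_idc, sw_flag_supported=False):
    """List the syntax elements present in the chrominance filter
    coefficient information for a given alf_chroma_idc (cf. FIGS. 2-3)."""
    if alf_chroma_idc == 0:
        return []                         # coefficient information omitted
    elems = ["alf_length_chroma_minus5_div2"]
    if alf_chroma_idc == 1:
        elems.append("alf_coeff_Cr")      # Cr component only
    elif alf_chroma_idc == 2:
        elems.append("alf_coeff_Cb")      # Cb component only
    elif alf_chroma_idc == 3:
        elems.append("alf_coeff_chroma")  # one group shared by Cr and Cb
    elif alf_chroma_idc == 4:
        elems += ["alf_coeff_C1", "alf_coeff_C2"]  # independent + dependent
        if sw_flag_supported:
            elems.append("sw_flag")       # variant of FIG. 25
    return elems
```

Modes 3 and 4 both filter the two chrominance components; the difference is whether one shared group or a predictively coded pair of groups is transmitted.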
  • FIG. 25 is a diagram illustrating a syntax of a configuration including a flag sw_flag. In this configuration, when alf_chroma_idc is 4, sw_flag is included.
  • the moving picture decoding apparatus 1 is a decoding device that includes, as a part thereof, techniques adopted in H.264/MPEG-4 AVC and in the test model HM of HEVC (High Efficiency Video Coding), proposed by VCEG (Video Coding Expert Group) as a successor codec.
  • FIG. 4 is a block diagram showing a configuration of the moving picture decoding apparatus 1.
  • the moving image decoding apparatus 1 includes a variable length code decoding unit 11, a motion vector restoration unit 12, an inverse quantization / inverse transform unit 13, an adder 14, a buffer memory 15, an inter predicted image generation unit 16, an intra predicted image generation unit 17, a prediction method determination unit 18, a deblocking filter 19, and an adaptive filter 30.
  • the moving picture decoding apparatus 1 is an apparatus for generating moving picture # 2 by decoding encoded data # 1.
  • the variable-length code decoding unit 11 decodes the prediction parameter PP for each partition from the encoded data # 1. That is, for an inter prediction partition, it decodes the reference image index RI, the estimated motion vector index PMVI, and the motion vector residual MVD from the encoded data # 1, and supplies them to the motion vector restoration unit 12. For an intra prediction partition, it decodes (1) size designation information that designates the size of the partition and (2) prediction index designation information that designates the prediction index from the encoded data # 1, and supplies them to the intra predicted image generation unit 17.
  • the variable-length code decoding unit 11 also decodes the quantized prediction residual QD for each block and the quantization parameter difference Δqp for the macroblock including the block from the encoded data # 1, and supplies them to the inverse quantization / inverse transform unit 13. In addition, the variable length code decoding unit 11 decodes the filter parameter FP from the encoded data # 1 and supplies it to the adaptive filter 30.
  • the motion vector restoration unit 12 restores the motion vector mv of each inter prediction partition from the motion vector residual MVD of that partition and the restored motion vectors mv' of other partitions. Specifically, (1) the estimated motion vector pmv is derived from a restored motion vector mv' according to the estimation method specified by the estimated motion vector index PMVI, and (2) the motion vector mv is obtained by adding the derived estimated motion vector pmv and the motion vector residual MVD. Note that the motion vector restoration unit 12 can read the restored motion vectors mv' of other partitions from the buffer memory 15. The motion vector restoration unit 12 supplies the restored motion vector mv, together with the corresponding reference image index RI, to the inter predicted image generation unit 16.
  • the inter predicted image generation unit 16 generates an inter predicted image for each inter prediction partition. Specifically, using the motion vector mv supplied from the motion vector restoration unit 12, it generates a motion compensated image mc from the filtered decoded image P_FL' specified by the reference image index RI, which is also supplied from the motion vector restoration unit 12.
  • the filtered decoded image P_FL ′ is obtained by performing deblocking processing by the deblocking filter 19 and filtering processing by the adaptive filter 30 on the decoded image that has already been decoded for the entire frame.
  • the inter predicted image generation unit 16 can read out the pixel value of each pixel constituting the filtered decoded image P_FL ′ from the buffer memory 15.
  • the motion compensation image mc generated by the inter prediction image generation unit 16 is supplied to the prediction method determination unit 18 as an inter prediction image Pred_Inter.
  • the intra predicted image generation unit 17 generates a predicted image Pred_Intra related to each intra prediction partition. Specifically, first, the intra predicted image generation unit 17 specifies the prediction mode based on the intra prediction parameter PP_Intra supplied from the variable length code decoding unit 11. Next, the intra predicted image generation unit 17 assigns the identified prediction mode to the target partition, for example, in raster scan order.
  • the intra predicted image generation unit 17 specifies the prediction mode based on the intra prediction parameter PP_Intra as follows. (1) The estimation flag MPM is decoded; if the estimation flag MPM indicates that the prediction mode for the target partition to be processed is the same as the prediction mode assigned to a partition around the target partition, the prediction mode assigned to that surrounding partition is assigned to the target partition. (2) On the other hand, if the estimation flag MPM indicates that the prediction mode for the target partition is not the same as the prediction mode assigned to a partition around it, the residual prediction mode index RIPM is decoded, and the prediction mode indicated by the residual prediction mode index RIPM is assigned to the target partition.
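The branch on the estimation flag MPM can be sketched as below. How the estimated mode is formed from the surrounding partitions is not fixed by the description above, so taking the min() of the upper and left neighbors' modes is purely an illustrative placeholder; real codecs also re-index the residual mode around the estimated one, which this sketch omits.

```python
def estimate_mode(upper_mode, left_mode):
    # Illustrative placeholder: combine the surrounding partitions' modes
    # by taking the smaller mode number. The actual combining rule is an
    # assumption, not specified by the description above.
    return min(upper_mode, left_mode)

def decode_prediction_mode(mpm_flag, rem_mode, upper_mode, left_mode):
    """Assign a prediction mode to the target partition from MPM / RIPM."""
    est = estimate_mode(upper_mode, left_mode)
    if mpm_flag:        # estimation flag MPM: the estimated mode applies
        return est
    return rem_mode     # residual prediction mode index RIPM
```

When the estimate is usually correct, signaling one flag is cheaper than always transmitting a full mode index.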
  • the intra predicted image generation unit 17 generates a predicted image Pred_Intra from the (local) decoded image P by intra prediction according to the prediction method indicated by the prediction mode assigned to the target partition.
  • the intra predicted image Pred_Intra generated by the intra predicted image generation unit 17 is supplied to the prediction method determination unit 18.
  • the intra predicted image generation unit 17 may be configured to generate a predicted image Pred_Intra from the filtered decoded image P_FL by intra prediction.
  • the prediction method determination unit 18 determines, based on the prediction type information PT of the PU to which each partition belongs, whether each partition is an inter prediction partition for which inter prediction should be performed or an intra prediction partition for which intra prediction should be performed. In the former case, it supplies the inter predicted image Pred_Inter generated by the inter predicted image generation unit 16 to the adder 14 as the predicted image Pred; in the latter case, it supplies the intra predicted image Pred_Intra generated by the intra predicted image generation unit 17 to the adder 14 as the predicted image Pred.
  • the inverse quantization / inverse transform unit 13 (1) inverse quantizes the quantized prediction residual QD, (2) performs an inverse DCT (Discrete Cosine Transform) on the DCT coefficients obtained by the inverse quantization, and (3) supplies the prediction residual D obtained by the inverse DCT to the adder 14.
  • the inverse quantization / inverse transform unit 13 derives the quantization step QP from the quantization parameter difference Δqp supplied from the variable length code decoding unit 11.
  • the inverse quantization / inverse transform unit 13 derives the quantization parameter qp by adding the quantization parameter difference Δqp to the quantization parameter qp' of the TU that was inverse quantized / inverse DCT transformed immediately before.
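The differential derivation of the quantization parameter (qp = qp' + Δqp, chained from TU to TU) can be sketched as follows; the initial value and the Δqp sequence are illustrative.

```python
def decode_qps(qp_init, delta_qps):
    """Recover each TU's quantization parameter from the decoded
    differences: qp = qp' + delta_qp, where qp' is the qp of the TU
    processed immediately before (qp_init seeds the chain)."""
    qps = []
    qp_prev = qp_init
    for dqp in delta_qps:
        qp = qp_prev + dqp
        qps.append(qp)
        qp_prev = qp
    return qps

qps = decode_qps(26, [0, 2, -1, 3])   # -> [26, 28, 27, 30]
```

Transmitting Δqp rather than qp itself keeps the signaling cost low when the parameter changes slowly across TUs.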
  • the generation of the prediction residual D by the inverse quantization / inverse transform unit 13 is performed in units of TUs or of blocks obtained by dividing TUs.
  • the adder 14 generates the decoded image P by adding the prediction image Pred supplied from the prediction method determination unit 18 and the prediction residual D supplied from the inverse quantization / inverse conversion unit 13.
  • the generated decoded image P is stored in the buffer memory 15.
  • when the difference between the pixel values of pixels adjacent to each other across a block boundary or partition boundary in the decoded image P is smaller than a predetermined threshold, the deblocking filter 19 performs deblocking processing on that block boundary or partition boundary, thereby smoothing the image near the boundary. The image subjected to the deblocking process by the deblocking filter 19 is output to the adaptive filter 30 as the deblocked decoded image P_DB.
  • the adaptive filter 30 performs adaptive filter processing on the luminance component Y, the chrominance component Cr, and the chrominance component Cb that form the deblocked decoded image P_DB, and generates a filtered decoded image P_FL.
  • hereinafter, the luminance component Y, the chrominance component Cr, and the chrominance component Cb constituting the deblocked decoded image P_DB are also referred to simply as "component Y", "component Cr", and "component Cb", respectively.
  • FIG. 5 is a block diagram of the adaptive filter 30.
  • the adaptive filter 30 includes a color difference adaptive filter information decoding unit 31, a color difference adaptive filter unit 32, and a luminance adaptive filter unit 33.
  • the color difference adaptive filter information decoding unit 31 decodes alf_chroma_idc from the filter parameter FP, and further decodes the number of taps, the filter coefficient group HCr, and the filter coefficient group HCb according to the value of alf_chroma_idc.
  • the filter coefficient group HCr and the filter coefficient group HCb are filter coefficient groups for adaptive filter processing applied to the color difference component Cr and the color difference component Cb.
  • the color difference adaptive filter information decoding unit 31 includes a color difference adaptive filter coefficient deriving unit 311.
  • the filter coefficient group HCb and the filter coefficient group HCr derived by the color difference adaptive filter coefficient deriving unit 311 are supplied to the color difference adaptive filter unit 32.
  • the color difference adaptive filter coefficient deriving unit 311 will be described with reference to FIGS. The operation of the color difference adaptive filter coefficient deriving unit 311 is switched according to the value of alf_chroma_idc.
  • the color difference adaptive filter coefficient deriving unit 311 derives the tap number tap_chroma of the filter and the number of filter coefficients AlfNumCoeffChroma from alf_length_chroma_minus5_div2 by the following formula.
  • when alf_chroma_idc is 0, the filter coefficient group HCr and the filter coefficient group HCb are not derived.
  • when alf_chroma_idc is 1, the filter coefficient group HCr is derived by the following formula: HCr[i] = alf_coeff_Cr[i]
  • i is an integer from 0 to AlfNumCoeffChroma-1.
  • hereinafter, alf_coeff_Cr[i] may be simply expressed as "coeff_Cr" (the same applies to coeff_Cb and coeff_chroma).
  • when alf_chroma_idc is 2, the filter coefficient group HCb is derived by the following formula: HCb[i] = alf_coeff_Cb[i]
  • i is an integer from 0 to AlfNumCoeffChroma-1.
  • when alf_chroma_idc is 3, both the filter coefficient group HCr and the filter coefficient group HCb are derived.
  • the filter coefficient group alf_coeff_chroma decoded from the filter parameter FP is assigned to the filter coefficient group HCr and the filter coefficient group HCb according to the following expression.
  • HCr[i] = alf_coeff_chroma[i]
  • HCb[i] = alf_coeff_chroma[i]
  • i is an integer from 0 to AlfNumCoeffChroma-1.
  • when alf_chroma_idc is 4, the filter coefficient prediction processing unit 3111 included in the color difference adaptive filter coefficient deriving unit 311 subtracts the dependent filter coefficient group alf_coeff_C2[i] from the independent filter coefficient group alf_coeff_C1[i] element by element, thereby deriving the filter coefficient group HC2 of the second color difference component C2.
  • HC1[i] = alf_coeff_C1[i]
  • HC2[i] = alf_coeff_C1[i] - alf_coeff_C2[i]
  • i is an integer from 0 to AlfNumCoeffChroma-1.
  • the derived filter coefficient group HC1 and filter coefficient group HC2 are assigned to the filter coefficient group HCr and the filter coefficient group HCb according to the correspondence relationship between the first color difference component C1 and the second color difference component C2.
  • the moving picture coding apparatus 2 described later determines the value of the dependent filter coefficient group alf_coeff_C2[i] from the difference between the filter coefficient HC1[i] of the filter coefficient group of the first color difference component C1 and the filter coefficient HC2[i] of the filter coefficient group of the second color difference component C2, by the following equation.
  • alf_coeff_C2[i] = HC1[i] - HC2[i]
  • i is an integer from 0 to AlfNumCoeffChroma-1.
  • alternatively, the filter coefficient prediction processing unit 3111 may derive the filter coefficient group HC2 of the second color difference component C2 by adding the independent filter coefficient group alf_coeff_C1[i] and the dependent filter coefficient group alf_coeff_C2[i] element by element, as shown in the following expression.
  • HC2[i] = alf_coeff_C1[i] + alf_coeff_C2[i]
  • i is an integer from 0 to AlfNumCoeffChroma-1.
  • in this case, the moving picture encoding apparatus 2 to be described later determines the dependent filter coefficient group alf_coeff_C2[i] by calculating the difference between the two filter coefficient groups by the following formula.
  • alf_coeff_C2[i] = HC2[i] - HC1[i]
  • i is an integer from 0 to AlfNumCoeffChroma-1.
  • the color difference adaptive filter coefficient deriving unit 311 assigns a filter coefficient group HCr and a filter coefficient group HCb according to the following equations.
  • HCr[i] = HC1[i]
  • HCb[i] = HC2[i]
  • i is an integer from 0 to AlfNumCoeffChroma-1.
  • alternatively, the color difference adaptive filter coefficient deriving unit 311 may treat the first color difference component C1 as the color difference component Cb; in this case, the filter coefficient group HCr and the filter coefficient group HCb are assigned by the following equations.
  • HCb[i] = HC1[i]
  • HCr[i] = HC2[i]
  • i is an integer from 0 to AlfNumCoeffChroma-1.
  • the color difference filter coefficient assignment unit 3112 assigns the filter coefficient group HC1 and the filter coefficient group HC2 according to the value of sw_flag.
  • when sw_flag is 0, the color difference filter coefficient assignment unit 3112 assigns the filter coefficient group HC1 to the filter coefficient group HCr, and assigns the filter coefficient group HC2 to the filter coefficient group HCb.
  • when sw_flag is 1, the color difference filter coefficient assignment unit 3112 assigns the filter coefficient group HC1 to the filter coefficient group HCb, and assigns the filter coefficient group HC2 to the filter coefficient group HCr.
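The alf_chroma_idc = 4 derivation (subtraction variant, HC2[i] = alf_coeff_C1[i] - alf_coeff_C2[i]) together with the sw_flag assignment can be sketched as follows; plain Python lists stand in for the coefficient groups.

```python
def derive_chroma_filters(alf_coeff_c1, alf_coeff_c2, sw_flag=0):
    """Reconstruct HC1/HC2 from the independent and dependent groups,
    then map them onto (HCr, HCb) according to sw_flag."""
    hc1 = list(alf_coeff_c1)
    hc2 = [c1 - c2 for c1, c2 in zip(alf_coeff_c1, alf_coeff_c2)]
    if sw_flag == 0:
        return hc1, hc2   # HCr = HC1, HCb = HC2
    return hc2, hc1       # HCr = HC2, HCb = HC1

def encode_dependent_group(hc1, hc2):
    """Encoder side: alf_coeff_C2[i] = HC1[i] - HC2[i]."""
    return [a - b for a, b in zip(hc1, hc2)]

hc1 = [3, -1, 4]                          # illustrative coefficient values
hc2 = [2, 0, 4]
c2 = encode_dependent_group(hc1, hc2)     # small residuals: [1, -1, 0]
hcr, hcb = derive_chroma_filters(hc1, c2)
```

Because the two chrominance filters are usually similar, the dependent group carries small residual values, which cost fewer bits than a second full coefficient group.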
  • the color difference adaptive filter unit 32 performs adaptive filter processing on the color difference component Cr and the color difference component Cb of the deblocked decoded image P_DB, and generates the color difference component Cr and the color difference component Cb of the filtered decoded image P_FL. The generated color difference components of the filtered decoded image P_FL are stored in the buffer memory 15.
  • the color difference adaptive filter unit 32 includes a switch unit 321, a Cr filter processing unit 322, and a Cb filter processing unit 323.
  • the switch unit 321 includes a switch unit 321a and a switch unit 321b.
  • the switch unit 321 switches whether to perform adaptive filter processing on the color difference component Cr and the color difference component Cb of the deblocked decoded image P_DB. If the switch unit 321a is on, the switch unit 321a supplies the color difference component Cr of the deblocked decoded image P_DB to the Cr filter processing unit 322. If the switch unit 321b is on, the switch unit 321b supplies the color difference component Cb of the deblocked decoded image P_DB to the Cb filter processing unit 323.
  • FIG. 9 is a diagram illustrating a correspondence table between the color difference filter mode alf_chroma_idc and the on / off states of the switches 321a and 321b.
  • when alf_chroma_idc is 0, filter processing is performed on neither the color difference component Cr nor the color difference component Cb.
  • when alf_chroma_idc is 1, filter processing is performed on the color difference component Cr.
  • when alf_chroma_idc is 2, filter processing is performed on the color difference component Cb.
  • when alf_chroma_idc is 3 or 4, filter processing is performed on both the color difference component Cr and the color difference component Cb.
  • the switch unit 321a is turned on when alf_chroma_idc is 1, 3, or 4.
  • when the switch unit 321a is on, the color difference component Cr of the deblocked decoded image P_DB is supplied to the Cr filter processing unit 322.
  • when the switch unit 321a is off, the color difference component Cr of the deblocked decoded image P_DB is output as the color difference component Cr of the filtered decoded image P_FL.
  • the switch unit 321b is turned on when alf_chroma_idc is 2, 3, or 4.
  • when the switch unit 321b is on, the color difference component Cb of the deblocked decoded image P_DB is supplied to the Cb filter processing unit 323.
  • when the switch unit 321b is off, the color difference component Cb of the deblocked decoded image P_DB is output as the color difference component Cb of the filtered decoded image P_FL.
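The correspondence table of FIG. 9 and the behavior of the switch units 321a/321b can be sketched as below; the filter callables and one-dimensional "planes" are illustrative stand-ins for the Cr/Cb filter processing units and the image components.

```python
# (cr_on, cb_on) for each value of the color difference filter mode
# alf_chroma_idc, corresponding to switch units 321a / 321b (cf. FIG. 9).
SWITCH_TABLE = {
    0: (False, False),   # no chroma filtering
    1: (True,  False),   # Cr only
    2: (False, True),    # Cb only
    3: (True,  True),    # both, shared coefficient group
    4: (True,  True),    # both, predictively coded coefficient groups
}

def route_chroma(alf_chroma_idc, cr_db, cb_db, cr_filter, cb_filter):
    """Pass each chroma plane of P_DB through its filter only when the
    corresponding switch is on; otherwise output it unchanged."""
    cr_on, cb_on = SWITCH_TABLE[alf_chroma_idc]
    cr_out = cr_filter(cr_db) if cr_on else cr_db
    cb_out = cb_filter(cb_db) if cb_on else cb_db
    return cr_out, cb_out

boost = lambda plane: [p + 1 for p in plane]   # stand-in "filter"
filtered = route_chroma(1, [10], [20], boost, boost)
```

An off switch simply forwards the deblocked component to the output, exactly as described for the switch units above.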
  • the Cr filter processing unit 322, the Cb filter processing unit 323, and the luminance adaptive filter unit 33 perform adaptive filter processing as follows.
  • the pixel value of the filter target pixel is represented as SF(x', y'), and a pixel value in or around the target macroblock in the deblocked decoded image P_DB (hereinafter also referred to as the "pre-filter image") is represented as S(x, y). The filter processing units (the Cr filter processing unit 322, the Cb filter processing unit 323, and the luminance adaptive filter unit 33) calculate the pixel value SF(x', y') by the following equation (1):
  • SF(x', y') = Σ_{(i,j)∈R} aI(i, j) × S(x + i, y + j) + oI   … (1)
  • the coordinates (x, y) may be the same coordinates as the coordinates (x ′, y ′), or may be different coordinates as long as they have a one-to-one correspondence.
  • aI(i, j) represents the filter coefficient multiplied by the pixel value S(x + i, y + j) of the pre-filter image, and corresponds to a filter coefficient included in the luminance filter information or the color difference filter information of the filter parameter FP.
  • oI represents the offset included in the filter coefficient group I.
  • R represents an area referred to in the filter processing (hereinafter also referred to as “filter reference area R”), and is set according to the position of the filter target pixel.
  • the filter reference range (also referred to as “filter reference range RA”) is defined as the union of the filter reference regions R for each filter target pixel.
  • the filter reference range RA can also be expressed as a set of pixels required for calculating all pixel values of the filtered image in the target unit region.
  • FIG. 10A shows the filter reference region R and the filter reference range RA when the target unit region UR is 8 ⁇ 8 pixels and the filter reference region is 5 ⁇ 5 taps.
  • the hatched pixels indicate the filter target pixels S (x ′, y ′).
  • FIG. 10B shows an example of how to assign the filter coefficient to each pixel included in the filter reference region R. Also in FIG. 10B, the hatched pixels indicate the filter target pixels S (x ′, y ′). As shown in FIG. 10B, each filter coefficient can be assigned to each pixel included in the filter reference region R so as to have a rotational symmetry of 180 degrees.
  • the present embodiment is not limited to this, and the assignment of each filter coefficient to each pixel value may not have rotational symmetry.
  • the filter reference region R may be a rhombus-shaped region composed of pixels whose city-block distance from the filter target pixel is Ncb pixels or less, or a region having another shape.
  • the method of assigning filter coefficients to the pixels included in the filter reference region R and the shape of the filter reference region R may be set appropriately according to the configuration of the moving picture encoding device that generates the encoded data # 1.
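Equation (1) can be sketched per pixel as follows, assuming the simple case where the coordinates (x, y) and (x', y') coincide and the reference region R is given as a dictionary of coefficient offsets; the identity and 3 × 3 averaging filters are illustrative.

```python
def adaptive_filter_pixel(s, x, y, coeffs, offset):
    """Equation (1): SF(x', y') = sum over (i, j) in R of
    a(i, j) * S(x + i, y + j) + o.
    `coeffs` maps offsets (i, j) within R to filter coefficients a(i, j)."""
    acc = offset
    for (i, j), a in coeffs.items():
        acc += a * s[y + j][x + i]
    return acc

pre = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]                  # toy pre-filter image S

# Identity filter: center coefficient 1, offset 0 -> output equals input.
identity = {(0, 0): 1.0}

# 3x3 averaging filter over the reference region R.
avg = {(i, j): 1.0 / 9.0 for i in (-1, 0, 1) for j in (-1, 0, 1)}
smoothed = adaptive_filter_pixel(pre, 1, 1, avg, 0.0)
```

In the codec the coefficients come from the decoded filter coefficient group (HCr, HCb, or the luminance group) rather than being fixed, which is what makes the filter "adaptive".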
  • the Cr filter processing unit 322 illustrated in FIG. 5 uses the filter coefficient group HCr supplied from the color difference adaptive filter coefficient deriving unit 311 for the color difference component Cr of the deblocked decoded image P_DB supplied from the switch unit 321. An adaptive filter process is performed to generate a color difference component Cr of the filtered decoded image P_FL.
  • the Cb filter processing unit 323 illustrated in FIG. 5 performs adaptive filter processing on the color difference component Cb of the deblocked decoded image P_DB supplied from the switch unit 321, using the filter coefficient group HCb supplied from the color difference adaptive filter coefficient deriving unit 311, and generates the color difference component Cb of the filtered decoded image P_FL.
• the luminance adaptive filter unit 33 illustrated in FIG. 5 performs adaptive filter processing on the luminance component Y of the deblocked decoded image P_DB using the luminance filter information included in the filter parameter FP, and generates the luminance component Y of the filtered decoded image P_FL.
• the moving picture encoding apparatus 2 is an encoding device that includes, as parts thereof, techniques adopted in H.264 / MPEG-4 AVC and by VCEG (Video Coding Expert Group), as well as techniques adopted in the test model HM of HEVC (High Efficiency Video Coding), which has been proposed as a successor codec.
  • FIG. 11 is a block diagram showing a configuration of the moving picture coding apparatus 2.
  • the moving image encoding device 2 includes a transform / quantization unit 21, a variable length code encoding unit 22, an inverse quantization / inverse transform unit 23, a buffer memory 24, an intra-predicted image generation unit 25,
  • the inter prediction image generation unit 26, the motion vector detection unit 27, the prediction scheme control unit 28, the motion vector redundancy deletion unit 29, an adder 41, a subtractor 42, a deblocking filter 43, and an adaptive filter 44 are provided.
  • the moving image encoding device 2 is a device that generates encoded data # 1 by encoding moving image # 10 (encoding target image).
• the transform / quantization unit 21 (1) performs a DCT (Discrete Cosine Transform) on the prediction residual D obtained by subtracting the predicted image Pred from the encoding target image, (2) quantizes the DCT coefficients obtained by the DCT, and (3) supplies the quantized prediction residual QD obtained by the quantization to the variable length code encoding unit 22 and the inverse quantization / inverse transform unit 23.
• the transform / quantization unit 21 also (1) selects, for each TU, the quantization step QP used for the quantization, (2) supplies a quantization parameter difference Δqp indicating the magnitude of the selected quantization step QP to the variable length code encoding unit 22, and (3) supplies the selected quantization step QP to the inverse quantization / inverse transform unit 23. Here, the quantization parameter difference Δqp is the difference value obtained by subtracting the value of the quantization parameter associated with the immediately previously encoded TU from the value of the quantization parameter associated with the target TU.
• the variable length code encoding unit 22 generates encoded data # 1 by variable-length encoding (1) the quantized prediction residual QD and Δqp supplied from the transform / quantization unit 21, (2) the prediction parameter PP supplied from the prediction scheme control unit 28 described later, and (3) the filter parameter FP supplied from the adaptive filter 44 described later.
• the inverse quantization / inverse transform unit 23 (1) inversely quantizes the quantized prediction residual QD, (2) performs an inverse DCT (Discrete Cosine Transform) on the DCT coefficients obtained by the inverse quantization, and (3) supplies the prediction residual D obtained by the inverse DCT to the adder 41. For the inverse quantization, the quantization step QP supplied from the transform / quantization unit 21 is used.
• Strictly speaking, the prediction residual D output from the inverse quantization / inverse transform unit 23 differs from the prediction residual D input to the transform / quantization unit 21 in that a quantization error has been added; however, a common name is used here for simplicity.
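The quantization error mentioned above can be seen with a minimal sketch in which plain scalar quantization stands in for the DCT-plus-quantization of the transform / quantization unit 21; the step size and residual value are purely illustrative.

```python
# Minimal sketch: why the residual recovered by inverse quantization /
# inverse transform differs from the original residual by a quantization
# error. Scalar quantization stands in for DCT + quantization.

def quantize(value, step):
    return int(round(value / step))

def dequantize(level, step):
    return level * step

residual = 7.0
step = 3.0
level = quantize(residual, step)       # quantized prediction residual
recovered = dequantize(level, step)    # what the decoder reconstructs
error = residual - recovered           # the quantization error
```

Because `recovered` is generally not equal to `residual`, the encoder feeds the same reconstructed residual into its own prediction loop, keeping encoder and decoder in step.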
  • the intra predicted image generation unit 25 generates a predicted image Pred_Intra related to each partition. Specifically, (1) a prediction mode used for intra prediction is selected for each partition, and (2) a prediction image Pred_Intra is generated from the decoded image P using the selected prediction mode. The intra predicted image generation unit 25 supplies the generated intra predicted image Pred_Intra to the prediction method control unit 28.
• the intra predicted image generation unit 25 also specifies the prediction index PI of each partition from the prediction mode selected for the partition and the size of the partition, and supplies prediction index designation information indicating the prediction index PI of each partition to the prediction scheme control unit 28.
• the motion vector detection unit 27 detects a motion vector mv for each partition. Specifically, it (1) selects the filtered decoded image P_FL ′ to be used as a reference image, and (2) detects the motion vector mv of the target partition by searching the selected filtered decoded image P_FL ′ for the region that best approximates the target partition.
  • the filtered decoded image P_FL ′ is obtained by performing deblocking processing by the deblocking filter 43 and adaptive filtering processing by the adaptive filter 44 on the decoded image that has already been decoded.
  • the motion vector detection unit 27 can read out the pixel value of each pixel constituting the filtered decoded image P_FL ′ from the buffer memory 24.
• the motion vector detection unit 27 supplies the detected motion vector mv, together with the reference image index RI that specifies the filtered decoded image P_FL ′ used as the reference image, to the inter predicted image generation unit 26 and the motion vector redundancy deletion unit 29. Note that, for a partition on which bi-directional prediction (weighted prediction) is performed, the motion vector detection unit 27 selects two filtered decoded images P_FL1 ′ and P_FL2 ′ as reference images.
• In this case, the motion vector detection unit 27 supplies the motion vectors mv1 and mv2 and the reference image indexes RI1 and RI2, corresponding to the two filtered decoded images P_FL1 ′ and P_FL2 ′, to the inter predicted image generation unit 26 and the motion vector redundancy deletion unit 29.
• the inter predicted image generation unit 26 generates an inter predicted image Pred_Inter relating to each inter prediction partition by motion compensation.
• Specifically, for a partition on which weighted prediction is performed, the inter predicted image generation unit 26 (1) generates a motion compensated image mc1 from the filtered decoded image P_FL1 ′ specified by the reference image index RI1, using the motion vector mv1, (2) generates a motion compensated image mc2 from the filtered decoded image P_FL2 ′ specified by the reference image index RI2, using the motion vector mv2, and (3) generates the inter predicted image Pred_Inter by adding an offset value to the weighted average of the motion compensated image mc1 and the motion compensated image mc2.
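Step (3) above can be sketched as follows; the equal weights and zero default offset are illustrative assumptions, since the actual weights and offset are signaled in the stream rather than fixed.

```python
# Hedged sketch of step (3) for a bi-directional (weighted) prediction
# partition: the inter predicted image is a weighted average of the two
# motion-compensated images mc1 and mc2 plus an offset value.

def weighted_biprediction(mc1, mc2, w1=0.5, w2=0.5, offset=0):
    """Per-pixel weighted average of two motion-compensated images."""
    return [w1 * a + w2 * b + offset for a, b in zip(mc1, mc2)]
```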
• the prediction scheme control unit 28 compares the intra predicted image Pred_Intra and the inter predicted image Pred_Inter with the encoding target image, and selects whether to perform intra prediction or inter prediction. When intra prediction is selected, the prediction scheme control unit 28 supplies the intra predicted image Pred_Intra as the predicted image Pred to the adder 41 and the subtractor 42, and supplies the prediction index PI supplied from the intra predicted image generation unit 25 to the variable length code encoding unit 22 as the prediction parameter PP.
• When inter prediction is selected, the prediction scheme control unit 28 supplies the inter predicted image Pred_Inter as the predicted image Pred to the adder 41 and the subtractor 42, and supplies the reference image index RI supplied from the inter predicted image generation unit 26, together with the estimated motion vector index PMVI and the motion vector residual MVD supplied from the motion vector redundancy deletion unit 29 (described later), to the variable length code encoding unit 22 as the prediction parameters PP.
  • the prediction residual D is generated by the subtractor 42 by subtracting the prediction image Pred selected by the prediction method control unit 28 from the encoding target image.
  • the prediction residual D generated by the subtractor 42 is DCT transformed / quantized by the transform / quantization unit 21 as described above.
• the adder 41 generates a local decoded image P by adding the predicted image Pred selected by the prediction scheme control unit 28 to the prediction residual D generated by the inverse quantization / inverse transform unit 23.
  • the local decoded image P generated by the adder 41 is stored in the buffer memory 24 as a filtered decoded image P_FL after passing through the deblocking filter 43 and the adaptive filter 44. This filtered decoded image P_FL is used as a reference image in inter prediction.
  • the motion vector redundancy deletion unit 29 deletes redundancy in the motion vector mv detected by the motion vector detection unit 27. Specifically, (1) an estimation method used for estimating the motion vector mv is selected, (2) an estimated motion vector pmv is derived according to the selected estimation method, and (3) the estimated motion vector pmv is subtracted from the motion vector mv. As a result, a motion vector residual MVD is generated. The motion vector redundancy deletion unit 29 supplies the generated motion vector residual MVD to the prediction method control unit 28 together with the estimated motion vector index PMVI indicating the selected estimation method.
• when the difference between the pixel values of pixels adjacent to each other across a block boundary or a macroblock boundary in the decoded image P is smaller than a predetermined threshold value, the deblocking filter 43 performs deblocking processing on that block boundary or macroblock boundary, thereby smoothing the image in the vicinity of the boundary.
  • the image subjected to the deblocking process by the deblocking filter 43 is output to the adaptive filter 44 as a deblocked decoded image P_DB.
  • FIG. 12 is a block diagram showing the configuration of the adaptive filter 44.
  • the adaptive filter 44 includes a color difference adaptive filter unit 441 and a luminance adaptive filter unit 444.
  • the adaptive filter 44 generates a filtered decoded image P_FL by performing an adaptive filter process on the deblocked decoded image P_DB supplied from the deblocking filter 43.
  • the filtered decoded image P_FL that has been filtered by the adaptive filter 44 is stored in the buffer memory 24.
  • the adaptive filter 44 generates a filter parameter FP and supplies it to the variable length code encoding unit 22.
  • the color difference adaptive filter unit 441 includes a color difference filter information determination unit 442 and an adaptive filter unit 443.
• the color difference adaptive filter unit 441 generates the color difference filter information and the color difference component Cr and color difference component Cb of the filtered decoded image P_FL, using the color difference component Cr and color difference component Cb of the deblocked decoded image P_DB and the color difference component Cr and color difference component Cb of the moving image # 10.
  • the color difference filter information determination unit 442 includes a color difference filter coefficient calculation unit 4421 and a color difference filter coefficient determination unit 4422.
  • the color difference filter information determination unit 442 determines the color difference filter information and supplies it to the adaptive filter unit 443 and the variable length code encoding unit 22.
• the color difference filter coefficient calculation unit 4421 calculates a filter coefficient group so that the square error between the color difference component of the deblocked decoded image P_DB and the color difference component of the encoding target image (moving image # 10) is minimized.
  • the color difference filter coefficient calculation unit 4421 includes a common filter coefficient calculation unit (not shown).
• the common filter coefficient calculation unit is a means used in common for the luminance component and the color difference components, and calculates a filter coefficient group so that the square error between the deblocked decoded image and the encoding target image, for the designated color component and in the designated region, is minimized.
• calculating the filter coefficient group so that the square error is minimized can also be expressed as calculating the filter coefficient group so that the square error becomes smaller (the same applies hereinafter).
• the color difference filter coefficient calculation unit 4421 can designate the luminance component Y, the color difference component Cb, the color difference component Cr, or both the color difference component Cb and the color difference component Cr, and the respective square errors are expressed by the following equations (2), (3), (4), and (5).
• Here, S (x, y), SCb (x, y), and SCr (x, y) represent the pixel values of the luminance component Y, the color difference component Cb, and the color difference component Cr, respectively, of the pixel whose coordinates are (x, y) among the pixels included in the deblocked decoded image P_DB.
• Similarly, ST (x, y), STCb (x, y), and STCr (x, y) represent the pixel values of the luminance component Y, the color difference component Cb, and the color difference component Cr, respectively, of the pixel whose coordinates are (x, y) among the pixels included in the encoding target image.
  • the filter coefficient calculation unit 4421 performs the following three calculations.
  • (Calculation 1) The filter coefficient groups aI (i, j) and oI obtained by minimizing the expression (3) are inversely quantized to obtain alf_coeff_Cr.
  • (Calculation 2) The filter coefficient groups aI (i, j) and oI obtained by minimizing the equation (4) are inversely quantized to obtain alf_coeff_Cb.
• (Calculation 3) The filter coefficient groups aI (i, j) and oI obtained by minimizing the expression (5) are inversely quantized to obtain alf_coeff_chroma.
• the inverse quantization process here is performed according to the quantization bit depth filter_bitdepth of the filter coefficients. Specifically, the reciprocal of the quantization step determined from the quantization bit depth, 1 << filter_bitdepth, is applied according to the following equation (6).
• Here, coeff represents one of alf_coeff_Cr, alf_coeff_Cb, and alf_coeff_chroma, i represents a subscript of the filter coefficient a shown in the figure, and k1 and k2 are determined so that their positional relationship corresponds to i.
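The fixed-point convention implied by the quantization bit depth filter_bitdepth can be sketched as follows; the rounding direction and the specific fixed-point representation are assumptions for illustration, not the patent's exact equation (6).

```python
# Hedged sketch of filter-coefficient quantization with bit depth
# filter_bitdepth: the quantization step is 1 / (1 << filter_bitdepth),
# so its reciprocal (1 << filter_bitdepth) converts real coefficients
# to integer codes, and division converts them back.

def quantize_coeff(a, filter_bitdepth):
    """Real-valued coefficient -> integer code."""
    return int(round(a * (1 << filter_bitdepth)))

def dequantize_coeff(coeff, filter_bitdepth):
    """Integer code -> fixed-point value used in filtering."""
    return coeff / (1 << filter_bitdepth)
```

For example, with filter_bitdepth = 8 the step is 1/256, so a coefficient of 0.25 is carried in the stream as the integer 64.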
  • the color difference filter coefficient determination unit 4422 receives the filter coefficient group alf_coeff_Cr, the filter coefficient group alf_coeff_Cb, and the filter coefficient group alf_coeff_chroma from the color difference filter coefficient calculation unit 4421.
  • the chrominance filter coefficient determination unit 4422 obtains each encoding cost in each mode of alf_chroma_idc by the following processing 1 to processing 5, and obtains the optimum alf_chroma_idc by processing 6.
• the chrominance filter coefficient determination unit 4422 derives each encoding cost as a rate-distortion cost calculated from the square error obtained by comparing the image obtained by the adaptive filter processing of the corresponding mode with the encoding target image, and from the code amount of the filter coefficient group used in that mode, as follows.
• the encoding cost 2 of mode 2 is obtained from the square error when the adaptive filter processing is applied only to the color difference component Cb using the filter coefficient group alf_coeff_Cb, and from the code amount of the filter coefficient group of the color difference component Cb.
• the encoding cost 3 of mode 3 is obtained from the square error when the adaptive filter processing is applied to the color difference component Cr and the color difference component Cb using the filter coefficient group alf_coeff_chroma, and from the code amount of the filter coefficient group alf_coeff_chroma.
• The mode K with the smallest encoding cost is selected, and alf_chroma_idc is set to K.
  • the chrominance filter coefficient determination unit 4422 performs the following processing 7, processing 8, and processing 9 instead of processing 5 and processing 6 to obtain optimal alf_chroma_idc and sw_flag.
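The mode decision described above can be sketched as a rate-distortion comparison. The lambda value, the cost numbers, and the mapping of mode numbers to components are illustrative assumptions; only the principle (cost combines squared error and filter-coefficient code amount, and the cheapest mode wins) comes from the text.

```python
# Hypothetical sketch of the alf_chroma_idc mode decision: each mode's
# rate-distortion cost J = D + lambda * R combines the squared error D
# of the filtered image and the code amount R of its filter coefficients.

def rd_cost(sq_error, code_amount_bits, lam):
    return sq_error + lam * code_amount_bits

def best_chroma_mode(costs):
    """Return the alf_chroma_idc value whose encoding cost is smallest."""
    return min(costs, key=costs.get)

costs = {
    0: rd_cost(900.0, 0, lam=10.0),    # no chroma filtering
    2: rd_cost(500.0, 20, lam=10.0),   # Cb only, alf_coeff_Cb coded
    3: rd_cost(250.0, 40, lam=10.0),   # Cr and Cb, alf_coeff_chroma coded
}
```

Here mode 3 wins: the extra 40 bits of coefficients are more than paid for by the distortion reduction.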
  • the adaptive filter unit 443 illustrated in FIG. 12 performs adaptive filter processing on the color difference component Cr and the color difference component Cb of the deblocked decoded image P_DB using the color difference filter information supplied from the color difference filter information determination unit 442. Then, the color difference component Cr and the color difference component Cb of the filtered decoded image P_FL are generated. Since the process of the adaptive filter unit 443 is the same as the process of the color difference adaptive filter unit 32 described above, the description thereof is omitted here.
  • the luminance adaptive filter unit 444 illustrated in FIG. 12 uses the luminance filter information calculated from the moving image # 10 and the luminance component Y of the deblocked decoded image P_DB, to the luminance component Y of the deblocked decoded image P_DB. Apply adaptive filtering.
  • the luminance adaptive filter unit 444 generates a luminance component Y, filter on / off information, and luminance filter information of the filtered decoded image P_FL. Note that the process of the luminance adaptive filter unit 444 is the same as the process of the luminance adaptive filter unit 33 described above, and thus the description thereof is omitted here.
• different filter coefficient groups can be used as the filter coefficient group of the color difference component Cr and the filter coefficient group of the color difference component Cb. Therefore, compared with the case where a common filter coefficient group is used between the color difference components, the image quality of the color difference components can be improved.
• Moreover, since one filter coefficient group is encoded as a difference from the other filter coefficient group, the code amount of the filter coefficients can be reduced.
• Furthermore, the code amount of the filter coefficients can be further reduced by appropriately selecting which color difference component is encoded as the independent filter coefficient group, that is, the one not encoded as difference values.
• Note that the difference values between the filter coefficient group HCb of the chrominance component Cb and the filter coefficient group HCr of the chrominance component Cr do not change regardless of which filter coefficient group is encoded as the non-difference component. That is, whether the difference values are calculated as HCb [i] − HCr [i] or as HCr [i] − HCb [i], only the sign is reversed and the absolute values are equal. For this reason, when the difference values are encoded, both code amounts are equal.
• On the other hand, since the filter coefficient group HCr of the chrominance component Cr and the filter coefficient group HCb of the chrominance component Cb generally have different values, the total code amount differs depending on which of them is encoded as the independent filter coefficient group. Accordingly, the code amount can be reduced by storing the filter coefficient group of the color difference component with the smaller code amount as the filter coefficient group of the first color difference component C1, treating the other color difference component as the second color difference component C2, and decoding its filter coefficient group from the difference values between the color difference components. In fact, according to knowledge obtained by the inventor through experiments, the code amounts of the filter coefficient group HCr and the filter coefficient group HCb may differ by a factor of two; therefore, the code amount can be reduced by specifying the non-difference component using sw_flag.
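The selection of sw_flag can be sketched as follows. The toy length model standing in for the real entropy coder is an assumption for illustration; the key property shown is the one stated above, namely that the difference values cost the same either way, so only the independently coded group's size decides the flag.

```python
# Sketch of choosing which chroma filter coefficient group is coded
# independently (sw_flag): HCb - HCr and HCr - HCb have equal absolute
# values, so only the independent group's code amount varies.

def code_len(v):
    """Toy length model: small magnitudes cost fewer bits."""
    return 1 + 2 * abs(v).bit_length()

def group_bits(coeffs):
    return sum(code_len(c) for c in coeffs)

def choose_independent(HCr, HCb):
    """Return sw_flag: 0 -> HCr coded independently, 1 -> HCb."""
    diff = [b - r for r, b in zip(HCr, HCb)]   # same |diff| either way
    bits_if_cr = group_bits(HCr) + group_bits(diff)
    bits_if_cb = group_bits(HCb) + group_bits(diff)
    return 0 if bits_if_cr <= bits_if_cb else 1
```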
  • the method of deriving the filter coefficient group HC2 from the dependent filter coefficient group alf_coeff_C2 when alf_chroma_idc is 4 is not limited to the above description.
• In the above description, all the filter coefficients of the filter coefficient group HC2 are derived from the difference between alf_coeff_C2 and alf_coeff_C1, but a configuration may be used in which some filter coefficients are derived from the difference between alf_coeff_C2 and alf_coeff_C1 and the remaining filter coefficients are derived from alf_coeff_C2 alone. That is, a configuration may be adopted in which some filter coefficients of the filter coefficient group HC2 are encoded as difference values, and the other filter coefficients are encoded as values that are not difference values (non-difference values).
• For example, it is possible to encode the filter coefficients other than the offset as difference values and to encode the offset as a non-difference value. Here, the coefficient with index AlfNumCoeffChroma-1 is the offset.
  • the calculation method of the filter coefficient group HC2 is expressed by the following equation (7).
  • i is an integer from 0 to AlfNumCoeffChroma-1.
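The derivation of HC2 with the offset handled as a non-difference value can be sketched as follows; the loop structure mirrors the description around equation (7), while the concrete array size and values are illustrative assumptions.

```python
# Hedged sketch of deriving the filter coefficient group HC2: taps other
# than the offset are decoded as differences from HC1, while the last
# coefficient (index AlfNumCoeffChroma - 1, the offset) is decoded as a
# non-difference value.

AlfNumCoeffChroma = 5  # illustrative group size

def derive_HC2(alf_coeff_C2, HC1):
    HC2 = []
    for i in range(AlfNumCoeffChroma):
        if i == AlfNumCoeffChroma - 1:   # offset: non-difference value
            HC2.append(alf_coeff_C2[i])
        else:                            # other taps: difference value
            HC2.append(alf_coeff_C2[i] + HC1[i])
    return HC2
```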
• Also, a differential encoding flag indicating whether the offset component is encoded as a difference value or as a non-difference value may be included in the encoded data # 1, and the moving image decoding apparatus may change the decoding method of the filter coefficient according to the decoded value of the flag. This is based on the inventor's finding that there is generally often no correlation between the offsets of different color difference components. When there is no correlation, taking the difference between the offsets may increase the absolute value of the difference value.
• In general, the filter coefficients of a filter coefficient group, including the offset, are encoded so that values close to 0 are expressed by short code lengths, exploiting their concentration around 0. With such an encoding method, a difference value with an increased absolute value leads to an increase in the code amount; it is therefore appropriate not to use a difference value for the offset.
• With such a differential encoding flag, the code amount can be reduced not only when the offsets of different filter coefficient groups are not close to each other but also when they are. For this reason, the method using a differential encoding flag indicating whether or not to take the difference value of the offset is not limited to encoding filter coefficients between the chrominance components; it can be used generally whenever a filter coefficient is decoded using a difference from an already decoded filter coefficient. For example, it can be used when decoding a filter coefficient from an already decoded filter coefficient of a previous slice, or from an already decoded filter coefficient of the current slice.
  • the color space handled in the first embodiment is not limited to the YUV space, but can be extended to other color spaces, for example, RGB, CMYK, and XYZ.
• In that case, a flag indicating which components among at least two color components included in the color space are subjected to the filter processing (a flag corresponding to alf_chroma_idc, more generally referred to as a filter mode) is decoded. This also means that the scheme is not limited to filtering between U and V in the YUV space, but can be extended to filter processing between Y and U, between Y and V, and among Y, U, and V.
• In addition, the filter coefficients of a specific color component are encoded as an independent filter coefficient group, and the filter coefficients of the other color components are encoded as dependent filter coefficient groups; in this way, the code amount of the filter coefficients can be reduced.
• Furthermore, a flag corresponding to sw_flag can be used to specify which color component is encoded as the independent filter coefficient group.
• the color difference filter mode alf_chroma_idc serves to indicate the following two pieces of information, and it can instead be encoded as two separate pieces of information that fulfill the following functions 1 and 2.
  • (Function 1) A function indicating whether or not to perform filter processing on each color component in at least two color components to be filtered by the adaptive filter 30.
• (Function 2) A function indicating, when there are two color components to be filtered by the adaptive filter 30, whether a common filter coefficient group is used for the color components or individually derived filter coefficient groups are used.
  • FIG. 29 shows the syntax of filter parameters when the first filter mode alf_chroma_idc1 and the second filter mode alf_chroma_idc2 are used instead of the color difference filter mode alf_chroma_idc.
• the first filter mode alf_chroma_idc1 takes a value from 0 to 3, and the second filter mode alf_chroma_idc2 takes the value 0 or 1.
  • the first filter mode alf_chroma_idc1 indicates whether or not to perform filter processing on each color component of the color difference component Cr and the color difference component Cb, and plays the role of the function 1 described above.
  • the second filter mode alf_chroma_idc2 indicates whether to use a shared filter coefficient group between two color difference components or to use individually derived filter coefficient groups, and plays the role of the function 2 described above.
• the color difference adaptive filter coefficient deriving unit 311 does not decode any filter coefficient group when the first filter mode alf_chroma_idc1 is 0. When the first filter mode specifies filter processing only for the color difference component Cr, the filter coefficient group HCr is decoded from alf_coeff_Cr; when it specifies filter processing only for the color difference component Cb, the filter coefficient group HCb is decoded from alf_coeff_Cb. When filter processing is specified for both color difference components, the second filter mode alf_chroma_idc2 is further decoded. When the second filter mode specifies a shared filter coefficient group, the filter coefficient group HCr and the filter coefficient group HCb are decoded from alf_coeff_chroma. When individually derived filter coefficient groups are used, the filter coefficient group HCr and the filter coefficient group HCb can also be decoded from the independent filter coefficient group alf_coeff_C1 and the dependent filter coefficient group alf_coeff_C2.
• FIG. 30A shows the relationship between the value of the first filter mode alf_chroma_idc1 and the presence or absence of filter processing for the color difference component Cr and the color difference component Cb.
  • FIG. 30B shows the relationship between the second filter mode alf_chroma_idc2 and the filter coefficient group included in the encoded data # 1.
• When the second filter mode alf_chroma_idc2 is 0, one common filter coefficient group alf_coeff_chroma is included in the encoded data # 1, and that filter coefficient group is used for the filter processing of both the color difference component Cr and the color difference component Cb.
• When the second filter mode alf_chroma_idc2 is 1, two filter coefficient groups, alf_coeff_Cr and alf_coeff_Cb, are included in the encoded data # 1, and the corresponding filter coefficient group is used for the filter processing of each color difference component.
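The decoding flow for the split flags alf_chroma_idc1 and alf_chroma_idc2 can be sketched as follows. The mapping of alf_chroma_idc1 values 1 and 2 to "Cr only" and "Cb only" is an assumption for illustration (the patent defines the actual mapping in FIG. 30A), and the bitstream is modeled as a plain dictionary.

```python
# Hedged sketch of chroma filter coefficient decoding with two flags:
# alf_chroma_idc1 selects which components are filtered, and, when both
# are filtered, alf_chroma_idc2 selects shared vs. individual groups.

def decode_chroma_filters(idc1, bitstream):
    if idc1 == 0:                  # no chroma filtering: nothing decoded
        return {}
    if idc1 == 1:                  # assumed mapping: filter Cr only
        return {"HCr": bitstream["alf_coeff_Cr"]}
    if idc1 == 2:                  # assumed mapping: filter Cb only
        return {"HCb": bitstream["alf_coeff_Cb"]}
    # idc1 == 3: both components -> alf_chroma_idc2 is further decoded
    if bitstream["alf_chroma_idc2"] == 0:     # one shared group
        shared = bitstream["alf_coeff_chroma"]
        return {"HCr": shared, "HCb": shared}
    return {"HCr": bitstream["alf_coeff_Cr"],  # individual groups
            "HCb": bitstream["alf_coeff_Cb"]}
```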
• A second embodiment of the present invention will be described with reference to FIGS. 14 to 24, 26, and 27. The adaptive filter according to the present embodiment encodes an on / off flag that specifies on / off of the filter processing for each coding unit down to a predetermined division depth.
  • This embodiment is characterized in that the code amount of the on / off flag is reduced by using a common on / off flag among a plurality of color components.
• one of control patterns 2 to 4 among the following four control patterns is used as the configuration.
• Control pattern 1: On / off control is performed in units of regions for the luminance component Y, using one set of on / off flags. In this case, on / off control in units of regions is not performed for the color difference components.
• Control pattern 2: On / off control of every color component is performed using one set of on / off flags common to the luminance component Y, the color difference component Cb, and the color difference component Cr.
• Control pattern 3: Using two sets of on / off flags, on / off control of the luminance component Y is performed with one set, and common on / off control of the color difference component Cr and the color difference component Cb is performed with the other set.
• Control pattern 4: Independent on / off control is performed for each of the luminance component Y, the color difference component Cb, and the color difference component Cr, using three sets of on / off flags.
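The four control patterns above can be summarized as a mapping from each color component to the flag set that controls it; the table form and the `None` marker for "no per-region control" are illustrative, while the flag names follow the syntax elements described for FIGS. 15 to 18.

```python
# Illustrative summary of the four control patterns: which on/off flag
# set gates the adaptive filter for each color component. None means the
# component is not controlled per region (control pattern 1 chroma case).

CONTROL_PATTERNS = {
    1: {"Y": "alf_cu_flag",      "Cr": None,                "Cb": None},
    2: {"Y": "alf_cu_flag",      "Cr": "alf_cu_flag",       "Cb": "alf_cu_flag"},
    3: {"Y": "alf_cu_flag_luma", "Cr": "alf_cu_flag_chroma","Cb": "alf_cu_flag_chroma"},
    4: {"Y": "alf_cu_flag_luma", "Cr": "alf_cu_flag_cr",    "Cb": "alf_cu_flag_cb"},
}
```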
• Note that the scope of application of the second embodiment of the present invention is not limited to this. That is, as will become apparent from the following description, the features of the present invention can be realized without assuming a plurality of frames, and the present invention is applicable to decoding apparatuses and encoding apparatuses in general, regardless of whether the target is a moving image or a still image.
  • FIG. 14 is a diagram showing the data structure of the filter on / off information included in the filter parameter FP of the encoded data # 3.
• FIGS. 15 to 18 are diagrams showing the syntaxes included in the filter parameter FP (denoted as alf_param) of the encoded data # 3, in control patterns 1 to 4, respectively.
• As shown in FIGS. 15 to 18, the filter parameter FP of the encoded data # 3 includes, in common for all control patterns, (1) syntax adaptive_loop_filter_flag, which specifies whether or not to perform adaptive filter processing on the target slice, (2) syntax alf_cu_control_flag, which specifies whether or not on / off of the adaptive filter processing is controlled for each coding unit, (3) syntax (hierarchy designation information) alf_cu_control_max_depth, which specifies the maximum division depth, counted from the maximum coding unit, of the coding units subject to on / off control, and (4) syntax alf_length_cu_control_info, which specifies the number of coding units subject to on / off control.
• When the maximum division depth alf_cu_control_max_depth is 0, the coding unit subject to on / off control is the LCU (the CU obtained by dividing the LCU, as the root, zero times); when the maximum division depth alf_cu_control_max_depth is greater than 0, the coding units subject to on / off control are the CUs of the coding tree divided down to the maximum division depth.
• In control pattern 1, as shown in FIG. 15, an on / off flag alf_cu_flag is included in the filter parameter FP for each coding unit subject to on / off control.
• In control pattern 2, as shown in FIG. 16, the filter parameter FP includes an on / off flag alf_cu_flag for the adaptive filter processing and a flag alf_cu_control_flag_chroma for designating whether the on / off flag alf_cu_flag is also applied to the color difference adaptive filters.
• In control pattern 3, as shown in FIG. 17, the on / off flag alf_cu_flag_luma relating to the adaptive filter of the luminance component Y and the on / off flag alf_cu_flag_chroma relating to the adaptive filters of the color difference component Cr and the color difference component Cb are included in the filter parameter FP.
• In control pattern 4, as shown in FIG. 18, the on / off flags alf_cu_flag_luma, alf_cu_flag_cr, and alf_cu_flag_cb, relating to the adaptive filters of the luminance component Y, the chrominance component Cr, and the chrominance component Cb, respectively, are included in the filter parameter FP.
  • i in FIGS. 15 to 18 is an index of an on / off flag used for control of adaptive filter processing, and is an integer from 0 to NumAlfCuFlag-1.
  • NumAlfCuFlag is the total number of on / off flags for performing adaptive filter processing, and is determined by the hierarchy designation information alf_cu_control_max_depth and the CU structure.
• Note that, instead of being determined from the hierarchy designation information alf_cu_control_max_depth and the CU structure, NumAlfCuFlag can be explicitly encoded in the encoded data.
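How NumAlfCuFlag follows from alf_cu_control_max_depth and the CU structure can be sketched as a quadtree count: one on / off flag is needed per node that is either a leaf of the coding tree or sits at the maximum division depth. The nested-list representation of the quadtree is an assumption for illustration.

```python
# Illustrative sketch: counting the on/off flags (NumAlfCuFlag) implied
# by the CU quadtree and the maximum division depth. A split CU is a list
# of its four sub-CUs; any non-list value is an unsplit (leaf) CU.

def num_alf_cu_flags(cu, max_depth, depth=0):
    is_split = isinstance(cu, list)
    if not is_split or depth == max_depth:
        return 1                      # one flag covers this whole region
    return sum(num_alf_cu_flags(sub, max_depth, depth + 1) for sub in cu)

# Example tree: an LCU split into 4 CUs; the first is split again into 4.
lcu = [[0, 0, 0, 0], 0, 0, 0]
```

For this tree, max depth 0 yields 1 flag (the LCU itself), depth 1 yields 4, and depth 2 yields 7 (four leaves of the re-split CU plus the other three CUs).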
  • FIG. 19 is a block diagram showing a configuration of the moving picture decoding apparatus 3.
  • the video decoding device 3 includes an adaptive filter 50 instead of the adaptive filter 30 included in the video decoding device 1.
  • the filter parameter FP included in the encoded data # 3 is supplied to the adaptive filter 50.
  • The rest of the configuration is the same as that of the video decoding device 1, and its description is omitted here.
  • FIG. 20 is a block diagram showing the configuration of the adaptive filter 50. As illustrated in FIG. 20, the adaptive filter 50 includes an adaptive filter information restoration unit 51, a color difference adaptive filter unit 52, and a luminance adaptive filter unit 53.
  • the adaptive filter information restoration unit 51 decodes the filter parameter FP. As shown in FIG. 20, the adaptive filter information restoration unit 51 includes an on / off information restoration unit 511 and a filter information restoration unit 512.
  • The on / off information restoration unit 511 decodes the on / off flags AlfFilterFlagY, AlfFilterFlagCr, and AlfFilterFlagCb for the luminance component Y, the chrominance component Cr, and the chrominance component Cb from the filter on / off information included in the filter parameter FP.
  • FIG. 26 (a) shows an example of division of the coding unit for the luminance component Y in the maximum coding unit LCU when the value of alf_cu_control_max_depth is 2, along with a branch diagram showing a hierarchical structure.
  • the adaptive filter 50 can identify all coding units by an index assigned in the raster scan order.
  • FIG. 26B shows the correspondence between the filter processing control for the divided region shown in FIG. 26A and the on / off flag AlfFilterFlagY [i].
  • When the on / off flag AlfFilterFlagY [i] for the coding unit including the target unit region is 0, it is specified that adaptive filter processing is not performed on that coding unit; when AlfFilterFlagY [i] is 1, it is specified that adaptive filter processing is performed on it.
  • AlfFilterFlagCr and AlfFilterFlagCb specify whether or not to perform adaptive filter processing for the coding unit for the color difference component Cr and the color difference component Cb.
  • the correspondence relationship between the on / off flag in each encoded data # 3 and the on / off flag used for control in the case of control pattern 1 to control pattern 4 is shown in the table of FIG.
  • The relationship between the on / off flags AlfFilterFlagY, AlfFilterFlagCr, and AlfFilterFlagCb used for control and the on / off flags alf_cu_flag, alf_cu_flag_luma, alf_cu_flag_chroma, alf_cu_flag_cr, and alf_cu_flag_cb will now be described.
  • AlfFilterFlagY, AlfFilterFlagCr, and AlfFilterFlagCb denote the on / off flags used for adaptive filter control of the luminance component Y, the chrominance component Cr, and the chrominance component Cb, while alf_cu_flag, alf_cu_flag_luma, alf_cu_flag_chroma, alf_cu_flag_cr, and alf_cu_flag_cb denote the on / off flags in the encoded data # 3.
  • In the case of control pattern 1, the decoded alf_cu_flag is stored in AlfFilterFlagY, and a value of 1 is stored in AlfFilterFlagCb and AlfFilterFlagCr.
  • In the case of control pattern 2, alf_cu_control_flag_chroma is a flag for selecting whether the decoded on / off flag alf_cu_flag is used as an on / off flag for the luminance component Y only, or as an on / off flag common to the luminance component Y, the color difference component Cr, and the color difference component Cb.
  • When alf_cu_control_flag_chroma is true, the decoded alf_cu_flag is stored in AlfFilterFlagY, AlfFilterFlagCb, and AlfFilterFlagCr.
  • When alf_cu_control_flag_chroma is false, the decoded alf_cu_flag is stored in AlfFilterFlagY, and 1 is always stored in AlfFilterFlagCb and AlfFilterFlagCr.
  • In the case of control pattern 3, the decoded alf_cu_flag_luma is stored in AlfFilterFlagY, and the decoded alf_cu_flag_chroma is stored in AlfFilterFlagCb and AlfFilterFlagCr.
  • In the case of control pattern 4, alf_cu_flag_luma, alf_cu_flag_cb, and alf_cu_flag_cr are decoded and stored in AlfFilterFlagY, AlfFilterFlagCb, and AlfFilterFlagCr, respectively.
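The flag restoration described above for control patterns 1 to 4 can be summarized in code. This is a sketch of the mapping only; the dictionary `decoded`, standing in for the parsed filter parameter FP, and the function name are illustrative assumptions.

```python
# Hypothetical sketch of the mapping performed by the on/off information
# restoration unit 511: encoded-data flags -> control flags
# (AlfFilterFlagY, AlfFilterFlagCb, AlfFilterFlagCr).

def restore_onoff_flags(pattern, decoded):
    if pattern == 1:
        # alf_cu_flag controls luma only; chroma is always filtered (flag 1).
        return decoded["alf_cu_flag"], 1, 1
    if pattern == 2:
        # alf_cu_flag is shared with chroma only when
        # alf_cu_control_flag_chroma is true.
        flag = decoded["alf_cu_flag"]
        if decoded["alf_cu_control_flag_chroma"]:
            return flag, flag, flag
        return flag, 1, 1
    if pattern == 3:
        # Separate luma flag; one common chroma flag for Cb and Cr.
        chroma = decoded["alf_cu_flag_chroma"]
        return decoded["alf_cu_flag_luma"], chroma, chroma
    # Pattern 4: fully independent flags per component.
    return (decoded["alf_cu_flag_luma"],
            decoded["alf_cu_flag_cb"],
            decoded["alf_cu_flag_cr"])

# Returns (AlfFilterFlagY, AlfFilterFlagCb, AlfFilterFlagCr):
print(restore_onoff_flags(2, {"alf_cu_flag": 0,
                              "alf_cu_control_flag_chroma": True}))   # (0, 0, 0)
print(restore_onoff_flags(3, {"alf_cu_flag_luma": 1,
                              "alf_cu_flag_chroma": 0}))              # (1, 0, 0)
```

Note how pattern 1 and the false branch of pattern 2 coincide, which is why the document later observes that alf_cu_control_flag_chroma switching can equivalently be expressed by selecting between control patterns 1 and 2.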
  • the filter information restoration unit 512 uses the luminance filter information and the chrominance filter information included in the filter parameter FP, and the adaptive filter coefficient group HY of the luminance component Y, the adaptive filter coefficient group HCr of the chrominance component Cr, and the adaptive filter coefficient of the chrominance component Cb. Decode the group HCb.
  • the filter coefficient group HY is supplied to the luminance adaptive filter unit 53, and the filter coefficient group HCr and the filter coefficient group HCb are supplied to the color difference adaptive filter unit 52.
  • The color difference adaptive filter unit 52 refers to the on / off flag AlfFilterFlagCr and the on / off flag AlfFilterFlagCb, performs adaptive filter processing on the color difference component Cr and the color difference component Cb of the deblocked decoded image P_DB, and generates the color difference component Cr and the color difference component Cb of the filtered decoded image P_FL. The generated color difference components Cr and Cb of the filtered decoded image P_FL are stored in the buffer memory 15.
  • the color difference adaptive filter unit 52 includes a switch unit 521a, a switch unit 521b, a Cr filter processing unit 522, and a Cb filter processing unit 523.
  • The switch unit 521a supplies the color difference component Cr of the deblocked decoded image P_DB to the Cr filter processing unit 522 when the on / off flag AlfFilterFlagCr is on; when the flag is off, the color difference component Cr of the deblocked decoded image P_DB is output as-is as the color difference component Cr of the filtered decoded image P_FL.
  • The switch unit 521b supplies the color difference component Cb of the deblocked decoded image P_DB to the Cb filter processing unit 523 when the on / off flag AlfFilterFlagCb is on; when the flag is off, the color difference component Cb of the deblocked decoded image P_DB is output as-is as the color difference component Cb of the filtered decoded image P_FL.
  • the Cr filter processing unit 522 performs adaptive filter processing using the filter coefficient group HCr supplied from the filter information restoration unit 512 on the color difference component Cr of the deblocked decoded image P_DB supplied from the switch unit 521a. Thus, the color difference component Cr of the filtered decoded image P_FL is generated.
  • the Cb filter processing unit 523 performs adaptive filter processing using the filter coefficient group HCb supplied from the filter information restoration unit 512 on the color difference component Cb of the deblocked decoded image P_DB supplied from the switch unit 521b. Thus, the color difference component Cb of the filtered decoded image P_FL is generated.
  • the luminance adaptive filter unit 53 refers to the on / off flag AlfFilterFlagY, performs adaptive filter processing on the luminance component Y of the deblocked decoded image P_DB, and generates the luminance component Y of the filtered decoded image P_FL. Further, the luminance component Y of the generated filtered decoded image P_FL is stored in the buffer memory 15.
  • the luminance adaptive filter unit 53 includes a switch unit 531 and a Y filter processing unit 532.
  • The switch unit 531 supplies the luminance component Y of the deblocked decoded image P_DB to the Y filter processing unit 532 when the on / off flag AlfFilterFlagY is on; when the flag is off, the luminance component Y of the deblocked decoded image P_DB is output as-is as the luminance component Y of the filtered decoded image P_FL.
  • the Y filter processing unit 532 performs adaptive filter processing using the filter coefficient group HY supplied from the filter information restoration unit 512 on the luminance component Y of the deblocked decoded image P_DB supplied from the switch unit 531. Thus, the luminance component Y of the filtered decoded image P_FL is generated.
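The switch unit / filter processing unit pairs described above can be sketched as one function per component: pass the plane through untouched when the flag is off, and apply the filter coefficient group when it is on. The 3x3 filter shape, the coefficient values, and the border clamping are illustrative assumptions, not the filter of the embodiment.

```python
# Hypothetical sketch of one switch unit + filter processing unit pair:
# when the on/off flag is off, the deblocked component passes through
# unchanged; when on, a simple 3x3 FIR with a normalized coefficient
# group is applied. Border samples are clamped.

def filter_component(plane, coeffs, on_flag):
    """plane: 2D list of samples; coeffs: 3x3 filter coefficient group."""
    if not on_flag:
        return plane  # switch unit routes the component around the filter
    h, w = len(plane), len(plane[0])
    norm = sum(sum(row) for row in coeffs)
    out = []
    for y in range(h):
        out_row = []
        for x in range(w):
            acc = 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy = min(max(y + dy, 0), h - 1)  # clamp at borders
                    xx = min(max(x + dx, 0), w - 1)
                    acc += coeffs[dy + 1][dx + 1] * plane[yy][xx]
            out_row.append(acc // norm)
        out.append(out_row)
    return out

coeffs = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]  # illustrative smoothing group
plane = [[10, 10], [10, 10]]
print(filter_component(plane, coeffs, 0))  # flag off -> unchanged
print(filter_component(plane, coeffs, 1))  # flag on -> filtered (uniform plane stays uniform)
```

The same sketch applies to all three pairs (521a/522, 521b/523, 531/532); only the flag and the coefficient group supplied by the filter information restoration unit 512 differ.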
  • the video decoding device 3 may be configured to be able to select any of a plurality of control patterns.
  • the flag alf_cu_control_mode for selecting the control pattern is included in the filter parameter, and the moving picture decoding apparatus uses a control mode corresponding to the decoded alf_cu_control_mode.
  • alf_cu_control_mode takes a value of 1 to 4, and control patterns 1 to 4 are selected for each value.
  • FIG. 27 shows the syntax of the filter parameter at this time.
  • With alf_cu_control_flag_chroma, it is possible to switch between a configuration that uses the on / off flag only for the luminance component and a configuration that uses a common on / off flag for the luminance component and the color difference components. The same switching is possible with alf_cu_control_mode by treating the former as control pattern 1 and the latter as control pattern 2.
  • control pattern 1 or control pattern 3 can be selected using flag alf_cu_control_mode for selecting a control pattern. That is, it is possible to select whether to use an on / off flag only for the luminance component Y or to use an independent on / off flag for the luminance component Y and the two color difference components.
  • When alf_cu_control_mode is 1, control pattern 1 is selected; when alf_cu_control_mode is 3, control pattern 3 is selected.
  • the filter parameter syntax at this time is shown in FIG.
  • (Moving picture encoding device 4)
  • the video encoding device 4 according to the present embodiment will be described with reference to FIGS.
  • the same parts as those already described in the first embodiment are denoted by the same reference numerals, and the description thereof is omitted.
  • FIG. 22 is a block diagram showing a configuration of the video encoding device 4 according to the present embodiment.
  • the moving image encoding device 4 includes an adaptive filter 60 instead of the adaptive filter 44 included in the moving image encoding device 2 according to the first embodiment.
  • the other configuration of the video encoding device 4 is the same as the configuration of the video encoding device 2 according to the first embodiment, and a description thereof will be omitted.
  • the adaptive filter 60 generates a filtered decoded image P_FL by performing an adaptive filter process on the deblocked decoded image P_DB.
  • the generated filtered decoded image P_FL is stored in the buffer memory 24.
  • the adaptive filter 60 also supplies various types of adaptive filter information used for the filter processing to the variable length code encoding unit 22 as filter parameters FP.
  • the variable length code encoding unit 22 encodes the filter parameter FP as a part of the encoded data # 3.
  • FIG. 23 is a block diagram showing the configuration of the adaptive filter 60.
  • the adaptive filter 60 includes a filter parameter determination unit 61 and an adaptive filter unit 62.
  • the filter parameter determination unit 61 determines luminance filter information, color difference filter information, and filter on / off information, and supplies the information to the adaptive filter unit 62 and the variable length code encoding unit 22. As illustrated in FIG. 24, the filter parameter determination unit 61 includes an on / off information determination unit 611 and a filter information determination unit 612.
  • The filter information determination unit 612 determines the luminance filter information and the color difference filter information from the deblocked decoded image P_DB and the moving image # 10. A filter coefficient group is calculated so that the square error between the deblocked decoded image P_DB after adaptive filter processing and the encoding target image (moving image # 10) is minimized. The luminance filter information and the chrominance filter information are determined based on an encoding cost that can be calculated by comparing the adaptively filtered image with the encoding target image.
  • the on / off information determination unit 611 determines an on / off flag using the filter coefficient group determined by the filter information determination unit 612. When the control pattern is 1, the on / off information determination unit 611 determines an on / off flag alf_cu_flag applied to the adaptive filter processing of the luminance component Y. When the control pattern is 2, the on / off information determination unit 611 determines an on / off flag alf_cu_flag for adaptive filter processing and a flag alf_cu_control_flag_chroma for designating whether to apply the on / off flag alf_cu_flag to the color difference adaptive filter.
  • In the case of control pattern 3, the on / off information determination unit 611 determines the on / off flag alf_cu_flag_luma of the luminance component Y and the on / off flag alf_cu_flag_chroma of the color difference component Cr and the color difference component Cb. In the case of control pattern 4, the on / off information determination unit 611 determines the on / off flags alf_cu_flag_luma, alf_cu_flag_cr, and alf_cu_flag_cb applied to the adaptive filter processing of the luminance component Y, the color difference component Cr, and the color difference component Cb, respectively.
  • Each on / off flag is determined by comparing the case where filter processing is performed using the filter coefficient group determined by the filter information determination unit 612 with the case where it is not: the flag is determined to be on when filtering yields the smaller square error with respect to the encoding target image, and off otherwise.
  • alf_cu_flag_chroma is set to 1 when filtering reduces the square error, and to 0 otherwise. Specifically, it is determined to be on when the sum of the square error between the encoding target image and the filtered decoded image in the color difference component Cb and the square error between the encoding target image and the filtered decoded image in the color difference component Cr is smaller than the corresponding sum of square errors between the encoding target image and the pre-filter decoded image, and off otherwise.
  • an optimal on / off flag may be determined by simultaneously handling a plurality of components.
  • For example, the flag is determined to be on when the sum of the square error between the encoding target image and the filtered decoded image in the luminance component Y and the square errors between the encoding target image and the filtered decoded image in the color difference components Cb and Cr is smaller than the sum of the square errors between the encoding target image and the pre-filter decoded image, and off otherwise.
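The square-error comparison described above can be sketched as follows. The helper names and the toy sample planes are illustrative assumptions; a flag is set to on exactly when the summed square error against the encoding target image is smaller with filtering than without.

```python
# Hypothetical sketch of the on/off decision made by the on/off information
# determination unit 611. Planes are flat lists of samples; a single flag
# may be decided jointly over several components (e.g. Cb + Cr, or Y + Cb + Cr).

def sse(a, b):
    """Sum of squared errors between two sample lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def decide_onoff(target_planes, filtered_planes, unfiltered_planes):
    """Return 1 (on) iff filtering lowers the total square error."""
    err_on = sum(sse(t, f) for t, f in zip(target_planes, filtered_planes))
    err_off = sum(sse(t, u) for t, u in zip(target_planes, unfiltered_planes))
    return 1 if err_on < err_off else 0

# Joint chroma decision (alf_cu_flag_chroma): filtering helps the first
# component a lot and hurts the second slightly, but lowers the total
# error, so the common flag is set to on.
target = [[8, 8], [8, 8]]
filtered = [[8, 8], [8, 7]]
unfiltered = [[5, 5], [8, 8]]
print(decide_onoff(target, filtered, unfiltered))  # 1
```

Passing a single plane per argument gives the per-component decision (control pattern 4); passing two or three planes gives the joint decisions described for alf_cu_flag_chroma and the common alf_cu_flag.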
  • Control by the common on / off flag alf_cu_flag and control of the luminance component Y by alf_cu_flag_luma are performed in units of coding units (CUs), whereas control by the chrominance on / off flags alf_cu_flag_chroma, alf_cu_flag_cr, and alf_cu_flag_cb is performed in units of maximum coding units (LCUs). If the encoding unit of the on / off flag is made smaller, the image quality after filtering can be improved, but the code amount of the on / off flags increases; the encoding unit of the on / off flag therefore needs to be determined appropriately.
  • the resolution of the color difference component is smaller than the resolution of the luminance component, so the effect of improving the image quality obtained using the adaptive filter is also small.
  • If the color difference on / off flags were encoded using the same unit as the luminance, their relative code amount would become large, which is not appropriate in terms of encoding efficiency. For this reason, it is appropriate to control the color difference in larger units than the luminance.
  • Since the decoding process can be performed in units of LCUs, it is appropriate to encode the on / off flag in units no larger than the LCU. For this reason, in this embodiment, the LCU is used as the unit of the color difference on / off flag, thereby reducing the number of on / off flags and improving the coding efficiency as a whole.
  • the adaptive filter unit 62 performs an adaptive filter process using the filter parameter FP from the filter parameter determination unit 61 on the deblocked decoded image P_DB to generate a filtered decoded image P_FL.
  • The adaptive filter processing of the luminance component Y is the same as that of the luminance adaptive filter unit 33, and the adaptive filter processing of the color difference component Cr and the color difference component Cb is the same as that of the color difference adaptive filter unit 32; the description is omitted here.
  • the on / off control of the adaptive filter processing of the coding unit can be commonly performed in two or more components of the luminance component Y, the color difference component Cr, and the color difference component Cb.
  • the code amount of the on / off flag can be reduced by using a common on / off flag in the adaptive filter processing of the luminance component Y and the adaptive filter processing of the color difference component.
  • the code amount of the on / off flag can be reduced by using a common on / off flag in a plurality of color difference component adaptive filters.
  • the color difference component Cr and the color difference component Cb have a lower resolution than the luminance component Y, even if the on / off control of the color difference component is performed in LCU units in a wider range than the CU unit, the influence on image quality degradation is small. Further, when the on / off control of the color difference component Cr and the color difference component Cb is performed in units of LCUs, the total number of on / off flags used for the color difference component Cr and the color difference component Cb can be reduced, so that the code amount can be reduced.
  • the above-described moving image encoding apparatuses 2 and 4 and moving image decoding apparatuses 1 and 3 can be mounted and used in various apparatuses that perform transmission, reception, recording, and reproduction of moving images.
  • the moving image may be a natural moving image captured by a camera or the like, or may be an artificial moving image (including CG and GUI) generated by a computer or the like.
  • the above-described moving image encoding device 2 and moving image decoding device 1 can be used for transmission and reception of moving images.
  • FIG. 31 (a) is a block diagram illustrating a configuration of a transmission device PROD_A in which the moving image encoding device 2 is mounted.
  • The transmission device PROD_A includes an encoding unit PROD_A1 that obtains encoded data by encoding a moving image, a modulation unit PROD_A2 that obtains a modulated signal by modulating a carrier wave with the encoded data obtained by the encoding unit PROD_A1, and a transmission unit PROD_A3 that transmits the modulated signal obtained by the modulation unit PROD_A2.
  • the moving image encoding apparatus 2 described above is used as the encoding unit PROD_A1.
  • As supply sources of the moving image input to the encoding unit PROD_A1, the transmission device PROD_A may further include a camera PROD_A4 that captures a moving image, a recording medium PROD_A5 on which the moving image is recorded, an input terminal PROD_A6 for inputting the moving image from the outside, and an image processing unit A7 that generates or processes an image.
  • FIG. 31A illustrates a configuration in which the transmission apparatus PROD_A includes all of these, but a part of the configuration may be omitted.
  • The recording medium PROD_A5 may record a non-encoded moving image, or may record a moving image encoded by a recording encoding scheme different from the transmission encoding scheme. In the latter case, a decoding unit (not shown) that decodes the encoded data read from the recording medium PROD_A5 according to the recording encoding method may be interposed between the recording medium PROD_A5 and the encoding unit PROD_A1.
  • FIG. 31 (b) is a block diagram illustrating a configuration of the receiving device PROD_B in which the moving image decoding device 1 is mounted.
  • The reception device PROD_B includes a reception unit PROD_B1 that receives a modulated signal, a demodulation unit PROD_B2 that obtains encoded data by demodulating the modulated signal received by the reception unit PROD_B1, and a decoding unit PROD_B3 that obtains a moving image by decoding the encoded data obtained by the demodulation unit PROD_B2.
  • the moving picture decoding apparatus 1 described above is used as the decoding unit PROD_B3.
  • As supply destinations of the moving image output by the decoding unit PROD_B3, the receiving device PROD_B may further include a display PROD_B4 for displaying the moving image, a recording medium PROD_B5 for recording the moving image, and an output terminal PROD_B6 for outputting the moving image to the outside.
  • FIG. 31 (b) illustrates a configuration in which the receiving device PROD_B includes all of these, but a part of the configuration may be omitted.
  • The recording medium PROD_B5 may record a non-encoded moving image, or may record a moving image encoded by a recording encoding method different from the transmission encoding method. In the latter case, an encoding unit (not shown) that encodes the moving image acquired from the decoding unit PROD_B3 according to the recording encoding method may be interposed between the decoding unit PROD_B3 and the recording medium PROD_B5.
  • the transmission medium for transmitting the modulation signal may be wireless or wired.
  • The transmission mode for transmitting the modulated signal may be broadcasting (here, a transmission mode in which the transmission destination is not specified in advance) or communication (here, a transmission mode in which the transmission destination is specified in advance). That is, the transmission of the modulated signal may be realized by any of wireless broadcasting, wired broadcasting, wireless communication, and wired communication.
  • a terrestrial digital broadcast broadcasting station (broadcasting equipment or the like) / receiving station (such as a television receiver) is an example of a transmitting device PROD_A / receiving device PROD_B that transmits and receives a modulated signal by wireless broadcasting.
  • a broadcasting station (such as broadcasting equipment) / receiving station (such as a television receiver) of cable television broadcasting is an example of a transmitting device PROD_A / receiving device PROD_B that transmits and receives a modulated signal by cable broadcasting.
  • A server (such as a workstation) / client (such as a television receiver, personal computer, or smartphone) of a VOD (Video On Demand) service or a video sharing service using the Internet is an example of a transmission device PROD_A / reception device PROD_B that transmits and receives modulated signals by communication (usually, either a wireless or wired transmission medium is used in a LAN, and a wired transmission medium is used in a WAN).
  • the personal computer includes a desktop PC, a laptop PC, and a tablet PC.
  • the smartphone also includes a multi-function mobile phone terminal.
  • the video sharing service client has a function of encoding a moving image captured by the camera and uploading it to the server. That is, the client of the video sharing service functions as both the transmission device PROD_A and the reception device PROD_B.
  • the above-described moving image encoding device 2 and moving image decoding device 1 can be used for recording and reproduction of moving images.
  • FIG. 32A is a block diagram showing a configuration of a recording apparatus PROD_C in which the above-described moving picture encoding apparatus 2 is mounted.
  • The recording device PROD_C includes an encoding unit PROD_C1 that obtains encoded data by encoding a moving image, and a writing unit that writes the encoded data obtained by the encoding unit PROD_C1 on the recording medium PROD_M.
  • the moving image encoding apparatus 2 described above is used as the encoding unit PROD_C1.
  • The recording medium PROD_M may be (1) of a type built into the recording device PROD_C, such as an HDD (Hard Disk Drive) or SSD (Solid State Drive), (2) of a type connected to the recording device PROD_C, such as an SD memory card or USB (Universal Serial Bus) flash memory, or (3) of a type loaded into a drive device (not shown) built into the recording device PROD_C, such as a DVD (Digital Versatile Disc) or BD (Blu-ray Disc: registered trademark).
  • As supply sources of moving images to be input to the encoding unit PROD_C1, the recording device PROD_C may further include a camera PROD_C3 that captures moving images, an input terminal PROD_C4 for inputting moving images from the outside, a receiving unit PROD_C5 for receiving moving images, and an image processing unit C6 that generates or processes an image.
  • FIG. 32A illustrates a configuration in which the recording apparatus PROD_C includes all of these, but a part of the configuration may be omitted.
  • the receiving unit PROD_C5 may receive a non-encoded moving image, or may receive encoded data encoded by a transmission encoding scheme different from the recording encoding scheme. You may do. In the latter case, a transmission decoding unit (not shown) that decodes encoded data encoded by the transmission encoding method may be interposed between the reception unit PROD_C5 and the encoding unit PROD_C1.
  • Examples of such a recording device PROD_C include a DVD recorder, a BD recorder, and an HDD (Hard Disk Drive) recorder (in this case, the input terminal PROD_C4 or the receiving unit PROD_C5 is a main supply source of moving images).
  • A camcorder (in this case, the camera PROD_C3 is a main supply source of moving images), a personal computer (in this case, the receiving unit PROD_C5 or the image processing unit C6 is a main supply source of moving images), and a smartphone (in this case, the camera PROD_C3 or the receiving unit PROD_C5 is a main supply source of moving images) are also examples of such a recording device PROD_C.
  • FIG. 32 (b) is a block diagram showing a configuration of a playback device PROD_D in which the above-described video decoding device 1 is mounted.
  • The playback device PROD_D includes a read unit PROD_D1 that reads encoded data written on the recording medium PROD_M, and a decoding unit PROD_D2 that obtains a moving image by decoding the encoded data read by the read unit PROD_D1.
  • the moving picture decoding apparatus 1 described above is used as the decoding unit PROD_D2.
  • The recording medium PROD_M may be (1) of a type built into the playback device PROD_D, such as an HDD or SSD, (2) of a type connected to the playback device PROD_D, such as an SD memory card or USB flash memory, or (3) of a type loaded into a drive device (not shown) built into the playback device PROD_D, such as a DVD or BD.
  • As supply destinations of the moving image output by the decoding unit PROD_D2, the playback device PROD_D may further include a display PROD_D3 that displays the moving image, an output terminal PROD_D4 that outputs the moving image to the outside, and a transmission unit PROD_D5 that transmits the moving image.
  • FIG. 32B illustrates a configuration in which the playback apparatus PROD_D includes all of these, but some of them may be omitted.
  • The transmission unit PROD_D5 may transmit a non-encoded moving image, or may transmit encoded data encoded by a transmission encoding method different from the recording encoding method. In the latter case, it is preferable to interpose an encoding unit (not shown) that encodes the moving image with the transmission encoding method between the decoding unit PROD_D2 and the transmission unit PROD_D5.
  • Examples of such a playback device PROD_D include a DVD player, a BD player, and an HDD player (in this case, an output terminal PROD_D4 to which a television receiver or the like is connected is a main supply destination of moving images).
  • A television receiver (in this case, the display PROD_D3 is a main supply destination of moving images), digital signage (also referred to as an electronic signboard or electronic bulletin board; in this case, the display PROD_D3 or the transmission unit PROD_D5 is a main supply destination of moving images), a desktop PC (in this case, the output terminal PROD_D4 or the transmission unit PROD_D5 is a main supply destination of moving images), a laptop or tablet PC (in this case, the display PROD_D3 or the transmission unit PROD_D5 is a main supply destination of moving images), and a smartphone (in this case, the display PROD_D3 or the transmission unit PROD_D5 is a main supply destination of moving images) are also examples of such a playback device PROD_D.
  • Each block of the above-described moving image decoding apparatuses 1 and 3 and moving image encoding apparatuses 2 and 4 may be realized in hardware by a logic circuit formed on an integrated circuit (IC chip), or in software using a CPU (Central Processing Unit).
  • In the latter case, each device includes a CPU that executes the instructions of a program realizing each function, a ROM (Read Only Memory) that stores the program, a RAM (Random Access Memory) into which the program is expanded, and a storage device (such as a memory) that stores the program and various data.
  • An object of the present invention can also be achieved by supplying each of the above devices with a recording medium on which the program code (an executable program, an intermediate code program, or a source program) of the control program of each device, which is software realizing the above-described functions, is recorded in a computer-readable manner, and by having the computer (or CPU or MPU) read and execute the program code recorded on the recording medium.
  • Examples of the recording medium include tapes such as magnetic tapes and cassette tapes; disks including magnetic disks such as floppy (registered trademark) disks / hard disks and optical disks such as CD-ROM / MO / MD / DVD / CD-R; cards such as IC cards (including memory cards); semiconductor memories such as mask ROM / EPROM / EEPROM / flash ROM; and logic circuits such as PLDs (Programmable logic devices) and FPGAs (Field Programmable Gate Arrays).
  • each of the above devices may be configured to be connectable to a communication network, and the program code may be supplied via the communication network.
  • the communication network is not particularly limited as long as it can transmit the program code.
  • For example, the Internet, an intranet, an extranet, a LAN, an ISDN, a VAN, a CATV communication network, a virtual private network (Virtual Private Network), a telephone line network, a mobile communication network, a satellite communication network, and the like can be used.
  • the transmission medium constituting the communication network may be any medium that can transmit the program code, and is not limited to a specific configuration or type.
  • For example, wired lines such as IEEE 1394, USB, power line carrier, cable TV lines, telephone lines, and ADSL (Asymmetric Digital Subscriber Line) lines can be used, as well as wireless connections such as infrared rays (IrDA, remote control), Bluetooth (registered trademark), IEEE 802.11 wireless, HDR (High Data Rate), NFC (Near Field Communication), DLNA (Digital Living Network Alliance), mobile phone networks, satellite lines, and terrestrial digital networks.
  • The moving picture decoding apparatus according to the present invention is a moving picture decoding apparatus that decodes an image composed of a plurality of color components, and includes filter information decoding means for decoding color component designation information, which designates the color components to be filtered, and a filter coefficient group; and filter means for performing filter processing on each color component to be processed using the filter coefficient group decoded by the filter information decoding means and the color component designation information.
  • Since the filter processing is performed on each color component to be processed using the filter coefficient group and the color component designation information, appropriate filter processing can be performed on each color component even when there are a plurality of color components. Therefore, the moving picture decoding apparatus improves the encoding efficiency.
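The per-component filtering described above can be sketched as follows. This is an illustrative sketch only, not the normative processing of this disclosure: the 3-tap 1-D filter, the component names, and the dictionary-based interface are assumptions.

```python
def apply_filter(samples, coeffs):
    """Apply a 1-D FIR filter with edge-sample padding."""
    half = len(coeffs) // 2
    padded = [samples[0]] * half + list(samples) + [samples[-1]] * half
    return [sum(c * padded[i + k] for k, c in enumerate(coeffs))
            for i in range(len(samples))]

def filter_components(image, coeff_groups, designated):
    """Filter only the color components named by the designation information."""
    return {comp: apply_filter(px, coeff_groups[comp]) if comp in designated
            else list(px)
            for comp, px in image.items()}
```

Components outside the designation information pass through unfiltered, which mirrors the role of the color component designation information described above.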
  • Preferably, the filter information decoding means decodes at least one filter coefficient included in the filter coefficient group of a certain color component as the sum of, or difference between, a filter coefficient included in the already decoded filter coefficient group of another color component and a value decoded from the encoded data.
  • Since the filter information decoding means decodes at least one filter coefficient included in the filter coefficient group of a certain color component as the sum of, or difference between, a filter coefficient included in the already decoded filter coefficient group of another color component and a value decoded from the encoded data, the code amount of the filter coefficient group can be reduced. Therefore, the above configuration can further improve the encoding efficiency.
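A minimal sketch of such differential decoding, assuming (for illustration only) that every chroma coefficient is coded as a delta against the corresponding, already decoded luma coefficient; the disclosure only requires that at least one coefficient be coded this way.

```python
LUMA_COEFFS = [1, -5, 20, 20, -5, 1]   # already decoded group of another component

def decode_dependent_group(base_group, deltas):
    """Reconstruct each coefficient as base coefficient + decoded delta."""
    return [b + d for b, d in zip(base_group, deltas)]

chroma_deltas = [0, 1, -2, -2, 1, 0]   # values parsed from the encoded data
chroma_coeffs = decode_dependent_group(LUMA_COEFFS, chroma_deltas)
```

Because the deltas are typically small, coding them costs fewer bits than coding the chroma coefficients directly, which is the source of the code-amount reduction noted above.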
  • Preferably, the filter information decoding means decodes a plurality of filter coefficient groups used for filter processing of a plurality of color components, the plurality of filter coefficient groups including one independent filter coefficient group that is decoded without using the already decoded filter coefficient groups of other color components and one or more dependent filter coefficient groups that are decoded using them; the filter information decoding means also decodes an assignment flag indicating the color component to be filtered using the independent filter coefficient group, and the filter means performs filter processing using the filter coefficient group derived with reference to the assignment flag.
  • The inventor found that the encoding efficiency can be further improved by appropriately selecting, from the plurality of filter coefficient groups, the independent filter coefficient group to be decoded without using the already decoded filter coefficient groups of the other color components.
  • Since the assignment flag indicating the color component to be filtered using the independent filter coefficient group is decoded, and the filter processing is performed using the filter coefficient group derived with reference to the assignment flag, a further improvement in coding efficiency can be achieved.
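The derivation of per-component coefficient groups from one independent group, dependent (delta-coded) groups, and the assignment flag can be sketched as follows. The component order and the interpretation of the flag as a component index are illustrative assumptions.

```python
COMPONENTS = ["Y", "Cb", "Cr"]  # assumed component order

def derive_groups(independent, dependent_deltas, alloc_flag):
    """alloc_flag: index of the component assigned the independent group;
    the remaining components get delta-coded dependent groups."""
    groups, deltas = {}, iter(dependent_deltas)
    for idx, comp in enumerate(COMPONENTS):
        if idx == alloc_flag:
            groups[comp] = list(independent)
        else:
            # dependent group = independent group + decoded deltas
            groups[comp] = [c + d for c, d in zip(independent, next(deltas))]
    return groups
```

The encoder is free to choose which component carries the independent group, which is what allows the appropriate selection described above.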
  • The moving picture decoding apparatus according to the present invention is a moving picture decoding apparatus that decodes an image composed of a plurality of color components, and includes filter information decoding means for decoding filter on/off information, which includes one or more on/off flags designating whether or not to perform filter processing in each of the plurality of unit regions constituting the image, and a filter coefficient group; and filter means for performing filter processing of the color components for each unit region with reference to the filter on/off information.
  • The image to be decoded may include regions where the encoding efficiency is higher when no filter processing is performed.
  • Since the filter processing is performed on the color components for each unit region with reference to the filter on/off information, the encoding efficiency is improved.
  • Preferably, the moving picture decoding apparatus refers to one on/off flag included in the filter on/off information to determine on/off in the unit regions existing at the same position in two or more color components of the image, thereby controlling the filter processing for those two or more color components in common.
  • Since on/off in the unit regions existing at the same position is determined for two or more color components of the image by one flag, and the filter processing for those color components is thus controlled in common, both the code amount and the amount of filter processing can be reduced.
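Shared on/off control can be sketched as follows: one flag list governs the co-located unit regions of every component in a shared set. The region layout and the filter passed in are illustrative assumptions.

```python
def filter_units(planes, on_off_flags, shared, filt):
    """Apply filt() only in unit regions whose flag is on; the single
    flag list is shared by every component in `shared`."""
    out = {}
    for comp, regions in planes.items():
        if comp in shared:
            out[comp] = [filt(r) if on else list(r)
                         for r, on in zip(regions, on_off_flags)]
        else:
            out[comp] = [list(r) for r in regions]
    return out
```

Only one flag per region position is signalled for the whole shared set, which is where the code-amount saving comes from.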
  • Preferably, the filter information decoding means decodes a selection flag that specifies whether an on/off flag included in the filter on/off information is referred to as an on/off flag of a specific color component or as an on/off flag common to a plurality of color components.
  • Depending on the image characteristics of the image to be decoded, the coding efficiency may be improved by referring to the on/off flag as relating to a specific color component, or by referring to it as common to a plurality of color components.
  • Since the selection flag specifying whether an on/off flag included in the filter on/off information is referred to as an on/off flag of a specific color component or as a common on/off flag for a plurality of color components is decoded, the encoding efficiency can be improved even when the decoding target image has various image characteristics.
  • Preferably, the color components include at least a luminance component and a color difference component, and the filter information decoding means decodes, as the selection flag, a flag that specifies whether one on/off flag included in the filter on/off information is referred to as an on/off flag of the luminance component or as an on/off flag common to the luminance component and the color difference component.
  • Depending on the image characteristics of the image to be decoded, the coding efficiency may be improved by referring to the on/off flag as the on/off flag of the luminance component, or by referring to it as an on/off flag common to the luminance component and the color difference component.
  • Since the flag designating whether one on/off flag included in the filter on/off information is referred to as a luminance component on/off flag or as an on/off flag common to the luminance component and the color difference component is decoded as the selection flag, the encoding efficiency can be improved even when the decoding target image has various image characteristics.
  • Preferably, the filter information decoding means decodes a selection mode indicating at least one of a first control pattern using an on/off flag only for the luminance component, a second control pattern using an on/off flag common to the luminance component and the color difference components, a third control pattern using an on/off flag common to the plurality of color difference components, and a fourth control pattern using independent on/off flags for the luminance component and the plurality of color difference components; when the selection mode indicates the first or second control pattern, one on/off flag is decoded; when it indicates the third control pattern, two on/off flags are decoded; and when it indicates the fourth control pattern, three on/off flags are decoded.
  • Since a selection mode indicating at least one of the first control pattern (on/off flag only for the luminance component), the second control pattern (common on/off flag for the luminance and color difference components), the third control pattern (common on/off flag for the plurality of color difference components), and the fourth control pattern (independent on/off flags for the luminance component and the plurality of color difference components) is decoded, and one, two, or three on/off flags are decoded according to that mode, the encoding efficiency can be improved even when the decoding target image has various image characteristics.
  • The moving picture coding apparatus according to the present invention is a moving picture coding apparatus that codes an image composed of a plurality of color components, and includes filter information determination means for determining, for each color component, the filter information to be encoded as a filter parameter, namely color component designation information designating the color components to be filtered and the filter coefficient group used for the filter processing; filter means for performing filter processing on each color component to be processed using the filter coefficient group determined by the filter information determination means; and variable length encoding means for encoding the filter information determined by the filter information determination means.
  • Since each color component to be processed is filtered using the filter coefficient group determined by the filter information determination means, and the filter information is encoded, the encoding efficiency is improved.
  • Preferably, the filter information determination means determines at least one element of the filter information as the difference between a filter coefficient of one filter coefficient group determined by the filter information determination means and a filter coefficient of the filter coefficient group of another color component.
  • Since at least one element of the filter information is determined as the difference between a filter coefficient of one filter coefficient group determined by the filter information determination means and a filter coefficient of the filter coefficient group of another color component, the code amount can be reduced. Therefore, the above configuration can further improve the encoding efficiency.
  • Preferably, the filter information determination means determines one independent filter coefficient group that is encoded without using the filter coefficient groups of other color components, a filter coefficient group that is encoded using the filter coefficient groups of other color components, and an allocation flag indicating the color component to be filtered using the independent filter coefficient group; and the variable length encoding means encodes the allocation flag.
  • The inventor found that the encoding efficiency can be further improved by appropriately selecting, from the plurality of filter coefficient groups, the one independent filter coefficient group to be encoded without using the filter coefficient groups of the other color components.
  • Since the filter information determination means determines one independent filter coefficient group encoded without using the filter coefficient groups of other color components, a filter coefficient group encoded using them, and the allocation flag indicating the color component to be filtered using the independent filter coefficient group, and the variable length encoding means encodes the allocation flag, the encoding efficiency can be further improved.
  • The moving picture encoding device according to the present invention is a moving picture encoding device that encodes an input image composed of a plurality of color components, and includes filter information determination means for determining, for each color component, the filter information to be encoded, namely filter on/off information specifying whether or not to perform filter processing in each of the plurality of unit regions constituting the input image, and the filter coefficient group used for the filter processing; filter means for performing filter processing for each unit region with reference to the filter on/off information and the filter coefficient group; and variable length encoding means for encoding a filter parameter including the filter on/off information and the filter information.
  • The image to be encoded may include regions where the encoding efficiency is higher when no filter processing is performed.
  • Since the filter processing is performed for each unit region with reference to the filter on/off information and the filter coefficient group, the encoding efficiency is improved.
  • Preferably, the moving picture encoding device includes filter information determination means that determines a common on/off flag designating on/off in the unit regions existing at the same position in two or more color components of the input image.
  • Since the common on/off flag designating on/off in the unit regions existing at the same position is determined for two or more color components of the input image, the encoding efficiency can be improved even when the image to be encoded has various image characteristics.
  • Preferably, the filter information determination means determines a selection flag that selects whether the already encoded filter on/off information is referred to as an on/off flag of a specific color component or as an on/off flag common to a plurality of color components.
  • Depending on the image characteristics of the image to be encoded, the encoding efficiency may be improved by referring to the on/off flag as relating to a specific color component, or by referring to it as common to a plurality of color components.
  • Since the selection flag selecting whether the filter on/off information is referred to as an on/off flag of a specific color component or as an on/off flag common to a plurality of color components is determined, the encoding efficiency can be improved even when the image to be encoded has various image characteristics.
  • Preferably, the color components include at least a luminance component and a color difference component, and the filter information determination means determines a flag that selects whether the already encoded filter on/off information is referred to as an on/off flag of the luminance component or as an on/off flag common to the luminance component and the color difference component.
  • Depending on the image characteristics of the image to be encoded, the coding efficiency may be improved by referring to the on/off flag as the on/off flag of the luminance component, or by referring to it as an on/off flag common to the luminance component and the color difference component.
  • Since the flag selecting whether the filter on/off information is referred to as a luminance component on/off flag or as an on/off flag common to the luminance component and the color difference component is determined, the encoding efficiency can be improved even when the image to be encoded has various image characteristics.
  • Preferably, the filter information determination means determines a selection mode indicating at least one of a first control pattern using an on/off flag only for the luminance component, a second control pattern using an on/off flag common to the luminance component and the color difference components, a third control pattern using an on/off flag common to the plurality of color difference components, and a fourth control pattern using independent on/off flags for the luminance component and the plurality of color difference components; when the selection mode indicates the first or second control pattern, one on/off flag is determined; when it indicates the third control pattern, two on/off flags are determined; and when it indicates the fourth control pattern, three on/off flags are determined.
  • Since a selection mode indicating at least one of the first control pattern (on/off flag only for the luminance component), the second control pattern (common on/off flag for the luminance and color difference components), the third control pattern (common on/off flag for the plurality of color difference components), and the fourth control pattern (independent on/off flags for the luminance component and the plurality of color difference components) is determined, and one, two, or three on/off flags are determined according to that mode, the encoding efficiency can be improved even when the image to be encoded has various image characteristics.
  • The data structure of the encoded data according to the present invention is a data structure of encoded data referred to by the video decoding device, and includes color component designation information designating the color components to be filtered and a filter coefficient group; the moving picture decoding apparatus decodes the color component designation information and the filter coefficient group, and performs filter processing on each color component to be processed using the decoded filter coefficient group and the color component designation information.
  • The encoded data configured as described above includes color component designation information designating the color components to be subjected to filter processing, and a filter coefficient group; a moving picture decoding apparatus that decodes the encoded data decodes the color component designation information and the filter coefficient group and performs filter processing on each color component to be processed using them, so that appropriate filter processing can be performed on each color component even when there are a plurality of color components. Therefore, the encoded data improves the encoding efficiency.
  • The data structure of the encoded data according to the present invention is a data structure of encoded data referred to by the moving picture decoding apparatus, and includes filter on/off information, which includes one or more on/off flags designating whether or not filter processing is performed in each of the plurality of unit regions constituting the target image, and a filter coefficient group; the video decoding device decodes the filter on/off information and the filter coefficient group, and performs filter processing of the color components for each unit region with reference to the filter on/off information.
  • The decoding target image may include regions where the encoding efficiency is higher when no filter processing is performed.
  • Since the encoded data configured as described above includes filter on/off information, which includes one or more on/off flags designating whether or not to perform filter processing in each of the plurality of unit regions constituting the target image, and a filter coefficient group, and the moving picture decoding apparatus that decodes the encoded data decodes them and performs the filter processing of the color components for each unit region with reference to the filter on/off information, the encoding efficiency is improved.
  • The moving picture decoding apparatus according to the present invention is a moving picture decoding apparatus that decodes encoded data obtained by encoding a target picture composed of a plurality of color components, and includes filter information decoding means for decoding filter on/off information including one or more on/off flags that specify whether or not to perform filter processing in each of the plurality of unit regions constituting the target picture; and filter means for performing filter processing on the decoded image in each unit region using the decoded filter on/off information, wherein the filter means performs the filter processing independently for each color component.
  • Since the moving picture decoding apparatus configured as described above performs independent filter processing for each color component using the filter on/off information, which includes one or more on/off flags designating whether or not to perform filter processing for each unit region, the filter processing can be appropriately switched on and off for each color component even when there are a plurality of color components. Therefore, the moving picture decoding apparatus improves the encoding efficiency.
  • Preferably, the size of the unit region differs for each color component.
  • Depending on the image characteristics of the target image, the coding efficiency may be improved by making the size of the unit region, which is the on/off unit of the filter processing, different for each color component. According to the above configuration, since the size of the unit region differs for each color component, the encoding efficiency can be improved.
  • Preferably, the color components include at least a luminance component and a color difference component.
  • According to the above configuration, the encoding efficiency of a target image including at least a luminance component and a color difference component can be improved for at least the luminance component and the color difference component.
  • Preferably, the unit region for the color difference component is a maximum coding unit, and the unit region for the luminance component is a coding unit constituting the maximum coding unit.
  • Depending on the image characteristics of the target image, the encoding efficiency may be improved by making the unit region for the color difference component larger than that for the luminance component. Since the unit region for the color difference component is a maximum coding unit and the unit region for the luminance component is a coding unit constituting the maximum coding unit, the encoding efficiency can be improved.
  • Preferably, when the filter processing is performed on the color difference component, the filter means also uses the filter on/off information used in the filter processing of the luminance component.
  • Depending on the image characteristics of the target image, the coding efficiency may be improved by reducing the code amount of the filter on/off information.
  • Since the filter means reuses, for the color difference component, the filter on/off information used in the filter processing of the luminance component, an increase in the code amount of the filter on/off information can be suppressed. Therefore, according to the above configuration, the encoding efficiency can be improved.
  • Preferably, when the filter processing is performed on the color difference component, the filter means uses filter on/off information different from the filter on/off information used in the filter processing of the luminance component.
  • Depending on the image characteristics of the target image, the encoding efficiency may be improved by performing finer on/off control.
  • Since the filter means, when performing the filter processing on the color difference component, uses filter on/off information different from that used in the filter processing of the luminance component, the encoding efficiency can be improved.
  • Preferably, the filter means uses the filter on/off information only for the luminance component.
  • Depending on the image characteristics of the target image, the encoding efficiency may be improved by switching the filter processing using the filter on/off information only for the luminance component.
  • According to the above configuration, since the filter means uses the filter on/off information only for the luminance component, the encoding efficiency can be improved.
  • A moving picture decoding apparatus for decoding an image having a plurality of color components, comprising: filter information decoding means for decoding information designating the color components to be filtered and a filter coefficient group; and filter means for performing filter processing on each color component to be processed using the filter coefficient group decoded by the filter information decoding means and the information designating the color components.
  • The filter information decoding means decodes at least one filter coefficient of the filter coefficient group of a certain color component as the sum of, or difference between, a filter coefficient of the already decoded filter coefficient group of another color component and a value decoded from the encoded data.
  • The filter information decoding means decodes filter coefficient groups used for filter processing of a plurality of color components, namely one independent filter coefficient group decoded without using the already decoded filter coefficient groups of other color components and one or more dependent filter coefficient groups decoded using them, and also decodes the assignment flag indicating the color component to be filtered using the independent filter coefficient group; the filter means performs the filter processing using the filter coefficient group derived according to the assignment flag.
  • A moving picture decoding apparatus for decoding an input image having a plurality of color components, comprising: filter information decoding means for decoding filter on/off information, which includes one or more on/off flags designating whether or not to perform filter processing in each of the plurality of unit regions constituting the input image, and a filter coefficient group; and filter means for performing filter processing of the color components for each unit region with reference to the filter on/off information.
  • The filter information decoding means decodes a selection flag that selects whether an on/off flag included in the filter on/off information is used as an on/off flag for a specific color component or as an on/off flag common to a plurality of color components.
  • The color components include at least a luminance component and a color difference component, and the filter information decoding means decodes, as the selection flag, a flag that selects whether one on/off flag included in the filter on/off information is used as an on/off flag of the luminance component or as an on/off flag common to the luminance component and the color difference component.
  • The filter information decoding means decodes a selection mode indicating at least one of control pattern 1, which uses an on/off flag only for the luminance component, control pattern 2, which uses an on/off flag common to the luminance component and the color difference components, control pattern 3, which uses an on/off flag common to the plurality of color difference components, and control pattern 4, which uses independent on/off flags for the luminance component and the plurality of color difference components; one on/off flag is decoded when the selection mode indicates control pattern 1 or 2, two on/off flags are decoded when it indicates control pattern 3, and three on/off flags are decoded when it indicates control pattern 4.
  • An encoding device that encodes an image having a plurality of color components, comprising: filter information determination means for determining the filter information to be encoded as a filter parameter, namely information designating the color components to be subjected to filter processing and the filter coefficient group used for the filter processing; filter means for performing filter processing on each color component to be processed using the filter coefficient group determined by the filter information determination means; and variable length encoding means for encoding the filter information.
  • The filter information determination means determines at least one element of the filter information based on the difference between a filter coefficient of one determined filter coefficient group and a filter coefficient of the filter coefficient group of another color component.
  • The filter information determination means determines one independent filter coefficient group to be encoded without using the filter coefficient groups of other color components, a filter coefficient group to be encoded using the filter coefficient groups of other color components, and an allocation flag indicating the color component to be filtered using the independent filter coefficient group; the variable length encoding means encodes the allocation flag.
  • A moving picture encoding device for encoding an input image having a plurality of color components, comprising: filter information determination means for determining the filter information to be encoded, namely filter on/off information specifying whether or not to perform filter processing in each of the plurality of unit regions constituting the input image, and the filter coefficient groups used for the filter processing; filter means for performing filter processing for each unit region with reference to the filter on/off information and a filter coefficient group; and variable length encoding means for encoding a filter parameter including the filter on/off information and the filter information.
  • The moving picture encoding device is a moving picture encoding device that encodes an image having two or more color components, and comprises filter information determination means for determining a common on/off flag that determines on/off in the unit regions existing at the same position in the two or more color components.
  • The filter information determination means determines a selection flag that selects whether the already encoded filter on/off information is used as an on/off flag for a specific color component or as an on/off flag common to a plurality of color components.
  • The color components include at least a luminance component and a color difference component, and the filter information determination means determines a flag that selects whether the already encoded filter on/off information is used as an on/off flag of the luminance component or as an on/off flag common to the luminance component and the color difference component.
  • the filter information determining means determines a selection mode indicating at least one of: control pattern 1, which uses an on/off flag only for the luminance component; control pattern 2, which uses a common on/off flag for the luminance component and the color difference components; control pattern 3, which uses a common on/off flag for a plurality of color difference components; or control pattern 4, which uses on/off flags that are independent for the luminance component and each of the plurality of color difference components. One on/off flag is determined when the selection mode indicates control pattern 1 or control pattern 2, two on/off flags are determined when the selection mode indicates control pattern 3, and three on/off flags are determined when the selection mode indicates control pattern 4.
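The control patterns above fix how many on/off flags are coded and which components they govern. The following sketch maps a selection mode to per-component flags for luminance (Y) and two color-difference components (Cb, Cr); the function name, flag layout, and the choice to treat chroma as "on" under pattern 1 are assumptions made for illustration, not details taken from the patent text:

```python
# Hypothetical expansion of coded on/off flags into per-component flags,
# following the four control patterns described above:
#   pattern 1: one flag, luminance only (chroma defaulted to on here: assumption)
#   pattern 2: one flag shared by Y, Cb and Cr
#   pattern 3: one Y flag plus one flag shared by the chroma components
#   pattern 4: independent flags for Y, Cb and Cr
def flags_for_unit_area(selection_mode, coded_flags):
    """Return the effective on/off flag for each color component."""
    if selection_mode == 1:
        (y,) = coded_flags                       # 1 coded flag
        return {"Y": y, "Cb": True, "Cr": True}  # chroma default is an assumption
    if selection_mode == 2:
        (f,) = coded_flags                       # 1 coded flag, shared by all
        return {"Y": f, "Cb": f, "Cr": f}
    if selection_mode == 3:
        y, c = coded_flags                       # 2 coded flags
        return {"Y": y, "Cb": c, "Cr": c}
    if selection_mode == 4:
        y, cb, cr = coded_flags                  # 3 coded flags
        return {"Y": y, "Cb": cb, "Cr": cr}
    raise ValueError("unknown selection mode")
```

The number of elements expected in `coded_flags` (one, one, two, three) matches the flag counts the claim associates with each control pattern.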
  • a data structure of encoded data used for filter processing of an image having a plurality of color components, referred to by a moving image decoding apparatus comprising filter means for performing filter processing on each color component to be processed using the filter coefficient group decoded by the filter information decoding means and information designating the color component; the data structure includes the information designating the color components used by the filter means and a filter coefficient group.
  • a data structure of encoded data used for filter processing of an image having a plurality of color components, referred to by a moving image decoding apparatus comprising: filter information decoding means for decoding filter on/off information, including one or more on/off flags designating whether or not to perform filter processing in each of a plurality of unit regions constituting the input image, and a filter coefficient group; and filter means for performing filter processing of the color components for each unit region with reference to the filter on/off information; the data structure includes the filter on/off information and the filter coefficient group used by the filter means.
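On the decoder side, the data structure above supplies the per-region on/off flags and the coefficient group that the filter means consults. A minimal sketch, with the encoded data modeled as a plain dictionary and a toy 1-D sliding-window filter standing in for the real adaptive filter (both assumptions for illustration):

```python
# Illustrative decoder-side use of the data structure described above: each
# unit region is filtered with the decoded coefficient group only where its
# on/off flag is on; regions flagged off pass through unchanged. The dict
# layout and the 1-D filtering are assumptions made for this sketch.
def filter_component(unit_regions, encoded_data):
    """Apply a sliding-window weighted sum per unit region where enabled."""
    coeffs = encoded_data["coefficients"]
    out = []
    for region, on in zip(unit_regions, encoded_data["on_off"]):
        if on:
            # weighted sum of each window of samples (toy stand-in for the
            # two-dimensional adaptive filtering an actual decoder performs)
            out.append([
                sum(c * s for c, s in zip(coeffs, region[i:i + len(coeffs)]))
                for i in range(len(region) - len(coeffs) + 1)
            ])
        else:
            out.append(region)
    return out
```

The point of the sketch is only the control flow: the on/off information gates the filtering per unit region, while a single coefficient group is shared across the regions that are on.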
  • the moving picture decoding apparatus and moving picture encoding apparatus according to the present invention can be suitably used as a moving picture decoding apparatus and a moving picture encoding apparatus that include an image filter device that performs image filtering.
  • the data structure according to the present invention can be suitably used as the data structure of encoded data referred to by the video decoding device.
  • DESCRIPTION OF SYMBOLS: Video decoding device; 11 Variable length code decoding part; 12 Motion vector decompression

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

An adaptive filter (30) comprises: a color-difference adaptive filter unit (32) consisting of a Cr filtering unit (322), which performs adaptive filtering on a color-difference component (Cr), and a Cb filtering unit (323), which performs adaptive filtering on a color-difference component (Cb); and a color-difference adaptive filter information decoding unit (31), which decodes filter coefficient information used in the adaptive filtering of the color-difference components.
PCT/JP2012/056029 2011-03-09 2012-03-08 Dispositif de décodage de vidéo, dispositif de codage de vidéo et structure de données WO2012121352A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011051527A JP2014099672A (ja) 2011-03-09 2011-03-09 復号装置、符号化装置、および、データ構造
JP2011-051527 2011-03-09

Publications (1)

Publication Number Publication Date
WO2012121352A1 true WO2012121352A1 (fr) 2012-09-13

Family

ID=46798305

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/056029 WO2012121352A1 (fr) 2011-03-09 2012-03-08 Dispositif de décodage de vidéo, dispositif de codage de vidéo et structure de données

Country Status (2)

Country Link
JP (1) JP2014099672A (fr)
WO (1) WO2012121352A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10097827B2 (en) 2016-04-05 2018-10-09 Fujitsu Limited Apparatus and method for image encoding, apparatus and method for image decoding, and image transmission system
WO2019123768A1 (fr) * 2017-12-22 2019-06-27 ソニー株式会社 Dispositif de traitement d'image et procédé de traitement d'image
CN113891076A (zh) * 2016-02-15 2022-01-04 高通股份有限公司 滤波视频数据的经解码块的方法和装置以及存储介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI604731B (zh) * 2016-08-05 2017-11-01 瑞昱半導體股份有限公司 影像濾波方法及其影像濾波裝置


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010001999A1 (fr) * 2008-07-04 2010-01-07 株式会社 東芝 Procédé et dispositif de codage/décodage d'image dynamique
WO2010076856A1 (fr) * 2009-01-05 2010-07-08 株式会社 東芝 Procédé de codage d'images animées et procédé de décodage d'images animées
WO2010113524A1 (fr) * 2009-04-03 2010-10-07 パナソニック株式会社 Procédé de codage d'images animées, procédé de décodage d'images animées, dispositif de codage d'images animées et dispositif de décodage d'images animées

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Test Model under Consideration", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11 2ND MEETING, 6 October 2010 (2010-10-06), GENEVA, CH, pages 29 - 30,59-60,126-127 *


Also Published As

Publication number Publication date
JP2014099672A (ja) 2014-05-29

Similar Documents

Publication Publication Date Title
US11627337B2 (en) Image decoding device
JP7200320B2 (ja) 画像フィルタ装置、フィルタ方法および動画像復号装置
US10547861B2 (en) Image decoding device
JP6190361B2 (ja) 算術復号装置、画像復号装置、算術符号化装置、および画像符号化装置
US20180192076A1 (en) Image decoding device image coding device
WO2014115283A1 (fr) Dispositif de décodage d'image et dispositif de codage d'image
WO2017195532A1 (fr) Dispositif de décodage d'image et dispositif de codage d'image
JP2013236358A (ja) 画像フィルタ装置、画像復号装置、画像符号化装置、およびデータ構造
JP5995448B2 (ja) 画像復号装置、および画像符号化装置
JP7139144B2 (ja) 画像フィルタ装置
WO2012121352A1 (fr) Dispositif de décodage de vidéo, dispositif de codage de vidéo et structure de données
JP2013223050A (ja) フィルタ装置、復号装置、および符号化装置
JP2013141094A (ja) 画像復号装置、画像符号化装置、画像フィルタ装置、および符号化データのデータ構造
WO2014007131A1 (fr) Dispositif de décodage d'image et dispositif de codage d'image
JP2013187868A (ja) 画像復号装置、画像符号化装置、およびデータ構造
JP2014176039A (ja) 画像復号装置、および画像符号化装置
JP6162289B2 (ja) 画像復号装置および画像復号方法
WO2012081706A1 (fr) Dispositif de filtre d'image, dispositif de filtre, décodeur, codeur et structure de données
WO2014050554A1 (fr) Dispositif de décodage d'image et dispositif de codage d'image
JP2013251827A (ja) 画像フィルタ装置、画像復号装置、画像符号化装置、およびデータ構造
JP2014082729A (ja) 画像復号装置、および画像符号化装置
WO2012043676A1 (fr) Dispositif de décodage, dispositif de codage et structure de données
WO2012081636A1 (fr) Dispositif de décodage d'image, dispositif de codage d'image et structure de données de données codées
JP2014013976A (ja) 画像復号装置、および画像符号化装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12754235

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 12754235

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP