WO2012049876A1 - Video image encoding method and decoding method - Google Patents

Video image encoding method and decoding method

Info

Publication number
WO2012049876A1
WO2012049876A1 (PCT/JP2011/060430)
Authority
WO
WIPO (PCT)
Prior art keywords
filter
information
image
encoded
encoding
Prior art date
Application number
PCT/JP2011/060430
Other languages
English (en)
Japanese (ja)
Inventor
隆志 渡辺
山影 朋夫
中條 健
Original Assignee
株式会社 東芝 (Toshiba Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社 東芝 (Toshiba Corporation)
Publication of WO2012049876A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46: Embedding additional information in the video signal during the compression process
    • H04N19/463: Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
    • H04N19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102: Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117: Filters, e.g. for pre-processing or post-processing
    • H04N19/134: Adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/157: Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
    • H04N19/159: Prediction type, e.g. intra-frame, inter-frame or bidirectional frame prediction
    • H04N19/169: Adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17: The coding unit being an image region, e.g. an object
    • H04N19/172: The region being a picture, frame or field
    • H04N19/189: Adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
    • H04N19/196: Adaptation specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
    • H04N19/70: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • Embodiments described herein relate generally to a moving image encoding method and a decoding method used for encoding or decoding a moving image.
  • The encoding side sets the filter coefficients and the information indicating the area to which the filter is applied, and transmits them to the decoding side.
  • The decoding side performs loop filter processing on the filter application area of the decoded image using the received filter information.
  • QALF (quad-tree based adaptive loop filter)
  • QALF divides an image into variable-size blocks using a quadtree structure and switches, for each block, whether to apply a filter.
  • In a conventional standard such as H.264, a fixed-size block called a macroblock is set as the coding block, the processing unit of coding, and a prediction method, a prediction block size, a transform block size, and the like are set within that block.
  • There is a method of controlling this block division with a quadtree. By expressing coding blocks recursively in a tree structure, the size of the coding block becomes variable within the screen. Furthermore, filter application is switched for each coding block, and the filter application information is multiplexed into the encoded data of the coding block.
  • The present disclosure has been made to solve the above-described problem, and an object thereof is to provide a moving image encoding method and decoding method capable of reducing the amount of encoding processing.
  • The moving picture coding method transforms and quantizes a prediction error image, which indicates the difference between an input image comprising pixel signals and a predicted image of that input image, to generate quantized transform coefficient information. The quantized transform coefficient information is inversely quantized and inversely transformed to generate a local decoded image, and filter coefficient information indicating the filter coefficients of one or more filters to be applied to pixel regions in the local decoded image is generated. Based on block division information indicating how the input image to be encoded is divided into coding blocks, each indicating a pixel region, it is determined whether to apply filter processing to each coding block; when a filter is applied, filter application information indicating one or more candidate filters is generated for each processing unit comprising one or more coding blocks. When a filter is applied to a processing unit and two or more filters exist, the filter to be applied is selected based not only on the filter application information but also on an index calculated from the pixel signals included in that processing unit.
  • The moving picture decoding method decodes encoded data that includes: filter coefficient information indicating the filter coefficients of one or more filters; block division information, obtained by dividing the image to be encoded into coding blocks indicating pixel areas, indicating whether filter processing is applied to the one or more coding blocks; filter application information indicating one or more candidate filters to be applied; and quantized transform coefficient information. The filter coefficient information, filter application information, block division information, and quantized transform coefficient information are obtained for pixel blocks of a predetermined size. A region of the decoded image to which a filter is applied is identified by associating the block division information with the filter application information, and for that region a restored image is generated by applying the filter specified by the filter application information and the filter coefficient information, selecting among the candidates when two or more filters exist.
  • Brief description of the drawings: a figure showing a third description example of the syntax structure of the loop filter data syntax; a figure showing a fourth description example of the syntax structure of the loop filter data syntax; a block diagram showing another example of the entropy encoding unit; a block diagram showing the moving image decoding apparatus; a block diagram showing the entropy decoding unit; a block diagram showing another example of the entropy decoding unit; a block diagram showing the moving image encoder when deblocking filter processing is included; and a block diagram showing the moving image decoding apparatus when deblocking filter processing is included.
  • a moving picture coding apparatus that performs the moving picture coding method according to the present embodiment will be described in detail with reference to FIG.
  • a moving image encoding apparatus 100 includes a predicted image generation unit 101, a subtraction unit 102, a transform and quantization unit 103, an inverse quantization and inverse transform unit 104, an adder 105, a loop filter information generation unit 106, A loop filter processing unit 107 and an entropy encoding unit 108 are included.
  • the entire operation of the moving image encoding apparatus 100 is controlled by the encoding control unit 109.
  • the predicted image generation unit 101 performs a predetermined prediction process on an image including a plurality of pixel signals input from the outside (hereinafter referred to as an input image), and generates a predicted image.
  • For the prediction process, a general method such as temporal prediction by motion compensation or spatial prediction using already-encoded pixels in the screen may be used; detailed description is omitted here.
  • the subtraction unit 102 receives the input image and the predicted image from the predicted image generation unit 101, calculates a difference between the input image and the predicted image, and generates a prediction error image.
  • The transform and quantization unit 103 receives the prediction error image from the subtraction unit 102, performs transform processing on it to generate transform coefficients, and then performs quantization processing on the transform coefficients to generate quantized transform coefficients.
  • For the transform processing, for example, orthogonal transformation using the DCT (Discrete Cosine Transform) may be used, or the transform coefficients may be generated using a method such as wavelet transform or independent component analysis.
  • In the quantization process, the transform coefficients are quantized based on a quantization parameter set by the encoding control unit 109 described later.
  • the inverse quantization and inverse transform unit 104 receives the quantized transform coefficient from the transform and quantization unit 103, performs inverse quantization based on the quantization parameter, and then performs inverse transform (for example, inverse DCT) on the obtained transform coefficient. Thus, a prediction error image is generated.
  • The inverse quantization and inverse transform unit 104 may perform the inverse of whatever process the transform and quantization unit 103 performs; for example, when the transform and quantization unit 103 performs wavelet transform and quantization, the inverse quantization and inverse transform unit 104 may perform inverse quantization and inverse wavelet transform.
  • the addition unit 105 receives the prediction image from the prediction image generation unit 101 and the prediction error image from the inverse quantization and inverse conversion unit 104, and adds the prediction image and the prediction error image to generate a local decoded image.
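As a rough sketch of the path through the subtraction, transform/quantization, inverse quantization/inverse transform, and addition units, the following numpy code produces a local decoded block. It is illustrative only: a plain orthonormal DCT and a single scalar quantization step stand in for the codec's actual transform and quantizer.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix.
    k = np.arange(n)
    m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    m[0, :] /= np.sqrt(2)
    return m * np.sqrt(2.0 / n)

def local_decode(input_block, predicted_block, qstep=8.0):
    """One round trip through units 102-105: subtract the prediction,
    transform and quantize the error, dequantize and inverse-transform,
    then add the prediction back to form the local decoded block."""
    t = dct_matrix(input_block.shape[0])
    error = input_block - predicted_block            # subtraction unit 102
    qcoeff = np.round((t @ error @ t.T) / qstep)     # transform + quantization (103)
    recon_error = t.T @ (qcoeff * qstep) @ t         # inverse quantization/transform (104)
    return predicted_block + recon_error             # addition unit 105
```

Because the transform is orthonormal, the reconstruction error of the local decoded block is bounded by the quantization step.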
  • the loop filter information generation unit 106 receives the input image, the locally decoded image from the addition unit 105, and the encoded block division information from the encoding control unit 109.
  • The coded block division information indicates how the coding blocks, which are the processing units of coding, are divided. The loop filter information generation unit 106 then generates filter coefficient information and filter application information.
  • the filter coefficient information is information indicating a filter coefficient of a filter applied to a pixel area (hereinafter also simply referred to as an area).
  • the filter application information is information indicating whether to apply the filter process to the encoded block. Details of the loop filter information generation unit 106 will be described later with reference to FIG.
  • the loop filter processing unit 107 receives the locally decoded image from the addition unit 105, the filter application information and the filter coefficient information from the loop filter information generation unit 106, and the encoded block division information from the encoding control unit 109, respectively. Thereafter, the loop filter processing unit 107 applies the filter indicated by the filter coefficient information to the region indicated by the filter application information regarding the locally decoded image, and generates a restored image that is an image after the filter application. Further, the generated restored image is referred to when the predicted image is generated by the predicted image generation unit 101.
  • the entropy encoding unit 108 receives the quantized transform coefficient from the transform and quantization unit 103, the filter coefficient information and the filter application information from the loop filter information generation unit 106, and the encoding parameter from the encoding control unit 109, respectively. Thereafter, the entropy encoding unit 108 entropy-encodes (eg, Huffman encoding or arithmetic encoding) the quantized transform coefficient, filter coefficient information, filter application information, and encoding parameter, and outputs the result as encoded data.
  • the encoding parameter is information such as prediction mode information, motion information, encoded block division information, and quantization parameter. Details of the entropy encoding unit 108 will be described later with reference to FIG.
  • the coding control unit 109 performs coding block division control, feedback control of generated code amount, quantization control, mode control, and the like, and performs overall coding control.
  • the loop filter information generation unit 106 includes a filter coefficient information generation unit 201 and a filter application information generation unit 202.
  • the filter coefficient information generation unit 201 receives the input image and the decoded image from the addition unit 105, sets filter coefficients of a loop filter to be applied to the decoded image, and generates filter coefficient information.
  • the filter application information generation unit 202 receives the input image, the decoded image from the addition unit 105, the filter coefficient information from the filter coefficient information generation unit 201, and the encoded block division information from the encoding control unit 109, respectively. To determine whether or not to apply a filter to one or more encoded blocks, and generate filter application information. A determination method related to filter application by the filter application information generation unit 202 will be described later.
  • the entropy encoding unit 108 includes an encoding block level syntax encoding unit 301 and a loop filter data syntax encoding unit 302.
  • the coding block level syntax coding unit 301 receives the quantized transform coefficient from the transform and quantization unit 103, and the coded block division information from the coding control unit 109, and receives the quantized transform coefficient, the coded block division information, and the like. Entropy coding is performed on information including.
  • the loop filter data syntax encoding unit 302 receives the filter coefficient information and the filter application information from the loop filter information generation unit 106, and performs entropy encoding on the filter coefficient information and the filter application information. Detailed operation of the entropy encoding unit 108 will be described later.
  • The variable-size coding block assumed in the present embodiment will be described in detail with reference to FIG.
  • the encoding process is performed for each encoded block obtained by dividing an image into a plurality of blocks.
  • In conventional video coding standards such as H.264, a fixed-size block called a macroblock is used.
  • In the present embodiment, a video coding method using variable-size coding blocks within the screen is used.
  • the present embodiment describes a case where block division is controlled by a tree structure of a quadtree, but any block division method can be applied.
  • variable-size coding blocks can be adjusted by controlling the division according to the tree structure of the quadtree.
  • The coding control unit 109 can adjust the size of the coding block by controlling the syntax parameters max_coding_block_size and max_coding_layer.
  • max_coding_block_size indicates the maximum size of the coding block, and max_coding_layer indicates the maximum number of division layers.
  • a schematic diagram 402 schematically represents the tree structure of the quadtree of the encoding block 401.
  • Each coding block can be further divided into four coding blocks, and information on whether or not to further divide a block is added to a coding block having a size larger than min_coding_block_size.
  • Fixed values of max_coding_block_size and max_coding_layer may be used for the sequence, or they may be variable in units such as slices; in the latter case, max_coding_block_size and max_coding_layer must be encoded in each such unit. Alternatively, a parameter shared by the encoding side and the decoding side may be used, in which case the parameter need not be encoded.
  • The encoding control unit 109 calculates the coding cost of Equation (1) both for the case where encoding is performed with the current coding block and for the case where the block is further divided into four coding blocks, and selects the coding block size that minimizes the coding cost.
  • By making the size of the coding block variable within the screen, encoding can take the characteristics of each region of the image into account.
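The split decision can be sketched as a recursive quadtree search. The cost function below is a stand-in: Equation (1) is not reproduced in this text, so a variance-plus-constant proxy is used purely for illustration of the "keep whole or split into four, whichever is cheaper" structure.

```python
import numpy as np

def coding_cost(block):
    # Stand-in for Equation (1): variance as a crude distortion proxy plus
    # a fixed per-block rate term. The real cost is a rate-distortion sum.
    return float(np.var(block)) * block.size + 32.0

def split_decision(block, layer=0, max_coding_layer=2, min_coding_block_size=8):
    """Recursively choose between encoding this coding block whole and
    dividing it into four, keeping whichever alternative is cheaper."""
    n = block.shape[0]
    whole_cost = coding_cost(block)
    if layer >= max_coding_layer or n // 2 < min_coding_block_size:
        return whole_cost, {"size": n, "split": None}
    h = n // 2
    children = [split_decision(block[r:r + h, c:c + h], layer + 1,
                               max_coding_layer, min_coding_block_size)
                for r in (0, h) for c in (0, h)]
    split_cost = sum(c for c, _ in children)
    if split_cost < whole_cost:
        return split_cost, {"size": n, "split": [t for _, t in children]}
    return whole_cost, {"size": n, "split": None}
```

A uniform block stays whole under this proxy, while a block whose quadrants differ strongly is divided, mirroring how the division adapts to the characteristics of each region.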
  • the moving image encoding apparatus 100 receives an input image, and the subtraction unit 102 performs a subtraction process on the input image and the prediction image from the prediction image generation unit 101 to generate a prediction error image. Subsequently, the generated prediction error image is converted and quantized by the transform and quantization unit 103, and as a result, a quantized transform coefficient is generated. The quantized transform coefficient is encoded by the entropy encoding unit 108. On the other hand, the quantized transform coefficient is inversely quantized and inversely transformed by the inverse quantization and inverse transform unit 104, and is output as a prediction error image. The prediction error image is added to the prediction image from the prediction image generation unit 101 in the addition unit 105, and a local decoded image is generated.
  • The filter coefficient information generation unit 201 sets the filter coefficients so that the mean square error between the filtered local decoded image and the input image is minimized.
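A minimal least-squares (Wiener-style) coefficient fit over a local decoded image might look as follows. The filter shape, the use of numpy.linalg.lstsq, and the dense neighbourhood matrix are illustrative choices of this sketch, not the patent's prescribed procedure.

```python
import numpy as np

def wiener_coefficients(decoded, original, radius=1):
    """Least-squares filter design: find the (2r+1) x (2r+1) coefficients
    minimizing the mean square error between the filtered decoded image
    and the original input image."""
    k = 2 * radius + 1
    pad = np.pad(decoded, radius, mode="edge")
    h, w = decoded.shape
    # One row per pixel, holding that pixel's k x k neighbourhood.
    a = np.array([pad[y:y + k, x:x + k].ravel()
                  for y in range(h) for x in range(w)])
    coeff, *_ = np.linalg.lstsq(a, original.ravel(), rcond=None)
    return coeff.reshape(k, k)

def apply_filter(decoded, coeff):
    """Apply the fitted filter with edge padding."""
    k = coeff.shape[0]
    pad = np.pad(decoded, k // 2, mode="edge")
    h, w = decoded.shape
    return np.array([[float(np.sum(pad[y:y + k, x:x + k] * coeff))
                      for x in range(w)] for y in range(h)])
```

On the image it was fitted to, the resulting filter can never do worse than leaving the image unfiltered, since the identity kernel lies inside the least-squares search space.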
  • The filter application information generation unit 202 determines, for each coding block, whether applying the filter to one or more coding blocks reduces the mean square error between the filtered image and the input image, based on the filter coefficient information and the coded block division information, and generates filter application information for each block. That is, the filter application information is set so that the filter is applied whenever filtering reduces a block's error from the original image. Subsequently, the filter application information generation unit 202 performs this determination for a plurality of values of max_filtering_layer and selects the max_filtering_layer that minimizes the coding cost.
  • max_filtering_layer is a parameter indicating how many layers in the quadtree in coding block division are used for setting filter application information.
  • an area serving as a unit for determining whether to apply a filter which is indicated by the division shape of the encoded block and max_filtering_layer, is also referred to as a filter application determination area.
  • The filter application information may also be set for a processing unit comprising a plurality of coding blocks; for example, it may be set for each encoded slice. In this case, the cost when no filter is applied to the encoded slice and the cost when a filter is applied per block are calculated and compared, which makes it possible to set the filter application information for the encoded slice.
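The slice-versus-block decision can be illustrated with a deliberately simplified cost model: one flag bit per decision, squared-error gains measured per block, and no Lagrange multiplier. All of these are assumptions of this sketch, not the patent's actual coding cost.

```python
def set_filter_application(block_gains, flag_bits=1.0):
    """Decide the slice-level and per-block filter application from the
    squared-error reduction each block would get from filtering.
    Returns (slice_filter_on, per-block flags); per-block flags are
    discarded (all False) when the slice-level decision is 'no filter'."""
    cost_off = 0.0                                   # no filtering, no flags
    # With the filter enabled, each block keeps the filter only when it
    # helps, at the price of one flag per block.
    cost_on = sum(flag_bits - max(g, 0.0) for g in block_gains)
    if cost_on < cost_off:
        return True, [g > 0.0 for g in block_gains]
    return False, [False] * len(block_gains)
```

When the per-block gains outweigh the flag overhead the slice enables filtering and keeps only the helpful blocks; otherwise the whole slice is marked unfiltered and the per-block flags are dropped, as described above.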
  • FIG. 5A shows the division shape of the encoding block 401 in FIG.
  • “ON” and “OFF” in each coding block in FIGS. 5B to 5C indicate whether to apply a filter: if “ON”, the filter is applied; if “OFF”, it is not.
  • In this example, the tree-structured coding blocks actually used in the coding and the filter application determination region have a depth of 2 (second layer), and whether or not to apply a filter is determined for each coding block at that depth.
  • Multiple maximum coding blocks, for example four maximum coding blocks horizontally by four vertically, or four horizontally by one vertically, may be combined to form the maximum filter application determination region.
  • a two-dimensional Wiener filter generally used in image restoration is used as a filter. Further, when the filter is set to be non-applied for each encoded slice, the filter is not applied to all the pixels in the slice, and the filter application information for each block is discarded. On the other hand, when a filter is applied in units of coded slices, the filter is applied according to the filter application information for each block.
  • a plurality of filters may be prepared, and the type of filter may be similarly switched according to the filter application information in addition to whether the filter is applied. Furthermore, whether to apply a filter may be switched according to filter application information, and the type of filter may be switched according to activity or pixel value for each pixel or block of a decoded image.
  • The filter to be applied may be switched not only in units of one pixel but also in units of square blocks such as 8×8 or 4×4 pixels, rectangular blocks such as 8×2 or 4×2 pixels, and pixel lines such as 8×1 or 4×1 pixels.
  • As the switching index, the result of calculating the difference from surrounding pixel values using a differential filter of the kind generally used in edge detection can be used.
  • a specific example of the switching index when the filter is switched in units of one pixel is given below.
  • In Equation 1, D(x, y) indicates the pixel value of the decoded image at pixel position (x, y); the index is formed from the absolute value of the differential filter result in the horizontal direction and the absolute value of the differential filter result in the vertical direction.
  • the value represented by Equation 1 may be calculated for a certain range of pixels, for example, a block of surrounding N ⁇ N pixels, and the sum thereof may be used as an index for filter switching.
  • Equation 1 is given as an index for switching the filter in units of one pixel.
  • When the filter is switched in units of regions composed of a plurality of pixels, the sum of the Equation 1 indexes of the pixels in the region can be used as the filter switching index for the entire region.
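One plausible reading of Equation 1 and its region-level aggregation is sketched below. The exact differential filter taps are not fixed by this text, so first-order central differences are assumed here.

```python
import numpy as np

def pixel_activity(d):
    """Per-pixel switching index: |horizontal difference| + |vertical
    difference| of the decoded image D (first-order central differences
    assumed; the text leaves the exact taps open)."""
    pad = np.pad(d, 1, mode="edge")
    horiz = np.abs(pad[1:-1, 2:] - pad[1:-1, :-2])
    vert = np.abs(pad[2:, 1:-1] - pad[:-2, 1:-1])
    return horiz + vert

def region_activity(d, block=4):
    """Region-level index: the sum of per-pixel indexes over each
    block x block region, for switching in units of regions."""
    act = pixel_activity(d)
    h, w = act.shape
    return (act[:h - h % block, :w - w % block]
            .reshape(h // block, block, w // block, block).sum(axis=(1, 3)))
```

Flat areas yield a zero index while edges yield large values, which is what makes the index usable for choosing between filters tuned to smooth and detailed regions.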
  • Equation 2 averages the absolute values of the pixel value differences between the pixel at the pixel position (x, y) and the surrounding pixels. This can also be used as a filter switching index as a value representing the local structure of the image. Further, in Equation 2, the average value of the pixel value differences is obtained, but the maximum value may be used instead of the average value.
  • Likewise, the average value of Equation 2 can be calculated within the region and used as the filter switching index for the entire region; the same applies when the maximum value is used instead of the average value.
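A sketch of an Equation-2-style index follows, assuming the eight surrounding pixels and edge padding; both details are assumptions of this sketch rather than fixed by the text.

```python
import numpy as np

def neighbour_difference(d, use_max=False):
    """Equation-2-style index: the average (or, alternatively, the
    maximum) absolute pixel value difference between the pixel at (x, y)
    and its eight surrounding pixels."""
    pad = np.pad(d, 1, mode="edge")
    h, w = d.shape
    diffs = np.stack([np.abs(pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w] - d)
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                      if (dy, dx) != (0, 0)])
    return diffs.max(axis=0) if use_max else diffs.mean(axis=0)
```

Averaging over a region, or taking the maximum instead of the mean, then gives the region-level variant described above.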
  • the syntax mainly includes three levels: a high level syntax, a slice level syntax, and a coded block level syntax.
  • The high level syntax 601 describes syntax information of the layers above the slice.
  • The slice level syntax 604 describes the information necessary for each slice.
  • The coding block level syntax 607 describes the coded block division information, transform coefficients, prediction mode information, motion vectors, and other information required for each coding block.
  • the high level syntax 601 includes sequence or picture level syntax such as a sequence parameter set syntax 602 and a picture parameter set syntax 603.
  • the slice level syntax 604 includes a slice header syntax 605 and a loop filter data syntax 606 including filter coefficient information and filter application information.
  • the encoded block level syntax 607 includes an encoded block layer syntax 608 including encoded block division information and an encoded block prediction syntax 609.
  • If the parameters max_coding_block_size and max_coding_layer, which control the division of the coding block as described above, are fixed for the sequence, they are added to the sequence parameter set syntax 602; if they are variable for each slice, they may be added to the slice header syntax 605.
  • In step S701, the coding block level syntax encoding unit 301 encodes the mode information and motion information, in addition to the coded block division information and the quantized transform coefficient information, as one series of encoded data of the coding block level syntax.
  • In step S702, the loop filter data syntax encoding unit 302 encodes the filter coefficient information and the filter application information as a series of encoded data of the loop filter data syntax, separately from the encoding of the coding block level syntax.
  • In step S703, the encoded data of the loop filter data syntax and of the coding block level syntax are combined into one piece of encoded data to be sent to the decoding side.
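Steps S701 to S703 amount to producing two independent runs of encoded data and concatenating them. The byte-level payloads and the concatenation order below are illustrative stand-ins for the real bit-level entropy-coded streams.

```python
def encode_slice(block_level_payloads, filter_coeff_info, filter_flags):
    """Mirror steps S701-S703: produce the coding block level syntax and
    the loop filter data syntax as separate byte runs, then concatenate
    them into one piece of encoded data."""
    # S701: coding block level syntax (division info, coefficients, ...).
    block_data = b"".join(block_level_payloads)
    # S702: loop filter data syntax, written only after every filter
    # application flag for the slice is known.
    filter_data = bytes(filter_coeff_info) + bytes(int(f) for f in filter_flags)
    # S703: combine both runs into the encoded data sent to the decoder.
    return filter_data + block_data
```

Keeping the two runs separate is what lets the filter application information be written once per slice, after the per-block decisions are final, instead of being interleaved into the coding block level syntax.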
  • The filter application information has not yet been determined at the time the coding block level syntax is encoded. Therefore, if the filter application information were added to the coding block level syntax, the pixel positions of the filter application areas would have to be stored for each coding block, and after the filter application information is set, the encoded data would have to be rewritten to indicate whether a filter is applied to each coding block area; this complicates the encoding process and increases the amount of processing.
  • Therefore, in the present embodiment, the filter application information for one slice is encoded together in the loop filter data syntax, rather than in the coding block level syntax, so that the number of coding blocks to which the filter is applied is known.
  • filter_type_idx is an index indicating the shape or tap length of the loop filter.
  • The number of filter coefficients, NumOfFilterCoeff, corresponds to filter_type_idx, and filter_coeff is a filter coefficient.
  • filter_coeff indicates the filter coefficient information. In addition to encoding the filter coefficients themselves, a method of encoding the difference from the filter coefficients of a different filter may be used; this reduces the magnitude of filter_coeff and, as a result, the amount of code necessary for encoding it.
  • a plurality of methods can be considered for selection of a filter serving as a reference for calculating the difference.
  • One is a method of using filter coefficients of a filter used in an already encoded slice.
  • For example, the filter used in the immediately preceding slice may be used, or the filter used in the immediately preceding slice of the same slice type. Furthermore, a plurality of already-encoded filter coefficient sets may be held and one of them selected and used; in this case, information indicating which filter the difference is calculated against must be separately multiplexed into the encoded data. Even if the filter coefficients to be encoded differ from those of the reference filter, the difference can be reduced by calculating coefficient differences between positions having the same relative position with respect to the center coefficient position.
  • Similarly, the difference value may be calculated between filters within the same slice.
  • the coefficient difference calculation between the different slices and the difference value calculation within the same slice can be used in combination.
  • For example, the first filter in a slice can be encoded as a difference from a filter in another slice, and the second and subsequent filters can be encoded as differences from the first filter.
  • In addition, the difference value of the filter coefficient need not always be encoded; it is also possible to select a reference filter in a different slice or the same slice and use its filter coefficients as they are. In this case, switching information indicating whether the filter coefficient difference is encoded or the reference filter is used as-is must be separately multiplexed into the encoded data.
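The differential coefficient coding described above can be sketched as follows. The helper names are hypothetical, and the "first filter versus previous slice, later filters versus first filter" combination is taken from the surrounding description, with same-shape filters assumed.

```python
def diff_encode(coeffs, reference=None):
    """Encode filter coefficients directly (reference None) or as
    differences from a reference filter of the same shape, e.g. a filter
    from an already-encoded slice or an earlier filter in this slice."""
    if reference is None:
        return list(coeffs)
    assert len(coeffs) == len(reference)  # same-shape filters assumed here
    return [c - r for c, r in zip(coeffs, reference)]

def diff_decode(values, reference=None):
    """Invert diff_encode given the same reference choice."""
    if reference is None:
        return list(values)
    return [v + r for v, r in zip(values, reference)]

def encode_slice_filters(filters, prev_slice_filter):
    """One combination from the text: the first filter in the slice is
    coded against the previous slice's filter, and the second and
    subsequent filters against the first filter in this slice."""
    out = [diff_encode(filters[0], prev_slice_filter)]
    out += [diff_encode(f, filters[0]) for f in filters[1:]]
    return out
```

The residual values are small when successive filters are similar, which is exactly what makes the downstream entropy coding of filter_coeff cheaper.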
  • loop_filter_flag is a 1-bit flag indicating whether or not to apply the loop filter to each coding block. For example, "1" may be set when the filter is applied and "0" when it is not. NumOfLoopFilterFlag is the total number of filter application determination regions.
  • loop_filter_flag may be encoded with a fixed length of 1 bit, or by variable length coding such as CAVLC (Context Adaptive Variable Length Coding) or CABAC (Context Adaptive Binary Arithmetic Coding).
  • Furthermore, a plurality of filters may be prepared and switched for each filter application determination region; in that case, filter switching information may be explicitly multiplexed into the encoded data. Two methods are conceivable for this.
  • One is a method of first determining with loop_filter_flag whether or not to apply a filter, and then identifying the filter to be applied by an additional flag or index when a filter is applied.
  • The other is a method in which a single index expresses both whether or not to apply a filter and, if one is applied, which filter to apply. For example, when three types of filters are available, there are four kinds of processing including the case where no filter is applied, so the choice can be expressed by one index.
  • Alternatively, the filter may be applied to all regions. The above description concerned switching between applying no filter and applying one of three filters; when a filter is applied in every region, up to four types of filters can likewise be switched using an index equivalent to 2 bits. However, there is no guarantee that applying a filter reduces the error from the input image in every region. Therefore, a specific region indicated by a flag or index may be switched, in units such as slices, between being a non-filtered region and a region to which a different filter is applied. In that case, it is necessary to add switching information in the corresponding unit, or to switch based on an index shared by the encoder and the decoder.
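As an illustration of the single-index variant, the sketch below switches each region between no filtering and one of three 3-tap kernels. The kernels and the index assignment are assumptions for illustration, not the patent's actual filters:

```python
# Hypothetical assignment: index 0 = no filter, 1..3 = one of three kernels
# given as [left, center, right] weights (kernel 3 is the identity).
KERNELS = {1: [1, 2, 1], 2: [1, 6, 1], 3: [0, 1, 0]}

def filter_region(samples, filter_idx):
    """Apply the filter selected by a 2-bit index to a 1D region,
    replicating edge samples at the region boundary."""
    if filter_idx == 0:          # index 0: the region is left unfiltered
        return list(samples)
    k = KERNELS[filter_idx]
    s = sum(k)
    out = []
    for i in range(len(samples)):
        left = samples[max(i - 1, 0)]
        right = samples[min(i + 1, len(samples) - 1)]
        out.append((k[0] * left + k[1] * samples[i] + k[2] * right) // s)
    return out

assert filter_region([8, 8, 8], 0) == [8, 8, 8]   # no filter
assert filter_region([0, 8, 0], 1) == [2, 4, 2]   # smoothing kernel
assert filter_region([0, 8, 0], 3) == [0, 8, 0]   # identity kernel
```

The four index values fit in 2 bits per region, matching the count of four kinds of processing mentioned above.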
  • NumOfLoopFilterFlag is added before loop_filter_flag.
  • For example, when NumOfLoopFilterFlag is 10, loop_filter_flag is represented as a string such as "0100101101", so that the decoding side can correctly decode loop_filter_flag using only the information included in the loop filter data syntax. Further, based on the coding block division information and max_filtering_layer, whether or not to apply the filter can be determined for each corresponding coding block, and the filtering process can be performed.
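The count-then-flags layout can be sketched as below. For illustration only, the count is carried as an 8-bit fixed-length field; as described later, it may in fact be coded with a variable length or with a bit length derived from the image and block sizes:

```python
def parse_loop_filter_data(bits):
    """Parse a (hypothetical) loop filter data payload in which a count
    field precedes the flags, so the decoder knows from this syntax alone
    how many loop_filter_flag bits to read."""
    num = int(bits[:8], 2)                      # NumOfLoopFilterFlag
    flags = [int(b) for b in bits[8:8 + num]]   # one flag per region
    return flags, bits[8 + num:]                # remaining bitstream

# The example from the text: 10 regions with flags "0100101101".
payload = format(10, "08b") + "0100101101" + "111"   # "111" = later data
flags, rest = parse_loop_filter_data(payload)
assert flags == [0, 1, 0, 0, 1, 0, 1, 1, 0, 1]
assert rest == "111"
```

Because the flag count is inside the loop filter data syntax itself, the decoder never has to consult the coding block data to know where the flags end.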
  • On the encoding side, the number of coding blocks is simply inserted into the loop filter data syntax, without complicated processing such as storing and rewriting all the pixel positions of the coding block regions to which the filter is applied, so the encoding processing amount can be reduced.
  • NumOfLoopFilterFlag may be encoded by variable length coding or with a fixed length.
  • As an encoding method, there is a method of changing the coding based on parameters relating to the image size or the coding block division. Since one slice does not occupy an area larger than one screen, the range of values that the number of blocks can take (that is, the minimum and maximum number of blocks that can exist in one slice) can be obtained from at least one of the image size, max_coding_block_size, min_coding_block_size, and max_filtering_layer.
  • For variable length coding, the code table is changed using a probability model corresponding to the range of values that the number of blocks can take.
  • For fixed length coding, NumOfLoopFilterFlag is encoded with the minimum bit length that can represent that range. In this way, an appropriate encoding method can be selected even when the image size or block size changes. Furthermore, since the above parameters are also available on the decoding side, correct decoding is possible by selecting the same bit length on the encoding side and the decoding side.
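A sketch of the fixed-length case. The bound used here (one screen of largest blocks, each divisible down to max_filtering_layer) is an assumed derivation for illustration; the patent only requires that both sides derive the same bit length from the same parameters:

```python
import math

def num_flag_bits(width, height, max_coding_block_size, max_filtering_layer):
    """Fixed bit length for NumOfLoopFilterFlag, derived from the largest
    number of filter application determination regions that can exist in
    one slice (hypothetical bound: one screen of largest blocks, each
    split down to max_filtering_layer)."""
    largest = (math.ceil(width / max_coding_block_size)
               * math.ceil(height / max_coding_block_size))
    # each extra quadtree layer multiplies the region count by at most 4
    max_regions = largest * 4 ** max_filtering_layer
    return max(1, math.ceil(math.log2(max_regions + 1)))

# 1920x1080, 64x64 largest blocks, up to 2 layers of splitting:
# 30*17 = 510 largest blocks, at most 510*16 = 8160 regions -> 13 bits
assert num_flag_bits(1920, 1080, 64, 2) == 13
```

Since width, height, and the block parameters are all known to the decoder, it recomputes the same bit length and reads exactly that many bits.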
  • As shown in FIGS. 9A and 9B, a bit string unique to the end of loop_filter_data may be encoded to determine the end of loop_filter_flag.
  • FIG. 9A is a description example for the case where no loop_filter_flag needs to be set, and FIG. 9B is a description example for the case where it is assumed that at least one loop_filter_flag is set.
  • Alternatively, the coding block division information may be described in loop_filter_data as a quadtree structure.
  • A description example of the loop filter data syntax 606 in this case is shown in FIG. 10.
  • NumOfParentBlock in FIG. 10 represents the number of blocks of max_coding_block_size included in one slice, and there are as many quadtrees as NumOfParentBlock.
  • NumOfChildBlock represents the number of coding blocks into which one block of max_coding_block_size is divided down to a certain layer. Each block of max_coding_block_size serves as the root of a quadtree, and block_partitioning_flag expresses whether or not each coding block is further divided into four.
  • the maximum size of the filter application determination area may exceed the maximum size of the encoded block.
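The quadtree description with block_partitioning_flag might be serialized as in the following sketch; the in-memory tree representation and the flag ordering are assumptions for illustration:

```python
def encode_partitioning(tree, max_layer, layer=0):
    """Serialize one quadtree of coding-block splits as a sequence of
    block_partitioning_flag values (1 = divided into four, 0 = not divided).
    `tree` is None for an undivided block or a list of four subtrees; no
    flag is coded at max_layer, where no further division is possible.
    (The in-memory tree representation is a hypothetical illustration.)"""
    if layer == max_layer:
        return []
    if tree is None:
        return [0]
    flags = [1]
    for child in tree:
        flags += encode_partitioning(child, max_layer, layer + 1)
    return flags

# Root block divided into four; only the second child is divided again,
# with division allowed down to layer 2:
tree = [None, [None, None, None, None], None, None]
assert encode_partitioning(tree, 2) == [1, 0, 1, 0, 0]
```

The decoder walks the same recursion, reading one flag per undecided node, so encoder and decoder reconstruct identical division shapes from the flag sequence alone.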
  • An entropy encoding unit 1100 illustrated in FIG. 11 includes an encoding block level syntax encoding unit 1101 and a loop filter data syntax encoding unit 1102.
  • The coding block level syntax encoding unit 1101 performs substantially the same operation as the coding block level syntax encoding unit 301 shown in FIG. 3, but differs in that it encodes the quantized transform coefficients without encoding the coding block division information.
  • The loop filter data syntax encoding unit 1102 performs substantially the same operation as the loop filter data syntax encoding unit 302 shown in FIG. 3, but differs in that it encodes the coding block division information, in addition to the filter coefficient information and the filter application information, as a series of encoded data. In this case, since the coding block division information need not be added to the coding block data syntax, the overhead does not change significantly.
  • In the above description, the slice header syntax and the loop filter data syntax have been treated as separate syntaxes.
  • However, part or all of the loop filter data syntax may be included in the slice header syntax.
  • the application of the loop filter can be switched in units of slices, and the filter application information in units of slices is stored in the slice header syntax 605.
  • the loop filter data syntax 606 is stored in the slice level syntax 604.
  • The loop filter may also be controlled in a unit independent of the slice; this is called a loop filter slice.
  • In that case, as many loop filter data syntaxes 606 as there are loop filter slices are generated.
  • Alternatively, the loop filter data syntax 606 may be included in the high level syntax 601, such as the picture parameter set syntax 603.
  • In addition, a syntax may be generated for each component, or a common syntax may be generated for two or more components.
  • the moving picture decoding apparatus 1200 includes an entropy decoding unit 1201, a filter information buffer 1202, an inverse quantization and inverse transform unit 1203, an addition unit 1204, a loop filter processing unit 1205, and a predicted image generation unit 1206.
  • the entire operation of the moving picture decoding apparatus 1200 is controlled by the decoding control unit 1207.
  • The operations of the inverse quantization and inverse transform unit 1203, the addition unit 1204, and the predicted image generation unit 1206 are the same as those of the corresponding units included in the video encoding device 100 according to the present embodiment, and therefore description thereof is omitted.
  • The entropy decoding unit 1201 sequentially decodes the code string of each syntax of the encoded data, for each of the high level syntax, the slice level syntax, and the coding block level syntax, according to the syntax structure shown in FIG. 6.
  • The filter coefficient information, the filter application information, the coding block division information, and the like are thereby decoded.
  • The filter information buffer 1202 receives and stores the filter coefficient information and the filter application information decoded by the entropy decoding unit 1201.
  • The loop filter processing unit 1205 performs substantially the same operation as the loop filter processing unit 107 according to the present embodiment; it receives the coding block division information from the entropy decoding unit 1201, the decoded image from the addition unit 1204, and the filter coefficient information and the filter application information from the filter information buffer 1202. The loop filter processing unit 1205 then applies the filter indicated by the filter coefficient information to specific regions of the decoded image based on the filter application information, and generates the filtered image as a restored image. The restored image is output to the outside as an output image, and is referred to when a predicted image is generated by the predicted image generation unit 1206.
  • the decoding control unit 1207 performs overall decoding control such as encoding block division control or decoding timing control.
  • the entropy decoding unit 1201 includes an encoded block level syntax decoding unit 1301 and a loop filter data syntax decoding unit 1302.
  • the encoded block level syntax decoding unit 1301 receives a code string corresponding to the encoded block level syntax from the encoded data, performs decoding processing, and decodes the quantized transform coefficient and the encoded block division information.
  • the loop filter data syntax decoding unit 1302 receives a code string corresponding to the loop filter data syntax from the encoded data, performs decoding processing, and decodes filter coefficient information and filter application information.
  • the operation of the moving picture decoding apparatus 1200 will be described.
  • First, the entropy decoding unit 1201 inputs the code string corresponding to the loop filter data syntax of the encoded data to the loop filter data syntax decoding unit 1302, which performs decoding processing according to the syntax structure of FIG. 8.
  • The obtained filter coefficient information and filter application information are stored in the filter information buffer 1202.
  • Next, the code string corresponding to the coding block level syntax of the encoded data is input to the coding block level syntax decoding unit 1301 and subjected to decoding processing.
  • Prediction mode information, motion information, coding block division information, quantization parameters, and the like are decoded according to the syntax structure of FIG. 6.
  • the obtained encoded block division information is used when the decoding control unit 1207 performs encoded block division control in the decoding process.
  • The inverse quantization and inverse transform unit 1203 receives the transform coefficients decoded by the entropy decoding unit 1201, performs inverse quantization according to the quantization parameter set by the decoding control unit 1207, and applies an inverse transform (for example, an inverse discrete cosine transform) to the obtained transform coefficients to generate a prediction error image.
  • the addition unit 1204 receives the prediction error image from the inverse quantization and inverse transformation unit 1203 and the prediction image from the prediction image generation unit 1206 and adds them to generate a decoded image.
  • The loop filter processing unit 1205 receives the decoded image from the addition unit 1204, the filter coefficient information and the filter application information from the filter information buffer 1202, and the coding block division information from the entropy decoding unit 1201, and performs filter processing on the decoded image. At this time, the loop filter processing unit 1205 can determine the regions to which the filter is applied by associating the coding block division information with the filter application information. Specifically, by obtaining the filter application determination regions from max_filtering_layer shown in the syntax structures of FIGS. 8, 9A, and 9B, the filter can be applied to the blocks set on the encoding side.
  • For regions to which the filter is applied, the pixel value after filter application is set as the pixel value of the restored image; for regions to which the filter is not applied, the pixel value at the same position in the decoded image is set as the pixel value of the restored image.
  • The restored image is output as an output image, and is referred to as necessary by the predicted image generation unit 1206.
  • Although the case where the loop filter data syntax follows the syntax structure of FIGS. 8, 9A, and 9B has been described here, it may instead follow the syntax structure of FIG. 10.
  • The entropy decoding unit 1400 illustrated in FIG. 14 has the same configuration as the entropy decoding unit 1201 illustrated in FIG. 13, but differs in that the code string corresponding to the loop filter data syntax is input to the loop filter data syntax decoding unit 1402, and the decoding processing decodes the coding block division information in addition to the filter coefficient information and the filter application information. Accordingly, the coding block division information decoded by the loop filter data syntax decoding unit 1402 can be used, in addition to the above-described uses, when the coding block level syntax decoding unit 1401 decodes the code string corresponding to the coding block level syntax.
  • As described above, in the present embodiment, the filter application information is not multiplexed into the encoded data of each coding block but is encoded collectively as a series of encoded data, so there is no need to store and rewrite the positions of the filter application information within the encoded data of the coding blocks. It is only necessary to encode the loop filter data syntax collectively; the encoding process therefore becomes simple and the processing amount can be reduced. Further, for encoded data generated by the moving image encoding apparatus, the filter application information is stored in the buffer, the coding block division information is decoded while decoding the coding block level syntax, and the two are associated, so that the filter can be applied to the regions set on the encoding side and the moving image can be decoded.
  • the loop filter information generation unit 106 divides the local decoded image into a plurality of filter application determination regions, and generates filter coefficient information and filter application information of the loop filter for each filter application determination region.
  • The filter application determination region includes one or more coding blocks of the maximum size, and the division of the filter region is managed by a quadtree structure, as with the coding blocks.
  • the filter coefficient information generation unit 201 sets a filter coefficient to be applied to the local decoded image based on the local decoded image and the input image for each filter application determination region.
  • As a filter coefficient setting method, for example, the filter coefficients can be set so as to minimize the mean square error between the image obtained by filtering the local decoded image and the input image, thereby reducing the error from the input image.
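The mean-square-error design above amounts to a least-squares (Wiener-style) fit. The following is a simplified one-dimensional, 3-tap sketch of that idea, not the patent's actual two-dimensional filter design:

```python
def fit_filter_coeffs(decoded, original):
    """Find 3-tap coefficients h minimizing the mean square error between
    the filtered local decoded signal and the original (input) signal, by
    solving the normal equations (R^T R) h = R^T y."""
    taps = 3
    rows = [[decoded[i + k] for k in range(taps)]
            for i in range(len(decoded) - taps + 1)]
    tgt = original[taps // 2: len(decoded) - taps // 2]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(taps)]
         for i in range(taps)]
    b = [sum(r[i] * y for r, y in zip(rows, tgt)) for i in range(taps)]
    # solve the 3x3 system by Gaussian elimination with partial pivoting
    for col in range(taps):
        p = max(range(col, taps), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        b[col], b[p] = b[p], b[col]
        for r in range(col + 1, taps):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    h = [0.0] * taps
    for r in reversed(range(taps)):
        h[r] = (b[r] - sum(A[r][c] * h[c] for c in range(r + 1, taps))) / A[r][r]
    return h

# If the decoded signal is a blurred original, the fitted taps compensate:
orig = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5, 8]
dec = [orig[0]] + [(orig[i - 1] + 2 * orig[i] + orig[i + 1]) / 4
                   for i in range(1, len(orig) - 1)] + [orig[-1]]
h = fit_filter_coeffs(dec, orig)

# The fitted filter never does worse than leaving the signal unfiltered,
# since the identity taps [0, 1, 0] are one of the candidates minimized over.
mse = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
filt = [sum(h[k] * dec[i + k] for k in range(3)) for i in range(len(dec) - 2)]
assert mse(filt, orig[1:-1]) <= mse(dec[1:-1], orig[1:-1]) + 1e-9
```

The encoder would quantize the resulting coefficients before sending them as filter_coeff values.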
  • A plurality of filter coefficient sets may be set in each filter application determination region. In that case, as described in the embodiment, the filter coefficients may be switched according to an index such as the activity, in units of pixels or of regions containing a plurality of pixels. Here, however, the description assumes that the regions in which the filter coefficients are switched do not cross the boundaries of the filter application determination regions.
  • filter coefficient information is generated for each filter application determination region based on the set filter coefficient.
  • the filter coefficient information is expressed by the difference between the filter coefficient for the filter application determination region and the already encoded filter coefficient.
  • The encoder can try both the case where the already encoded filter is used as it is and the case where the newly set filter is used, and can select the more appropriate one based on the encoding cost.
  • The reference filter need not be the filter used immediately before.
  • In that case, information indicating whether or not each filter is to be used later as a reference filter is required. For example, a 1-bit flag indicating whether or not to hold the filter in a reference filter buffer and, if there are a plurality of buffers holding reference filters, index information specifying the buffer in which to hold it are multiplexed into the encoded data, whereby the above processing can be performed.
  • With this configuration, after setting a filter for a certain filter application determination region, the encoder can determine whether or not that filter is effective in the following filter application determination regions, and can hold only filters with a high error reduction effect as reference filters.
  • The filter application information generation unit 202 generates filter application information for each filter application determination region by determining whether applying the filter to the region reduces the mean square error between the filtered image and the input image. That is, the filter application information is set so that the filter is applied when the error from the original image in each block is reduced by applying the filter.
  • Further, the above determination is performed starting from the maximum-size filter application determination region and proceeding through one-stage division, two-stage division, and so on according to the quadtree structure, up to a preset maximum number of divisions, and the division shape with the smallest encoding cost is obtained.
  • The division shape obtained here is multiplexed into the encoded data as part of the filter application information.
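The cost-driven choice of division shape can be sketched as a recursive comparison between coding a region whole and splitting it into four. The cost model (SSE plus a Lagrangian flag-bit penalty) and the flag counts are assumptions for illustration:

```python
def collapse(tree):
    """Total error of a region given as a nested 4-ary tree of leaf errors."""
    if isinstance(tree, (int, float)):
        return tree
    return sum(collapse(t) for t in tree)

def best_partition(err_off, err_on, lam=1.0):
    """At each quadtree node, choose between coding the region undivided
    (cost: best of filter on/off, plus a split flag and a filter flag) and
    splitting into four (cost: a split flag plus the children's costs),
    minimizing error + lam * flag_bits. Returns (cost, shape): shape is
    True/False (filter on/off) for an undivided region, or a list of four
    child shapes. Hypothetical cost model for illustration."""
    e_on, e_off = collapse(err_on), collapse(err_off)
    whole_shape = e_on <= e_off
    if isinstance(err_off, (int, float)):
        return (min(e_on, e_off) + lam, whole_shape)   # leaf: filter flag only
    whole = min(e_on, e_off) + 2 * lam                 # split flag + filter flag
    children = [best_partition(o, n, lam) for o, n in zip(err_off, err_on)]
    split = sum(c for c, _ in children) + lam          # split flag only
    if split < whole:
        return (split, [s for _, s in children])
    return (whole, whole_shape)

# One quadrant benefits strongly from the filter, the others do not:
err_off = [9.0, 0.0, 0.0, 0.0]   # per-quadrant error without the filter
err_on = [1.0, 3.0, 3.0, 3.0]    # per-quadrant error with the filter
cost, shape = best_partition(err_off, err_on)
assert cost == 6.0 and shape == [True, False, False, False]
```

Splitting wins here because it lets the filter be applied only where it helps, at the price of a few extra flag bits.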
  • In this case, the filter application determination regions are managed in a quadtree structure, and the quadtree is therefore encoded in the same manner as for the coding blocks.
  • Here, the filter application determination regions are set independently of the division shape of the coding blocks.
  • However, the filter application determination may instead be performed using the division shape of the coding blocks. In this case, it is not necessary to encode the division shape of the filter application regions described above.
  • the filter coefficients may be set in units different from the filter application determination area.
  • One or more filter coefficient sets may be set for a block including one or more filter application determination regions, and the filter coefficients may be switched according to an index such as the activity within the block.
  • Filter application information may also be set for each encoded slice.
  • In that case, the above determination is performed on the entire slice: the cost when no filter is applied to the slice and the cost when a filter is applied to each block are calculated, and the filter is set to be applied within the slice only when applying it reduces the encoding cost.
  • the loop filter processing unit 107 receives the locally decoded image, the filter coefficient information, and the filter application information, and performs a filter process for each filter application determination region.
  • the entropy encoding unit 108 includes an encoding block level syntax encoding unit 301 and a loop filter data syntax encoding unit 302 as shown in FIG.
  • an example has been described in which the loop filter data syntax 606 and the encoding block level syntax 607 shown in FIG. 6 are encoded together in units of slices and combined.
  • In the present embodiment, filter coefficient information can be set for each filter application determination region; therefore, if all the loop filter data syntax is transmitted at the head of the slice data, the decoder must hold a large amount of filter information.
  • To avoid this, the loop filter data syntax may be inserted at the position corresponding to each maximum-size filter application determination region within the coding block level syntax.
  • When the maximum size of the filter application determination region described above is, for example, a combination of four maximum coding blocks in the horizontal direction and four in the vertical direction, the loop filter data syntax is inserted at the syntax position corresponding to the coding block located at the upper left of the filter application determination region. When the filter application determination region matches the maximum size of the coding block, the loop filter data syntax and one coding block level syntax are transmitted alternately.
  • As described above, the entropy decoding unit 1201 sequentially decodes the code string of each syntax of the encoded data, for each of the high level syntax, the slice level syntax, and the coding block level syntax, according to the syntax structure shown in FIG. 6.
  • the filter coefficient information, the filter application information, the encoded block division information, and the like are decoded.
  • the loop filter data syntax and the encoded block level syntax may not be encoded together in units of slices. That is, the loop filter data syntax may be inserted at a position corresponding to the maximum size filter application determination area in the encoded block level syntax.
  • In that case, the entropy decoding unit 1201 performs decoding processing according to the above-described syntax configuration, and the loop filter data syntax is decoded for each maximum-size filter application region.
  • the loop filter processing unit 1205 receives the decoded image, the filter coefficient information, and the filter application information, and performs filter processing by means similar to the encoding device for each filter application determination region included in the filter application information.
  • Although the embodiments have been described using the local decoded image on the encoding side and the decoded image on the decoding side, each of these images may instead be an image after conventional deblocking filter processing has been performed.
  • FIG. 15 shows a video encoding apparatus 1500, which differs from the video encoding apparatus 100 shown in FIG. 1 in that it additionally includes a deblocking filter processing unit 1501.
  • FIG. 16 shows a video decoding apparatus 1600, which differs from the video decoding apparatus 1200 shown in FIG. 12 in that it additionally includes a deblocking filter processing unit 1501.
  • the deblocking filter processing unit 1501 receives the decoded image from the adding unit 105 (1204), performs deblocking filter processing on the decoded image, and generates a deblocked decoded image. Thus, it is possible to prevent the predicted image from being deteriorated by performing the loop filter process using the deblocked decoded image.
  • Both the pixel values of the image before the deblocking filter processing and those of the image after the deblocking processing may be used as input pixels to the filter.
  • In this case, the moving image encoding apparatus is realized by adding, to the inputs of the loop filter information generation unit 106 and the loop filter processing unit 107 in the moving image encoding apparatus 1500, the output signal of the addition unit 105 in addition to the output signal from the deblocking filter processing unit 1501.
  • The loop filter information generation unit 106 and the loop filter processing unit 107 then set and apply filter coefficients using the pixel values of both the images before and after the deblocking processing. This is equivalent to a weighted average, with appropriate weights, of the pixel values before and after the deblocking processing.
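The weighted-average interpretation can be written out directly. The per-pixel form below is a simplification for illustration; a real loop filter would also take neighboring pixels of both images as inputs:

```python
def combined_filter(pre, post, w_pre, w_post, offset=0):
    """Filter using both the pixel before deblocking and the one after:
    with one coefficient per input, this is exactly a weighted average of
    the two pixel values (the weights would come from the MSE-minimizing
    coefficient design described earlier)."""
    return [w_pre * a + w_post * b + offset for a, b in zip(pre, post)]

# With w_pre + w_post = 1 the output lies between the two versions,
# retaining some of the high-frequency detail removed by deblocking.
pre, post = [10, 50, 10], [20, 40, 20]
out = combined_filter(pre, post, 0.25, 0.75)
assert out == [17.5, 42.5, 17.5]
```

Choosing the weights by least squares lets the codec keep as much edge and texture detail as helps approximate the input image, rather than committing fully to the low-pass deblocked result.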
  • Similarly, the moving picture decoding apparatus is realized by adding, to the input of the loop filter processing unit 1205 in the moving picture decoding apparatus 1600, the output signal of the addition unit 1204 in addition to the output signal from the deblocking filter processing unit 1501.
  • Block distortion caused by encoding can be reduced by deblocking filter processing; however, since a strong low-pass filter is applied in the deblocking filter processing, high-frequency components contained in the input image, such as edges and textures, may also be reduced. Therefore, deterioration of the predicted image can be prevented by performing loop filter processing using the decoded images both before and after the deblocking filter processing.
  • The images used for the loop filter processing may be switched in units such as sequences, pictures, or slices. In that case, the switching information must be separately multiplexed into the encoded data in the corresponding unit.
  • In the above embodiments, the loop filter is used on both the encoding side and the decoding side; however, the moving picture encoding method and decoding method can also be used when the filter is applied to the decoded image used for prediction but not to the output image.
  • the moving picture decoding apparatus in this case will be described with reference to FIG.
  • the moving image decoding apparatus 1700 is different from the moving image decoding apparatus 1200 shown in FIG. 12 in that the decoded image from the adding unit 1204 is output as an output image as it is.
  • the moving picture coding apparatus of FIG. 1 can be used as it is.
  • Furthermore, the filter in the video encoding device 100 and the video decoding device 1200 may be used as a post filter.
  • A video encoding device and a video decoding device for the case where the filter is used as a post filter are shown in FIGS. 18 and 19.
  • the moving image encoding apparatus 1800 is realized by removing the loop filter processing unit 107 in the moving image encoding apparatus 100 of FIG. 1 and inputting the decoded image to the predicted image generating unit 1801.
  • the moving image decoding apparatus 1900 is realized by inputting the decoded image from the adding unit 1204 to the predicted image generating unit 1901.
  • FIG. 20 shows an example of a prediction structure called a hierarchical B structure.
  • encoding is performed in order from a slice belonging to Layer 0 to a slice belonging to Layer 3.
  • In each layer, motion compensation prediction can be performed with reference to slices belonging to layers higher than the current layer; therefore, the slices belonging to Layer 3, the lowest layer, are not referred to. In H.264/AVC, nal_ref_idc, a 2-bit index included in the NAL unit that is the transmission unit of encoded data, explicitly notifies the decoder whether or not a slice is referred to in subsequent encoding/decoding processing.
  • When nal_ref_idc = 0, the slice is not referenced; when nal_ref_idc > 0, the slice is referenced.
  • In consideration of scalability in the time direction and filter characteristics, it is desirable to predict the filter coefficients after specifying the layer to which the slice to which the filter is applied belongs. For example, a decoded image with one third of the frame rate can be generated by restricting the layers to be decoded to Layer 1 and above in FIG. 20. In this case, the filter coefficients of the filters applied to slices belonging to Layer 2 and Layer 3 are not decoded, so if these were used as reference filters, the decoding side could not calculate the correct filter coefficients. In general, when encoding is performed with a hierarchical structure, the prediction method, the image quality of the reference images, the quantization parameter, and the like differ between layers, so the image quality may differ depending on the layer.
  • By using a filter applied to a slice belonging to the same layer as the reference filter for predicting the filter coefficients, the amount of code required for encoding the filter information can be reduced. Accordingly, in the present embodiment, among the filters applied to slices encoded in the same layer, the filter encoded immediately before is used as the reference filter for calculating the coefficient differences.
  • A filter applied to a slice belonging to a higher layer may also be used as a reference filter.
  • For example, for a slice belonging to Layer 3, an encoded filter applied to slices belonging to Layer 0 through Layer 2 may be used as a reference filter. Since prediction with reference to higher layers is permitted in the moving image encoding itself, scalability in the time direction does not deteriorate.
  • The selection of the reference filter in consideration of the layer at the time of encoding may be combined with the reference filter selection methods described earlier. That is, among slices belonging to the same layer, a filter used in a slice of the same slice type may be used as the reference filter, or a plurality of filter coefficient sets applied to slices belonging to the same layer may be held and one of them selected. When selecting one from a plurality of reference filter candidates, information indicating the reference filter must be separately multiplexed into the encoded data.
  • the selection method of the reference filter considering the layer to which the encoded slice belongs has been described.
  • However, in H.264/AVC, the decoding side cannot determine the layer used at the time of encoding; only reference/non-reference can be determined by the nal_ref_idc described above. Therefore, in order to predict the filter coefficients after specifying the layer to which the slice to which the filter is applied belongs, as described above, information for specifying the layer to which each slice belongs is necessary. In this embodiment, a method using nal_ref_idc is described.
  • As mentioned above, nal_ref_idc used in H.264 is expressed in 2 bits; a value of 0 indicates that the NAL unit including the slice data is non-referenced, while any non-zero value merely indicates a reference slice, and no further meaning is defined. Therefore, for example, in FIG. 20, by assigning nal_ref_idc values 0 to 3 from Layer 3 to Layer 0, the decoding side can determine in which layer each slice was encoded.
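A sketch of layer-aware reference filter management, assuming the nal_ref_idc-to-layer assignment just described (values 3 down to 0 for Layer 0 through Layer 3); the bank structure is a hypothetical illustration:

```python
def layer_from_nal_ref_idc(nal_ref_idc, num_layers=4):
    """Recover the layer for the four-layer structure of FIG. 20, under
    the assumed assignment of nal_ref_idc 3..0 to Layer 0..Layer 3."""
    return (num_layers - 1) - nal_ref_idc

class ReferenceFilterBank:
    """Hold the most recently coded filter per layer, so that coefficient
    prediction always references a filter from the same layer (sketch)."""
    def __init__(self):
        self.latest = {}

    def store(self, layer, coeffs):
        self.latest[layer] = coeffs

    def reference_for(self, layer):
        return self.latest.get(layer)   # None if no filter coded in this layer

bank = ReferenceFilterBank()
bank.store(layer_from_nal_ref_idc(3), [1, 2, 1])   # a Layer 0 slice's filter
bank.store(layer_from_nal_ref_idc(0), [0, 4, 0])   # a Layer 3 slice's filter
# A new Layer 0 slice predicts from the Layer 0 filter, never from Layer 3,
# so dropping Layer 3 for temporal scalability cannot break the prediction.
assert bank.reference_for(0) == [1, 2, 1]
```

Because the decoder derives the same layer from nal_ref_idc and maintains an identical bank, both sides agree on the reference filter even when lower layers are discarded.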
  • An example of using nal_ref_idc has been described above, assuming a syntax configuration similar to that of H.264/AVC.
  • However, if the encoded data includes information indicating the layer at the time of encoding, such as the syntax called temporal_id in SVC (Scalable Video Coding), an extension of H.264/AVC, that information may be used as it is.
  • The multiplexing position of such layer-specifying information may be the header of the NAL unit, or the syntax of each level shown in FIG. 6; for example, multiplexing into the picture parameter set syntax, which is a high level syntax, or the slice header syntax, which is a slice level syntax, can be considered.
  • According to the configuration of FIG. 21B, the delay until the encoded data is output during the encoding process can be suppressed.
  • In the configuration of FIG. 21A, the filter coefficient information and the filter application information need to be assembled at the head of the slice data.
  • In contrast, when generating encoded data according to the configuration of FIG. 21B, if the filter coefficients are set before the slice is encoded, the encoded data of each maximum-size filter application determination region can be generated as soon as its encoding is completed. In this case, by using filter coefficients calculated from the image of the same slice type encoded immediately before, a filter with appropriate filter characteristics can be set even before the slice is encoded.
  • The instructions shown in the processing procedures of the above embodiments can be executed based on a program, that is, software.
  • By storing this program in advance and reading it, a general-purpose computer system can obtain the same effects as those obtained by the above-described moving picture encoding apparatus and decoding apparatus.
  • The instructions described in the above embodiments are recorded, as a program executable by a computer, on a magnetic disk (flexible disk, hard disk, etc.), an optical disk (CD-ROM, CD-R, CD-RW, DVD-ROM, DVD±R, DVD±RW, etc.), semiconductor memory, or a similar recording medium. As long as the recording medium is readable by the computer or embedded system, any storage format may be used.
  • When the computer reads the program from the recording medium and causes the CPU to execute the instructions described in the program, the same operations as those of the moving picture encoding apparatus and decoding apparatus of the above embodiments can be realized.
  • When the computer acquires or reads the program, it may be acquired or read through a network.
  • Based on the instructions of the program installed from the recording medium into the computer or embedded system, the OS (operating system), database management software, MW (middleware) such as network software, or the like running on the computer may execute a part of each process for realizing this embodiment.
  • The recording medium in the present disclosure is not limited to a medium independent of the computer or embedded system, and also includes a recording medium in which a program transmitted via a LAN, the Internet, or the like is downloaded and stored, or temporarily stored.
  • The number of recording media is not limited to one; the case where the processing in this embodiment is executed from a plurality of media is also included in the recording medium of the present disclosure, and the media may have any configuration.
  • The computer or embedded system in the present disclosure executes each process in this embodiment based on a program stored in a recording medium, and may have any configuration: a single device such as a personal computer or microcomputer, or a system in which a plurality of apparatuses are connected via a network.
  • The computer in the embodiments of the present disclosure is not limited to a personal computer; it is a general term for devices and apparatuses capable of realizing the functions of the embodiments by a program, including an arithmetic processing unit or microcomputer included in an information processing device.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to a method in which quantized transform coefficient information is generated by transforming and quantizing a prediction error image that indicates the difference between an input image containing a pixel signal and a prediction image for the input image. A local decoded image is generated by inverse quantization and inverse transformation of the quantized transform coefficient information. Filter coefficient information is generated indicating the filter coefficients of one or more filters applied to a pixel region in the local decoded image. Processing units comprise one or more coding blocks representing pixel regions, and filter application information is generated for each processing unit. The filter application information indicates whether or not filter processing is to be applied to the coding blocks, and also indicates one or more candidates for the filters to be applied when a filter is applied. The filter application information is generated on the basis of block division information indicating how the input image to be encoded is divided into coding blocks. When a filter is to be applied to processing units comprising one or more encoded blocks, and at least two filters are present, the filters to be applied are switched according to the filter application information and also according to an index calculated from the pixel signal contained in the processing units to which the filters are to be applied.
PCT/JP2011/060430 2010-10-12 2011-04-28 Encoding method and decoding method for video images WO2012049876A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US39229310P 2010-10-12 2010-10-12
US61/392,293 2010-10-12

Publications (1)

Publication Number Publication Date
WO2012049876A1 true WO2012049876A1 (fr) 2012-04-19

Family

ID=45938114

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/060430 WO2012049876A1 (fr) 2010-10-12 2011-04-28 Encoding method and decoding method for video images

Country Status (1)

Country Link
WO (1) WO2012049876A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014006997A1 (fr) * 2012-07-06 2014-01-09 株式会社エヌ・ティ・ティ・ドコモ Video predictive encoding device, video predictive encoding method, video predictive encoding program, video predictive decoding device, video predictive decoding method, and video predictive decoding program
WO2017195532A1 (fr) * 2016-05-13 2017-11-16 シャープ株式会社 Image decoding device and image encoding device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003179933A (ja) * 2001-09-12 2003-06-27 Matsushita Electric Ind Co Ltd Image encoding method and image decoding method
WO2009110160A1 (fr) * 2008-03-07 2009-09-11 株式会社 東芝 Dynamic image encoding/decoding method and device
WO2009110559A1 (fr) * 2008-03-07 2009-09-11 株式会社 東芝 Dynamic image encoding/decoding device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003179933A (ja) * 2001-09-12 2003-06-27 Matsushita Electric Ind Co Ltd Image encoding method and image decoding method
WO2009110160A1 (fr) * 2008-03-07 2009-09-11 株式会社 東芝 Dynamic image encoding/decoding method and device
WO2009110559A1 (fr) * 2008-03-07 2009-09-11 株式会社 東芝 Dynamic image encoding/decoding device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TAKESHI CHUJOH ET AL.: "Specification and experimental results of Quadtree-based Adaptive Loop Filter", ITU - TELECOMMUNICATIONS STANDARDIZATION SECTOR STUDY GROUP 16 QUESTION 6 VIDEO CODING EXPERTS GROUP (VCEG), VCEG-AK22, 37TH MEETING, April 2009 (2009-04-01), YOKOHAMA, JAPAN, pages 1 - 11 *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI650005B (zh) * 2012-07-06 2019-02-01 Ntt都科摩股份有限公司 Dynamic image predictive coding method and dynamic image predictive decoding method
CN107743237A (zh) * 2012-07-06 2018-02-27 株式会社Ntt都科摩 Video predictive encoding method and video predictive decoding method
TWI513283B (zh) * 2012-07-06 2015-12-11 Ntt Docomo Inc Dynamic image predictive coding device, dynamic image predictive coding method, dynamic image predictive coding program, dynamic image prediction decoding device, dynamic image prediction decoding method and dynamic image prediction decoding program
TWI552582B (zh) * 2012-07-06 2016-10-01 Ntt Docomo Inc Video predictive encoding device, video predictive encoding method, video predictive encoding program, video predictive decoding device, video predictive decoding method, and video predictive decoding program
RU2601196C2 (ru) * 2012-07-06 2016-10-27 Нтт Докомо, Инк. Video predictive encoding device, video predictive encoding method, video predictive encoding program, video predictive decoding device, video predictive decoding method, and video predictive decoding program
TWI602427B (zh) * 2012-07-06 2017-10-11 Ntt Docomo Inc Video predictive encoding method and video predictive decoding method
CN104396258A (zh) * 2012-07-06 2015-03-04 株式会社Ntt都科摩 Video predictive encoding device, video predictive encoding method, video predictive encoding program, video predictive decoding device, video predictive decoding method, and video predictive decoding program
CN104396258B (zh) * 2012-07-06 2017-12-29 株式会社Ntt都科摩 Video predictive encoding device, video predictive encoding method, video predictive encoding apparatus, video predictive decoding device, video predictive decoding method, and video predictive decoding apparatus
CN107743237B (zh) * 2012-07-06 2020-09-25 株式会社Ntt都科摩 Video predictive encoding method and video predictive decoding method
WO2014006997A1 (fr) * 2012-07-06 2014-01-09 株式会社エヌ・ティ・ティ・ドコモ Dispositif de codage prédictif de vidéo, procédé de codage prédictif de vidéo, programme de codage prédictif de vidéo, dispositif de décodage prédictif de vidéo, procédé de décodage prédictif de vidéo, programme de décodage prédictif de vidéo
US10277916B2 (en) 2012-07-06 2019-04-30 Ntt Docomo, Inc. Video predictive encoding device and system, video predictive decoding device and system
US10666965B2 (en) 2012-07-06 2020-05-26 Ntt Docomo, Inc. Video predictive encoding device and system, video predictive decoding device and system
US10666964B2 (en) 2012-07-06 2020-05-26 Ntt Docomo, Inc. Video predictive encoding device and system, video predictive decoding device and system
US10681368B2 (en) 2012-07-06 2020-06-09 Ntt Docomo, Inc. Video predictive encoding device and system, video predictive decoding device and system
WO2017195532A1 (fr) * 2016-05-13 2017-11-16 シャープ株式会社 Dispositif de décodage d'image et dispositif de codage d'image

Similar Documents

Publication Publication Date Title
KR102165340B1 (ko) Method for determining a chroma component quantization parameter and device using the same
KR101971077B1 (ko) Method for encoding and decoding a quantization matrix and device using the same
RU2736421C1 (ru) Image encoding and decoding method and encoding and decoding device
US11889098B2 (en) Method of coding and decoding images, coding and decoding device and computer programs corresponding thereto
JP7297918B2 (ja) Color transform for video coding
JP2023158204A (ja) Image or video coding based on signaling of information related to transform skip and palette coding
WO2012049876A1 (fr) Encoding method and decoding method for video images
WO2014084674A2 (fr) Intra prediction method and intra prediction apparatus using residual transform
WO2011161823A1 (fr) Video encoding method and video decoding method
KR102228577B1 (ko) Method for encoding and decoding a quantization matrix and device using the same
RU2808075C1 (ru) Image encoding and decoding method, encoding and decoding device, and corresponding computer programs
RU2782400C2 (ру) Image encoding and decoding method, encoding and decoding device, and corresponding computer programs
WO2012169054A1 (fr) Video encoding method and device, and video decoding method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11832318

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11832318

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP