WO2012169054A1 - Video encoding method and apparatus, and video decoding method and apparatus - Google Patents

Video encoding method and apparatus, and video decoding method and apparatus

Info

Publication number
WO2012169054A1
Authority
WO
WIPO (PCT)
Prior art keywords
filter
image
information
unit
encoding
Prior art date
Application number
PCT/JP2011/063290
Other languages
English (en)
Japanese (ja)
Inventor
隆志 渡辺
山影 朋夫
Original Assignee
株式会社東芝
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社東芝
Priority to PCT/JP2011/063290
Publication of WO2012169054A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/80: Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N 19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102: Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/117: Filters, e.g. for pre-processing or post-processing
    • H04N 19/134: Adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N 19/136: Incoming video signal characteristics or properties
    • H04N 19/46: Embedding additional information in the video signal during the compression process
    • H04N 19/60: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N 19/61: Transform coding in combination with predictive coding
    • H04N 19/70: Characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N 19/90: Using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N 19/96: Tree coding, e.g. quad-tree coding

Definitions

  • Embodiments described herein relate generally to a video encoding method and apparatus, and a video decoding method and apparatus.
  • As a filter used in video coding, an adaptive loop filter (hereinafter referred to as ALF) is known.
  • a moving image encoding apparatus using ALF encodes filter information and a quantized coefficient obtained by transforming and quantizing a difference between an input image and a predicted image, and transmits the encoded information to the moving image decoding apparatus.
  • This filter information includes information indicating a filter coefficient used for filtering the decoded image and a pixel area to be filtered using the filter coefficient.
  • the moving image decoding apparatus performs filter processing on the corresponding pixel region using the received filter information.
  • A technique is disclosed in which one filter is used for the filter processing of the slice to be encoded, that filter being the most recently encoded one among the filters used in slices of the same slice type.
  • The moving image encoding method of the embodiment generates an error image indicating the difference between an input image and a predicted image, transforms and quantizes the error image to generate a quantized coefficient, regenerates the error image by inversely quantizing and then inversely transforming the quantized coefficient, and generates a decoded image by adding the regenerated error image and the predicted image.
  • A filtering process is performed on the decoded image to generate a reference image. The method generates first application information indicating whether or not a reference filter, that is, a filter applied to a reference image, is applied to the filtering process of the decoded image, and first reference information indicating one of the reference filters that can be referred to when generating a predicted image. The predicted image is generated using the reference image, and the quantized coefficient, the first application information, and the first reference information are encoded.
  • FIG. 1 is a diagram of a moving image encoding apparatus according to Embodiment 1.
  • FIG. 2 is a diagram of an example of coding block division.
  • FIG. 10 is a diagram of a moving image encoding apparatus according to the third embodiment.
  • FIG. 10 is a diagram of a moving picture decoding apparatus according to the third embodiment.
  • FIG. 10 is a diagram of a moving picture decoding apparatus according to the fourth embodiment.
  • FIG. 10 is a diagram of a moving picture decoding apparatus according to the fifth embodiment.
  • FIG. 10 is a diagram of a moving picture encoding apparatus according to the sixth embodiment.
  • FIG. 16 is a diagram of a moving picture decoding apparatus according to the sixth embodiment.
  • FIG. 1 is a block diagram showing a functional configuration of a moving picture encoding apparatus 100 according to the present embodiment.
  • the moving picture coding apparatus 100 according to the present embodiment includes a moving picture coding unit 110 and a coding control unit 109.
  • the encoding control unit 109 controls the moving image encoding unit 110.
  • the moving image encoding unit 110 encodes an input image to be encoded.
  • the moving image encoding unit 110 divides one frame into slices composed of a plurality of pixel regions and performs encoding for each slice.
  • the moving image encoding unit 110 includes a predicted image generation unit 101, a subtraction unit 102, a transform / quantization unit 103, an entropy encoding unit 104, an inverse quantization / inverse transform unit 105, an addition unit 106, a loop filter unit 107, and A reference image buffer 108 is provided.
  • the input image is input to the moving image encoding unit 110.
  • the subtraction unit 102 obtains a difference between the predicted image generated by the predicted image generation unit 101 and the input image, and generates an error image that is the difference.
  • the transform / quantization unit 103 transforms the error image to generate transform coefficients, and quantizes the transform coefficients to generate quantized coefficients.
  • Examples of the transform applied to the error image include orthogonal transforms such as the DCT (Discrete Cosine Transform), the wavelet transform, independent component analysis, and the like.
  • the transform / quantization unit 103 transforms the error image using any one of these transform methods. Also, the transform / quantization unit 103 quantizes the transform coefficient using the quantization parameter set by the encoding control unit 109.
  • the transform / quantization unit 103 outputs the quantized coefficient to the entropy coding unit 104 and the inverse quantization / inverse transform unit 105.
  • the entropy encoding unit 104 receives the quantization coefficient from the transform / quantization unit 103.
  • The entropy encoding unit 104 also receives filter information, described in detail later, from the loop filter unit 107.
  • the entropy encoding unit 104 receives an encoding parameter from the encoding control unit 109. Then, the entropy encoding unit 104 entropy encodes the quantization coefficient, the filter information, and the encoding parameter to generate encoded data.
  • the entropy encoding unit 104 uses, for example, Huffman encoding or arithmetic encoding for entropy encoding.
  • the encoding parameter includes known information such as prediction mode information, motion information, quantization parameter, and encoding block division information (details will be described later).
  • the entropy encoding unit 104 outputs the generated encoded data to the video decoding device via a network or a storage medium.
  • the moving image decoding apparatus decodes the received encoded data, generates an output image, and outputs it (details will be described later).
  • the entropy encoding unit 104 outputs the encoded data to the video decoding device in accordance with a predetermined syntax.
  • the syntax is a rule for setting encoded data. Details of the syntax used in the present embodiment will be described later.
  • the inverse quantization / inverse transform unit 105 inversely quantizes the quantization coefficient and then inversely transforms it to generate an error image. That is, the inverse quantization / inverse transform unit 105 performs a process opposite to the process of the transform / quantization unit 103 on the quantization coefficient. Specifically, when the transform / quantization unit 103 performs wavelet transform and quantization, the inverse quantization / inverse transform unit 105 performs inverse quantization and inverse wavelet transform.
  • the adding unit 106 receives an error image from the inverse quantization / inverse transform unit 105 and receives a predicted image from the predicted image generation unit 101. Then, the adding unit 106 adds the error image and the predicted image to generate a decoded image.
  • the decoded image is filtered by the loop filter unit 107 and then stored in the reference image buffer 108 as a reference image.
  • the loop filter unit 107 performs a filtering process on the decoded image generated by the adding unit 106 to generate a restored image. Thereby, the loop filter unit 107 restores the decoded image to an image closer to the input image. In the present embodiment, the loop filter unit 107 performs filter processing in units of slices.
  • the loop filter unit 107 stores the generated restored image in the reference image buffer 108 as a reference image.
  • The loop filter unit 107 also stores the filter used in the filtering process of each reference image in the reference image buffer 108 as a reference filter, in association with the corresponding reference image.
  • Details of the loop filter unit 107 will be described later.
  • the reference image buffer 108 is a storage medium such as a frame memory.
  • The reference image buffer 108 stores the filtered decoded images as reference image 1, reference image 2, and so on.
  • One reference image is stored in the reference image buffer 108 for each frame.
  • FIG. 1 shows a case where the reference image buffer 108 stores reference images for two images (for two frames) as an example.
  • the number of reference images stored in the reference image buffer 108 is not limited to two, and is determined according to the capacity and setting of the reference image buffer 108.
  • the reference image buffer 108 stores reference filter information.
  • the reference filter information is information indicating the filter coefficient used for the filter processing of each slice of a reference image that is an encoded image, that is, a decoded image that has been filtered.
  • the reference image buffer 108 stores the reference filter information as reference filter information 1 to reference filter information 2 corresponding to each of the reference images 1 to 2.
  • the loop filter unit 107 executes the filtering process for each slice. Therefore, one reference filter information is stored in the reference image buffer 108 for each slice. Therefore, one reference image is associated with a plurality of pieces of reference filter information corresponding to each of a plurality of slices included in the reference image.
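  • As a rough illustration of this organization, the following is a minimal sketch in Python; the class and method names are ours, not the patent's, and the capacity of two frames merely mirrors the example of FIG. 1.

        from dataclasses import dataclass, field

        @dataclass
        class ReferenceEntry:
            # One reference frame plus the filters used when it was filtered.
            # Because the loop filter runs per slice, one reference image is
            # associated with one set of reference filter information per slice.
            image: object                                       # the filtered decoded frame
            slice_filters: list = field(default_factory=list)   # one coefficient set per slice

        class ReferenceImageBuffer:
            def __init__(self, capacity=2):                     # FIG. 1 shows two frames
                self.capacity = capacity
                self.entries = []                               # reference image 1, 2, ...

            def store(self, image, slice_filters):
                if len(self.entries) == self.capacity:
                    self.entries.pop(0)                         # drop the oldest frame
                self.entries.append(ReferenceEntry(image, list(slice_filters)))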
  • the predicted image generation unit 101 performs a prediction process and generates a predicted image.
  • For generating a predicted image, one or more reference images to be referred to during inter-screen prediction (that is, during predicted image generation) are used from among the plurality of reference images stored in the reference image buffer 108.
  • the predicted image generation unit 101 performs a prediction process using the set reference image to generate a predicted image.
  • the prediction process performed by the predicted image generation unit 101 includes a known prediction process.
  • Examples of the prediction process include, in addition to the above-described inter-screen prediction based on motion compensation, intra-screen prediction using already encoded pixels within the screen.
  • the predicted image generation unit 101 acquires a reference image to be referred to during inter-screen prediction from the reference image buffer 108 and performs motion compensation.
  • the predicted image generation unit 101 selects a reference image to be referred to at the time of inter-screen prediction from the reference image buffer 108 under the control of the encoding control unit 109. Then, the predicted image generation unit 101 performs motion compensation prediction by determining which of the reference images is to be referred to and the motion vector for each coding block set by the coding control unit 109.
  • the encoding control unit 109 controls the moving image encoding unit 110. Specifically, the encoding control unit 109 sets encoding block division information, quantization parameters, information indicating a reference image to be referred to during inter-screen prediction, and the like, and outputs the information to the moving image encoding unit 110. Thereby, the encoding control unit 109 controls the encoding process in the moving image encoding unit 110.
  • the encoded block division information is information indicating a processing unit of encoding in the moving image encoding unit 110.
  • The coding block division information is information indicating how the pixels included in a processing target slice are divided into coding processing units.
  • the encoding control unit 109 sets the encoding block division information as follows, for example.
  • the moving image encoding unit 110 divides one frame (one image) into a plurality of blocks, and performs encoding for each encoded block obtained by the division.
  • the size of this encoding block is freely set.
  • moving image encoding apparatus 100 uses a moving image encoding method that uses variable-size encoded blocks in a screen.
  • the encoding block is not limited to such a variable size encoding block, and for example, a fixed size encoding block may be used.
  • Here, a case will be described where the encoding control unit 109 sets the encoded block division information using a quadtree structure 402 as shown in FIG. 2.
  • FIG. 2 shows an example of coding block division.
  • By setting the encoding block division information using the quadtree structure 402, the encoding control unit 109 can flexibly adjust the size of the variable-size encoding blocks.
  • The encoding control unit 109 sets the encoding block division information by setting the size of the encoding blocks based on the encoding cost represented by Expression (1):

        cost = D + λ × R    (1)

  • Here, D represents the sum of squared errors between the input image and the predicted image, R represents the code amount related to the predicted image, such as motion vectors and block shapes, and λ is a Lagrange multiplier determined based on the value of the quantization parameter.
  • The encoding control unit 109 calculates the encoding cost of Expression (1) both when encoding is performed using the set encoding block as it is and when the set encoding block is further divided into four encoding blocks, and then selects the size of the encoding block with the lowest encoding cost.
  • the encoding control unit 109 sets the size of a selectable encoding block by adjusting the parameter description indicating the block division in the syntax.
  • the encoding control unit 109 sets the size of the encoding block by controlling the description of max_coding_block_size, which is a parameter in the syntax, and the description of max_coding_layer.
  • Max_coding_block_size indicates the maximum size of the encoded block.
  • max_coding_layer indicates the maximum depth of the quadtree structure. From these two parameters, min_coding_block_size, the minimum size of an encoding block, is determined.
  • each encoded block can be further divided into four encoded blocks. Therefore, the encoding control unit 109 adds information on whether or not to further divide a block to an encoded block having a size larger than min_coding_block_size.
  • For max_coding_block_size and max_coding_layer, fixed values may be used for the sequence, or the values may be varied in units such as slices; in the latter case, max_coding_block_size and max_coding_layer must be encoded in each such unit.
  • Alternatively, values fixed in advance and shared by the encoding side and the decoding side may be used, in which case the parameters need not be encoded.
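  • As a minimal sketch of how these parameters interact (assuming, as is conventional for a quadtree, that each additional layer halves the block side; the function names are ours):

        def min_coding_block_size(max_coding_block_size, max_coding_layer):
            # Each quadtree layer splits a block into four, halving its side,
            # so the minimum size follows from the maximum size and depth.
            return max_coding_block_size >> max_coding_layer

        def prefer_split(d_whole, r_whole, d_split, r_split, lam):
            # Expression (1): cost = D + lambda * R.  True if encoding the
            # block as four sub-blocks is cheaper than encoding it whole.
            return (d_split + lam * r_split) < (d_whole + lam * r_whole)

        # e.g. max_coding_block_size = 64, max_coding_layer = 3 -> 8x8 minimum blocks
        assert min_coding_block_size(64, 3) == 8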
  • The moving image encoding unit 110 can thereby perform encoding in consideration of the characteristics of each pixel region in the image.
  • FIG. 3 shows a detailed functional block diagram of the loop filter unit 107.
  • the loop filter unit 107 includes a loop filter information generation unit 111, a loop filter coefficient setting unit 112, and a loop filter processing unit 113.
  • the loop filter information generation unit 111 receives an input image, a decoded image, reference filter information, and encoded block division information. Then, the loop filter information generation unit 111 generates filter information and third application information.
  • The filter information includes either prediction information or a first filter coefficient.
  • the first filter coefficient is a filter coefficient generated by the loop filter information generation unit 111 for the decoded image to be filtered.
  • The prediction information is information used for predicting the filter coefficients to be applied to the filtering process of the decoded image; specifically, it includes first reference information, first application information, and second application information.
  • The first reference information is information indicating the reference filter to be applied to the filtering process of the decoded image; that is, it indicates one piece of reference filter information among the plural pieces of reference filter information stored in the reference image buffer 108. More specifically, with the reference filter information applied to reference images that can be referred to during inter-screen prediction taken as reference candidates, the first reference information indicates one of those candidates. In other words, by indicating the reference filter applied to the filtering process of the decoded image, the first reference information indicates a filter coefficient prediction method.
  • The first application information is information indicating whether or not the reference filter information is used for filtering the decoded image to be filtered. Specifically, it indicates whether the reference filter information of a filter used for the filtering process of a reference image, that is, an already encoded (filtered) decoded image, is applied to the filtering process of the decoded image to be filtered. In the present embodiment, as the information indicating that the reference filter information is not used, the first application information includes information indicating that a first filter coefficient generated separately from the decoded image is applied to the filtering process of the decoded image.
  • The second application information is information indicating, for each class, whether or not a filter is applied, where a class is the classification result obtained by classifying each pixel area included in the decoded image (slice) to be filtered according to a predetermined classification method.
  • In other words, the second application information indicates whether or not a filter is applied to each pixel area according to the result of classifying the pixel areas of the decoded image with the predetermined classification method.
  • the second application information is created when the filter process is performed using the reference filter information.
  • The second application information is information indicating that a filter is applied to some of the classes and not applied to the remaining classes. For this reason, although details will be described later, in the present embodiment, when the filtering process is performed using the reference filter information, the filter can be applied to only a part of the classes.
  • The third application information is information indicating the presence or absence of filter application for each encoding block.
  • the presence / absence of application of a filter indicates whether or not to perform filter processing using a filter.
  • The methods of generating the filter information (that is, the prediction information, which includes the first reference information, the first application information, and the second application information, or the first filter coefficient) and the third application information will be described in detail later.
  • the loop filter coefficient setting unit 112 receives the first filter coefficient or the prediction information from the loop filter information generation unit 111 and reads the reference filter information stored in the reference image buffer 108. Then, the loop filter coefficient setting unit 112 calculates the filter coefficient of the filter that is actually applied to the decoded image, and outputs it to the loop filter processing unit 113.
  • the loop filter processing unit 113 receives a filter coefficient, a decoded image, third application information indicating the presence / absence of filter application for each encoded block, and encoded block division information. Then, the loop filter processing unit 113 applies the filter specified by the received filter coefficient to the region specified by the third application information, and performs a filter process to generate a restored image.
  • the filter coefficient set by the loop filter coefficient setting unit 112 is stored in the reference image buffer 108 as reference filter information. Also, the filter information including the first filter coefficient or prediction information generated by the loop filter information generation unit 111 and the third application information is output to the entropy encoding unit 104. Further, the restored image generated by the loop filter processing unit 113 is stored in the reference image buffer 108 as a reference image and used for generating a predicted image.
  • The loop filter information generation unit 111 includes a first reference information generation unit 111A, a first application information generation unit 111B, a first filter coefficient generation unit 111C, a second application information generation unit 111D, and a third application information generation unit 111E.
  • The first filter coefficient generation unit 111C generates the first filter coefficient. Specifically, based on the decoded image and the input image, the first filter coefficient generation unit 111C sets one or more first filter coefficients so as to minimize the mean square error between the input image and the image obtained by filtering the decoded image.
  • a two-dimensional Wiener filter generally used in image restoration is used as a filter.
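  • The following numpy sketch illustrates the idea: estimate the coefficients of a small two-dimensional filter that minimizes the mean square error between the filtered decoded image and the input image. This is the least-squares (Wiener) solution; the tap shape and the exclusion of border pixels are simplifying assumptions of ours.

        import numpy as np

        def wiener_coefficients(decoded, original, tap=5):
            # Solve min_h || original - (decoded convolved with h) ||^2 for a
            # tap x tap filter h, building one equation per interior pixel.
            r = tap // 2
            h, w = decoded.shape
            rows, targets = [], []
            for y in range(r, h - r):
                for x in range(r, w - r):
                    rows.append(decoded[y - r:y + r + 1, x - r:x + r + 1].ravel())
                    targets.append(original[y, x])
            A = np.asarray(rows, dtype=np.float64)
            b = np.asarray(targets, dtype=np.float64)
            coeff, *_ = np.linalg.lstsq(A, b, rcond=None)   # least-squares solution
            return coeff.reshape(tap, tap)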
  • FIG. 4 is a schematic diagram showing switching of the filters for each class.
  • the decoded image to be processed is divided into pixel areas of 4 ⁇ 4 pixels (see FIG. 4A), and classified into a plurality of classes according to the image characteristics in each pixel area.
  • FIG. 4 illustrates a case where the filter to be applied is switched for each class that is the classification result (see FIG. 4B).
  • the first filter coefficient generation unit 111C classifies each pixel area into 16 classes based on the image features in the pixel area (see FIGS. 4A and 4B). Note that the number of classes to be classified is not limited to 16. An activity or the like may be used as the image feature.
  • The first filter coefficient generation unit 111C assigns each class an index from 0 to 15 and assigns one filter to one or more classes having consecutive indexes (see FIG. 4B).
  • The filter assignment is determined according to the coding cost of Expression (1) above. For example, the encoding cost is first calculated with a different filter assigned to every class; the encoding cost is then calculated for the case where one filter is assigned to two classes having consecutive indexes, and the two costs are compared.
  • The first filter coefficient generation unit 111C selects the filter assignment with the lowest encoding cost, and can thereby perform an appropriate filter assignment for each class.
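  • A hedged sketch of this assignment search: start with one filter per class, then repeatedly merge runs of consecutive classes into a shared filter whenever doing so lowers the Expression (1) cost. The cost callback is a placeholder for the real D + λR evaluation.

        def assign_filters(num_classes, cost_of_grouping):
            # cost_of_grouping(groups) -> coding cost of Expression (1) when each
            # group (a run of consecutive class indexes) shares one filter.
            groups = [[c] for c in range(num_classes)]      # one filter per class
            improved = True
            while improved and len(groups) > 1:
                improved = False
                best_cost = cost_of_grouping(groups)
                for i in range(len(groups) - 1):
                    merged = groups[:i] + [groups[i] + groups[i + 1]] + groups[i + 2:]
                    if cost_of_grouping(merged) < best_cost:   # merging saved cost
                        groups, improved = merged, True
                        break
            return groups                                   # each group shares one filter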
  • The unit by which the applied filter is switched is not limited to a square pixel region as shown in FIG. 4. That is, the pixel area serving as the unit for dividing the decoded image may be a rectangular block such as 8 × 2 pixels or 4 × 2 pixels, a line-shaped pixel area such as 8 × 1 pixels or 4 × 1 pixels, or a single pixel.
  • The activity serving as the switching index may be calculated from the differences against surrounding pixel values, using a differential filter of the kind used in known edge detection or the like.
  • The following is a specific example of the switching index when the filter is switched in units of one pixel. In Expression (2), D(x, y) indicates the pixel value of the decoded image at pixel position (x, y), and the index is the sum of the absolute value of a horizontal differential filter result and the absolute value of a vertical differential filter result; with Laplacian-type taps, a representative form is

        act(x, y) = |2D(x, y) − D(x − 1, y) − D(x + 1, y)| + |2D(x, y) − D(x, y − 1) − D(x, y + 1)|    (2)

  • The value obtained from Expression (2) is calculated from the relationship between the pixel value at pixel position (x, y) and the surrounding pixel values. Expression (2) is therefore effective for expressing local features of an image, and can be used as an index for selecting a filter for pixel position (x, y).
  • The value obtained by Expression (2) may also be calculated over a certain range, for example a surrounding N × N pixel region, and the sum used as the filter switching index.
  • Although Expression (2) above is given as an index for switching the filter in units of pixels, when the filter is switched with a pixel area composed of a plurality of pixels as one unit, the sum over the pixel area of the indexes obtained by Expression (2) for each pixel may, for example, be used as the filter switching index for the entire area.
  • Expression (3) below is the average of the absolute values of the pixel value differences between the pixel at pixel position (x, y) and the surrounding pixels. Here, N indicates the pixel range over which the switching index is calculated, and D(x, y) indicates the pixel value of the decoded image at pixel position (x, y); a representative form consistent with these definitions is

        act(x, y) = (1 / ((2N + 1)² − 1)) Σ_{|i| ≤ N, |j| ≤ N, (i, j) ≠ (0, 0)} |D(x, y) − D(x + i, y + j)|    (3)

  • Expression (3) can also be used as a filter switching index, as a value representing the local structure of the image. Although Expression (3) takes the average of the pixel value differences, the maximum value may be used instead of the average.
  • When the filter is switched with a pixel area composed of a plurality of pixels as one unit, for example, the average of Expression (3) over the pixel area may be calculated and used as the filter switching index for the entire pixel area. The same applies when the maximum of Expression (3) is used instead of the average.
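  • A small sketch of both switching indexes, implementing the representative forms given above (the original expressions are not reproduced verbatim in this text, so the exact taps are an assumption of ours):

        import numpy as np

        def index_expr2(D, x, y):
            # Expression (2) style: |horizontal differential| + |vertical
            # differential|, using Laplacian-type taps around (x, y).
            return (abs(2 * int(D[y, x]) - int(D[y, x - 1]) - int(D[y, x + 1]))
                    + abs(2 * int(D[y, x]) - int(D[y - 1, x]) - int(D[y + 1, x])))

        def index_expr3(D, x, y, N=1, use_max=False):
            # Expression (3) style: average (or maximum) absolute difference
            # between pixel (x, y) and the surrounding pixels within range N.
            window = D[y - N:y + N + 1, x - N:x + N + 1].astype(np.int64)
            diffs = np.abs(window - np.int64(D[y, x])).ravel()
            diffs = np.delete(diffs, diffs.size // 2)       # exclude the centre pixel
            return diffs.max() if use_max else diffs.mean()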
  • Alternatively, a switching index that takes into account both the activity of each block and the luminance gradient direction may be used, according to Expression (4) below.
  • Expression (4) shows an example in which fIdx, an index from 0 to 15, is assigned as the switching index.
  • Clip3(a, b, c) represents the process of clipping the variable c so that it falls within the range from the minimum value a to the maximum value b.
  • var_tab, avg_var, and direction are given by Expressions (5) to (7), respectively.
  • BitDepthY indicates the number of bits representing a pixel value; for example, when BitDepthY is 8, each pixel takes a value from 0 to 255. Further, (x, y) indicates the pixel position.
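  • Expressions (4) to (7) themselves are not reproduced in this text, so the following is only a loose Python sketch of the kind of index they describe: a Clip3-bounded combination of a table-quantized activity and a gradient direction, yielding fIdx in 0 to 15. Every table entry and threshold below is an arbitrary placeholder of ours, not the patent's.

        def clip3(a, b, c):
            # Clip the variable c into the range [a, b], as in Expression (4).
            return max(a, min(b, c))

        def f_idx(activity, grad_h, grad_v, bit_depth_y=8):
            # Toy 0..15 switching index: 8 activity bins x 2 direction bins.
            var_tab = [0, 1, 2, 2, 3, 3, 4, 4, 5, 5, 5, 6, 6, 6, 7, 7]  # placeholder
            # normalise activity by the pixel dynamic range (BitDepthY bits)
            avg_var = clip3(0, 15, int(activity * 16) >> bit_depth_y)
            direction = 1 if grad_h > 2 * grad_v else 0                 # placeholder rule
            return clip3(0, 15, var_tab[avg_var] + 8 * direction)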
  • the first reference information generation unit 111A generates first reference information.
  • The first reference information generation unit 111A first sets, as reference candidates, the pieces of reference filter information that were applied to reference images the predicted image generation unit 101 can refer to during inter-screen prediction, from among the reference filter information stored in the reference image buffer 108.
  • As described above, the reference image buffer 108 stores each reference image in association with the reference filter information used for its filtering process, so the first reference information generation unit 111A reads the relevant reference filter information from the reference image buffer 108 and thereby sets the reference candidates.
  • From the reference candidates, the first reference information generation unit 111A then sets one piece of reference filter information as the reference filter information to be applied to the decoded image (slice) to be filtered.
  • Any one of the reference candidates may be chosen; for example, the first reference information generation unit 111A may select, based on the coding cost, the reference filter information with the lowest coding cost as the one to be applied to the slice to be filtered. The coding cost may be calculated using Expression (1) above.
  • the first application information generation unit 111B generates first application information.
  • When the reference filter information stored in the reference image buffer 108 is used as the filter applied to the filtering process of the decoded image, the amount of code generated by the entropy encoding unit 104 can be reduced. This is because, when the reference filter information is used, the entropy encoding unit 104 only has to encode the prediction information, which indicates a filter coefficient prediction method, instead of the filter coefficients themselves. However, when the decoded image is filtered using the reference filter information, the effect of reducing the error between the reference image and the input image may be smaller than when the filtering is performed with a filter specified by a first filter coefficient newly calculated from the decoded image to be filtered.
  • The first application information generation unit 111B therefore compares the encoding cost of applying the filter specified by the reference filter information indicated by the first reference information with the encoding cost of applying the filter specified by the first filter coefficient.
  • When the coding cost of applying the filter specified by the reference filter information is the lower of the two, the first application information generation unit 111B generates first application information indicating that the filtering process is performed using the reference filter information. Conversely, when the coding cost of applying the filter specified by the first filter coefficient is lower, it generates first application information indicating that the filter specified by the first filter coefficient generated by the first filter coefficient generation unit 111C is applied to the filtering process.
  • In this way, based on the encoding cost, the first application information generation unit 111B generates first application information indicating whether or not the reference filter information is used for the filtering process of the decoded image to be filtered.
  • the second application information generation unit 111D generates second application information.
  • the second application information generation unit 111D generates second application information when performing filter processing using the reference filter information.
  • When the reference filter information is used for filtering the decoded image to be filtered, a filter applied to an already encoded image is reused. In that case, it has already been determined which filter is applied to each class, that is, to each classification result obtained by dividing the decoded image into a plurality of pixel regions.
  • However, compared with the case where a first filter coefficient newly set for the decoded image by the first filter coefficient generation unit 111C is used for the filtering process, using the reference filter information may weaken the effect of reducing the error between the input image and the decoded image.
  • The second application information generation unit 111D therefore generates second application information indicating that the filter is applied to some of the classes and not applied to the others. Whether or not to apply the filter to each class is judged by whether the error, relative to the input image, of the restored image generated by applying the filter is smaller than the error, relative to the input image, of the decoded image.
  • FIG. 5 is a diagram illustrating an example of applying a filter to each class, that is, to each classification result of the pixel areas of the decoded image. Specifically, FIG. 5 schematically shows a case where the second application information generation unit 111D generates second application information indicating that a filter is applied to only some of the classes.
  • For example, the second application information generation unit 111D generates second application information indicating that no filter is applied for filters 3 and 4, which are assigned to classes 9 to 15.
  • The second application information generation unit 111D may also set whether to apply the filter for each class of the classification result instead of for each filter. That is, as shown in FIG. 5B, suppose that applying a filter to classes 9 to 10 and classes 14 to 15, out of classes 0 to 15, increases the error with respect to the input image, while for classes 0 to 8 and classes 11 to 13 applying a filter reduces the error. In this case, the second application information generation unit 111D generates second application information indicating that no filter is applied to classes 9 to 10 and classes 14 to 15.
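  • A minimal sketch of this per-class decision (the helper names are ours): apply the filter to a class only when the filtered pixels are closer to the input image than the unfiltered ones.

        import numpy as np

        def second_application_info(decoded, filtered, original, class_map, num_classes=16):
            # For each class, compare the squared error of the filtered pixels
            # with that of the untouched decoded pixels; keep the filter only
            # where it helps (cf. classes 9-10 and 14-15 being off in FIG. 5B).
            flags = []
            for c in range(num_classes):
                mask = (class_map == c)
                if not mask.any():
                    flags.append(False)
                    continue
                err_on = np.sum((filtered[mask].astype(np.int64) - original[mask]) ** 2)
                err_off = np.sum((decoded[mask].astype(np.int64) - original[mask]) ** 2)
                flags.append(bool(err_on < err_off))
            return flags        # the second application information, one flag per class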
  • the loop filter unit 107 controls whether to apply a filter according to the class of each pixel area of the decoded image and the second application information. For this reason, in the moving image encoding device 100, it is possible to improve the code amount reduction rate by suppressing an increase in error when performing filter processing using the reference filter information.
  • the moving image encoding apparatus 100 side can determine whether or not the error is reduced by applying a filter to each class.
  • However, the moving image decoding apparatus (described in detail later) that decodes the encoded data cannot determine by itself whether applying a filter reduces the error. For this reason, information explicitly indicating to which classes a filter is applied and to which it is not must be encoded.
  • Likewise, as in FIG. 5B, when indicating the presence or absence of filter application for each class of the classification result, it is necessary to encode information indicating to which classes the filter is applied.
  • the filter information includes second application information in addition to the prediction information or the first filter coefficient. For this reason, when applying the reference filter information to the filtering process, the entropy encoding unit 104 also encodes the second application information indicating whether or not to apply the filter for each class, and outputs the encoded data as encoded data.
  • On the other hand, when the first filter coefficient is used for the filtering process, the entropy encoding unit 104 needs to encode information on the first filter coefficient.
  • the entropy encoding unit 104 may encode the value of the first filter coefficient itself, or may execute a predetermined prediction process to encode a difference value from the predicted value.
  • A method of predicting the center coefficient from the surrounding coefficients may be used, on the assumption that the sum of the first filter coefficients is approximately constant. For example, when the first filter coefficients are represented by 8 bits, their sum often takes a value close to 256. Therefore, the result of subtracting the first filter coefficients other than the center coefficient from 256 may be used as the predicted value of the center coefficient.
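  • A small sketch of this centre-coefficient prediction for 8-bit coefficients whose sum is close to 256 (the flat coefficient layout is our assumption):

        def predict_center(coeffs, center_index, total=256):
            # Predict the centre coefficient as `total` minus the sum of the
            # other coefficients, exploiting the near-constant coefficient sum.
            return total - (sum(coeffs) - coeffs[center_index])

        def center_residual(coeffs, center_index, total=256):
            # Only the difference between the actual centre coefficient and
            # its prediction needs to be entropy coded.
            return coeffs[center_index] - predict_center(coeffs, center_index, total)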
  • The entropy encoding unit 104 may also perform the prediction process between a plurality of filters. For example, when four filters are used for a decoded image (slice) to be encoded, the value of the first filter coefficient is encoded as it is for the first filter, and for the second and subsequent filters, the first filter coefficient of the previously encoded filter may be used as the predicted value.
  • the entropy encoding unit 104 may perform prediction in the time direction in the prediction process. For example, as in the case where the reference filter information is directly applied to the filter processing, the entropy encoding unit 104 may use the filter coefficient of the reference filter as a predicted value.
  • When this predicted value is used, the scheme differs from applying the reference filter information directly to the filtering process in that difference values of the filter coefficients are encoded.
  • Using the predicted value also allows the number of filters applied to the decoded image (slice) to be encoded, and the assignment of filters to the classes resulting from the pixel-region classification, to differ from those of the reference filter information.
  • In such a case, the entropy encoding unit 104 may, for example, predict from the most common class among the applied classes; however, the same prediction process must be used on the moving image encoding device 100 side and on the moving image decoding device side.
  • For this, a common prediction process set in advance on both the moving image encoding device 100 side and the moving image decoding device side may be used, or information indicating which prediction process was performed on the moving image encoding device 100 side may be encoded and output to the moving image decoding device side.
  • the third application information generation unit 111E generates third application information indicating whether or not a filter is applied for each encoded block.
  • The third application information generation unit 111E applies a filter to each encoding block of the decoded image based on the prediction information or the first filter coefficient and on the encoding block division information, and determines, for each encoding block, whether the mean square error between the filtered decoded image and the input image is smaller than when the filter is not applied.
  • The third application information generation unit 111E then generates third application information indicating that the filter is applied to the encoding blocks for which applying the filter reduces the mean square error, that is, the blocks whose error against the input image is reduced.
  • The third application information generation unit 111E makes this determination for a plurality of values of max_filtering_layer and selects the max_filtering_layer that minimizes the coding cost.
  • max_filtering_layer is a parameter indicating down to how many layers of the quadtree used for coding block division the filter application information is set. The coding block division shape, together with the regions serving as units for the filter application decision indicated by max_filtering_layer, are referred to as filter application determination regions.
  • the third application information generation unit 111E may generate third application information indicating whether or not to apply a filter for each processing unit including a plurality of encoded blocks, in addition to the filter application information of the encoded blocks. For example, the third application information generation unit 111E may generate third application information indicating whether or not a filter is applied for each slice to be encoded. In this case, the third application information generation unit 111E calculates the cost when no filter is applied to the encoding target slice and the cost when the filter is applied for each encoding block. Then, the third application information generation unit 111E may generate the third application information in the encoding target slice by comparing these costs.
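  • A hedged sketch of the per-block decision described above (the helper names and the region representation are ours): for each filter application determination region, keep the filter only if it lowers the mean square error against the input image.

        import numpy as np

        def third_application_info(decoded, filtered, original, regions):
            # regions: list of (y0, y1, x0, x1) filter application determination
            # regions derived from the coding-block quadtree up to max_filtering_layer.
            flags = []
            for (y0, y1, x0, x1) in regions:
                ref = original[y0:y1, x0:x1].astype(np.float64)
                mse_on = np.mean((filtered[y0:y1, x0:x1] - ref) ** 2)
                mse_off = np.mean((decoded[y0:y1, x0:x1] - ref) ** 2)
                flags.append(bool(mse_on < mse_off))   # apply only where it helps
            return flags        # the third application information, one flag per region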
  • FIG. 6 shows an example of the coding block division shape and filter application determination.
  • FIG. 6A shows the division shape of the encoding block 401 in FIG. Further, “ON” and “OFF” in each coding block in FIGS. 6B to 6D indicate whether or not a filter is applied. Specifically, the filter is applied when “ON”, and the filter is not applied when “OFF”.
  • the third application information generation unit 111E sets whether to apply a filter for each coding block.
  • When the filter is set to non-applied for the slice to be encoded as a whole, the filter is applied to none of the pixels in the slice, and the per-block filter application information is discarded.
  • When the filter is applied in units of the slice to be encoded, the filter is applied according to the filter application information for each encoding block.
  • The loop filter information generation unit 111 outputs the generated first reference information, first application information, and second application information, as prediction information containing all three, to the loop filter coefficient setting unit 112 and the entropy encoding unit 104.
  • When information indicating that the reference filter is not used for the filtering process is generated as the first application information, the loop filter information generation unit 111 outputs, instead of the prediction information, the first filter coefficient generated by the first filter coefficient generation unit 111C to the loop filter coefficient setting unit 112 and the entropy encoding unit 104.
  • the loop filter information generation unit 111 outputs third application information indicating the presence / absence of filter application for each coding block to the loop filter processing unit 113 and the entropy coding unit 104.
  • In the present embodiment, the applied filter is switched for each class obtained by classifying the pixel regions according to their image characteristics; alternatively, the filter may be switched explicitly according to the third application information.
  • Next, the loop filter coefficient setting unit 112 will be described in detail.
  • the loop filter coefficient setting unit 112 receives the first filter coefficient or the prediction information, and sets the filter coefficient of the filter to be applied to the decoded image to be filtered. Specifically, it is assumed that the first application information included in the prediction information is information indicating that the reference filter information is used for the filter processing of the decoded image to be filtered. In this case, the loop filter coefficient setting unit 112 sets the filter coefficient of the reference filter information corresponding to the first reference information included in the received prediction information as a filter coefficient to be applied to the filter process.
  • On the other hand, when the first application information indicates that the reference filter information is not used, the loop filter coefficient setting unit 112 sets the first filter coefficient, calculated based on the decoded image to be filtered, as the filter coefficient to be applied to the filtering process.
  • When the first filter coefficient has been encoded as a difference from a predicted value, the loop filter coefficient setting unit 112 restores the first filter coefficient by adding the predicted value of the coefficient.
  • the loop filter coefficient setting unit 112 outputs the set filter coefficient to the loop filter processing unit 113 as a filter coefficient to be applied to the filter processing of the decoded image to be filtered.
  • the loop filter processing unit 113 receives the decoded image, the encoded block division information, and the third application information. In addition, the loop filter processing unit 113 receives a filter coefficient from the loop filter coefficient setting unit 112. Then, a filter specified by the received filter coefficient is applied to the received decoded image, and filter processing is performed.
  • the pixel region to which the loop filter processing unit 113 applies the filter is indicated by the encoded block division information and the third application information as described above.
  • an image feature is calculated for a pixel region to which a filter is applied, and a filter to be applied can be specified by a class classified according to the obtained image feature.
  • the loop filter processing unit 113 applies the filter specified by the filter coefficient received from the loop filter coefficient setting unit 112 to the decoded image to be filtered, and generates a restored image that is an image after the filter application. .
  • the loop filter processing unit 113 stores the generated restored image in the reference image buffer 108 as a reference image. Further, the loop filter processing unit 113 stores the filter coefficient used for the filter processing of the reference image in association with the corresponding reference image as reference filter information.
  • the reference image buffer 108 manages the reference image and the reference filter information in synchronization.
  • The loop filter unit 107 outputs the generated filter information (the prediction information, which includes the first reference information, the first application information, and the second application information, or the filter coefficients, together with the third application information) to the entropy encoding unit 104.
  • the filter information is encoded by the entropy encoding unit 104, multiplexed with data or the like obtained by encoding the quantized transform coefficient, and is output to the outside as encoded data.
  • FIG. 7 shows a syntax structure used in the video encoding apparatus 100. Note that the moving picture encoding apparatus 100 according to the present embodiment transmits filter information in units of slices.
  • The syntax mainly includes three levels: a high level syntax 1900, a slice level syntax 1903, and an encoded block level syntax 1906.
  • the slice level syntax 1903 describes information necessary for each slice.
  • Coding block level syntax 1906 describes coding block division information or transform coefficients required for each coding block, prediction mode information, motion vectors, and the like.
  • Each of the high level syntax 1900, the slice level syntax 1903, and the coding block level syntax 1906 is configured with a more detailed syntax.
  • the high level syntax 1900 includes sequence or picture level syntax.
  • the high level syntax 1900 includes a sequence parameter set syntax 1901 and a picture parameter set syntax 1902.
  • the slice level syntax 1903 includes a slice header syntax 1904 and a loop filter data syntax 1905.
  • the loop filter data syntax 1905 includes the filter information. That is, the loop filter data syntax 1905 includes prediction information or filter coefficients and third application information.
  • the encoded block level syntax 1906 includes an encoded block layer syntax 1907 including encoded block division information and an encoded block prediction syntax 1908.
  • In the present embodiment, the third application information is described as being included in the loop filter data syntax; however, since it is filter application information in units of encoding blocks, it may instead be included in the encoded block level syntax.
  • When the parameters controlling the division of coding blocks are fixed for the sequence, the entropy encoding unit 104 adds these parameters to the sequence parameter set syntax 1901.
  • When the parameters controlling the division of coding blocks are variable for each slice, the entropy encoding unit 104 adds these parameters to the slice header syntax 1904.
  • FIG. 8 shows a description example of the loop filter data syntax 1905.
  • the entropy encoding unit 104 describes the prediction information or the filter coefficient generated by the loop filter unit 107 and the third application information in the loop filter data syntax 1905.
  • direct_filter_flag is a 1-bit flag indicating whether or not reference filter information is used for filtering processing of a decoded image to be filtered in the slice.
  • When direct_filter_flag is 1, the entropy encoding unit 104 encodes direct_ref_idx, the first reference information for specifying the reference filter information.
  • The slices that can be referred to in inter-screen prediction are selected and ordered so as to be identical on the moving image encoding device 100 side and on the moving image decoding device side described later.
  • the reference filter information is managed in synchronization with the reference image. For this reason, in the moving picture decoding apparatus that receives and decodes the encoded data from the moving picture encoding apparatus 100, a filter applied to the reference picture specified by direct_ref_idx can be used as a reference filter. At this time, information regarding other filter coefficients is not encoded.
  • When direct_filter_flag is 0, on the other hand, the entropy encoding unit 104 encodes information on the filter coefficients set for the encoding target slice.
  • filter_type_idx is an index indicating the shape or tap length of the loop filter. Specifically, filter_type_idx corresponds to NumOfFilterCoeff indicating the number of filter coefficients per filter. In FIG. 8, filter_num indicates the number of filters to be applied to the encoding target slice.
  • filter_coeff is the first filter coefficient; it may be the value of the filter coefficient itself, as described above, or a difference value obtained through a predetermined prediction process. Information specifying the prediction process may also be added separately, with a new syntax element added for the purpose; in that case, a plurality of prediction methods may be represented there, for example by using direct_flag as a multi-bit index.
  • Alternatively, the class number at which a filter's assignment starts may be indicated, for example by start_second_filter.
  • The entropy encoding unit 104 may, for example, encode information indicating whether each class is a filter allocation start class. The filter assignment can thereby be recovered on the moving image decoding device side.
  • For instance, the entropy encoding unit 104 may encode the information “1000010001001000” as filter_pattern. Further, since class 0 is known to be at the top, “0000010001001000” or “000010001001000” may be used.
  • block_control_flag corresponds to the third application information.
  • When block_control_flag is 1, whether to apply the filter is switched for each encoding block, and the entropy encoding unit 104 subsequently encodes the information necessary for that switching.
  • When block_control_flag is 0, the filter is applied to the entire slice.
  • loop_filter_flag is a 1-bit flag indicating whether or not to apply a loop filter to each coding block.
  • The entropy encoding unit 104 sets loop_filter_flag to “1” when the filter is applied and to “0” when it is not.
  • NumOfLoopFilterFlag is the total number of filter application determination areas.
  • When max_filtering_layer is 0, the entropy encoding unit 104 switches application/non-application of the filter for each block of the maximum encoding block size, and does not encode NumOfLoopFilterFlag because the blocks are not divided.
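  • Putting the flags of FIG. 8 together, the following is a loose sketch of the encoder-side writing order. The bit writer bs.put(value, nbits) and all field widths are assumptions of ours; the actual descriptors and entropy coding are as described in the text.

        def write_loop_filter_data(bs, info):
            # Mirrors the FIG. 8 order: direct_filter_flag, then either
            # direct_ref_idx or the filter coefficients, then block control.
            bs.put(info["direct_filter_flag"], 1)
            if info["direct_filter_flag"]:
                bs.put(info["direct_ref_idx"], 8)        # width is an assumption
            else:
                bs.put(info["filter_type_idx"], 4)       # filter shape / tap length
                for f in range(info["filter_num"]):
                    for c in info["filter_coeff"][f]:    # NumOfFilterCoeff values
                        bs.put(c & 0xFF, 8)              # raw value or prediction residual
            bs.put(info["block_control_flag"], 1)
            if info["block_control_flag"]:
                for flag in info["loop_filter_flag"]:    # NumOfLoopFilterFlag flags
                    bs.put(int(flag), 1)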
  • the entropy encoding unit 104 may encode the loop_filter_flag with a fixed length of 1 bit. Further, the entropy encoding unit 104 may perform encoding by variable length encoding such as CAVLC (Context Adaptive Variable Length Coding) or CABAC (Context Adaptive Binary Arithmetic Coding). Furthermore, a plurality of flags may be combined and encoded as a multi-value. Further, as described above, in the present embodiment, a plurality of filters may be prepared and switched for each filter application determination region, and filter switching information may be explicitly multiplexed into encoded data.
  • the entropy encoding unit 104 may encode the NumOfLoopFilterFlag with variable length encoding or with a fixed length.
  • As an encoding method performed by the entropy encoding unit 104, the method may be changed based on the image size or on parameters relating to the division of coding blocks.
  • One slice cannot occupy an area larger than one screen. For this reason, the video decoding device side can also derive the minimum and maximum number of blocks that can exist in one slice, that is, the range of values the block count can take, from at least one of the image size, max_coding_block_size, min_coding_block_size, and max_filtering_layer.
  • the entropy encoding unit 104 changes the code table using a probability model corresponding to the range of values that the number of blocks can take.
  • the entropy encoding unit 104 encodes the NumOfLoopFilterFlag with the minimum bit length that can represent the range of values that the number of blocks can take. By doing so, the entropy encoding unit 104 can select an appropriate encoding method even when the image size or the block size changes.
  • The above parameters are also available on the video decoding device side. Therefore, by selecting the same bit length on the moving image encoding device 100 side and on the moving image decoding device side, decoding can be performed correctly.
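  • A minimal sketch of this bit-length selection, assuming quadtree division (a factor of 4 per layer of max_filtering_layer) as the upper bound on the number of filter application determination regions; the exact bound and all names are illustrative:

      #include <cstdint>
      #include <iostream>

      // Upper bound on the number of filter application determination regions
      // in one slice: the number of largest coding blocks in the picture, each
      // possibly divided down to max_filtering_layer (x4 per layer).
      uint32_t MaxRegions(uint32_t width, uint32_t height,
                          uint32_t max_coding_block_size, uint32_t max_filtering_layer) {
          uint32_t blocks_x = (width + max_coding_block_size - 1) / max_coding_block_size;
          uint32_t blocks_y = (height + max_coding_block_size - 1) / max_coding_block_size;
          uint32_t regions = blocks_x * blocks_y;
          for (uint32_t l = 0; l < max_filtering_layer; ++l) regions *= 4;
          return regions;
      }

      // Minimum fixed bit length that can represent the values 0..max_value.
      uint32_t MinBitLength(uint32_t max_value) {
          uint32_t bits = 1;
          while ((1u << bits) <= max_value) ++bits;
          return bits;
      }

      int main() {
          uint32_t max_regions = MaxRegions(1920, 1080, 64, 2);  // 30 * 17 * 16 = 8160
          std::cout << MinBitLength(max_regions) << " bits\n";   // 13 bits
      }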
  • The filter may also be applied to all regions. In this case, however, the effect of reducing the error between the input image and the decoded image is diminished.
  • On the other hand, when the entropy encoding unit 104 encodes information indicating that the filter is applied in all regions, loop_filter_flag need not be encoded, so the overhead can be reduced.
  • Such encoding can be realized, for example, with the syntax configuration shown in FIG. 9.
  • FIG. 9 shows another description example of the loop filter data syntax 1905. FIG. 9 differs from FIG. 8 in that it includes use_filter_flag.
  • use_filter_flag is a 1-bit flag indicating, for each filter, whether the reference filter information is applied to the filtering process.
  • The entropy encoding unit 104 encodes one use_filter_flag for each of the filter_num filters included in the reference filter information corresponding to one slice.
  • FIG. 10 shows another description example of the loop filter data syntax 1905.
  • As in FIG. 9, the syntax of FIG. 10 includes use_filter_flag, but the number of flags differs.
  • In FIG. 10, the entropy encoding unit 104 encodes NumOfClass use_filter_flag flags, where NumOfClass is the number of classes used to classify the image. Each flag indicates whether a filter is applied to the corresponding class.
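  • A sketch of the two use_filter_flag layouts (one flag per filter as in FIG. 9, one flag per class as in FIG. 10), with a toy bit reader standing in for the entropy decoder; all names are illustrative:

      #include <iostream>
      #include <vector>

      // Toy 1-bit reader standing in for the entropy decoder.
      struct BitReader {
          std::vector<int> bits;
          size_t pos = 0;
          int ReadBit() { return bits[pos++]; }
      };

      // FIG. 9 style: one use_filter_flag per filter (filter_num flags).
      std::vector<bool> ReadPerFilterFlags(BitReader& br, int filter_num) {
          std::vector<bool> use(filter_num);
          for (int i = 0; i < filter_num; ++i) use[i] = br.ReadBit() != 0;
          return use;
      }

      // FIG. 10 style: one use_filter_flag per class (NumOfClass flags).
      std::vector<bool> ReadPerClassFlags(BitReader& br, int num_of_class) {
          return ReadPerFilterFlags(br, num_of_class);  // same mechanics, different count
      }

      int main() {
          BitReader br{{1, 0, 1}};
          for (bool f : ReadPerFilterFlags(br, 3)) std::cout << f << ' ';  // 1 0 1
          std::cout << '\n';
      }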
  • the loop filter unit 107 may change a part of the filter coefficients of the reference filter information set as a filter to be applied to the filter processing.
  • the loop filter unit 107 generates a changed reference filter in which a part of the filter coefficient of the reference filter information is changed, and uses the changed reference filter for the filtering process.
  • the filter information may be configured to include first change information indicating the change reference filter.
  • The entropy encoding unit 104 then encodes the filter information further containing the first change information.
  • FIG. 11 shows an example of the filter shape of the filter applied to the filter processing in the present embodiment.
  • the filter shape is switched by filter_type_idx. It is assumed that each filter shape has one coefficient indicating an offset value.
  • the loop filter unit 107 changes the offset value as a method of changing a part of the filter coefficients of the reference filter information.
  • In this case, the entropy encoding unit 104 changes the syntax configuration accordingly.
  • the filter information includes the first change information indicating the change reference filter.
  • the filter is applied by reading the reference filter information from the buffer and replacing the offset value with the encoded value. That is, on the video decoding device side, the change reference filter can be applied to the decoded image based on the first change information.
  • Although changing the offset value has been described as the method of changing part of the filter coefficients of the reference filter information, a coefficient at a different position, such as the center coefficient, may be changed instead.
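  • A minimal sketch of building a change reference filter by replacing a single coefficient (here the offset) of the reference filter information; the data layout is an assumption:

      #include <iostream>
      #include <vector>

      struct Filter {
          std::vector<float> coeff;  // spatial taps
          float offset;              // single offset coefficient
      };

      // Copy the reference filter information and replace only the offset
      // (alternatively, another position such as the center tap could be
      // replaced instead).
      Filter MakeChangedReferenceFilter(const Filter& reference, float new_offset) {
          Filter changed = reference;   // all taps reused as-is
          changed.offset = new_offset;  // only the offset is replaced
          return changed;
      }

      int main() {
          Filter ref{{0.1f, 0.2f, 0.4f, 0.2f, 0.1f}, 0.0f};
          Filter changed = MakeChangedReferenceFilter(ref, 2.5f);
          std::cout << changed.offset << '\n';  // 2.5
      }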
  • the loop filter unit 107 may change the shape of the filter applied to the filtering process.
  • the loop filter unit 107 may generate a modified reference filter by changing the filter shape of the reference filter information and use it for the filter processing.
  • the filter information may be configured to further include deformation information indicating the deformation reference filter.
  • The entropy encoding unit 104 then encodes the filter information further containing the deformation information.
  • For example, the filter shape may be changed so that the filter is applied as a 5-tap or 7-tap filter.
  • the entropy encoding unit 104 changes the syntax configuration to the content shown in FIG. 13 with the change of the filter shape.
  • FIG. 13 shows a description example when the loop filter data syntax 1905 is changed.
  • The loop filter unit 107 may keep the sum of the filter coefficients constant by using the framework described above for resetting and encoding the offset value or the coefficient at a specific position; such a change process may be defined as one method.
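  • A sketch of one such change process, keeping the coefficient sum constant by compensating at the center position after a replacement; the compensation position is an illustrative choice:

      #include <iostream>
      #include <numeric>
      #include <vector>

      // After replacing the coefficient at position `pos`, compensate at the
      // center tap so that the sum of all coefficients stays constant.
      void ReplaceKeepingSum(std::vector<float>& coeff, size_t pos, float new_value) {
          float old_sum = std::accumulate(coeff.begin(), coeff.end(), 0.0f);
          coeff[pos] = new_value;
          float new_sum = std::accumulate(coeff.begin(), coeff.end(), 0.0f);
          coeff[coeff.size() / 2] += old_sum - new_sum;  // restore the original sum
      }

      int main() {
          std::vector<float> c{0.1f, 0.2f, 0.4f, 0.2f, 0.1f};
          ReplaceKeepingSum(c, 0, 0.05f);
          std::cout << std::accumulate(c.begin(), c.end(), 0.0f) << '\n';  // 1
      }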
  • the filter application can be switched in units of slices, and the filter application information in units of slices is stored in the slice header syntax 1904.
  • the entropy encoding unit 104 stores the loop filter data syntax 1905 in the slice level syntax 1903.
  • The loop filter unit 107 may also be controlled in units independent of slices; such a unit is called a loop filter slice.
  • When a plurality of loop filter slices are included in one slice, the entropy encoding unit 104 generates as many loop filter data syntaxes 1905 as there are loop filter slices. Furthermore, when a loop filter slice extends over a plurality of slices and cannot be processed in units of slices, the loop filter data syntax 1905 may be included in the high-level syntax 1900, such as the picture parameter set syntax 1902, or in the slice level syntax of the slice located at the head. Furthermore, when an image consists of a plurality of components, the entropy encoding unit 104 may generate a syntax for each component, or a common syntax for two or more components.
  • FIG. 14 is a flowchart illustrating the procedure of the encoding process according to the first embodiment.
  • The predicted image generation unit 101 performs a prediction process to generate a predicted image (step S320).
  • the subtraction unit 102 performs a subtraction process for obtaining a difference between the predicted image generated by the predicted image generation unit 101 and the input image (step S300). Through the processing in step S300, the subtraction unit 102 generates an error image (step S302).
  • the transform / quantization unit 103 transforms the error image to generate transform coefficients, and quantizes the transform coefficients to generate quantized coefficients (step S304 and step S306).
  • The entropy encoding unit 104 encodes the quantization coefficients, the filter information received from the loop filter unit 107, and the encoding parameters received from the encoding control unit 109, and generates encoded data (step S308).
  • the inverse quantization / inverse transform unit 105 performs inverse transform after dequantizing the quantization coefficient to generate an error image (step S310, step S312).
  • The adding unit 106 receives the error image from the inverse quantization / inverse transform unit 105 and the predicted image from the predicted image generation unit 101, and adds them to generate a decoded image (steps S314 and S316).
  • the loop filter unit 107 performs a filtering process on the decoded image generated by the adding unit 106 to generate a restored image (step S318) (details will be described later).
  • the loop filter unit 107 restores the decoded image to an image closer to the input image.
  • the loop filter unit 107 performs filter processing in units of slices.
  • Steps S300 to S320 constitute a so-called hybrid encoding process that combines prediction processing and transform processing.
  • The encoding process described here performs prediction, transformation, and quantization; however, an encoding scheme such as DPCM (Differential Pulse Code Modulation) may be used instead.
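  • A structural sketch of the per-slice flow of FIG. 14 (steps S300 to S320), with opaque stage functions; all types and function names are illustrative placeholders, not the actual implementation:

      #include <iostream>

      struct Image {};
      struct Coefficients {};

      Image Predict(const Image&) { return {}; }                          // step S320
      Image Subtract(const Image&, const Image&) { return {}; }           // steps S300-S302
      Coefficients TransformQuantize(const Image&) { return {}; }         // steps S304-S306
      void EntropyEncode(const Coefficients&) {}                          // step S308
      Image InverseQuantizeTransform(const Coefficients&) { return {}; }  // steps S310-S312
      Image Add(const Image&, const Image&) { return {}; }                // steps S314-S316
      Image LoopFilter(const Image&, const Image&) { return {}; }         // step S318

      void EncodeSlice(const Image& input, const Image& reference) {
          Image predicted = Predict(reference);
          Image error = Subtract(input, predicted);
          Coefficients q = TransformQuantize(error);
          EntropyEncode(q);
          Image decoded = Add(InverseQuantizeTransform(q), predicted);
          Image restored = LoopFilter(decoded, input);  // restored image -> reference buffer
          (void)restored;
      }

      int main() { EncodeSlice({}, {}); std::cout << "encoded one slice\n"; }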
  • FIG. 15A is a flowchart showing the procedure of the filter process.
  • the first filter coefficient generation unit 111C of the loop filter unit 107 generates a first filter coefficient of the decoded image based on the decoded image and the input image (step S400).
  • The first filter coefficient generation unit 111C determines the filter assignment for each class, according to the classification result obtained by classifying each pixel region included in the slice to be filtered by a predetermined classification method (step S401) (see FIG. 4B).
  • The first reference information generation unit 111A sets, as reference candidates, those pieces of reference filter information stored in the reference image buffer 108 that were applied to reference images the predicted image generation unit 101 can refer to during inter-screen prediction (step S402).
  • the first reference information generation unit 111A selects one reference filter information from the reference candidates (step S404). Then, the first reference information generation unit 111A generates first reference information indicating the selected one reference filter information (step S406).
  • The second application information generation unit 111D generates second application information indicating, for each class obtained by classifying the pixel regions included in the slice to be filtered according to a predetermined classification method, whether or not the filter is applied (step S408).
  • the loop filter information generation unit 111 sets prediction information including the generated first reference information, first application information, and second application information (step S409).
  • The third application information generation unit 111E generates the third application information both for the case where the filter specified by the first filter coefficient generated in step S400 is applied and for the case where the filter specified by the one reference filter information selected in step S404 is applied (step S410).
  • The first application information generation unit 111B determines whether the reference filter is applied, and generates first application information (step S412). Whether the reference filter is applied is determined by comparing the coding cost when applying the filter specified by the reference filter information indicated by the first reference information with the coding cost when applying the filter specified by the first filter coefficient, as described above.
  • The loop filter information generation unit 111 generates filter information (step S414). Specifically, when the first application information generated in step S412 indicates that the reference filter is applied, the loop filter information generation unit 111 generates filter information including the prediction information and the third application information. On the other hand, when the first application information indicates that the reference filter is not applied, it generates filter information including the first filter coefficient and the third application information.
  • the loop filter coefficient setting unit 112 determines whether or not to apply a reference filter to the decoded image (slice) to be processed (step S415).
  • When the first application information received from the loop filter information generation unit 111 indicates that the reference filter is applied, the loop filter coefficient setting unit 112 makes an affirmative determination (step S415: Yes).
  • When an affirmative determination is made in step S415, the loop filter coefficient setting unit 112 sets the filter coefficient of the reference filter information indicated by the first reference information as the filter coefficient of the filter to be applied to the decoded image (step S416).
  • On the other hand, when the first application information received from the loop filter information generation unit 111 indicates that the reference filter is not applied, the loop filter coefficient setting unit 112 makes a negative determination (step S415: No). When a negative determination is made in step S415, the loop filter coefficient setting unit 112 sets the first filter coefficient generated in step S400 as the filter coefficient of the filter to be applied to the decoded image (step S418).
  • The loop filter processing unit 113 performs the filtering process on the decoded image using the filter specified by the filter coefficient set in step S416 or step S418, and generates a restored image (step S420).
  • the loop filter processing unit 113 stores the generated restored image in the reference image buffer 108 as a reference image. Further, the loop filter processing unit 113 stores the filter coefficient used for the filter processing of the reference image as reference filter information in association with the corresponding reference image (step S422).
  • the loop filter unit 107 outputs the filter information generated in step S414 to the entropy encoding unit 104 (step S424).
  • the filter information is encoded by the entropy encoding unit 104, multiplexed with data or the like obtained by encoding the quantized transform coefficient, and is output to the outside as encoded data.
  • As described above, when the moving picture coding apparatus 100 performs the filtering process on a decoded image using reference filter information, that is, a filter that was applied to a reference image, it sets as reference candidates the reference filter information applied to the reference images that the predicted image generation unit 101 can refer to during inter-screen prediction.
  • the moving picture coding apparatus 100 selects one reference filter information from the reference candidates, and generates first reference information indicating the selected one reference filter information.
  • Prediction information containing this first reference information and first application information indicating whether or not the reference filter is applied is included in the filter information, and this filter information is encoded and output.
  • the moving picture coding apparatus 100 performs the filtering process using one reference filter information selected from the reference candidates.
  • The reference candidates are synchronized with the reference images that can be referred to in inter-screen prediction, and there is no reference in the time direction beyond the range assumed in the prediction process. For this reason, a reduction in error resilience and an increase in memory access can both be suppressed.
  • The second application information may be generated after the third application information is generated; that is, the filter application determination for a specific class may be performed after the filter application determination for each coding block. Likewise, the third application information may be generated after the first application information. In that case, the number of times the third application information is generated can be reduced, which reduces the amount of processing in encoding.
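  • A sketch of the cost-based decision of steps S404 to S412, with a placeholder Cost() standing in for the coding-cost measure, which is not specified here:

      #include <iostream>
      #include <vector>

      struct Candidate { int id; };

      // Placeholder: a real implementation would measure the coding cost
      // (e.g., a rate-distortion value) of applying this filter.
      double Cost(const Candidate& c) { return 10.0 - c.id; }

      struct Decision { bool use_reference; int reference_id; };

      // Compare the first filter against each reference candidate and keep the
      // cheapest; the result corresponds to the first application information
      // and the first reference information.
      Decision ChooseFilter(const Candidate& first_filter,
                            const std::vector<Candidate>& reference_candidates) {
          double best_cost = Cost(first_filter);
          Decision d{false, -1};
          for (const Candidate& ref : reference_candidates) {
              double c = Cost(ref);  // cost when the reference filter is applied
              if (c < best_cost) { best_cost = c; d = {true, ref.id}; }
          }
          return d;
      }

      int main() {
          Decision d = ChooseFilter({0}, {{1}, {2}});
          std::cout << (d.use_reference ? "reference" : "first") << " filter\n";  // reference
      }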
  • the second embodiment is a moving picture decoding apparatus that decodes encoded data transmitted from a moving picture encoding apparatus.
  • FIG. 16 is a block diagram illustrating a functional configuration of the moving picture decoding apparatus according to the second embodiment.
  • the moving image decoding apparatus 200 includes a moving image decoding unit 210 and a decoding control unit 202.
  • the decoding control unit 202 controls the moving image decoding unit 210. Specifically, the decoding control unit 202 performs overall decoding control such as encoding block division control or decoding timing control.
  • the video decoding unit 210 decodes the encoded data received from the video encoding device 100.
  • the moving image decoding unit 210 includes an entropy decoding unit 204, an inverse quantization / inverse conversion unit 205, an addition unit 206, a loop filter processing unit 213, a loop filter coefficient setting unit 212, a reference image buffer 208, and a predicted image generation unit 201. Prepare.
  • The entropy decoding unit 204 receives the encoded data from the moving image encoding apparatus 100. The entropy decoding unit 204 then sequentially decodes the code string of each syntax of the encoded data, for each of the high level syntax 1900, the slice level syntax 1903, and the encoded block level syntax 1906, according to the syntax structure shown in FIG. 7. The entropy decoding unit 204 thereby decodes the quantization coefficients, the filter information, the coding block division information, and the like from the encoded data. Since the quantization coefficients, the filter information, and the coding block division information are the same as in the first embodiment, detailed description thereof is omitted.
  • The inverse quantization / inverse transform unit 205 receives the transform coefficients decoded by the entropy decoding unit 204. The inverse quantization / inverse transform unit 205 then dequantizes the transform coefficients in accordance with the quantization parameter set by the decoding control unit 202, performs an inverse transform (for example, an inverse discrete cosine transform), and generates an error image.
  • the adding unit 206 receives an error image from the inverse quantization / inverse transform unit 205 and receives a predicted image from the predicted image generation unit 201. Then, the adding unit 206 adds the received error image and the predicted image to generate a decoded image.
  • The reference image buffer 208 stores the reference filter information as reference filter information 1 to reference filter information 2 corresponding to reference images 1 to 2, respectively. These pieces of reference filter information are obtained by storing, as reference filter information, the filter applied to each reference image based on the first filter coefficient or the prediction information received from the entropy decoding unit 204; the association with the reference images is managed in the same manner as in the moving picture encoding apparatus.
  • the predicted image generation unit 201 performs a prediction process and generates a predicted image.
  • For the generation of the predicted image, one or a plurality of reference images, set in advance as the reference images to be used when generating the predicted image, are selected from among the plurality of reference images stored in the reference image buffer 208 and used.
  • the loop filter coefficient setting unit 212 receives the first filter coefficient or the prediction information included in the filter information from the entropy decoding unit 204.
  • The loop filter coefficient setting unit 212 sets the filter coefficient to be applied to the decoded image generated by the adding unit 206. Specifically, when the first application information included in the prediction information indicates that the reference filter information is applied to the filtering process of the decoded image, the loop filter coefficient setting unit 212 sets the filter coefficient of the reference filter information indicated by the first reference information. On the other hand, when the first application information indicates that the reference filter is not used for the filtering process, the loop filter coefficient setting unit 212 sets the received first filter coefficient.
  • When the first filter coefficient has been encoded as a difference value obtained by a prediction process, the loop filter coefficient setting unit 212 restores the first filter coefficient by adding the predicted values.
  • the loop filter coefficient setting unit 212 outputs the set filter coefficient to the loop filter processing unit 213.
  • When the filter information includes the first change information, the loop filter coefficient setting unit 212 changes a part of the filter coefficients of the reference filter information and outputs the changed filter coefficient to the loop filter processing unit 213. Specifically, the loop filter coefficient setting unit 212 reads the reference filter from the buffer and replaces its offset value with the decoded value according to the corresponding syntax structure; the changed filter coefficient is then output to the loop filter processing unit 213. The moving image decoding apparatus 200 can thereby apply the change reference filter to the decoded image based on the first change information.
  • When the filter information includes the deformation information, the loop filter coefficient setting unit 212 changes the filter shape of the reference filter information and then outputs it to the loop filter processing unit 213. Specifically, the loop filter coefficient setting unit 212 converts the filter coefficients of the reference filter information into a filter of a different shape based on the value (shape information) indicated by filter_type_idx, and outputs the resulting filter coefficients to the loop filter processing unit 213.
  • the loop filter processing unit 213 receives the decoded image, the encoded block division information, the third application information, and the filter coefficient, and applies a filter to the decoded image. At this time, the loop filter processing unit 213 can determine the filter application determination region by associating the encoded block division information with the third application information.
  • the filter application determination area is the same as that in the first embodiment.
  • the loop filter processing unit 213 obtains a filter application determination region from max_filtering_layer shown in the syntax structure of FIG. 8 and the coding block division shape. Then, the loop filter processing unit 213 associates loop_filter_flag with each filter application determination region. Thereby, the moving picture decoding apparatus 200 side can apply the filter to the block set on the moving picture encoding apparatus 100 side. For the area to which the filter is applied, the pixel value after the filter is applied is the pixel value of the restored image. On the other hand, for a region to which no filter is applied, the pixel value at the same position in the decoded image is used as the pixel value of the restored image. At this time, the filter to be applied can be specified by calculating the image feature in the filter application determination region and the classification according to the obtained feature.
  • the restored image generated by the loop filter processing unit 213 is output as an output image and stored in the reference image buffer 208 as a reference image. Further, the first filter coefficient or the prediction information is also stored in the reference image buffer 208 and managed in synchronization with the reference image.
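  • A minimal sketch of restored-image composition on the decoder side: for each pixel, the loop_filter_flag of its filter application determination region selects between the filtered and the unfiltered pixel value; the region mapping and the filter itself are abstracted away:

      #include <iostream>
      #include <vector>

      std::vector<int> ComposeRestored(const std::vector<int>& decoded,
                                       const std::vector<int>& filtered,
                                       const std::vector<int>& region_of_pixel,
                                       const std::vector<bool>& loop_filter_flag) {
          std::vector<int> restored(decoded.size());
          for (size_t i = 0; i < decoded.size(); ++i)
              restored[i] = loop_filter_flag[region_of_pixel[i]]
                                ? filtered[i]   // region with filter applied
                                : decoded[i];   // region left unfiltered
          return restored;
      }

      int main() {
          auto restored = ComposeRestored({10, 20}, {11, 19}, {0, 1}, {true, false});
          std::cout << restored[0] << ' ' << restored[1] << '\n';  // 11 20
      }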
  • The functions of the inverse quantization / inverse transform unit 205, the addition unit 206, the loop filter processing unit 213, the predicted image generation unit 201, the loop filter coefficient setting unit 212, and the reference image buffer 208 are the same as those of the inverse quantization / inverse transform unit 105, the addition unit 106, the loop filter processing unit 113, the predicted image generation unit 101, the loop filter coefficient setting unit 112, and the reference image buffer 108 described in the first embodiment. For this reason, detailed description of these units is omitted.
  • FIG. 17 is a flowchart showing the procedure of the decoding process according to the present embodiment.
  • the entropy decoding unit 204 receives encoded data from the moving image encoding apparatus 100 and performs decoding (step S500).
  • The entropy decoding unit 204 performs the decoding process on the received encoded data according to the syntax configurations described above.
  • In the case of the syntax of FIG. 9, the entropy decoding unit 204 interprets the number of filters, filter_num, included in the reference filter information specified by direct_ref_idx, and determines whether each filter is applied by decoding use_filter_flag.
  • In the case of the syntax of FIG. 10, the entropy decoding unit 204 decodes NumOfClass use_filter_flag flags, where NumOfClass is the number of classes for classifying pixels. The entropy decoding unit 204 thereby determines whether to apply a filter to the pixel regions classified into each class.
  • the entropy decoding unit 204 further decodes prediction mode information, motion information, quantization parameters, and the like according to the syntax structure shown in FIG. 7 in addition to the transform coefficient and the encoded block division information.
  • the decoding control unit 202 uses the encoded block division information obtained by decoding when performing encoded block division control in the decoding process.
  • the inverse quantization / inverse transform unit 205 generates an error image (step S504).
  • The adding unit 206 adds the error image and the predicted image to generate a decoded image (step S508).
  • the loop filter coefficient setting unit 212 sets a filter coefficient to be applied to the decoded image to be filtered (step S510).
  • the loop filter processing unit 213 applies a filter specified by the filter coefficient set by the loop filter coefficient setting unit 212 to the decoded image to generate a restored image (steps S512 and S514).
  • the loop filter processing unit 213 outputs the generated restored image as an output image and stores the generated restored image in the reference image buffer 208 as a reference image (step S516).
  • the filter coefficient or the prediction information is also stored as reference filter information in the reference image buffer 208 and managed in synchronization with the corresponding reference image.
  • As described above, in the moving picture decoding apparatus 200, when the first application information included in the encoded data received from the moving picture encoding apparatus 100 indicates that the reference filter information is applied to the filtering process of the decoded image, the filtering process is performed using the reference filter information indicated by the first reference information.
  • The reference candidates are synchronized with the reference images that can be referred to in inter-screen prediction, so there is no reference in the time direction beyond the range assumed in the prediction process. For this reason, a reduction in error resilience can be suppressed.
  • FIG. 18 is a block diagram showing the functional configuration of the moving picture coding apparatus 100A according to the third embodiment.
  • the moving image encoding apparatus 100A according to the present embodiment includes a moving image encoding unit 110A and an encoding control unit 109A.
  • the encoding control unit 109A controls the moving image encoding unit 110A.
  • the moving image encoding unit 110A includes a predicted image generation unit 101, a subtraction unit 102, a transform / quantization unit 103, an entropy encoding unit 104, an inverse quantization / inverse transform unit 105, an addition unit 106, a loop filter unit 107, and A reference image buffer 108 and a deblocking filter processing unit 114 are provided.
  • the configuration of the reference image buffer 108 is the same as that in the first embodiment.
  • The deblocking filter processing unit 114 receives the decoded image from the adding unit 106, performs a known deblocking filter process on the received decoded image to generate a deblocked decoded image, and outputs it to the loop filter unit 107.
  • the moving image encoding process executed by the moving image encoding device 100A is the same as that of the moving image encoding device 100 of the first embodiment except that the processing in the deblocking filter processing unit 114 is different. Therefore, detailed description is omitted.
  • FIG. 19 is a block diagram showing a functional configuration of the moving picture decoding apparatus 200A.
  • the moving image decoding apparatus 200A receives encoded data from the moving image encoding apparatus 100A and performs a moving image decoding process.
  • the moving picture decoding apparatus 200A includes a moving picture decoding unit 210A and a decoding control unit 202A as shown in FIG.
  • the decoding control unit 202A controls the moving image decoding unit 210A.
  • the moving image decoding unit 210A decodes the encoded data received from the moving image encoding device 100A.
  • the moving image decoding unit 210A includes an entropy decoding unit 204, an inverse quantization / inverse conversion unit 205, an addition unit 206, a loop filter processing unit 213, a loop filter coefficient setting unit 212, a reference image buffer 208, a predicted image generation unit 201, and A deblocking filter processing unit 214 is provided.
  • the decoding control unit 202A, entropy decoding unit 204, inverse quantization / inverse transformation unit 205, addition unit 206, loop filter processing unit 213, loop filter coefficient setting unit 212, reference image buffer 208, and predicted image generation unit 201 This is the same as the second embodiment.
  • The deblocking filter processing unit 214 receives the decoded image from the adding unit 206, performs a known deblocking filter process on the received decoded image to generate a deblocked decoded image, and outputs it to the loop filter processing unit 213.
  • The moving picture decoding process executed by the moving picture decoding apparatus 200A is the same as that of the moving picture decoding apparatus 200 of the second embodiment, except that the processing in the deblocking filter processing unit 214 differs. Detailed description is therefore omitted.
  • the moving picture coding apparatus 100A and the moving picture decoding apparatus 200A according to the present embodiment can perform loop filter processing using the decoded image that has been deblocked. Therefore, in addition to the effects in the first and second embodiments, it is possible to further suppress the degradation of the predicted image.
  • Embodiment 4: In the moving image coding apparatus 100A and the moving image decoding apparatus 200A according to Embodiment 3, the filter process is performed after the deblocking filter process is applied to the decoded image generated by the adding unit 106 or the adding unit 206. In Embodiment 4, described below, the loop filter uses the decoded image both before and after the deblocking process.
  • FIG. 20 is a block diagram illustrating a functional configuration of the moving image encoding device 100B.
  • the moving picture coding apparatus 100B includes a moving picture coding unit 110B and a coding control unit 109B.
  • the encoding control unit 109B controls the moving image encoding unit 110B.
  • The moving image coding unit 110B includes a predicted image generation unit 101, a subtraction unit 102, a transform / quantization unit 103, an entropy coding unit 104, an inverse quantization / inverse transform unit 105, an addition unit 106B, a loop filter unit 107B, a reference image buffer 108, and a deblocking filter processing unit 114.
  • The configurations of the encoding control unit 109B, the predicted image generation unit 101, the subtraction unit 102, the transform / quantization unit 103, the entropy encoding unit 104, the inverse quantization / inverse transform unit 105, the deblocking filter processing unit 114, and the reference image buffer 108 are the same as those of the moving image encoding apparatus 100A of the third embodiment.
  • the adding unit 106B adds the error image and the predicted image to generate a decoded image, and outputs the generated decoded image to the deblocking filter processing unit 114 and the loop filter unit 107B.
  • The loop filter unit 107B uses the pixel values of both the decoded image before the deblocking process by the deblocking filter processing unit 114 and the decoded image after the deblocking process, and sets and applies the filter coefficient in the same manner as the loop filter unit 107. This process is equivalent to taking a weighted average of the pixel values before and after the deblocking process with an appropriate weight.
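  • A minimal sketch of this weighted-average interpretation, with an illustrative weight w between the pre-deblocking and post-deblocking pixel values:

      #include <iostream>
      #include <vector>

      // Combine the decoded images before and after the deblocking process;
      // equivalent to a weighted average with weight w (chosen by the filter
      // design, not specified here).
      std::vector<double> CombinePrePost(const std::vector<double>& pre_deblock,
                                         const std::vector<double>& post_deblock,
                                         double w) {
          std::vector<double> out(pre_deblock.size());
          for (size_t i = 0; i < out.size(); ++i)
              out[i] = w * pre_deblock[i] + (1.0 - w) * post_deblock[i];
          return out;
      }

      int main() {
          auto out = CombinePrePost({100, 120}, {104, 116}, 0.25);
          std::cout << out[0] << ' ' << out[1] << '\n';  // 103 117
      }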
  • The moving image encoding process executed by the moving image encoding device 100B is the same as that of the moving image encoding device 100 of the first embodiment, except that the processing in the adding unit 106B and the loop filter unit 107B differs as described above. Detailed description is therefore omitted.
  • FIG. 21 is a block diagram showing a functional configuration of the moving picture decoding apparatus 200B.
  • the video decoding device 200B receives the encoded data from the video encoding device 100B and performs a decoding process.
  • the moving picture decoding apparatus 200B includes a moving picture decoding unit 210B and a decoding control unit 202B as shown in FIG.
  • the decoding control unit 202B controls the moving image decoding unit 210B.
  • the moving image decoding unit 210B decodes the encoded data received from the moving image encoding device 100B.
  • the moving image decoding unit 210B includes an entropy decoding unit 204, an inverse quantization / inverse conversion unit 205, an addition unit 206B, a loop filter processing unit 213B, a loop filter coefficient setting unit 212, a reference image buffer 208, a predicted image generation unit 201, and A deblocking filter processing unit 214 is provided.
  • the decoding control unit 202B, the entropy decoding unit 204, the inverse quantization / inverse transformation unit 205, the loop filter coefficient setting unit 212, the reference image buffer 208, and the predicted image generation unit 201 are the same as those in the moving image decoding apparatus 200A according to the third embodiment. It is the same.
  • the adding unit 206B adds the error image and the predicted image to generate a decoded image, and outputs the generated decoded image to the deblocking filter processing unit 214 and the loop filter processing unit 213B.
  • The loop filter processing unit 213B uses the pixel values of both the decoded image before the deblocking process by the deblocking filter processing unit 214 and the decoded image after the deblocking process, and performs the filter process in the same manner as the loop filter processing unit 213.
  • The moving picture decoding process executed by the moving picture decoding apparatus 200B is the same as that of the moving picture decoding apparatus 200 of the second embodiment, except that the processes in the adding unit 206B and the loop filter processing unit 213B differ as described above. Detailed description is therefore omitted.
  • block distortion caused by encoding can be reduced by deblocking filter processing.
  • When a strong low-pass filter is applied in the deblocking filter process, high-frequency components such as edges and textures contained in the input image may be attenuated. Therefore, in the present embodiment, the loop filter process is performed using the decoded images before and after the deblocking filter process. For this reason, in addition to the effects of Embodiment 3, the moving image encoder 100B and the moving image decoder 200B of this embodiment can further prevent degradation of the predicted image.
  • Which images are applied to the loop filter process may be switched in units such as a sequence, a picture, or a slice. In that case, the switching information must be separately multiplexed into the encoded data in the corresponding unit.
  • the decoded image generated by the adding unit 206 by adding the error image and the predicted image is output as an output image without performing the filtering process.
  • the restored image generated by filtering the decoded image is used as a reference image.
  • FIG. 22 is a block diagram showing a functional configuration of the moving picture decoding apparatus 200C according to the present embodiment.
  • the moving picture decoding apparatus 200C includes a moving picture decoding unit 210C and a decoding control unit 202C as shown in FIG.
  • the decoding control unit 202C controls the moving image decoding unit 210C.
  • the moving image decoding unit 210C decodes the encoded data received from the moving image encoding device.
  • the moving image decoding unit 210C includes an entropy decoding unit 204, an inverse quantization / inverse conversion unit 205, an addition unit 206C, a loop filter processing unit 213C, a loop filter coefficient setting unit 212, a reference image buffer 208, and a predicted image generation unit 201. Prepare.
  • the decoding control unit 202C, the entropy decoding unit 204, the inverse quantization / inverse conversion unit 205, the loop filter coefficient setting unit 212, the reference image buffer 208, and the predicted image generation unit 201 are the same as those in the moving image decoding apparatus 200 according to the second embodiment. It is the same.
  • the adding unit 206C adds the error image and the predicted image to generate a decoded image, outputs the generated decoded image to the outside as an output image, and outputs the decoded image to the loop filter processing unit 213C.
  • the loop filter processing unit 213C applies a filter to the decoded image in the same manner as the loop filter processing unit 213 to generate a restored image. Then, the loop filter processing unit 213C stores the restored image in the reference image buffer 208 as a reference image.
  • The moving image decoding process executed by the moving image decoding apparatus 200C is the same as that of the moving image decoding apparatus 200 of the second embodiment, except that the processes in the adding unit 206C and the loop filter processing unit 213C differ as described above. Detailed description of the video decoding process is therefore omitted.
  • the moving picture decoding apparatus 200C has a configuration in which a filter is applied to a decoded image and no filter is applied to an output image. Also in the moving picture decoding apparatus 200C of the present embodiment, it is possible to suppress a reduction in error resistance, as in the above embodiment.
  • In the present embodiment, a moving image encoding device and a moving image decoding device that use the filter as a post filter are described.
  • the post filter indicates a method of applying a filter only to an output image.
  • FIG. 23 is a block diagram showing a functional configuration of the moving picture coding apparatus 100D of the present embodiment.
  • the moving image encoding apparatus 100D according to the present embodiment includes a moving image encoding unit 110D and an encoding control unit 109D.
  • the encoding control unit 109D controls the moving image encoding unit 110D.
  • the moving image encoding unit 110D includes a predicted image generation unit 101, a subtraction unit 102, a transform / quantization unit 103, an entropy encoding unit 104, an inverse quantization / inverse transform unit 105, an addition unit 106, and a loop filter information generation unit 111. , And a reference image buffer 108.
  • the moving image encoding unit 110D is the same as the moving image encoding unit 110 of Embodiment 1 except that the moving image encoding unit 110D includes a loop filter information generation unit 111 instead of the loop filter unit 107.
  • the function of the loop filter information generation unit 111 is the same as that of the first embodiment.
  • FIG. 24 is a block diagram showing a functional configuration of the video decoding device 200D.
  • the video decoding device 200D receives the encoded data from the video encoding device 100D and performs a decoding process.
  • the moving picture decoding apparatus 200D includes a moving picture decoding unit 210D and a decoding control unit 202D as shown in FIG.
  • the decoding control unit 202D controls the moving image decoding unit 210D.
  • the moving image decoding unit 210D decodes the encoded data received from the moving image encoding device 100D.
  • the moving image decoding unit 210D includes an entropy decoding unit 204, an inverse quantization / inverse conversion unit 205, an addition unit 206D, a loop filter processing unit 213, a loop filter coefficient setting unit 212, a reference image buffer 208, and a predicted image generation unit 201.
  • The moving image decoding unit 210D is the same as the moving image decoding unit 210 of the second embodiment, except that it includes an adding unit 206D instead of the adding unit 206 and that the input to the reference image buffer 208 is changed from the restored image to the decoded image.
  • The adding unit 206D adds the error image and the predicted image to generate a decoded image, outputs the generated decoded image to the loop filter processing unit 213, and stores the decoded image in the reference image buffer 208 as a reference image.
  • the moving picture coding apparatus 100D and the moving picture decoding apparatus 200D according to the present embodiment can be used as a post filter.
  • the moving image encoding process executed by the moving image encoding device 100D is the same as that of the moving image encoding device 100 of the first embodiment. For this reason, detailed description of the moving image encoding process is omitted. Also, the moving picture decoding process executed by the moving picture decoding apparatus 200D is the same as that of the moving picture decoding apparatus 200 of the second embodiment. For this reason, detailed description of the video decoding process is also omitted.
  • the moving picture coding apparatus and the moving picture decoding apparatus according to Embodiments 1 to 6 can suppress a reduction in error resistance.
  • The moving image encoding program for executing the moving image encoding process of the moving image encoding apparatuses 100, 100A, 100B, and 100D according to the first, third, fourth, and sixth embodiments is provided incorporated in advance in a ROM or the like.
  • Similarly, the moving picture decoding program for executing the moving picture decoding process of the moving picture decoding apparatuses 200, 200A, 200B, 200C, and 200D according to the second, third, fourth, fifth, and sixth embodiments is provided incorporated in advance in a ROM or the like.
  • The moving image decoding program executed by the apparatuses 200, 200A, 200B, 200C, and 200D may instead be recorded on a computer-readable recording medium such as a CD-ROM, flexible disk (FD), CD-R, or DVD (Digital Versatile Disk) in an installable or executable file format and provided as such.
  • The moving image decoding program executed by the decoding devices 200, 200A, 200B, 200C, and 200D may also be stored on a computer connected to a network such as the Internet and provided by being downloaded via the network.
  • the moving image decoding program executed by the decoding devices 200, 200A, 200B, 200C, and 200D may be configured to be provided or distributed via a network such as the Internet.
  • The moving picture coding program executed by the moving picture coding apparatuses 100, 100A, 100B, and 100D according to the first, third, fourth, and sixth embodiments has a module configuration including the above units (subtraction unit, transform / quantization unit, entropy coding unit, predicted image generation unit, inverse quantization / inverse transform unit, addition unit, deblocking filter processing unit, reference image buffer, loop filter unit, loop filter information generation unit, loop filter coefficient setting unit, and loop filter processing unit). As actual hardware, the CPU reads the moving image encoding program from the ROM and executes it, whereby the respective units are loaded onto the main storage device and generated there.
  • The moving picture decoding program executed by the moving picture decoding apparatuses 200, 200A, 200B, 200C, and 200D according to the second, third, fourth, fifth, and sixth embodiments likewise has a module configuration including the above units (entropy decoding unit, inverse quantization / inverse transform unit, addition unit, loop filter processing unit, predicted image generation unit, loop filter coefficient setting unit, reference image buffer, and deblocking filter processing unit). As actual hardware, the CPU (processor) reads the moving picture decoding program from the ROM and executes it, whereby the respective units are loaded onto the main storage device and generated there.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The video encoding method of the embodiments of the invention consists in: generating an error image that represents the difference between an input image and a predicted image, and quantizing the error image to generate a quantization coefficient; generating an error image by inverse quantization and then inverse transform of the quantization coefficient, then adding the error image and the predicted image to generate a decoded image; performing a filtering process on the decoded image and generating reference images; generating first application information that indicates whether or not to apply reference filters, which are filters that have been applied to the reference images, to the filtering process of the decoded image; generating first reference information that represents one reference filter among the reference filters applied to the reference images that can be referenced when generating a predicted image; generating the predicted image by means of the reference images; and encoding the quantization coefficient and the filter information, which includes the first application information and the first reference information.
PCT/JP2011/063290 2011-06-09 2011-06-09 Procédé et dispositif de codage de vidéo, et procédé et dispositif de décodage de vidéo WO2012169054A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2011/063290 WO2012169054A1 (fr) 2011-06-09 2011-06-09 Procédé et dispositif de codage de vidéo, et procédé et dispositif de décodage de vidéo

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2011/063290 WO2012169054A1 (fr) 2011-06-09 2011-06-09 Procédé et dispositif de codage de vidéo, et procédé et dispositif de décodage de vidéo

Publications (1)

Publication Number Publication Date
WO2012169054A1 true WO2012169054A1 (fr) 2012-12-13

Family

ID=47295656

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/063290 WO2012169054A1 (fr) 2011-06-09 2011-06-09 Procédé et dispositif de codage de vidéo, et procédé et dispositif de décodage de vidéo

Country Status (1)

Country Link
WO (1) WO2012169054A1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010516082A * 2007-01-09 2010-05-13 Telefonaktiebolaget LM Ericsson (publ) Adaptive filter representation
JP2010529777A * 2007-06-08 2010-08-26 Thomson Licensing Method and apparatus for in-loop de-artifact filtering based on multi-lattice sparsity-based filtering
JP2011049740A * 2009-08-26 2011-03-10 Sony Corporation Image processing apparatus and method



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11867164

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11867164

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP