WO2013098937A1 - Moving image encoding method, moving image decoding method, moving image encoding apparatus, and moving image decoding apparatus - Google Patents
- Publication number: WO2013098937A1 (PCT/JP2011/080206)
- Authority: WIPO (PCT)
- Prior art keywords: offset, unit, classes, class, image
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/82—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
Definitions
- Embodiments relate to a moving image encoding technique and a decoding technique.
- SAO pixel adaptive offset
- the encoding side sets a plurality of offset values for each predetermined region (for example, a pixel block) including a plurality of pixels in a (local) decoded image, and transmits information indicating the plurality of offset values to the decoding side. The decoding side then switches among these offset values in units of pixels and applies (adds) them to the decoded image.
- the amount of calculation and memory usage for the offset value setting process are generally large. Specifically, the total number of areas in which a plurality of offset values are set in one frame is, for example, 256 at maximum.
- the total number of offset values that can be set in each region is, for example, 16 at maximum.
- the encoding side selects a suitable region from the viewpoint of improving the encoding efficiency by trying to set a plurality of offset values for various regions and performing rate distortion optimization. Therefore, the amount of calculation and memory usage for the offset value setting process increase according to the total number of offset values that can be set in each of the various areas. Furthermore, since the total number of offset values set in one frame is 4096 at the maximum, the overhead of information indicating such offset values becomes a problem.
- ALF adaptive loop filter
- the encoding side sets a filter coefficient set and transmits information indicating the filter coefficient set to the decoding side. Then, the decoding side performs loop filter processing on the decoded image using the information indicating the transmitted filter coefficient set.
- SAO and ALF are typically combined sequentially.
- OSALF One Stage Adaptive Loop Filter
- ALF processing in OSALF includes not only processing for setting a filter coefficient set for each pixel block but also, as in SAO processing, processing for setting a plurality of offset values.
- Information indicating the plurality of offset values is transmitted to the decoding side, and the decoding side switches among the plurality of offset values in units of pixels and applies (adds) them to the decoded image.
- the setting of the filter coefficient set and the offset values can be realized, for example, by solving the Wiener-Hopf equation. Note that the amount of calculation for solving the Wiener-Hopf equation is on the order of the cube of the sum of the total number of filter coefficients and the total number of offset values set in each pixel block.
- the number of coefficients in the Wiener-Hopf equation corresponds to the square of the sum of the total number of filter coefficients and the total number of offset values set in each pixel block. Therefore, in such ALF processing, the processing for setting a filter coefficient set and a plurality of offset values generally requires a large amount of calculation and memory usage.
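As a numerical illustration of the computational orders stated above, a joint filter-plus-offset estimate can be obtained from the normal equations of a least-squares (Wiener-Hopf style) problem. The array names and the single-offset simplification below are assumptions for illustration, not the patent's exact formulation:

```python
import numpy as np

def solve_wiener_hopf(decoded_patches, targets):
    """Jointly estimate filter coefficients and one offset by least squares.

    decoded_patches: (N, T) array, T filter-tap samples per pixel
    targets:         (N,)  array, corresponding original pixel values
    Returns the T filter taps followed by the offset value.
    """
    n, t = decoded_patches.shape
    # Append a constant column so the last unknown acts as the offset value.
    a = np.hstack([decoded_patches, np.ones((n, 1))])
    # Normal equations: (A^T A) w = A^T b. The system matrix has
    # (T + 1)^2 entries and solving it costs O((T + 1)^3), matching the
    # memory and computation orders stated in the text above.
    gram = a.T @ a
    cross = a.T @ targets
    return np.linalg.solve(gram, cross)
```

On exact linear data the normal equations recover the generating coefficients, which makes the sketch easy to sanity-check.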
- EO Edge Offset
- an offset value is switched based on a size comparison between a target pixel in a decoded image and its surrounding pixels.
- For EO, a technique for reducing the overhead of information indicating offset values is known. Specifically, when EO is employed, the optimum offset values tend to follow a statistical distribution. This technique therefore reduces the overhead by predicting each offset value from the average of this distribution and signaling only the prediction residual. However, this technique does not reduce the amount of calculation and memory usage of the offset value setting process.
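The residual-signaling idea above can be sketched minimally, assuming the distribution average is known to both the encoding and decoding sides; the function names are illustrative:

```python
def encode_offsets(offsets, predicted_mean):
    """Signal only the residuals of each offset against a predicted value
    (here, the assumed-known average of the statistical distribution)."""
    return [o - predicted_mean for o in offsets]

def decode_offsets(residuals, predicted_mean):
    """Invert the prediction on the decoding side."""
    return [r + predicted_mean for r in residuals]
```

When the prediction is good, the residuals have smaller magnitudes than the raw offsets and are cheaper to entropy-code.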
- one of the purposes is to reduce the overhead of information indicating the offset value.
- an object of the embodiment is to reduce the amount of calculation and memory usage for the offset value setting process on the encoding side.
- the moving image encoding method includes setting, for each unit including one or more pixels in a decoded image, any one of a plurality of first offset classes based on an index indicating an image characteristic of the unit.
- the moving image encoding method includes setting, for each unit, one second offset class from among one or more second offset classes, namely the second offset class that includes the first offset class set in that unit.
- the moving image encoding method includes setting an offset value corresponding to each of the one or more second offset classes based on the input image and the decoded image.
- the moving image encoding method includes calculating, for each second offset class, an offset value corresponding to each of the one or more first offset classes included in that second offset class, based on the offset value corresponding to the second offset class.
- the moving image encoding method includes, for each unit, adding an offset value corresponding to the first offset class set in the unit to obtain an offset processed image.
- the moving image encoding method includes encoding information indicating an offset value corresponding to each of the one or more second offset classes, and generating encoded data.
- At least one of the one or more second offset classes includes two or more first offset classes, and the offset values corresponding to at least two of the two or more first offset classes included in the same second offset class are different from each other.
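The two-level scheme summarized above (per-unit first offset classes merged into fewer second offset classes, with only one offset coded per second offset class) can be sketched as follows; all container names and the `derive` function are illustrative assumptions:

```python
def apply_sao(decoded, first_class_of, second_class_of, second_offsets, derive):
    """Two-level SAO sketch.

    decoded:         dict position -> pixel value (toy image)
    first_class_of:  dict position -> first offset class of that unit
    second_class_of: dict first class -> containing second offset class
    second_offsets:  dict second class -> the single coded offset value
    derive:          function (second offset, first class) -> offset value
    """
    out = {}
    for pos, value in decoded.items():
        fc = first_class_of[pos]
        sc = second_class_of[fc]
        # Each unit receives the offset derived for its first offset class
        # from the one offset value coded for its second offset class.
        out[pos] = value + derive(second_offsets[sc], fc)
    return out
```

Note how two first classes merged into one second class can still receive different offsets (for example, one the sign inversion of the other), as the summary requires.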
- FIG. 1 is a block diagram illustrating a moving image encoding apparatus according to a first embodiment.
- FIG. 2 is a flowchart illustrating an operation of the moving image encoding apparatus in FIG. 1.
- FIG. 3 is a block diagram illustrating the SAO processing unit in FIG. 1.
- A diagram illustrating a reference table holding the correspondence between an index and a first offset class.
- A flowchart illustrating an operation of the SAO processing unit in FIG. 3.
- A block diagram illustrating a moving image decoding apparatus according to the first embodiment.
- FIG. 8 is a block diagram illustrating a SAO processing unit in FIG. 7.
- FIG. 11 is a block diagram illustrating an ALF processing unit in FIG. 10.
- FIG. 13 is a block diagram illustrating an ALF processing unit in FIG. 12.
- A block diagram illustrating the SAO processing unit provided in a moving picture coding apparatus according to a fourth embodiment.
- A block diagram illustrating the SAO processing unit provided in a video decoding apparatus according to the fourth embodiment.
- A block diagram illustrating the SAO processing unit provided in a moving picture coding apparatus according to a fifth embodiment.
- An explanatory drawing regarding the prediction process of the offset value corresponding to a first offset class.
- A block diagram illustrating the SAO processing unit provided in a moving picture decoding apparatus according to the fifth embodiment.
- the moving picture coding apparatus includes a moving picture coding unit 100 and a coding control unit 110.
- the moving image encoding unit 100 includes a predicted image generation unit 101, a subtraction unit 102, a transform and quantization unit 103, an inverse quantization and inverse transform unit 104, an addition unit 105, and a deblocking filter (Deblocking Filter; DF) processing unit 106, SAO processing unit 107, ALF processing unit 108, and entropy encoding unit 109.
- the encoding control unit 110 controls the operation of each unit of the moving image encoding unit 100.
- the predicted image generation unit 101 performs a prediction process on the input image 11 in units of pixel blocks, for example, and generates a predicted image.
- the input image 11 includes a plurality of pixel signals and is input from the outside.
- the predicted image generation unit 101 may perform a prediction process on the input image 11 based on an ALF processed image 17 described later.
- the prediction process may be a general process, such as temporal-direction prediction using motion compensation or spatial-direction prediction using encoded pixels within the picture. A detailed description of the prediction process is therefore omitted.
- the predicted image generation unit 101 outputs the predicted image to the subtraction unit 102 and the addition unit 105.
- the subtraction unit 102 acquires the input image 11 from the outside, and inputs the predicted image from the predicted image generation unit 101.
- the subtraction unit 102 subtracts the predicted image from the input image 11 to generate a prediction error image.
- the subtraction unit 102 outputs the prediction error image to the transform and quantization unit 103.
- the transform and quantization unit 103 receives the prediction error image from the subtraction unit 102.
- the transform and quantization unit 103 performs transform processing on the prediction error image to generate transform coefficients. Further, the transform and quantization unit 103 quantizes the transform coefficient to generate a quantized transform coefficient.
- the transform and quantization unit 103 outputs the quantized transform coefficient to the inverse quantization and inverse transform unit 104 and the entropy encoding unit 109.
- the transformation process is typically orthogonal transformation such as Discrete Cosine Transform (DCT).
- the transform process is not limited to DCT and may be a wavelet transform, independent component analysis, or the like.
- the quantization process is performed based on the quantization parameter set by the encoding control unit 110.
- the inverse quantization and inverse transform unit 104 inputs the quantized transform coefficient from the transform and quantization unit 103.
- the inverse quantization and inverse transform unit 104 dequantizes the quantized transform coefficient and decodes the transform coefficient. Further, the inverse quantization and inverse transform unit 104 performs an inverse transform process on the transform coefficient to decode the prediction error image.
- the inverse quantization and inverse transform unit 104 outputs the prediction error image to the addition unit 105.
- the inverse quantization and inverse transform unit 104 performs an inverse process of the transform and quantization unit 103. That is, the inverse quantization is performed based on the quantization parameter set by the encoding control unit 110. Further, the inverse transformation process is determined by the transformation process performed by the transformation and quantization unit 103.
- the inverse transform process includes inverse DCT (Inverse DCT; IDCT), inverse wavelet transform, and the like.
- the addition unit 105 receives the predicted image from the predicted image generation unit 101 and the prediction error image from the inverse quantization and inverse transform unit 104.
- the adding unit 105 adds the prediction error image to the prediction image to generate a (local) decoded image 12.
- the adding unit 105 outputs the decoded image 12 to the DF processing unit 106.
- the DF processing unit 106 inputs the decoded image 12 from the addition unit 105.
- the DF processing unit 106 performs DF processing on the decoded image 12 to generate a DF processed image 13.
- the DF processing unit 106 outputs the DF processed image 13 to the SAO processing unit 107.
- the DF processing performed by the DF processing unit 106 may be a conventionally known one. In general, the DF processing can be expected to have an image quality improvement effect such as suppression of block distortion included in the decoded image 12.
- the SAO processing unit 107 acquires the input image 11 from the outside, and inputs the DF processed image 13 from the DF processing unit 106.
- the SAO processing unit 107 sets an offset value for each pixel or each pixel block in a predetermined area (for example, a slice) of the DF processed image 13 based on the input image 11 and the DF processed image 13, and generates the SAO processed image 15 by applying the set offset values.
- the SAO processing unit 107 outputs the SAO processed image 15 to the ALF processing unit 108. Further, the SAO processing unit 107 outputs offset information 14 described later to the entropy encoding unit 109. Details of the SAO processing unit 107 will be described later.
- the ALF processing unit 108 acquires the input image 11 from the outside and receives the SAO processed image 15 from the SAO processing unit 107.
- the ALF processing unit 108 sets a filter (for example, a filter coefficient set including a plurality of filter coefficient values) for a predetermined region (for example, a slice) of the SAO processed image 15 based on the input image 11 and the SAO processed image 15. Then, the ALF processed image 17 is generated by applying the set filter.
- the ALF processing unit 108 outputs the ALF processed image 17 to the predicted image generation unit 101. Further, the ALF processing unit 108 outputs the filter information 16 indicating the set filter to the entropy encoding unit 109.
- the ALF processed image 17 may be stored in a storage unit (not shown) (for example, a buffer) that can be accessed by the predicted image generation unit 101.
- the ALF processed image 17 is read as a reference image by the predicted image generation unit 101 as necessary, and is used for the prediction process.
- the entropy encoding unit 109 receives the quantized transform coefficient from the transform and quantization unit 103, the offset information 14 from the SAO processing unit 107, the filter information 16 from the ALF processing unit 108, and the encoding parameters from the encoding control unit 110.
- the encoding parameter may include, for example, mode information, motion information, encoded block division information, quantization parameter, and the like.
- the entropy encoding unit 109 performs entropy encoding (for example, Huffman encoding, arithmetic encoding, etc.) on the quantized transform coefficient, the offset information 14, the filter information 16, and the encoding parameter, and generates encoded data 18.
- the entropy encoding unit 109 outputs the encoded data 18 to the outside (for example, communication system, storage system, etc.).
- the encoded data 18 is decoded by a moving picture decoding apparatus described later.
- the encoding control unit 110 performs encoding block division control, generated code amount feedback control, quantization control, mode control, and the like for the moving image encoding unit 100.
- the encoding control unit 110 outputs the encoding parameters to the entropy encoding unit 109.
- the moving picture encoding unit 100 operates as shown in FIG. 2, for example. Specifically, the subtraction unit 102 subtracts the prediction image from the input image 11 to generate a prediction error image (step S201).
- the transform and quantization unit 103 performs transform and quantization on the prediction error image generated in step S201, and generates a quantized transform coefficient (step S202).
- the inverse quantization and inverse transform unit 104 performs inverse quantization and inverse transform on the quantized transform coefficient generated in step S202, and decodes the prediction error image (step S203).
- the adding unit 105 adds the prediction error image decoded in step S203 to the prediction image to generate a (local) decoded image 12 (step S204).
- the DF processing unit 106 performs DF processing on the decoded image 12 generated in step S204 to generate a DF processed image 13 (step S205).
- In step S206, the SAO processing unit 107 performs SAO processing.
- That is, in step S206, the offset information 14 and the SAO processed image 15 are generated based on the input image 11 and the DF processed image 13 generated in step S205.
- the ALF processing unit 108 performs ALF processing (step S207). That is, in step S207, the filter information 16 and the ALF processed image 17 are generated based on the input image 11 and the SAO processed image 15 generated in step S206.
- the entropy encoding unit 109 entropy encodes the quantized transform coefficient generated in step S202, the offset information 14 generated in step S206, the filter information 16 generated in step S207, and the encoding parameters (step S208). This series of processes is repeated until encoding of the input image 11 is completed.
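Steps S201 to S208 above can be sketched as one iteration of the encoding loop; the `state` object and its methods are hypothetical stand-ins for the units in FIG. 1, not names from the patent:

```python
def encode_frame(input_image, state):
    """One pass of the hybrid-coding loop mirroring steps S201-S208."""
    pred = state.predict(input_image)                       # prediction process
    err = input_image - pred                                # S201
    q = state.quantize(state.transform(err))                # S202
    err_rec = state.inverse_transform(state.dequantize(q))  # S203
    decoded = pred + err_rec                                # S204 (local decoded image)
    df = state.deblock(decoded)                             # S205 (DF processing)
    sao_img, offset_info = state.sao(input_image, df)       # S206 (SAO processing)
    alf_img, filter_info = state.alf(input_image, sao_img)  # S207 (ALF processing)
    state.store_reference(alf_img)                          # reference for later prediction
    return state.entropy_encode(q, offset_info, filter_info)  # S208
```

The sketch only fixes the data flow between the units; each callable hides the corresponding unit's internals.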
- the operation illustrated in FIG. 2 corresponds to so-called hybrid coding including prediction processing and conversion processing.
- the video encoding apparatus according to the present embodiment does not necessarily need to perform hybrid encoding.
- for example, hybrid coding may be replaced with DPCM (Differential Pulse Code Modulation), in which case unnecessary processing may be omitted while prediction processing based on neighboring pixels is performed.
- the SAO processing unit 107 includes a first offset class setting unit 301, a second offset class setting unit 302, an offset value setting unit 303, an offset value calculation unit 304, and an offset value adding unit 305.
- the second offset class setting unit 302, the offset value setting unit 303, and the offset value calculation unit 304 may be referred to as an offset information generation unit.
- the first offset class setting unit 301 receives a predetermined region (for example, a slice) of the DF processed image 13 from the DF processing unit 106, and sets the first offset class for each unit in the predetermined region based on an index.
- the first offset class setting unit 301 generates first offset class information indicating the first offset class set for each unit.
- the first offset class setting unit 301 outputs the first offset class information to the second offset class setting unit 302 and the offset value adding unit 305.
- the unit may be one pixel or a region (for example, a pixel block) including a plurality of pixels.
- the unit is basically one pixel.
- the unit may be appropriately expanded to a region including a plurality of pixels.
- the index is a value indicating an image feature of the unit.
- the index may be the activity of the image in each unit.
- the first offset class setting unit 301 may calculate the index k (x, y) of the pixel specified by the position (x, y) by the following formula (1).
- S dec (x, y) represents the pixel value at the position (x, y) in the DF processed image 13.
- the index k (x, y) represents the activity at the position (x, y).
- the activity may be calculated based on the absolute value of the difference between the pixel specified by the position (x, y) and one adjacent pixel.
- this adjacent pixel may lie in one direction designated in advance from among the neighboring pixels in eight directions (up, down, left, right, upper left, upper right, lower left, lower right) relative to the pixel specified by the position (x, y). Further, the direction of the adjacent pixel may be determined in sequence units, frame units, slice units, or pixel block units, and information indicating the direction may be encoded.
- the activity may be calculated based on the sum of absolute differences between the pixel specified by the position (x, y) and neighboring pixels in four directions (up/down/left/right or four diagonal directions) or eight directions. Note that, using the above formulas (1), (2), and the like, the activity may also be calculated for each pixel within a certain range around the pixel of interest, for example, a surrounding block of N × N pixels (N is an integer of 2 or more), and the sum of these activities may be used as the index k(x, y).
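As an illustration of the activity indices discussed above: formulas (1) and (2) are not reproduced in this text, so the neighbor set below is an assumption consistent with the described four-direction variant:

```python
def activity_index(img, x, y, neighbors=((1, 0), (0, 1), (-1, 0), (0, -1))):
    """Activity sketch: sum of absolute differences between the pixel at
    (x, y) and designated neighbors. The default up/down/left/right set
    is illustrative; the text also allows a single pre-designated
    direction or an N x N surrounding block."""
    h, w = len(img), len(img[0])
    total = 0
    for dx, dy in neighbors:
        nx, ny = x + dx, y + dy
        if 0 <= nx < w and 0 <= ny < h:  # skip neighbors outside the picture
            total += abs(img[y][x] - img[ny][nx])
    return total
```

Larger values indicate busier local texture, which the first offset class setting then discretizes into classes.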
- instead of an activity, the first offset class setting unit 301 can calculate an index based on a magnitude comparison between the pixel of interest and its surrounding pixels, as in EO in Non-Patent Document 1. For example, when the target pixel and surrounding pixels are ranked by pixel value, the index may increase as the rank of the target pixel rises. Specifically, the first offset class setting unit 301 may calculate the index k(x, y) by the following formula (3).
- the function sign ( ⁇ ) returns 1 if ⁇ is positive, 0 if ⁇ is 0, and ⁇ 1 if ⁇ is negative.
- according to formula (3), the index k(x, y) is 8 if the pixel value of the target pixel is larger than all four surrounding pixels, 4 if the target pixel and all four surrounding pixels have the same value, and 0 if the pixel value of the target pixel is smaller than all four surrounding pixels.
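The behavior described for formula (3) (index 8 when above all four neighbors, 4 when equal to all, 0 when below all) can be reproduced as a sum of sign comparisons; this is a sketch consistent with that description, since the formula itself is not reproduced here:

```python
def eo_index(img, x, y):
    """Edge-offset style index: rank the pixel at (x, y) against its four
    vertical/horizontal neighbors using the sign() function defined in
    the text (1 if positive, 0 if zero, -1 if negative)."""
    def sign(v):
        return (v > 0) - (v < 0)
    c = img[y][x]
    nbrs = [img[y - 1][x], img[y + 1][x], img[y][x - 1], img[y][x + 1]]
    # Sum of signs ranges over [-4, 4]; adding 4 maps it to [0, 8],
    # giving 8 above all neighbors, 4 when equal, 0 below all.
    return 4 + sum(sign(c - n) for n in nbrs)
```

The diagonal, two-neighbor, and eight-neighbor variants mentioned next differ only in the `nbrs` list.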
- the above mathematical formula (3) can be modified.
- the above formula (3) is based on a magnitude comparison between the pixel specified by the position (x, y) and the four pixels adjacent in the vertical and horizontal directions; however, the index k(x, y) may instead be calculated based on a magnitude comparison between the pixel specified by the position (x, y) and the four pixels adjacent in the diagonal directions.
- alternatively, as in formula (4), the index k(x, y) may be calculated based on a magnitude comparison between the pixel specified by the position (x, y) and two adjacent pixels. Note that although formula (4) focuses on adjacent pixels in the horizontal direction, adjacent pixels in the vertical direction or in the diagonal directions (upper left and lower right, upper right and lower left, etc.) may be used instead. Alternatively, the index k(x, y) may be calculated based on a magnitude comparison between the pixel specified by the position (x, y) and its eight adjacent pixels.
- directions indicating adjacent pixels may be determined in sequence units, frame units, slice units, or pixel block units, and information indicating the directions may be encoded. For example, an index may be calculated for each pixel in one pixel block based on two adjacent pixels in the horizontal direction, while an index is calculated for each pixel in another pixel block based on two adjacent pixels in the vertical direction.
- the pixel value S dec (x, y) of the target pixel may be used as the index k (x, y).
- the scan order of the target pixel in the picture, the slice, or the pixel block (that is, the position of the target pixel) may be used as the index k (x, y).
- the scan order may be an order based on raster scan, zigzag scan, Hilbert scan, or the like.
- the unit in which the first offset class is set is not limited to one pixel and may be an area including a plurality of pixels.
- in this case, the first offset class setting unit 301 may calculate an index for each pixel in the area by the above-described methods and calculate an index for each unit based on these.
- the first offset class setting unit 301 calculates indices for all or some of the pixels included in the unit, and may use the sum, average value, median value, mode value, minimum value, or maximum value of these indices as the index of the unit.
- the scan order of the unit may be used as an index.
- the scan order may be an order based on raster scan, zigzag scan, Hilbert scan, or the like.
- the first offset class setting unit 301 may set the first offset class based on the index for each unit, for example, according to the following mathematical formula (5).
- offset_idx (x, y) represents the first offset class of the unit to which the pixel specified by the position (x, y) belongs.
- k (x, y) represents an index for setting the first offset class in the unit.
- ⁇ represents a real number of 1 or more.
- the first offset class setting unit 301 may prepare a reference table that holds the correspondence relationship between the index and the first offset class, as shown in FIG. According to the above formula (5), the range of the index corresponding to an arbitrary offset class is constant. On the other hand, according to the reference table, the index range corresponding to a certain first offset class can be narrowed, or the index range corresponding to another first offset class can be expanded.
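Formula (5) itself is not reproduced in this text; assuming it uniformly quantizes the index by β (an assumption consistent with the statement that each class covers a constant index range, with β a real number of 1 or more), the two mapping styles can be sketched as:

```python
def first_offset_class(k, beta=4.0, table=None):
    """Map an index k to a first offset class.

    Without a table: uniform quantization, offset_idx = floor(k / beta),
    so every class covers an index range of constant width beta.
    With a table: per-index lookup, so the index range of one class can
    be narrowed and another's widened, as described for the reference
    table in the text. The default beta is illustrative.
    """
    if table is not None:
        return table[k]
    return int(k // beta)
```

A reference table like `[0, 0, 1, 1, 1, 1, 1, 2, 2]` gives class 1 a wider index range than the uniform rule would.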
- the first offset class setting unit 301 may fix the index type to any one or may switch between them.
- the first offset class setting unit 301 may switch the index type in units of slices or other units.
- the encoding control unit 110 may select an optimal index type for each slice.
- Information indicating the type of the selected index is entropy encoded by the entropy encoding unit 109 and output as a part of the encoded data 18.
- the optimum index type may be one that minimizes the encoding cost represented by the following formula (6), for example.
- Cost represents the coding cost, D represents the residual sum of squares, and R represents the code amount.
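Formula (6) is not reproduced in this text; the Lagrangian form Cost = D + λ·R is a common rate-distortion cost consistent with the definitions above (D the residual sum of squares, R the code amount), and is used here as an assumption:

```python
def rd_cost(d_sse, r_bits, lam):
    """Rate-distortion cost sketch: distortion plus lambda-weighted rate.
    The Lagrangian form is an assumption, not the patent's exact formula."""
    return d_sse + lam * r_bits

def best_index_type(candidates, lam):
    """Pick the index type minimizing the coding cost, as the encoding
    control unit may do per slice. `candidates` maps an index-type name
    to its measured (D, R) pair; the names are illustrative."""
    return min(candidates, key=lambda t: rd_cost(*candidates[t], lam))
```

Changing λ trades distortion against the overhead of signaling the selected index type.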
- the second offset class setting unit 302 receives the first offset class information from the first offset class setting unit 301 and sets the second offset class for each unit based on the first offset class information.
- the second offset class setting unit 302 sets a second offset class that includes the first offset class set for each unit.
- the second offset class setting unit 302 generates second offset class information indicating the first offset class and the second offset class set for each unit.
- the second offset class setting unit 302 outputs the second offset class information to the offset value setting unit 303.
- each of the plurality of first offset classes is included in one second offset class.
- Each second offset class can include one or more first offset classes.
- the second offset class can be generated by merging one or more first offset classes.
- the total number of second offset classes is 1 or more and is smaller than the total number of first offset classes. Therefore, at least one second offset class includes a plurality of first offset classes.
- a plurality of first offset classes included in a given second offset class can be determined based on, for example, statistical properties of the offset values corresponding to those first offset classes. For example, a first offset value corresponding to a certain first offset class may have a strong correlation with a second offset value corresponding to another first offset class. In such a case, since the first offset value can be reasonably predicted by a third offset value obtained by applying a predetermined function to the second offset value, the process of setting the first offset value can be omitted. Therefore, if the offset values corresponding to a plurality of first offset classes have a strong correlation, those first offset classes may be included in the same second offset class.
- for example, if the first offset value corresponding to a certain first offset class can be appropriately predicted by a third offset value obtained by inverting the sign of the second offset value corresponding to another first offset class, the one first offset class and the other first offset class may be included in the same second offset class.
- similarly, if a first offset value corresponding to a certain first offset class can be appropriately predicted by a third offset value obtained by multiplying a second offset value corresponding to another first offset class by a constant, the two first offset classes may be included in the same second offset class.
- likewise, if a first offset value corresponding to a certain first offset class can be reasonably predicted by a third offset value obtained by adding a constant to a second offset value corresponding to another first offset class, the two first offset classes may be included in the same second offset class.
- some tendency may occur over the offset values corresponding to three or more first offset classes. Even in such a case, by applying a predetermined function to the offset value corresponding to a certain first offset class, it is possible to appropriately predict the offset values corresponding to the plurality of remaining first offset classes. Therefore, three or more first offset classes may be included in the same second offset class.
- the correspondence between the plurality of first offset classes included in the given second offset class may be uniquely determined in advance.
- here, the correspondence relationship refers to information specifying the plurality of first offset classes included in a given second offset class, together with information (for example, a function) for deriving the offset values of these first offset classes from the offset value set for the second offset class.
- a plurality of correspondence relationships may be prepared, and any one may be selected.
- information indicating which correspondence relationship has been selected may be signaled as one element of the offset information 14, for example.
- the statistical property of the offset value to be set may be predicted based on the offset value encoded in the past, and the correspondence relationship may be determined based on the statistical property.
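The correspondence relationship described above (one coded offset per second offset class, expanded to the merged first offset classes by predetermined functions such as the sign inversion, constant scaling, or constant shift mentioned earlier) can be sketched as follows; the `relation` mapping is an illustrative representation, not a structure named in the patent:

```python
def derive_first_offsets(second_offset, relation):
    """Derive the offsets of all first offset classes merged into one
    second offset class from its single coded offset value.

    relation: dict first class -> function applied to the coded offset
              (e.g. identity, sign inversion, scale, or shift)
    """
    return {fc: f(second_offset) for fc, f in relation.items()}
```

Only `second_offset` needs to be signaled; the decoding side applies the same pre-agreed functions.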
- the offset value setting unit 303 acquires the input image 11 from the outside, inputs the DF processed image 13 from the DF processing unit 106, and inputs the second offset class information from the second offset class setting unit 302.
- the offset value setting unit 303 sets an offset value corresponding to each of the second offset classes based on the input image 11 and the DF processed image 13.
- the offset value setting unit 303 outputs the offset information 14 indicating the offset value corresponding to each of the second offset classes to the offset value calculation unit 304 and the entropy encoding unit 109.
- specifically, treating the offset value corresponding to each second offset class as a variable, the offset value setting unit 303 sets the offset value corresponding to each second offset class so that the sum of squared errors between the SAO processed image 15 and the input image 11 is minimized.
- that is, the offset value setting unit 303 expresses the offset value corresponding to each first offset class included in a given second offset class using the variable representing the offset value corresponding to that second offset class. Note that the offset value corresponding to a given second offset class can be defined to match the offset value corresponding to one first offset class included in that second offset class.
- the offset value setting unit 303 evaluates the square error sum when the offset value corresponding to the first offset class corresponding to the unit is added for each unit in the DF processed image 13.
- the evaluation function of the square error sum can be defined using a variable representing an offset value corresponding to the second offset class.
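Under the simplifying assumption that each first offset class's offset is a fixed multiple of its second offset class's variable, minimizing the sum of squared errors has a closed form per second offset class. This is an illustrative sketch, not the patent's exact derivation; all argument names are assumptions:

```python
import numpy as np

def set_second_class_offsets(inp, dec, first_class, second_of, scale_of):
    """Least-squares offset setting per second offset class.

    inp, dec:    flat numpy arrays of input and decoded pixel values
    first_class: per-pixel first offset class (sequence, same length)
    second_of:   dict first class -> second offset class
    scale_of:    dict first class -> multiplier standing in for the
                 predetermined derivation function (assumption)
    """
    err = inp - dec
    sc = np.array([second_of[fc] for fc in first_class])
    w = np.array([scale_of[fc] for fc in first_class], dtype=float)
    offsets = {}
    for c in np.unique(sc):
        m = sc == c
        # argmin_v of sum (err - w * v)^2 over the class is (w.err)/(w.w)
        offsets[int(c)] = float(w[m] @ err[m] / (w[m] @ w[m]))
    return offsets
```

With identity scaling this reduces to the class-wise mean of input-minus-decoded differences, the familiar SAO offset estimate.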
- the offset value calculation unit 304 inputs the offset information 14 from the offset value setting unit 303.
- the offset value calculation unit 304 calculates, for each second offset class, the offset values corresponding to the first offset classes included in that second offset class, based on the offset value corresponding to the second offset class.
- the offset value calculation unit 304 outputs offset information indicating an offset value corresponding to each of the plurality of first offset classes to the offset value addition unit 305.
- this calculation can use a predetermined function (for example, a function (−y) that inverts the sign).
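For illustration, one plausible mapping (hypothetical names; the sign-inverting function is the example mentioned above) might look like:

```python
def derive_first_class_offsets(offset2, first_classes):
    """Derive the offsets of the first offset classes grouped into one second
    offset class from its single signaled offset value: the representative
    class keeps the value as-is, and each remaining class applies a
    predetermined function, here the sign inversion (-y)."""
    derived = {first_classes[0]: offset2}   # representative class: identity
    for c in first_classes[1:]:
        derived[c] = -offset2               # predetermined function (-y)
    return derived
```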
- the offset value addition unit 305 receives the DF processed image 13 from the DF processing unit 106, the first offset class information from the first offset class setting unit 301, and the offset information from the offset value calculation unit 304.
- for each unit in the DF processed image 13, the offset value adding unit 305 adds the offset value corresponding to the first offset class set for that unit, thereby generating the SAO processed image 15.
- the offset value adding unit 305 outputs the SAO processed image 15 to the ALF processing unit 108.
- the SAO processing unit 107 operates as illustrated in FIG. Specifically, the first offset class setting unit 301 sets the first offset class for each pixel or each pixel block in the DF processed image 13 (step S601).
- the second offset class setting unit 302 further sets a second offset class for each pixel or each pixel block in the DF processed image 13, based on the first offset class information set in step S601 (step S602).
- the offset value setting unit 303 sets an offset value corresponding to each of the one or more second offset classes set in step S602 (step S603).
- based on the offset value corresponding to each second offset class set in step S603, the offset value calculation unit 304 calculates the offset value corresponding to each first offset class included in that second offset class (step S604).
- the offset value addition unit 305 adds, to each pixel or each pixel block in the DF processed image 13, the offset value corresponding to the first offset class set for that pixel or pixel block, thereby generating the SAO processed image 15.
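The final addition step of this flow can be sketched as follows, treating each pixel as a unit (hypothetical names, a simplification of the per-unit addition described above):

```python
import numpy as np

def add_first_class_offsets(df_img, class1_map, offsets1):
    """Add, to each pixel, the offset value of the first offset class set for
    that pixel, producing the SAO processed image."""
    return np.asarray(df_img, dtype=float) + np.asarray(offsets1)[np.asarray(class1_map)]
```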
- one or both of the DF processing unit 106 and the ALF processing unit 108 may be omitted.
- the SAO processing unit 107 may perform SAO processing on the decoded image 12 instead of the DF processed image 13.
- the SAO processing unit 107 may output the SAO processed image 15 to the predicted image generation unit 101.
- when filter processing such as DF processing or ALF processing is omitted, the image quality improvement effect of that filter processing is not obtained, but the amount of calculation for the filter processing is reduced.
- the application order of the SAO process, the ALF process, and the DF process may be changed from that illustrated in FIG.
- the moving picture decoding apparatus includes a moving picture decoding unit 700 and a decoding control unit 708.
- the video decoding unit 700 includes an entropy decoding unit 701, an inverse quantization and inverse transformation unit 702, an addition unit 703, a DF processing unit 704, a SAO processing unit 705, an ALF processing unit 706, and a predicted image generation unit. 707.
- the decoding control unit 708 controls the operation of each unit of the moving image decoding unit 700.
- the entropy decoding unit 701 inputs the encoded data 21 from the outside (for example, a communication system or a storage system).
- the encoded data 21 is the same as or similar to the encoded data 18 described above.
- the entropy decoding unit 701 performs entropy decoding on the encoded data 21, and generates quantized transform coefficients, an encoding parameter 22, offset information 23, and filter information 24.
- the offset information 23 may be the same as or similar to the offset information 14.
- the filter information 24 may be the same as or similar to the filter information 16.
- the entropy decoding unit 701 outputs the quantized transform coefficient to the inverse quantization and inverse transform unit 702, outputs the encoding parameter 22 to the decoding control unit 708, and outputs the offset information 23 to the SAO processing unit 705.
- the filter information 24 is output to the ALF processing unit 706.
- the inverse quantization and inverse transform unit 702 receives the quantized transform coefficient from the entropy decoding unit 701, performs inverse quantization on it to decode the transform coefficient, and then performs an inverse transform process on the transform coefficient to decode the prediction error image. The inverse quantization and inverse transform unit 702 outputs the prediction error image to the addition unit 703. Basically, the inverse quantization and inverse transform unit 702 performs the same or similar processing as the inverse quantization and inverse transform unit 104 described above. That is, the inverse quantization is performed based on the quantization parameter set by the decoding control unit 708, and the inverse transform process is determined by the transform process performed on the encoding side, for example, IDCT or inverse wavelet transform.
- the addition unit 703 receives a prediction image from the prediction image generation unit 707 and inputs a prediction error image from the inverse quantization and inverse transformation unit 702. The adding unit 703 adds the prediction error image to the prediction image to generate a decoded image 25. The adding unit 703 outputs the decoded image 25 to the DF processing unit 704.
- the DF processing unit 704 inputs the decoded image 25 from the adding unit 703.
- the DF processing unit 704 performs DF processing on the decoded image 25 to generate a DF processed image 26. That is, the DF processing unit 704 performs the same or similar processing as the DF processing unit 106.
- the DF processing unit 704 outputs the DF processed image 26 to the SAO processing unit 705.
- the SAO processing unit 705 inputs the offset information 23 from the entropy decoding unit 701 and inputs the DF processed image 26 from the DF processing unit 704.
- the SAO processing unit 705 generates the SAO processed image 27 by applying the offset value to the DF processed image 26 based on the offset information 23.
- the SAO processing unit 705 outputs the SAO processed image 27 to the ALF processing unit 706. Details of the SAO processing unit 705 will be described later.
- the ALF processing unit 706 receives the filter information 24 from the entropy decoding unit 701 and the SAO processing image 27 from the SAO processing unit 705.
- the ALF processing unit 706 generates the ALF processed image 28 by applying a filter to the SAO processed image 27 based on the filter information 24.
- the ALF processing unit 706 outputs the ALF processed image 28 to the predicted image generation unit 707. Further, the ALF processing unit 706 may provide the ALF processed image 28 to the outside (for example, a display system) as an output image.
- the ALF processed image 28 may be stored in a storage unit (not shown) (for example, a buffer) that can be accessed by the predicted image generation unit 707.
- the ALF processed image 28 is read as a reference image by the predicted image generation unit 707 as necessary, and is used for the prediction process.
- the predicted image generation unit 707 performs prediction processing of the output image in units of pixel blocks or different units, and generates a predicted image.
- the predicted image generation unit 707 may perform output image prediction processing based on the ALF processed image 28 described above. That is, the predicted image generation unit 707 performs the same or similar processing as the predicted image generation unit 101 described above.
- the predicted image generation unit 707 outputs the predicted image to the adding unit 703.
- the decoding control unit 708 receives the encoding parameter 22 from the entropy decoding unit 701.
- the decoding control unit 708 performs coding block division control, quantization control, mode control, and the like based on the coding parameter 22.
- the moving picture decoding unit 700 operates as shown in FIG. 8, for example.
- the entropy decoding unit 701 performs entropy decoding on the encoded data 21, and generates quantized transform coefficients, encoding parameters 22, offset information 23, and filter information 24 (step S801).
- the inverse quantization and inverse transform unit 702 performs inverse quantization and inverse transform on the quantized transform coefficient generated in step S801, and decodes a prediction error image (step S802).
- the adding unit 703 adds the prediction error image decoded in step S802 to the prediction image to generate a decoded image 25 (step S803).
- the DF processing unit 704 performs DF processing on the decoded image 25 generated in step S803 to generate a DF processed image 26 (step S804).
- in step S805, the SAO processing unit 705 performs SAO processing. That is, in step S805, the SAO processed image 27 is generated based on the offset information 23 generated in step S801 and the DF processed image 26 generated in step S804.
- in step S806, the ALF processing unit 706 performs ALF processing. That is, in step S806, the ALF processed image 28 is generated based on the filter information 24 generated in step S801 and the SAO processed image 27 generated in step S805. These series of processes are repeated until the output image is completely decoded.
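The decoding loop above can be sketched with the processing stages passed in as callables; everything here is a hypothetical skeleton, not the apparatus itself:

```python
def decode_frame(coded, entropy_decode, dequant_itransform, predict, df, sao, alf):
    """One pass of the decoding loop of steps S801-S806."""
    qcoeff, params, offset_info, filter_info = entropy_decode(coded)  # S801
    pred_err = dequant_itransform(qcoeff)                             # S802
    decoded = predict(params) + pred_err                              # S803
    df_img = df(decoded)                                              # S804
    sao_img = sao(df_img, offset_info)                                # S805
    return alf(sao_img, filter_info)                                  # S806
```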
- the SAO processing unit 705 includes a first offset class setting unit 901, an offset value calculation unit 902, and an offset value addition unit 903.
- the first offset class setting unit 901 receives a predetermined region (for example, a slice) of the DF processed image 26 from the DF processing unit 704, and sets a first offset class for each unit in the predetermined region based on an index.
- the first offset class setting unit 901 may perform the same or similar processing as the first offset class setting unit 301.
- the first offset class setting unit 901 generates first offset class information indicating the first offset class set for each unit.
- the first offset class setting unit 901 outputs the first offset class information to the offset value adding unit 903.
- the offset value calculation unit 902 receives the offset information 23 from the entropy decoding unit 701. For each second offset class, the offset value calculation unit 902 calculates the offset value corresponding to each first offset class included in that second offset class, based on the offset value corresponding to the second offset class. Note that the offset value calculation unit 902 may perform the same or similar processing as the offset value calculation unit 304. The offset value calculation unit 902 outputs offset information indicating the offset value corresponding to each of the plurality of first offset classes to the offset value addition unit 903.
- the offset value adding unit 903 receives the DF processed image 26 from the DF processing unit 704, the first offset class information from the first offset class setting unit 901, and the offset information from the offset value calculating unit 902.
- the offset value adding unit 903 adds the offset value corresponding to the first offset class set for each unit in the DF processed image 26 to generate the SAO processed image 27.
- the offset value adding unit 903 outputs the SAO processed image 27 to the ALF processing unit 706.
- one or both of the DF processing unit 704 and the ALF processing unit 706 may be omitted.
- the SAO processing unit 705 may perform SAO processing on the decoded image 25 instead of the DF processed image 26.
- the SAO processing unit 705 may output the SAO processed image 27 to the predicted image generation unit 707. Further, the application order of the SAO process, the ALF process, and the DF process may be changed from that illustrated in FIG.
- as described above, the video encoding apparatus according to the present embodiment sets an offset value corresponding to each of one or more second offset classes, instead of setting an offset value corresponding to each of the plurality of first offset classes.
- the total number of second offset classes is smaller than the total number of first offset classes. Therefore, according to this moving picture encoding apparatus, it is possible to reduce the amount of calculation and the memory usage for the offset value setting process.
- the offset value corresponding to each of the one or more first offset classes included in a given second offset class can be calculated based on the offset value set for that second offset class.
- moreover, since it is only necessary to signal to the moving picture decoding apparatus the offset value corresponding to each of the one or more second offset classes, the overhead of information indicating offset values can be reduced.
- the moving picture decoding apparatus according to the present embodiment sets an offset value corresponding to each of one or more first offset classes included in a given second offset class in the second offset class. Calculation is performed based on the offset value. Therefore, according to this video decoding device, SAO processing can be performed based on information indicating an offset value from the video encoding device according to the present embodiment.
- OSALF Synchronization-based Filtering
- in a technique called OSALF, ALF processing and SAO processing are switched and applied in units of, for example, pixel blocks.
- the first embodiment may be combined with OSALF.
- the second embodiment uses the first embodiment described above for one or both of ALF processing and SAO processing in OSALF.
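The per-block switching can be sketched as follows (hypothetical skeleton; the selection flags are assumed to be determined by the encoding control and signaled to the decoder):

```python
def osalf_apply(blocks, use_alf_flags, alf_fn, sao_fn):
    """Apply either ALF processing or SAO processing to each pixel block,
    switching per block as in OSALF."""
    return [alf_fn(b) if use_alf else sao_fn(b)
            for b, use_alf in zip(blocks, use_alf_flags)]
```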
- the moving picture coding apparatus includes a moving picture coding unit 1000 and a coding control unit 1010.
- the video encoding unit 1000 includes a predicted image generation unit 1001, a subtraction unit 102, a transform and quantization unit 103, an inverse quantization and inverse transform unit 104, an adder 105, a DF processing unit 1006, an SAO A processing unit 1007, an ALF processing unit 1008, and an entropy encoding unit 1009 are provided.
- the predicted image generation unit 1001 performs a prediction process on the input image 11 in units of pixel blocks, for example, and generates a predicted image. As will be described later, in the present embodiment, either one of the ALF processed image 32 and the SAO processed image 34 is generated in units of pixel blocks, for example.
- the predicted image generation unit 1001 may perform a prediction process for each predetermined region (for example, a pixel block) of the input image 11 based on one of the ALF processed image 32 and the SAO processed image 34. Note that the predicted image generation unit 1001 may perform the same or similar prediction processing as the predicted image generation unit 101.
- the predicted image generation unit 1001 outputs the predicted image to the subtraction unit 102 and the addition unit 105.
- the DF processing unit 1006 receives the decoded image 12 from the adding unit 105.
- the DF processing unit 1006 performs DF processing on the decoded image 12 to generate a DF processed image 13.
- the DF processing unit 1006 may perform the same or similar DF processing as the DF processing unit 106.
- the DF processing unit 1006 outputs each predetermined region (for example, a pixel block) of the DF processed image 13 to either the SAO processing unit 1007 or the ALF processing unit 1008 in accordance with the control from the encoding control unit 1010.
- the SAO processing unit 1007 acquires the input image 11 from the outside, and inputs the DF processed image 13 from the DF processing unit 1006.
- the SAO processing unit 1007 performs SAO processing on the DF processed image 13 based on the input image 11 to generate offset information 33 and SAO processed image 34.
- the SAO processing unit 1007 may perform the same or similar processing as the SAO processing unit 107, or may perform conventional SAO processing.
- the SAO processing unit 1007 outputs the offset information 33 to the entropy encoding unit 1009, and outputs the SAO processed image 34 to the predicted image generation unit 1001.
- the ALF processing unit 1008 acquires the input image 11 from the outside, and inputs the DF processed image 13 from the DF processing unit 1006.
- the ALF processing unit 1008 performs ALF processing on the DF processed image 13 based on the input image 11, and generates filter information and offset information 31 and an ALF processed image 32.
- in the description of the present embodiment, the ALF processing unit 1008 is assumed to be the one illustrated in FIG. 11, but it may be the same as or similar to the ALF processing unit 108. Details of the ALF processing unit 1008 shown in FIG. 11 will be described later.
- the ALF processing unit 1008 outputs the filter information and the offset information 31 to the entropy encoding unit 1009, and outputs the ALF processed image 32 to the predicted image generation unit 1001.
- the ALF processed image 32 or the SAO processed image 34 may be stored in a storage unit (not shown) (for example, a buffer) accessible by the predicted image generation unit 1001.
- the ALF processed image 32 or the SAO processed image 34 is read as a reference image by the predicted image generation unit 1001 as necessary and used for the prediction process.
- the entropy encoding unit 1009 receives the quantized transform coefficient from the transform and quantization unit 103, and receives the encoding parameter from the encoding control unit 1010. Furthermore, for each pixel block, the entropy encoding unit 1009 receives either the filter information and offset information 31 from the ALF processing unit 1008 or the offset information 33 from the SAO processing unit 1007. The entropy encoding unit 1009 entropy-encodes the quantized transform coefficient, the encoding parameter, and the filter information and offset information 31 (or offset information 33) to generate encoded data 35. The entropy encoding unit 1009 outputs the encoded data 35 to the outside. The encoded data 35 is decoded by a moving picture decoding apparatus described later.
- the coding control unit 1010 performs coding block division control, generated code amount feedback control, quantization control, mode control, ALF processing, SAO processing selection control, and the like for the moving image coding unit 1000.
- the encoding control unit 1010 outputs the encoding parameter to the entropy encoding unit 1009.
- the ALF processing unit 1008 can include a first offset class setting unit 301, a second offset class setting unit 302, a filter coefficient set and offset value setting unit 1103, an offset value calculation unit 304, and a filter processing unit 1105.
- the second offset class setting unit 302, the filter coefficient set and offset value setting unit 1103, and the offset value calculation unit 304 may be referred to as a filter information and offset information generation unit.
- the filter coefficient set and offset value setting unit 1103 acquires the input image 11 from the outside, receives the DF processed image 13 from the DF processing unit 1006, and receives the second offset class information from the second offset class setting unit 302.
- the filter coefficient set and offset value setting unit 1103 sets a filter coefficient set and an offset value corresponding to each of the second offset classes based on the input image 11 and the DF processed image 13.
- the filter coefficient set and offset value setting unit 1103 outputs the filter information 36 indicating the set filter coefficient set to the filter processing unit 1105. Further, the filter coefficient set and offset value setting unit 1103 outputs the offset information 37 indicating the offset value set for each of the second offset classes to the offset value calculation unit 304.
- the filter coefficient set and offset value setting unit 1103 outputs the filter information and offset information 31, which includes the filter information 36 and the offset information 37, to the entropy encoding unit 1009.
- specifically, the filter coefficient set and offset value setting unit 1103 treats the filter coefficient values in the filter coefficient set and the offset values corresponding to the second offset classes as variables, and sets the filter coefficient set and the offset value corresponding to each of the second offset classes so that the sum of squared errors between the ALF processed image 32 and the input image 11 is minimized.
- Such filter coefficient sets and offset values can be derived by solving the Wiener-Hopf equation.
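A sketch under simplifying assumptions (1-D signal, filter taps as shifted copies of the input, per-class offsets as 0/1 indicator columns; all names hypothetical): stacking both into one design matrix turns the joint problem into ordinary least squares, whose normal equations are the Wiener-Hopf system:

```python
import numpy as np

def solve_filter_and_offsets(df_sig, target, num_taps, class_map, num_classes):
    """Jointly solve for filter coefficients and per-class offsets minimizing
    the squared error between the filtered-plus-offset signal and the target."""
    cols = [np.roll(df_sig, s) for s in range(num_taps)]             # filter taps
    cols += [(class_map == c).astype(float) for c in range(num_classes)]
    A = np.stack(cols, axis=1)
    # least-squares solution of A x ~= target (normal equations A^T A x = A^T b)
    x, *_ = np.linalg.lstsq(A, target, rcond=None)
    return x[:num_taps], x[num_taps:]
```

If the target was in fact produced by a filter plus per-class offsets, this solve recovers both exactly; otherwise it returns the squared-error-minimizing approximation.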
- the filter processing unit 1105 receives the DF processed image 13 from the DF processing unit 1006, the first offset class information from the first offset class setting unit 301, the offset information indicating the offset value corresponding to each of the plurality of first offset classes from the offset value calculation unit 304, and the filter information 36 from the filter coefficient set and offset value setting unit 1103.
- the filter processing unit 1105 performs a filter operation on the DF processed image 13 based on the filter information 36, and then adds, for each unit in the DF processed image 13, the offset value corresponding to the first offset class set for that unit, thereby generating the ALF processed image 32.
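This filter-then-offset operation can be sketched in 1-D (hypothetical names; an actual ALF uses a 2-D filter shape):

```python
import numpy as np

def alf_filter_and_offset(df_sig, coeffs, class1_map, offsets1):
    """Apply the filter coefficient set to the signal, then add the offset
    of each unit's first offset class."""
    filtered = np.convolve(df_sig, coeffs, mode="same")
    return filtered + np.asarray(offsets1)[np.asarray(class1_map)]
```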
- the filter processing unit 1105 outputs the ALF processed image 32 to the predicted image generation unit 1001.
- the DF processing unit 1006 may be omitted in the video encoding device according to the present embodiment.
- the ALF processing unit 1008 (or SAO processing unit 1007) may perform ALF processing (or SAO processing) on the decoded image 12 instead of the DF processing image 13.
- when filter processing such as DF processing is omitted, the image quality improvement effect of that filter processing is not obtained, but the amount of calculation for the filter processing is reduced.
- the application order of the SAO processing, ALF processing, and DF processing may be changed from that illustrated in FIG.
- the moving image decoding apparatus includes a moving image decoding unit 1200 and a decoding control unit 1208.
- the video decoding unit 1200 includes an entropy decoding unit 1201, an inverse quantization and inverse transformation unit 702, an addition unit 703, a DF processing unit 1204, an ALF processing unit 1205, an SAO processing unit 1206, and a predicted image generation unit. 1207.
- the entropy decoding unit 1201 inputs the encoded data 41 from the outside (for example, a communication system or a storage system).
- the encoded data 41 is the same as or similar to the encoded data 35 described above.
- the entropy decoding unit 1201 entropy-decodes the encoded data 41 to generate a quantized transform coefficient, an encoding parameter 42, filter information and offset information 43 (or offset information 44).
- the entropy decoding unit 1201 outputs the quantized transform coefficient to the inverse quantization and inverse transform unit 702, outputs the encoding parameter 42 to the decoding control unit 1208, and outputs the filter information and offset information 43 to the ALF processing unit 1205.
- the offset information 44 is output to the SAO processing unit 1206.
- the filter information and offset information 43 includes offset information 46 and filter information 47.
- the offset information 46 may be the same as or similar to the offset information 37 described above.
- the filter information 47 may be the same as or similar to the filter information 36 described above. That is, the filter information and offset information 43 may be the same as or similar to the filter information and offset information 31. Further, the offset information 44 may be the same as or similar to the offset information 33.
- the DF processing unit 1204 receives the decoded image 25 from the adding unit 703.
- the DF processing unit 1204 performs DF processing on the decoded image 25 to generate a DF processed image 26.
- the DF processing unit 1204 may perform the same or similar DF processing as the DF processing unit 704.
- the DF processing unit 1204 outputs each predetermined region (for example, a pixel block) of the DF processed image 26 to one of the ALF processing unit 1205 and the SAO processing unit 1206 in accordance with control from the decoding control unit 1208.
- the ALF processing unit 1205 may be the one illustrated in FIG. 13, or may be the same as or similar to the ALF processing unit 706. In the description of the present embodiment, it is assumed that the ALF processing unit 1205 is the one illustrated in FIG. 13. Details of the ALF processing unit 1205 shown in FIG. 13 will be described later.
- the ALF processing unit 1205 receives the filter information and the offset information 43 from the entropy decoding unit 1201 and receives the DF processed image 26 from the DF processing unit 1204.
- the ALF processing unit 1205 generates the ALF processed image 45 by applying a filter to the DF processed image 26 based on the filter information and the offset information 43.
- the ALF processing unit 1205 outputs the ALF processed image 45 to the predicted image generation unit 1207. Further, the ALF processing unit 1205 may provide the ALF processed image 45 to the outside (for example, a display system) as an output image.
- the SAO processing unit 1206 receives the offset information 44 from the entropy decoding unit 1201 and inputs the DF processed image 26 from the DF processing unit 1204.
- the SAO processing unit 1206 generates the SAO processed image 46 by applying the offset value to the DF processed image 26 based on the offset information 44.
- the SAO processing unit 1206 may perform the same or similar processing as the SAO processing unit 705, or may perform conventional SAO processing.
- the SAO processing unit 1206 outputs the SAO processed image 46 to the predicted image generation unit 1207. Further, the SAO processing unit 1206 may provide the SAO processed image 46 to the outside (for example, a display system) as an output image.
- the ALF processed image 45 or the SAO processed image 46 may be stored in a storage unit (not shown) (for example, a buffer) accessible by the predicted image generation unit 1207.
- the ALF processed image 45 or the SAO processed image 46 is read as a reference image by the predicted image generation unit 1207 as necessary and used for the prediction process.
- the predicted image generation unit 1207 performs output image prediction processing in units of pixel blocks or different units, and generates a predicted image. As described above, in the present embodiment, for example, one of the ALF processed image 45 and the SAO processed image 46 is generated in units of pixel blocks.
- the predicted image generation unit 1207 may perform output image prediction processing based on one of the ALF processed image 45 and the SAO processed image 46. That is, the predicted image generation unit 1207 performs the same or similar processing as that of the predicted image generation unit 1001 described above.
- the predicted image generation unit 1207 outputs the predicted image to the adding unit 703.
- the decoding control unit 1208 receives the encoding parameter 42 from the entropy decoding unit 1201. Based on the encoding parameter 42, the decoding control unit 1208 performs encoding block division control, quantization control, mode control, ALF processing, SAO processing selection control, and the like.
- the ALF processing unit 1205 can include a first offset class setting unit 901, an offset value calculation unit 902, and a filter processing unit 1303, as illustrated in FIG.
- the filter processing unit 1303 receives the DF processed image 26 from the DF processing unit 1204, the first offset class information from the first offset class setting unit 901, the offset information indicating the offset value corresponding to each of the plurality of first offset classes from the offset value calculation unit 902, and the filter information 47 from the entropy decoding unit 1201. The filter processing unit 1303 performs a filter operation on the DF processed image 26 based on the filter information 47, and then adds, for each unit in the DF processed image 26, the offset value corresponding to the first offset class set for that unit, thereby generating the ALF processed image 45. The filter processing unit 1303 outputs the ALF processed image 45 to the predicted image generation unit 1207.
- the DF processing unit 1204 may be omitted in the video decoding device according to the present embodiment.
- the ALF processing unit 1205 (or SAO processing unit 1206) may perform ALF processing (or SAO processing) on the decoded image 25 instead of the DF processing image 26.
- when ALF processing or SAO processing, or filter processing such as DF processing, is omitted, the image quality improvement effect of that filter processing is not obtained, but the amount of calculation for the filter processing is reduced.
- the application order of the SAO process, the ALF process, and the DF process may be changed from that illustrated in FIG.
- the moving picture coding apparatus and the moving picture decoding apparatus according to the second embodiment use the first embodiment described above for one or both of the ALF process and the SAO process in OSALF. Therefore, according to these video encoding device and video decoding device, it is possible to obtain the effect of OSALF and the same or similar effect as in the first embodiment.
- processing corresponding to SAO processing may be performed within the framework of ALF processing.
- the third embodiment uses the first embodiment described above for such ALF processing.
- the moving image encoding apparatus includes a moving image encoding unit 1400 and an encoding control unit 1410.
- the moving image encoding unit 1400 includes a predicted image generation unit 101, a subtraction unit 102, a transformation and quantization unit 103, an inverse quantization and inverse transformation unit 104, an addition unit 105, a DF processing unit 106, an ALF A processing unit 1408 and an entropy encoding unit 1409 are provided.
- the ALF processing unit 1408 acquires the input image 11 from the outside, and inputs the DF processed image 13 from the DF processing unit 106.
- the ALF processing unit 1408 performs ALF processing on the DF processed image 13 based on the input image 11 to generate filter information and offset information 51 and an ALF processed image 52.
- the filter information and offset information 51 may be the same as or similar to the filter information and offset information 31.
- the ALF processed image 52 may be the same as or similar to the ALF processed image 32. That is, the ALF processing unit 1408 may be the same as or similar to that shown in FIG.
- the ALF processing unit 1408 outputs the filter information and the offset information 51 to the entropy encoding unit 1409, and outputs the ALF processed image 52 to the predicted image generation unit 101.
- the ALF processed image 52 may be stored in a storage unit (not shown) (for example, a buffer) accessible by the predicted image generation unit 101.
- the ALF processed image 52 is read as a reference image by the predicted image generation unit 101 as necessary, and is used for the prediction process.
- the entropy coding unit 1409 receives the quantized transform coefficient from the transform and quantization unit 103, receives the filter information and offset information 51 from the ALF processing unit 1408, and receives the coding parameters from the coding control unit 1410.
- the entropy encoding unit 1409 entropy-encodes the quantized transform coefficient, the filter information and offset information 51, and the encoding parameter to generate encoded data 53.
- the entropy encoding unit 1409 outputs the encoded data 53 to the outside.
- the encoded data 53 is decoded by a moving picture decoding apparatus described later.
- the encoding control unit 1410 performs encoding block division control, generated code amount feedback control, quantization control, mode control, and the like for the moving image encoding unit 1400.
- the encoding control unit 1410 outputs the encoding parameter to the entropy encoding unit 1409.
- the DF processing unit 106 may be omitted in the video encoding device according to the present embodiment.
- the ALF processing unit 1408 may perform ALF processing on the decoded image 12 instead of the DF processed image 13.
- in this case, the image quality improvement effect of filter processing such as DF processing is forgone, but the amount of calculation for that filter processing is reduced accordingly.
- the application order of the ALF process and the DF process may be changed from that illustrated in FIG.
- the moving picture decoding apparatus includes a moving picture decoding unit 1500 and a decoding control unit 1508.
- the moving image decoding unit 1500 includes an entropy decoding unit 1501, an inverse quantization and inverse transformation unit 702, an addition unit 703, a DF processing unit 704, an ALF processing unit 1505, and a predicted image generation unit 707.
- the entropy decoding unit 1501 inputs the encoded data 61 from the outside (for example, a communication system or a storage system).
- the encoded data 61 is the same as or similar to the encoded data 53 described above.
- the entropy decoding unit 1501 entropy-decodes the encoded data 61 to generate a quantized transform coefficient, an encoding parameter 62, filter information, and offset information 63.
- the entropy decoding unit 1501 outputs the quantized transform coefficient to the inverse quantization and inverse transform unit 702, outputs the encoding parameter 62 to the decoding control unit 1508, and outputs the filter information and offset information 63 to the ALF processing unit 1505.
- the filter information and offset information 63 may be the same as or similar to the filter information and offset information 43.
- the ALF processing unit 1505 receives the filter information and the offset information 63 from the entropy decoding unit 1501 and inputs the DF processed image 26 from the DF processing unit 704.
- the ALF processing unit 1505 generates the ALF processed image 64 by applying a filter to the DF processed image 26 based on the filter information and the offset information 63.
- the ALF processed image 64 may be the same as or similar to the ALF processed image 45. That is, the ALF processing unit 1505 may be the same as or similar to that shown in FIG.
- the ALF processing unit 1505 outputs the ALF processed image 64 to the predicted image generation unit 707.
- the ALF processing unit 1505 may provide the ALF processed image 64 to the outside (for example, a display system) as an output image.
- the ALF processed image 64 may be stored in a storage unit (not shown) (for example, a buffer) accessible by the predicted image generation unit 707.
- the ALF processed image 64 is read as a reference image by the predicted image generation unit 707 as necessary, and is used for the prediction process.
- the decoding control unit 1508 receives the encoding parameter 62 from the entropy decoding unit 1501.
- the decoding control unit 1508 performs coding block division control, quantization control, mode control, and the like based on the coding parameter 62.
- the DF processing unit 704 may be omitted in the video decoding device according to the present embodiment.
- the ALF processing unit 1505 may perform ALF processing on the decoded image 25 instead of the DF processed image 26. Further, the application order of the ALF process and the DF process may be changed from that illustrated in FIG.
- the video encoding device and the video decoding device according to the third embodiment use the first embodiment in a process corresponding to the SAO process performed in the framework of the ALF process. Therefore, according to these moving image encoding apparatus and moving image decoding apparatus, it is possible to obtain the effect by the ALF process accompanied by the process corresponding to the SAO process and the same or similar effect as the first embodiment.
- the video encoding apparatus performs processing corresponding to SAO processing within the framework of ALF processing, but additional SAO processing may be added.
- This SAO process may be a conventional one or the one described in the first embodiment.
- however, parameters such as the total number of offset values to be set and the index for switching the offset values may be determined differently from those used in the ALF process.
- the SAO process may be added before or after the ALF process, or may be added before or after the DF process.
- the video encoding apparatus according to the fourth embodiment is different from the video encoding apparatus according to the first to third embodiments in the SAO processing unit.
- the video encoding apparatus according to the fourth embodiment can include a SAO processing unit exemplified in FIG.
- the SAO processing unit in FIG. 16 includes a first offset class setting unit 301, a second offset class setting unit 302, an offset value setting unit 1603, an offset value calculating unit 1604, and an offset value adding unit 305.
- the second offset class setting unit 302, the offset value setting unit 1603, and the offset value calculation unit 1604 may be referred to as an offset information generation unit.
- the offset value setting unit 1603 acquires the input image 11 from the outside, inputs the DF processed image 13 from the DF processing unit 106, and inputs the second offset class information from the second offset class setting unit 302.
- the offset value setting unit 1603 sets an offset value corresponding to each of the second offset classes based on the input image 11 and the DF processed image 13. Note that the offset value setting unit 1603 may set an offset value corresponding to each of the second offset classes by performing the same or similar processing as the offset value setting unit 303.
- the offset value setting unit 1603 outputs the offset information 14 indicating the offset value corresponding to each of the second offset classes to the offset value calculation unit 1604.
- the offset value calculation unit 1604 receives the offset information 14 from the offset value setting unit 1603. For each second offset class, the offset value calculation unit 1604 calculates the offset values corresponding to the first offset classes included in that second offset class, based on the offset value corresponding to the second offset class. Note that the offset value calculation unit 1604 may calculate the offset value corresponding to each of the plurality of first offset classes by performing processing that is the same as or similar to that of the offset value calculation unit 304. The offset value calculation unit 1604 outputs offset information 71 indicating the offset value corresponding to each of the plurality of first offset classes to the offset value addition unit 305 and the entropy encoding unit 109. The offset information 71 is signaled instead of the offset information 14 described above.
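The derivation performed by the offset value calculation unit can be sketched as follows. This is a hypothetical Python illustration only: the function name, the dictionary representation, and the particular mapping (same magnitude with opposite signs for paired first offset classes, a possibility suggested by the claims) are assumptions for illustration, not the patent's prescribed method.

```python
# Hypothetical sketch: derive offsets for the first offset classes contained in
# one second offset class from that second class's single signaled offset value.
# The mapping used here (same magnitude, opposite signs for the second half of
# the class list) is one possibility consistent with the claims, not the only one.

def derive_first_class_offsets(second_class_offset, first_classes):
    """Return {first_class: offset} for the first classes in one second class.

    first_classes is assumed to be ordered so that the first half are
    "positive" classes and the second half are their sign-inverted partners.
    """
    offsets = {}
    half = len(first_classes) // 2
    for i, cls in enumerate(first_classes):
        if i < half:
            offsets[cls] = second_class_offset   # e.g. local-minimum classes
        else:
            offsets[cls] = -second_class_offset  # e.g. local-maximum classes
    return offsets

# Example: one second class covering first classes 0 (valley) and 8 (peak)
print(derive_first_class_offsets(3, [0, 8]))  # {0: 3, 8: -3}
```

Under this assumed mapping, a single signaled value per second offset class yields distinct offsets for all of its first offset classes, which is the property the embodiments rely on.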
- the offset value addition unit 1605 receives the DF processed image 13 from the DF processing unit 106, the first offset class information from the first offset class setting unit 301, and the offset information 71 from the offset value calculation unit 1604.
- the offset value adding unit 1605 adds the offset value corresponding to the first offset class set for each unit in the DF processed image 13 to generate the SAO processed image 15.
- the offset value adding unit 1605 outputs the SAO processed image 15 to the ALF processing unit 108.
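The per-unit offset addition described above can be sketched as follows. Per-pixel units, nested-list images, and 8-bit clipping are assumptions for illustration; the patent defines units of one or more pixels and does not fix a bit depth or clipping rule.

```python
# Minimal sketch of the offset-addition step: for each pixel, look up the
# first offset class assigned to it and add the corresponding offset value.
# Clipping to the 8-bit sample range is an illustrative assumption.

def add_offsets(image, class_map, offsets, bit_depth=8):
    max_val = (1 << bit_depth) - 1
    out = []
    for row, class_row in zip(image, class_map):
        out.append([min(max_val, max(0, p + offsets[c]))
                    for p, c in zip(row, class_row)])
    return out

img = [[10, 250], [128, 0]]
classes = [[0, 1], [0, 1]]
print(add_offsets(img, classes, {0: 5, 1: 10}))
# [[15, 255], [133, 10]]
```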
- the moving picture decoding apparatus according to the fourth embodiment is different from the moving picture decoding apparatus according to the first to third embodiments in the SAO processing unit.
- the moving picture decoding apparatus according to the fourth embodiment can include the SAO processing unit illustrated in FIG.
- the SAO processing unit in FIG. 17 includes a first offset class setting unit 901 and an offset value adding unit 1703.
- the offset value adding unit 1703 inputs the DF processed image 26 from the DF processing unit 704 and inputs the first offset class information from the first offset class setting unit 901. Further, the offset value adding unit 1703 receives the offset information 81 indicating the offset value corresponding to each of the plurality of first offset classes from the entropy decoding unit 701. The offset information 81 is signaled instead of the offset information 23 described above. The offset information 81 may be the same as or similar to the offset information 71.
- the offset value addition unit 1703 adds the offset value corresponding to the first offset class set for each unit in the DF processed image 26 to generate the SAO processed image 27.
- the offset value adding unit 1703 outputs the SAO processed image 27 to the ALF processing unit 706.
- as described above, the video encoding apparatus according to the present embodiment sets an offset value corresponding to each of the one or more second offset classes instead of setting an offset value corresponding to each of the plurality of first offset classes.
- the total number of second offset classes is smaller than the total number of first offset classes. Therefore, according to this video encoding apparatus, the amount of calculation and the memory usage for the offset value setting process can be reduced. Further, the encoding apparatus calculates the offset values corresponding to the plurality of first offset classes based on the offset values set for the second offset classes, and signals offset information indicating the calculated offset values.
- therefore, even a video decoding apparatus that cannot calculate the offset values corresponding to the plurality of first offset classes from offset values set for the second offset classes can perform SAO processing based on the signaled offset information.
- a process corresponding to the SAO process described in this embodiment may be performed within the framework of the ALF process in the second or third embodiment.
- in the first to fourth embodiments described above, the encoding side sets offset values corresponding to one or more second offset classes. In the fifth embodiment, the encoding side instead sets offset values corresponding to the plurality of first offset classes; however, the overhead of the offset information indicating these offset values can be reduced by using a prediction process described later.
- the video encoding apparatus according to the fifth embodiment is different from the video encoding apparatus according to the first to third embodiments in the SAO processing unit.
- the video encoding apparatus according to the fifth embodiment can include a SAO processing unit exemplified in FIG.
- the SAO processing unit in FIG. 18 includes a first offset class setting unit 301, an offset value setting unit 1801, an offset value prediction unit 1802, and an offset value addition unit 1803.
- the offset value setting unit 1801 acquires the input image 11 from the outside, inputs the DF processed image 13 from the DF processing unit 106, and inputs the first offset class information from the first offset class setting unit 301.
- the offset value setting unit 1801 sets an offset value corresponding to each of the plurality of first offset classes based on the input image 11 and the DF processed image 13.
- the offset value setting unit 1801 outputs offset information indicating an offset value corresponding to each of the plurality of first offset classes to the offset value prediction unit 1802 and the offset value addition unit 1803.
- the offset value setting unit 1801 sets the offset value corresponding to each of the plurality of first offset classes by treating the plurality of offset values as variables and minimizing the sum of squared errors between the SAO processed image 92 and the input image 11.
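The minimization described above has a simple closed form: per class, the sum of squared errors is minimized by the mean difference between input and decoded pixels of that class. This is a standard least-squares identity, shown here as an illustrative sketch; the patent does not spell out this particular computation.

```python
# Hedged sketch: for each first offset class, the offset minimizing the sum of
# squared errors between the offset-corrected image and the input image is the
# mean of (input - decoded) over the pixels assigned to that class.

def least_squares_offsets(input_img, decoded_img, class_map):
    sums, counts = {}, {}
    for in_row, dec_row, cls_row in zip(input_img, decoded_img, class_map):
        for x, d, c in zip(in_row, dec_row, cls_row):
            sums[c] = sums.get(c, 0) + (x - d)
            counts[c] = counts.get(c, 0) + 1
    return {c: sums[c] / counts[c] for c in sums}

inp = [[12, 20], [14, 22]]
dec = [[10, 21], [10, 19]]
cls = [[0, 1], [0, 1]]
print(least_squares_offsets(inp, dec, cls))  # {0: 3.0, 1: 1.0}
```

In practice the computed means would additionally be quantized to integers before signaling; that step is omitted here.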
- the offset value prediction unit 1802 inputs offset information from the offset value setting unit 1801.
- based on the offset value set for one first offset class included in a given second offset class (hereinafter also referred to as the reference offset value), the offset value prediction unit 1802 predicts the offset values corresponding to the remaining first offset classes included in that second offset class, and calculates prediction residuals.
- the offset value prediction process may be the same as or similar to the offset value calculation process in the first to fourth embodiments described above. That is, in the offset value calculation process in the first to fourth embodiments described above, the offset value set in a given second offset class may be replaced as the reference offset value.
- the offset value prediction unit 1802 outputs the offset information 91 indicating either the reference offset value or the prediction residual corresponding to each of the plurality of first offset classes to the entropy encoding unit 109.
- the offset information 91 is signaled instead of the offset information 14 described above.
- the correspondence relationship between the plurality of first offset classes included in a given second offset class may be uniquely determined in advance. Here, the correspondence relationship refers to information specifying the plurality of first offset classes included in a given second offset class, together with information (for example, a function) for predicting the offset values of these first offset classes based on the offset value set for the second offset class.
- a plurality of correspondence relationships may be prepared, and any one may be selected.
- information indicating which correspondence relationship has been selected may be signaled as one element of the offset information 91, for example. However, such information need not be signaled if the decoding side can uniquely derive which correspondence is selected.
- the statistical property of the offset value to be set may be predicted based on the offset value encoded in the past, and the correspondence relationship may be determined based on the statistical property.
- the offset value prediction unit 1802 may select one correspondence that minimizes overhead from a plurality of correspondences, for example.
- the offset value addition unit 1803 receives the DF processed image 13 from the DF processing unit 106, the first offset class information from the first offset class setting unit 301, and the offset information from the offset value setting unit 1801.
- the offset value adding unit 1803 adds the offset value corresponding to the first offset class set for each unit in the DF processed image 13 to generate the SAO processed image 92.
- the offset value adding unit 1803 outputs the SAO processed image 92 to the ALF processing unit 108.
- the video decoding device according to the fifth embodiment is different from the video decoding device according to the first to third embodiments in the SAO processing unit.
- the moving picture decoding apparatus according to the fifth embodiment can include the SAO processing unit illustrated in FIG.
- the SAO processing unit in FIG. 20 includes a first offset class setting unit 901, an offset value restoring unit 2001, and an offset value adding unit 2002.
- the offset value restoration unit 2001 receives the offset information 93 from the entropy decoding unit 701.
- the offset information 93 is signaled instead of the offset information 23.
- the offset information 93 may be the same as or similar to the offset information 91.
- based on the reference offset value set for one first offset class included in a given second offset class, the offset value restoration unit 2001 predicts the offset values corresponding to the remaining first offset classes included in that second offset class, and restores the offset values set for those remaining first offset classes by adding the prediction residuals.
- the offset value restoration unit 2001 may perform a prediction process that is the same as or similar to that of the offset value prediction unit 1802.
- the offset value restoration unit 2001 outputs offset information indicating an offset value corresponding to each of the plurality of first offset classes to the offset value addition unit 2002.
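The restoration step can be sketched as follows. This is a hypothetical illustration: the data layout and the simple predictor (each remaining offset is predicted to equal the reference offset value, so the restored value is reference plus residual) are assumptions, since the patent allows the same or similar process as the encoder-side prediction without fixing one predictor.

```python
# Hypothetical sketch of the decoder-side restoration: for each second offset
# class, one first offset class carries a reference offset value and the
# remaining classes carry prediction residuals. The predictor assumed here is
# "predict each remaining offset to equal the reference offset value".

def restore_offsets(signaled, groups):
    """signaled: {first_class: value}; for the reference class the value is the
    offset itself, for the others it is a prediction residual.
    groups: list of (reference_class, [remaining_classes])."""
    restored = {}
    for ref_cls, rest in groups:
        ref = signaled[ref_cls]
        restored[ref_cls] = ref
        for cls in rest:
            restored[cls] = ref + signaled[cls]  # prediction + residual
    return restored

print(restore_offsets({0: 4, 1: -1, 2: 2}, [(0, [1, 2])]))
# {0: 4, 1: 3, 2: 6}
```

Note how only residuals (typically small) are signaled for the non-reference classes, which is where the overhead reduction of this embodiment comes from.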
- the offset value addition unit 2002 receives the DF processed image 26 from the DF processing unit 704, the first offset class information from the first offset class setting unit 901, and the offset information from the offset value restoration unit 2001.
- the offset value addition unit 2002 adds the offset value corresponding to the first offset class set for each unit in the DF processed image 26 to generate the SAO processed image 94.
- the offset value adding unit 2002 outputs the SAO processed image 94 to the ALF processing unit 706.
- the video encoding apparatus sets an offset value for each of a plurality of first offset classes. Therefore, according to this moving image encoding apparatus, the image quality of the SAO processed image is improved as compared with the first to fourth embodiments described above. Further, the moving picture encoding apparatus predicts an offset value set for a part of the plurality of first offset classes, and signals a prediction residual. Therefore, according to this moving image encoding apparatus, the overhead of information indicating an offset value can be reduced. In addition, the video decoding device according to the present embodiment can restore the offset value corresponding to each of the plurality of first offset classes based on the reference offset value and the prediction residual. Therefore, according to this video decoding device, SAO processing can be performed based on information indicating an offset value from the video encoding device according to the present embodiment.
- a process corresponding to the SAO process described in this embodiment may be performed within the framework of the ALF process in the second or third embodiment.
- the processing of each of the above embodiments can be realized by using a general-purpose computer as basic hardware.
- the program for realizing the processing of each of the above embodiments may be provided by being stored in a computer-readable storage medium.
- the program is stored in the storage medium as an installable file or an executable file. Examples of the storage medium include a magnetic disk, an optical disk (CD-ROM, CD-R, DVD, etc.), a magneto-optical disk (MO, etc.), and a semiconductor memory.
- the storage medium may be any as long as it can store the program and can be read by the computer.
- the program for realizing the processing of each of the above embodiments may be stored on a computer (server) connected to a network such as the Internet and downloaded to the computer (client) via the network.
Description
(First Embodiment)
(Video Encoding Apparatus)
As illustrated in FIG. 1, the video encoding apparatus according to the first embodiment includes a video encoding unit 100 and an encoding control unit 110. The video encoding unit 100 includes a predicted image generation unit 101, a subtraction unit 102, a transform and quantization unit 103, an inverse quantization and inverse transform unit 104, an addition unit 105, a deblocking filter (DF) processing unit 106, a SAO processing unit 107, an ALF processing unit 108, and an entropy encoding unit 109. The encoding control unit 110 controls the operation of each unit of the video encoding unit 100.
In equation (1) above, Sdec(x, y) represents the pixel value at position (x, y) in the DF processed image 13. According to equation (1), the index k(x, y) represents the activity at position (x, y).
Alternatively, the activity may be calculated as the sum of absolute differences between the pixel specified by position (x, y) and its neighboring pixels in four directions (up/down/left/right, or four diagonal directions) or eight directions. Using equations (1), (2), and the like, the activity may also be calculated for pixels within a certain range around the pixel of interest, for example, pixels in an N×N block (N being an integer of 2 or more) around the pixel of interest, and the sum of these activities may be used as the index k(x, y).
In equation (3) above, the function sign(α) returns 1 if α is positive, 0 if α is 0, and -1 if α is negative. Specifically, according to equation (3), the index k(x, y) is 8 if the pixel value of the pixel of interest is larger than all four surrounding pixels, 4 if it is equal to all four surrounding pixels, and 0 if it is smaller than all four surrounding pixels.
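The index behavior described for equation (3) can be reproduced as follows. Since the equation itself is not shown in this extraction, the form k(x, y) = 4 + Σ sign(s(x, y) − s(neighbor)) over the four horizontal/vertical neighbors is an assumption, chosen because it yields exactly the stated values of 8, 4, and 0.

```python
# Sketch of the edge-based class index: 4 plus the sum of sign differences
# between the pixel of interest and its four horizontal/vertical neighbours.
# Yields 8 for a local maximum, 4 for a flat region, 0 for a local minimum.

def sign(a):
    return (a > 0) - (a < 0)

def edge_index(img, x, y):
    c = img[y][x]
    neighbours = [img[y][x - 1], img[y][x + 1], img[y - 1][x], img[y + 1][x]]
    return 4 + sum(sign(c - n) for n in neighbours)

img = [[1, 1, 1],
       [1, 5, 1],
       [1, 1, 1]]
print(edge_index(img, 1, 1))  # 8 (local maximum)
```

Intermediate values of the index (between 0 and 8) correspond to edge or ramp configurations where the pixel exceeds some neighbors and not others.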
In equations (3), (4), and the like, the directions indicating the neighboring pixels may be determined per sequence, frame, slice, or pixel block, and information indicating those directions may be encoded. For example, for each pixel in one pixel block the index may be calculated based on two horizontally adjacent pixels, while for each pixel in another pixel block the index may be calculated based on two vertically adjacent pixels.
In equation (3) above, offset_idx(x, y) represents the first offset class of the unit to which the pixel specified by position (x, y) belongs. k(x, y) represents the index for setting the first offset class for that unit. δ represents a real number of 1 or more.
In equation (6) above, Cost represents the coding cost, D represents the sum of squared residuals, and R represents the code amount.
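Equation (6) itself is not reproduced in this extraction; the standard Lagrangian form Cost = D + λ·R is assumed here for illustration, with D the sum of squared residuals, R the code amount in bits, and λ a Lagrange multiplier.

```python
# Hedged sketch of rate-distortion cost evaluation for choosing among
# candidate offset configurations (assumed form: Cost = D + lambda * R).

def rd_cost(distortion, rate_bits, lam):
    return distortion + lam * rate_bits

# Choose the candidate with the smaller cost:
candidates = [(1200.0, 96), (1500.0, 40)]  # (D, R) pairs
lam = 10.0
best = min(candidates, key=lambda c: rd_cost(c[0], c[1], lam))
print(best)  # (1500.0, 40): cost 1900.0 beats 1200.0 + 960.0 = 2160.0
```

This is how a cheaper-to-signal candidate can win over a lower-distortion one once the rate term is weighted in.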
As illustrated in FIG. 7, the video decoding apparatus according to the first embodiment includes a video decoding unit 700 and a decoding control unit 708. The video decoding unit 700 includes an entropy decoding unit 701, an inverse quantization and inverse transform unit 702, an addition unit 703, a DF processing unit 704, a SAO processing unit 705, an ALF processing unit 706, and a predicted image generation unit 707. The decoding control unit 708 controls the operation of each unit of the video decoding unit 700.
A technique called OSALF switches between ALF processing and SAO processing, for example on a per-pixel-block basis. The first embodiment may be combined with OSALF. The second embodiment uses the first embodiment described above for one or both of the ALF processing and the SAO processing in OSALF.
As illustrated in FIG. 10, the video encoding apparatus according to the second embodiment includes a video encoding unit 1000 and an encoding control unit 1010. The video encoding unit 1000 includes a predicted image generation unit 1001, a subtraction unit 102, a transform and quantization unit 103, an inverse quantization and inverse transform unit 104, an addition unit 105, a DF processing unit 1006, a SAO processing unit 1007, an ALF processing unit 1008, and an entropy encoding unit 1009.
As illustrated in FIG. 12, the video decoding apparatus according to the second embodiment includes a video decoding unit 1200 and a decoding control unit 1208. The video decoding unit 1200 includes an entropy decoding unit 1201, an inverse quantization and inverse transform unit 702, an addition unit 703, a DF processing unit 1204, an ALF processing unit 1205, a SAO processing unit 1206, and a predicted image generation unit 1207.
As described in the second embodiment, processing corresponding to SAO processing (that is, processing that sets and applies a plurality of offset values for each unit in the decoded image) may be performed within the framework of ALF processing. The third embodiment uses the first embodiment described above for such ALF processing.
As illustrated in FIG. 14, the video encoding apparatus according to the third embodiment includes a video encoding unit 1400 and an encoding control unit 1410. The video encoding unit 1400 includes a predicted image generation unit 101, a subtraction unit 102, a transform and quantization unit 103, an inverse quantization and inverse transform unit 104, an addition unit 105, a DF processing unit 106, an ALF processing unit 1408, and an entropy encoding unit 1409.
As illustrated in FIG. 15, the video decoding apparatus according to the third embodiment includes a video decoding unit 1500 and a decoding control unit 1508. The video decoding unit 1500 includes an entropy decoding unit 1501, an inverse quantization and inverse transform unit 702, an addition unit 703, a DF processing unit 704, an ALF processing unit 1505, and a predicted image generation unit 707.
In the first to third embodiments described above, offset information indicating offset values corresponding to one or more second offset classes is signaled. The fourth embodiment modifies the first to third embodiments so that offset information indicating offset values corresponding to a plurality of first offset classes is signaled.
The video encoding apparatus according to the fourth embodiment differs from the video encoding apparatuses according to the first to third embodiments in the SAO processing unit. The video encoding apparatus according to the fourth embodiment can include the SAO processing unit illustrated in FIG. 16. The SAO processing unit in FIG. 16 includes a first offset class setting unit 301, a second offset class setting unit 302, an offset value setting unit 1603, an offset value calculation unit 1604, and an offset value addition unit 305. The second offset class setting unit 302, the offset value setting unit 1603, and the offset value calculation unit 1604 may be referred to as an offset information generation unit.
The video decoding apparatus according to the fourth embodiment differs from the video decoding apparatuses according to the first to third embodiments in the SAO processing unit. The video decoding apparatus according to the fourth embodiment can include the SAO processing unit illustrated in FIG. 17. The SAO processing unit in FIG. 17 includes a first offset class setting unit 901 and an offset value addition unit 1703.
In the first to fourth embodiments described above, the encoding side sets offset values corresponding to one or more second offset classes. In the fifth embodiment, the encoding side sets offset values corresponding to a plurality of first offset classes instead of one or more second offset classes; however, the overhead of the offset information indicating the offset values can be reduced by using a prediction process described later.
The video encoding apparatus according to the fifth embodiment differs from the video encoding apparatuses according to the first to third embodiments in the SAO processing unit. The video encoding apparatus according to the fifth embodiment can include the SAO processing unit illustrated in FIG. 18. The SAO processing unit in FIG. 18 includes a first offset class setting unit 301, an offset value setting unit 1801, an offset value prediction unit 1802, and an offset value addition unit 1803.
The video decoding apparatus according to the fifth embodiment differs from the video decoding apparatuses according to the first to third embodiments in the SAO processing unit. The video decoding apparatus according to the fifth embodiment can include the SAO processing unit illustrated in FIG. 20. The SAO processing unit in FIG. 20 includes a first offset class setting unit 901, an offset value restoration unit 2001, and an offset value addition unit 2002.
12, 25 ... decoded image
13, 26 ... DF processed image
14, 23, 33, 37, 44, 46, 71, 81, 91, 93 ... offset information
15, 27, 34, 46, 92, 94 ... SAO processed image
16, 24, 36, 47 ... filter information
17, 28, 32, 45, 52, 64 ... ALF processed image
18, 21, 35, 41, 53, 61 ... encoded data
22, 42, 62 ... encoding parameter
31, 43, 51, 63 ... filter information and offset information
100, 1000 ... video encoding unit
101, 707, 1001, 1207 ... predicted image generation unit
102 ... subtraction unit
103 ... transform and quantization unit
104, 702 ... inverse quantization and inverse transform unit
105, 703 ... addition unit
106, 704, 1006, 1204 ... DF processing unit
107, 705, 1007, 1206 ... SAO processing unit
108, 706, 1008, 1205, 1408, 1505 ... ALF processing unit
109, 1009, 1409 ... entropy encoding unit
110, 1010, 1410 ... encoding control unit
301, 901 ... first offset class setting unit
302 ... second offset class setting unit
303, 1603, 1801 ... offset value setting unit
304, 902, 1604 ... offset value calculation unit
305, 903, 1605, 1703, 1803, 2002 ... offset value addition unit
306 ... offset information generation unit
700 ... video decoding unit
701, 1201, 1501 ... entropy decoding unit
708, 1208, 1508 ... decoding control unit
1103 ... filter coefficient set and offset value setting unit
1105, 1303 ... filter processing unit
1802 ... offset value prediction unit
2001 ... offset value restoration unit
Claims (10)
- A video encoding method comprising:
setting, for each unit including one or more pixels in a decoded image, any one of a plurality of first offset classes based on an index indicating an image feature of the unit;
setting, for each unit, one second offset class including the first offset class set for the unit, from among one or more second offset classes;
setting an offset value corresponding to each of the one or more second offset classes based on an input image and the decoded image;
calculating, for each second offset class, an offset value corresponding to each of one or more first offset classes included in the second offset class, based on the offset value corresponding to the second offset class;
adding, for each unit, the offset value corresponding to the first offset class set for the unit, to obtain an offset processed image; and
encoding information indicating the offset value corresponding to each of the one or more second offset classes to generate encoded data,
wherein at least one of the one or more second offset classes includes two or more first offset classes, and offset values corresponding to at least two of the two or more first offset classes included in the same second offset class differ from each other.
- The video encoding method according to claim 1, wherein at least one of the one or more second offset classes includes two or more first offset classes, and offset values corresponding to the two or more first offset classes included in the same second offset class include a first value and a second value obtained by inverting the sign of the first value.
- A video encoding method comprising:
setting, for each unit including one or more pixels in a decoded image, any one of a plurality of first offset classes based on an index indicating an image feature of the unit;
setting, for each unit, one second offset class including the first offset class set for the unit, from among one or more second offset classes;
setting an offset value corresponding to each of the one or more second offset classes based on an input image and the decoded image;
calculating, for each second offset class, an offset value corresponding to each of one or more first offset classes included in the second offset class, based on the offset value corresponding to the second offset class;
adding, for each unit, the offset value corresponding to the first offset class set for the unit, to obtain an offset processed image; and
encoding information indicating the offset value corresponding to each of the plurality of first offset classes to generate encoded data,
wherein at least one of the one or more second offset classes includes two or more first offset classes, and offset values corresponding to at least two of the two or more first offset classes included in the same second offset class differ from each other.
- A video encoding method comprising:
setting, for each unit including one or more pixels in a decoded image, any one of a plurality of first offset classes based on an index indicating an image feature of the unit;
setting, for each unit, one second offset class including the first offset class set for the unit, from among one or more second offset classes;
setting an offset value corresponding to each of the plurality of first offset classes based on an input image and the decoded image;
predicting, for each second offset class, an offset value corresponding to each of the remaining first offset classes included in the second offset class, based on a reference offset value corresponding to one first offset class included in the second offset class, and calculating a prediction residual;
adding, for each unit, the offset value corresponding to the first offset class set for the unit, to obtain an offset processed image; and
encoding information indicating either a reference offset value or a prediction residual corresponding to each of the plurality of first offset classes to generate encoded data.
- A video decoding method comprising:
decoding encoded data to obtain information indicating an offset value corresponding to each of one or more second offset classes;
setting, for each unit including one or more pixels in a decoded image, any one of a plurality of first offset classes based on an index indicating an image feature of the unit;
calculating, for each second offset class, an offset value corresponding to each of one or more first offset classes included in the second offset class, based on the offset value corresponding to the second offset class; and
adding, for each unit, the offset value corresponding to the first offset class set for the unit, to obtain an offset processed image,
wherein at least one of the one or more second offset classes includes two or more first offset classes, and offset values corresponding to at least two of the two or more first offset classes included in the same second offset class differ from each other.
- The video decoding method according to claim 5, wherein at least one of the one or more second offset classes includes two or more first offset classes, and offset values corresponding to the two or more first offset classes included in the same second offset class include a first value and a second value obtained by inverting the sign of the first value.
- A video decoding method comprising:
decoding encoded data to obtain information indicating either a reference offset value or a prediction residual corresponding to each of a plurality of first offset classes;
setting, for each unit including one or more pixels in a decoded image, any one of the plurality of first offset classes based on an index indicating an image feature of the unit;
restoring an offset value corresponding to each of the plurality of first offset classes based on either the reference offset value or the prediction residual corresponding to each of the plurality of first offset classes; and
adding, for each unit in the decoded image, the offset value corresponding to the first offset class set for the unit, to obtain an offset processed image.
- A video encoding apparatus comprising:
a first setting unit configured to set, for each unit including one or more pixels in a decoded image, any one of a plurality of first offset classes based on an index indicating an image feature of the unit;
a second setting unit configured to set, for each unit, one second offset class including the first offset class set for the unit, from among one or more second offset classes;
a third setting unit configured to set an offset value corresponding to each of the one or more second offset classes based on an input image and the decoded image;
a calculation unit configured to calculate, for each second offset class, an offset value corresponding to each of one or more first offset classes included in the second offset class, based on the offset value corresponding to the second offset class;
an addition unit configured to add, for each unit, the offset value corresponding to the first offset class set for the unit, to obtain an offset processed image; and
an encoding unit configured to encode information indicating the offset value corresponding to each of the one or more second offset classes to generate encoded data,
wherein at least one of the one or more second offset classes includes two or more first offset classes, and offset values corresponding to at least two of the two or more first offset classes included in the same second offset class differ from each other.
- A video encoding apparatus comprising:
a first setting unit configured to set, for each unit including one or more pixels in a decoded image, any one of a plurality of first offset classes based on an index indicating an image feature of the unit;
a second setting unit configured to set, for each unit, one second offset class including the first offset class set for the unit, from among one or more second offset classes;
a third setting unit configured to set an offset value corresponding to each of the one or more second offset classes based on an input image and the decoded image;
a calculation unit configured to calculate, for each second offset class, an offset value corresponding to each of one or more first offset classes included in the second offset class, based on the offset value corresponding to the second offset class;
an addition unit configured to add, for each unit, the offset value corresponding to the first offset class set for the unit, to obtain an offset processed image; and
an encoding unit configured to encode information indicating the offset value corresponding to each of the plurality of first offset classes to generate encoded data,
wherein at least one of the one or more second offset classes includes two or more first offset classes, and offset values corresponding to at least two of the two or more first offset classes included in the same second offset class differ from each other.
- A video decoding apparatus comprising:
a decoding unit configured to decode encoded data to obtain information indicating an offset value corresponding to each of one or more second offset classes;
a setting unit configured to set, for each unit including one or more pixels in a decoded image, any one of a plurality of first offset classes based on an index indicating an image feature of the unit;
a calculation unit configured to calculate, for each second offset class, an offset value corresponding to each of one or more first offset classes included in the second offset class, based on the offset value corresponding to the second offset class; and
an addition unit configured to add, for each unit, the offset value corresponding to the first offset class set for the unit, to obtain an offset processed image,
wherein at least one of the one or more second offset classes includes two or more first offset classes, and offset values corresponding to at least two of the two or more first offset classes included in the same second offset class differ from each other.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2011/080206 WO2013098937A1 (ja) | 2011-12-27 | 2011-12-27 | 動画像符号化方法、動画像復号方法、動画像符号化装置及び動画像復号装置 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2011/080206 WO2013098937A1 (ja) | 2011-12-27 | 2011-12-27 | 動画像符号化方法、動画像復号方法、動画像符号化装置及び動画像復号装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013098937A1 true WO2013098937A1 (ja) | 2013-07-04 |
Family
ID=48696507
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/080206 WO2013098937A1 (ja) | 2011-12-27 | 2011-12-27 | 動画像符号化方法、動画像復号方法、動画像符号化装置及び動画像復号装置 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2013098937A1 (ja) |
Non-Patent Citations (3)
Title |
---|
CHIH-MING FU ET AL.: "CE8 Subset3: Picture Quadtree Adaptive Offset", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11 4TH MEETING, 20 January 2011 (2011-01-20), DAEGU, KR, XP030008162 * |
CHIH-MING FU ET AL.: "Non-CE8: Offset coding in SAO", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11 7TH MEETING, 21 November 2011 (2011-11-21), GENEVA, CH, XP030110206 * |
CHIH-MING FU ET AL.: "Sample adaptive offset for HEVC", 2011 IEEE 13TH INTERNATIONAL WORKSHOP ON MULTIMEDIA SIGNAL PROCESSING (MMSP), 17 October 2011 (2011-10-17), pages 1 - 5, XP032027547 * |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11878440 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 11878440 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: JP |
|