WO2012001818A1 - Video encoding device and video decoding device - Google Patents

Video encoding device and video decoding device

Info

Publication number
WO2012001818A1
WO2012001818A1 (application PCT/JP2010/061350, JP2010061350W)
Authority
WO
WIPO (PCT)
Prior art keywords
scaling
unit
pixel group
information
bit length
Prior art date
Application number
PCT/JP2010/061350
Other languages
English (en)
Japanese (ja)
Inventor
中條 健
太一郎 塩寺
Original Assignee
株式会社 東芝
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社 東芝 filed Critical 株式会社 東芝
Priority to PCT/JP2010/061350 priority Critical patent/WO2012001818A1/fr
Priority to PCT/JP2010/073604 priority patent/WO2012001833A1/fr
Publication of WO2012001818A1 publication Critical patent/WO2012001818A1/fr

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/80 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
    • H04N19/82 Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation involving filtering within a prediction loop
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/117 Filters, e.g. for pre-processing or post-processing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 Incoming video signal characteristics or properties
    • H04N19/14 Coding unit complexity, e.g. amount of activity or edge presence estimation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • H04N19/176 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/60 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding

Definitions

  • the embodiment relates to encoding and decoding of moving images.
  • H.264/MPEG-4 AVC is one of the international standards for video coding.
  • H.264 / MPEG-4 AVC was jointly established by ITU-T (International Telecommunication Union-Telecommunication Standardization Sector) and ISO (International Organization for Standardization) / IEC (International Electrotechnical Commission).
  • Video coding standards such as H.264/MPEG-4 AVC usually store locally decoded images of already encoded images (on the encoding side) or decoded images (on the decoding side) in an image buffer, and have a mechanism that refers to them in order to generate a predicted image.
  • Since the image buffer stores a large number of reference images, a main memory having a large storage capacity is required on both the encoding side and the decoding side. In addition, since a large amount of access to the image buffer occurs in order to generate predicted images, a wide memory bandwidth is required on both the encoding side and the decoding side. Such hardware requirements regarding the image buffer become more prominent as the pixel bit length increases.
  • Various internal processes on the encoding side and the decoding side, such as motion estimation, predicted image generation (prediction processing), and filter processing (for example, loop filter processing), are usually easier to perform with high accuracy as the pixel bit length increases. Therefore, increasing the pixel bit length helps improve the coding efficiency.
  • Enhancing coding efficiency can be expected by applying a large pixel bit length to various internal processes including prediction processing.
  • On the other hand, by applying a small pixel bit length to the image buffer, a reduction in the hardware requirements related to the image buffer can be expected.
  • the embodiment aims to apply a larger pixel bit length to various internal processes including a prediction process while applying a smaller pixel bit length to the image buffer.
  • According to one embodiment, the moving image encoding apparatus includes a loop filter unit that (a) if filter application to the target pixel group in the locally decoded image is valid, performs filter processing on the target pixel group, derives first scaling information based on the distribution of the filter processing result group, and applies scaling that reduces the pixel bit length to the filter processing result group according to the first scaling information to generate a first scaled reference pixel group, and (b) if filter application to the target pixel group is invalid, derives second scaling information based on the distribution of the target pixel group, and applies scaling that reduces the pixel bit length to the target pixel group according to the second scaling information to generate a second scaled reference pixel group.
  • The moving image encoding apparatus also includes a first buffer that stores the first scaled reference pixel group and the second scaled reference pixel group, and a second buffer that stores the first scaling information and the second scaling information.
  • The moving image encoding apparatus also includes a prediction unit that restores a reference image by applying, to the first scaled reference pixel group or the second scaled reference pixel group, inverse scaling that extends the pixel bit length according to the first scaling information or the second scaling information, and that generates a predicted image based on the reference image.
  • This moving image encoding device includes an encoding unit that encodes information indicating a difference between an input image and a predicted image.
  • According to another embodiment, the moving picture decoding apparatus includes a loop filter unit that (a) if filter application to the target pixel group in the decoded image is valid, performs filter processing on the target pixel group, derives first scaling information based on the distribution of the filter processing result group, and applies scaling that reduces the pixel bit length to the filter processing result group according to the first scaling information to generate a first scaled reference pixel group, and (b) if filter application to the target pixel group is invalid, derives second scaling information based on the distribution of the target pixel group, and applies scaling that reduces the pixel bit length to the target pixel group according to the second scaling information to generate a second scaled reference pixel group.
  • The moving picture decoding apparatus also includes a first buffer that stores the first scaled reference pixel group and the second scaled reference pixel group, and a second buffer that stores the first scaling information and the second scaling information.
  • The moving image decoding apparatus also includes a prediction unit that restores a reference image by applying, to the first scaled reference pixel group or the second scaled reference pixel group, inverse scaling that extends the pixel bit length according to the first scaling information or the second scaling information, and that generates a predicted image based on the reference image.
  • This moving image decoding apparatus includes a decoding unit that decodes information indicating a difference between an input image and a predicted image.
  • FIG. 1 is a block diagram showing a moving image encoding apparatus according to a first embodiment.
  • FIG. 2 is a block diagram showing a moving image decoding apparatus according to the first embodiment.
  • FIG. 3 is a block diagram showing the loop filter unit of FIG. 1. FIG. 4 is an explanatory drawing of a target pixel group.
  • FIG. 5 is a flowchart showing the operation of the filter processing / scaling processing unit in FIG. 3.
  • FIG. 6 is a flowchart showing the operation of the scaling processing unit in FIG. 3.
  • FIG. 7 is a block diagram showing the prediction unit of FIG. 1. FIG. 8 is a flowchart showing the operation of the inverse scaling processing unit of FIG. 7.
  • FIG. 9 is a block diagram showing a moving image encoding apparatus according to the second embodiment.
  • FIG. 10 is a block diagram showing a moving image decoding apparatus according to the second embodiment.
  • FIGS. 11 to 15 are tables showing examples of the dynamic range Dr, EncTable[Dr], and Offset[Dr] (or Dr and EncTable[Dr]) according to the third embodiment.
  • the moving image encoding apparatus includes an encoding unit 100 and an encoding control unit 140.
  • the encoding unit 100 encodes the input image 11 to generate encoded data.
  • the encoding control unit 140 controls various elements in the encoding unit 100. For example, the encoding control unit 140 controls a loop filter setting unit 106, a prediction unit 120, and the like which will be described later.
  • The encoding unit 100 includes a subtraction unit 101, a transform / quantization unit 102, an entropy encoding unit 103, an inverse quantization / inverse transform unit 104, an addition unit 105, a loop filter setting unit 106, a reference image buffer unit 107, a scaling information buffer unit 108, a loop filter unit 110, a prediction unit 120, and a motion vector generation unit 130.
  • the subtraction unit 101 subtracts the prediction image from the prediction unit 120 from the input image 11 to obtain a prediction error.
  • The transform / quantization unit 102 performs a transform (for example, a discrete cosine transform (DCT)) and quantization on the prediction error from the subtraction unit 101 to obtain information on quantized transform coefficients (hereinafter simply referred to as quantized transform coefficients).
  • The entropy coding unit 103 performs entropy coding on the quantized transform coefficients from the transform / quantization unit 102, the loop filter information 13 from the loop filter setting unit 106, and the motion vector information from the motion vector generation unit 130.
  • the entropy encoding unit 103 may further entropy encode information other than these (for example, prediction mode information).
  • the type of entropy encoding is, for example, variable length encoding or arithmetic encoding.
  • the entropy encoding unit 103 outputs encoded data obtained by entropy encoding to the outside.
  • The inverse quantization / inverse transform unit 104 performs inverse quantization and an inverse transform (for example, an inverse discrete cosine transform (IDCT)) on the quantized transform coefficients from the transform / quantization unit 102, thereby restoring the prediction error.
  • the addition unit 105 adds the prediction error restored by the inverse quantization / inverse conversion unit 104 and the corresponding prediction image from the prediction unit 120 to generate the local decoded image 12.
  • the loop filter setting unit 106 sets the loop filter information 13 based on the input image 11 and the corresponding local decoded image 12 from the addition unit 105 and notifies the loop filter unit 110 and the entropy encoding unit 103.
  • the loop filter information 13 includes at least filter coefficient information and filter switching information.
  • the filter coefficient information includes information indicating the filter coefficient.
  • the filter coefficient information may further include information indicating an offset coefficient described later.
  • the filter switching information includes information indicating validity / invalidity of filter application.
  • The loop filter unit 110 performs either a filtering process or a bypass process that skips the filtering on the target pixel group in the local decoded image 12 from the addition unit 105. Then, the loop filter unit 110 reduces (or maintains) the pixel bit length by performing a scaling process, described later, on the filter processing result group or the bypass processing result group (that is, the target pixel group itself). The loop filter unit 110 supplies the scaling processing result group to the reference image buffer unit 107 as the scaled reference pixel group 14. Further, the loop filter unit 110 supplies the scaling information 15 regarding the scaling process to the scaling information buffer unit 108. Details of the loop filter unit 110 will be described later.
  • the reference image buffer unit 107 stores the scaled reference pixel group 14 from the loop filter unit 110.
  • the scaling information buffer unit 108 accumulates scaling information 15 corresponding to the scaled reference pixel group 14 in synchronization with the reference image buffer unit 107.
  • the reference pixel group 14 accumulated in the reference image buffer unit 107 and the scaling information 15 accumulated in the scaling information buffer unit 108 are read by the prediction unit 120 or the motion vector generation unit 130 as necessary.
  • the motion vector generation unit 130 reads the scaled reference pixel group and the scaling information from the reference image buffer unit 107 and the scaling information buffer unit 108 as necessary.
  • the motion vector generation unit 130 applies inverse scaling that extends (or maintains) the pixel bit length to the scaled reference pixel group according to the scaling information to restore the reference image.
  • the motion vector generation unit 130 generates motion vector information based on the input image 11 and the restored reference image.
  • the motion vector generation unit 130 notifies the prediction unit 120 and the entropy encoding unit 103 of the motion vector information. Details of the inverse scaling process will be described later.
  • the prediction unit 120 reads the scaled reference pixel group and the scaling information from the reference image buffer unit 107 and the scaling information buffer unit 108 as necessary.
  • the prediction unit 120 restores the reference image by applying inverse scaling that extends the pixel bit length to the scaled reference pixel group according to the scaling information.
  • the prediction unit 120 generates a prediction image based on the motion vector information from the motion vector generation unit 130 and the restored reference image.
  • the prediction unit 120 supplies the predicted image to the subtraction unit 101 and the addition unit 105.
  • the loop filter unit 110 includes a switch 111, a filter processing / scaling processing unit 112, and a scaling processing unit 113. Note that FIG. 3 only illustrates the loop filter unit 110.
  • the loop filter unit 110 may include one or a plurality of filter processing / scaling processing units (not shown) different from the filter processing / scaling processing unit 112.
  • The number of output destinations selectable by the switch 111 may be three or more depending on the configuration of the loop filter unit 110.
  • the switch 111 selects the output destination of the target pixel group included in the local decoded image 12 according to the loop filter information 13.
  • the switch 111 guides the target pixel group to the filter processing / scaling processing unit 112 if the loop filter information 13 indicates that the filter application of the target pixel group is valid.
  • Otherwise (if the loop filter information 13 indicates that filter application of the target pixel group is invalid), the switch 111 guides the target pixel group to the scaling processing unit 113.
  • Filter switching information indicating whether filter application is valid (On) or invalid (Off) is set for each pixel group (for example, a block) of variable (or fixed) size.
  • These pixel groups are all shown as rectangles in FIG. 4, but their shapes may be changed depending on the design.
  • The filter switching information of each pixel group is set by the loop filter setting unit 106 described above and can be referred to via the loop filter information 13.
  • The filter processing / scaling processing unit 112 performs filter processing on the target pixel group according to the loop filter information 13. Then, the filter processing / scaling processing unit 112 derives scaling information 15 based on the distribution (for example, the dynamic range) of the filter processing result group, and applies scaling that reduces the pixel bit length to the filter processing result group according to the scaling information 15 to generate the scaled reference pixel group 14. Details of the operation of the filter processing / scaling processing unit 112 will be described later.
  • The scaling processing unit 113 derives scaling information 15 based on the distribution (for example, the dynamic range) of the target pixel group, and applies scaling that reduces the pixel bit length to the target pixel group according to the scaling information 15 to generate the scaled reference pixel group 14. Details of the operation of the scaling processing unit 113 will be described later.
  • The filter processing / scaling processing unit 112 performs a convolution operation (filter operation) on the target pixel group in accordance with the filter coefficient information included in the loop filter information 13 (step S112-1). Specifically, when the filter coefficients are represented by F[n], the pixel values of the target pixel group are represented by P[m], and the convolution operation result is represented by B[m], the filter processing / scaling processing unit 112 performs the convolution operation according to the following formula (1).
  • O represents an offset coefficient.
  • the offset coefficient can be referred to through the filter coefficient information.
  • The sum of the filter coefficients F[n] is assumed to be designed to be substantially equal to 2^K. Further, the pixel bit length of the target pixel group is assumed to be T bits.
  • the filter processing / scaling processing unit 112 performs such a convolution operation on each pixel of the target pixel group to obtain a convolution operation result group.
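  • As an illustration, the following minimal C sketch shows the convolution of step S112-1 for one output pixel. The filter support size N_TAPS and the exact shape of formula (1) (a weighted sum of neighboring pixels plus the offset coefficient O) are assumptions, since the formula itself appears only in the figures:

        #include <stdint.h>

        #define N_TAPS 9   /* assumed filter support size */

        /* F[n]: filter coefficients whose sum is designed to be roughly 2^K,
           P[n]: T-bit pixels around the target pixel, O: offset coefficient.
           The returned value B is therefore roughly a (K+T)-bit value. */
        static int32_t convolve_one_pixel(const int16_t F[N_TAPS],
                                          const int32_t P[N_TAPS],
                                          int32_t O)
        {
            int32_t B = O;
            for (int n = 0; n < N_TAPS; n++)
                B += (int32_t)F[n] * P[n];
            return B;
        }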
  • the filter processing / scaling processing unit 112 searches for the maximum value Max and the minimum value Min of the convolution calculation result group of the target pixel group (step S112-2).
  • The upper limit of the maximum value Max is 2^(K+T) − 1. That is, if the maximum value Max exceeds 2^(K+T) − 1, the maximum value Max is handled as 2^(K+T) − 1.
  • the filter processing / scaling processing unit 112 derives the scaling information 15 of the target pixel group (step S112-3). Specifically, the filtering / scaling processing unit 112 derives the minimum reference value MinPoint by arithmetically shifting the minimum value Min by S bits to the right according to the following equation (2).
  • S is represented by the following formula (3).
  • L represents the pixel bit length applied to the reference image buffer unit 107. It is assumed that the pixel bit length L of the reference image buffer unit 107 is equal to or less than the pixel bit length T of the target pixel group. That is, the minimum reference value MinPoint is a value obtained by rounding the minimum value Min to L bits.
  • the filter processing / scaling processing unit 112 derives the scaling amount Q by executing the following calculation (4).
  • the operation (4) is described according to the C language, but an operation having the same content can be described according to other programming languages.
  • the scaling amount Q can take any integer value from 0 to S.
  • the minimum reference value MinPoint and the scaling amount Q derived as described above are supplied to the scaling information buffer unit 108 as scaling information 15.
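  • A minimal C sketch of the derivation in step S112-3 follows. Equations (2) and (3) are taken from the text (MinPoint = Min >> S with S = K + T − L); the loop standing in for operation (4), which picks the smallest shift Q that fits the dynamic range above MinPoint into L − 1 bits, is only an assumption:

        #include <stdint.h>

        typedef struct {
            int32_t min_point;   /* MinPoint: minimum value Min rounded to L bits */
            int     q;           /* scaling amount Q, an integer in [0, S]        */
        } scaling_info_t;

        static scaling_info_t derive_scaling_info(int32_t min_val, int32_t max_val,
                                                  int K, int T, int L)
        {
            scaling_info_t si;
            int S = K + T - L;                  /* equation (3)                   */
            si.min_point = min_val >> S;        /* equation (2), arithmetic shift */

            /* Assumed stand-in for operation (4): smallest right shift Q such
               that the range above (MinPoint << S) fits into L - 1 bits. */
            int32_t range = max_val - (si.min_point << S);
            si.q = 0;
            while (si.q < S && (range >> si.q) > (1 << (L - 1)) - 1)
                si.q++;
            return si;
        }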
  • An example of a method for efficiently describing the scaling information 15 will be described.
  • The scaling information 15 has 1-bit flag information indicating whether or not Q is equal to S. When Q and S are not equal (that is, the scaling flag information is OFF), it further has the value of the scaling amount Q (1 or more and S or less) and the value of the minimum reference value MinPoint. If Q is equal to S (that is, the scaling flag information is ON), the minimum reference value MinPoint is interpreted as 0.
  • The calculation (5) scales the T-bit target pixel group to L bits when Q is equal to S. When Q and S are not equal, the calculation (5) scales the T-bit target pixel group to L − 1 bits; since the scaled reference pixel group then occupies L − 1 bits, the increase caused by storing the scaling information 15 can be canceled out.
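  • The compact representation described above can be sketched in C as follows. The concrete field layout is an assumption; only the 1-bit flag, the conditional presence of Q and MinPoint, and the convention that MinPoint is read as 0 when the flag is ON come from the text:

        #include <stdint.h>

        typedef struct {
            int     q_equals_s;  /* 1-bit flag: ON when Q == S                     */
            int     q;           /* stored only when the flag is OFF (1 <= Q <= S) */
            int32_t min_point;   /* stored only when the flag is OFF               */
        } packed_scaling_info_t;

        static packed_scaling_info_t pack_scaling_info(int q, int s, int32_t min_point)
        {
            packed_scaling_info_t p;
            p.q_equals_s = (q == s);
            p.q          = p.q_equals_s ? s : q;
            p.min_point  = p.q_equals_s ? 0 : min_point; /* MinPoint read as 0 when ON */
            return p;
        }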
  • The minimum reference value MinPoint can be replaced with a maximum reference value MaxPoint based on the maximum value Max. In that case, the various formulas and calculations in the present embodiment should be read accordingly.
  • For the inverse scaling processing described later, the filter switching information or information similar thereto (for example, the value of the parameter K itself) is required, and such information may be included in the scaling information 15. Alternatively, since such information can be referred to via the loop filter information 13, the loop filter information 13 may be notified to each element that performs the inverse scaling processing. In the following description, it is assumed that the filter switching information is included in the scaling information 15.
  • the unit for performing the scaling process and the inverse scaling process only needs to be a common unit on the encoding side and the decoding side. In the present embodiment, the unit of scaling processing and filtering processing corresponds to the unit in which filter switching information is set.
  • Alternatively, the unit of the scaling process may be the same as the unit of the filtering process, or the target pixel group may be divided into a plurality of smaller pixel groups, each of which is scaled separately. For example, the smallest block size among the processing units may be used as the unit for all target pixel groups.
  • The filter processing / scaling processing unit 112 applies scaling to each convolution calculation result according to the derived scaling information 15 (step S112-4). Specifically, the filter processing / scaling processing unit 112 generates each pixel value D[m] of the scaled reference pixel group according to the following formula (6).
  • Clip1(x) represents a clipping function that rounds x to a value between 0 and 2^L − 1.
  • the offset in the equation (6) is obtained by the following calculation (7) using the conditional operator (ternary operator) “?:”.
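  • Under the same assumptions, a minimal C sketch of step S112-4 is shown below. The exact expressions of formula (6) and operation (7) are given only in the figures, so subtracting MinPoint << S, adding a conventional rounding offset, and right-shifting by Q is merely one plausible reading that matches the inverse scaling described later:

        #include <stdint.h>

        /* Clip1(x): rounds x to the range [0, 2^L - 1]. */
        static int32_t clip1(int32_t x, int L)
        {
            int32_t max = (1 << L) - 1;
            return (x < 0) ? 0 : (x > max) ? max : x;
        }

        /* Assumed reading of formula (6) with the rounding offset of operation (7). */
        static int32_t scale_pixel(int32_t B, int32_t min_point, int S, int Q, int L)
        {
            int32_t offset = (Q > 0) ? (1 << (Q - 1)) : 0;   /* assumed operation (7) */
            return clip1((B - (min_point << S) + offset) >> Q, L);
        }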
  • the scaled reference pixel group generated as described above is supplied to the reference image buffer unit 107.
  • the scaling processing unit 113 searches for the maximum value Max and the minimum value Min of the target pixel group (step S113-1).
  • the scaling processing unit 113 derives the scaling information 15 of the target pixel group (step S113-2).
  • the scaling processing unit 113 derives the minimum reference value MinPoint by arithmetically shifting the minimum value Min by S bits to the right according to Equation (2).
  • In the scaling processing unit 113, the parameter K is handled as 0. That is, S is derived according to the following formula (8).
  • the scaling processing unit 113 derives the scaling amount Q by executing the calculation (4) or the calculation (5).
  • the minimum reference value MinPoint and the scaling amount Q derived as described above are supplied to the scaling information buffer unit 108 as scaling information 15.
  • The scaling processing unit 113 applies scaling to each pixel value of the target pixel group in accordance with the derived scaling information 15 (step S113-3). Specifically, the scaling processing unit 113 generates each pixel value D[m] of the scaled reference pixel group according to the following formula (9).
  • the offset in equation (9) is obtained by operation (7).
  • the scaled reference pixel group generated as described above is supplied to the reference image buffer unit 107.
  • the prediction unit 120 includes an inverse scaling processing unit 121 and a predicted image generation unit 122.
  • the inverse scaling processing unit 121 restores the reference image by applying inverse scaling that extends the pixel bit length to the scaled reference pixel group according to the scaling information.
  • The predicted image generation unit 122 generates a predicted image based on the motion vector information and the restored reference image.
  • The inverse scaling processing unit 121 obtains a desired reference pixel group (that is, one necessary for generating a predicted image) and the corresponding scaling information from the reference image buffer unit 107 and the scaling information buffer unit 108, respectively (step S121-1). Specifically, the inverse scaling processing unit 121 acquires each pixel value D[m] of the scaled reference pixel group, the minimum reference value MinPoint, the scaling amount Q, and the filter switching information. The inverse scaling processing unit 121 refers to the filter switching information, sets the parameter K to a predetermined value corresponding to the filter processing if filter application is valid, and sets the parameter K to 0 if filter application is invalid.
  • The inverse scaling processing unit 121 applies inverse scaling that extends the pixel bit length to the reference pixel group according to the scaling information (step S121-2). Specifically, with the pixel bit length after the inverse scaling process denoted by U bits, the inverse scaling processing unit 121 applies inverse scaling according to the following equation (10) when the prescribed condition relating Q, K, T, and U is satisfied.
  • Otherwise, the inverse scaling processing unit 121 applies inverse scaling according to the following equation (11).
  • offset2 is calculated by the following calculation (12).
  • G [m] represents each pixel value of the restored reference pixel group.
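  • The inverse scaling of step S121-2 can be sketched as follows. Equations (10) to (12) appear only in the figures, so the sketch simply undoes the forward scaling sketched earlier (it rebuilds an approximately (K+T)-bit value and then renormalizes it to U bits); the branch condition, the shift amount, and offset2 below are assumptions:

        #include <stdint.h>

        static int32_t inverse_scale_pixel(int32_t D, int32_t min_point,
                                           int S, int Q, int K, int T, int U)
        {
            /* Rebuild an approximation of the value before scaling. */
            int32_t G = (D << Q) + (min_point << S);

            int shift = K + T - U;            /* assumed normalization to U bits      */
            if (shift >= 0) {
                int32_t offset2 = (shift > 0) ? (1 << (shift - 1)) : 0; /* assumed (12) */
                G = (G + offset2) >> shift;   /* assumed reading of equation (10)     */
            } else {
                G <<= -shift;                 /* assumed reading of equation (11)     */
            }
            return G;                         /* restored U-bit reference pixel G[m]  */
        }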
  • According to the present embodiment, adaptive scaling / inverse scaling processing based on the distribution of the target pixel group is performed. As a result, it is guaranteed that the value B[m] or P[m] before scaling, rounded to L bits, is the same as the value G[m] after inverse scaling, rounded to L bits. Moreover, the value G[m] after inverse scaling obtained by the processing of this embodiment has high accuracy.
  • As described above, the video encoding apparatus according to the present embodiment performs the scaling process and the inverse scaling process before and after the reference image buffer unit, respectively, so that the pixel bit length applied to the reference image buffer unit is smaller than the pixel bit length applied to the other internal processing (prediction processing, filter processing, and the like). Therefore, according to the video encoding apparatus according to the present embodiment, the pixel bit length applied to the reference image buffer unit can be kept small while highly accurate prediction processing, filter processing, and the like are realized by applying a larger pixel bit length.
  • the moving picture decoding apparatus includes a decoding unit 200 and a decoding control unit 240.
  • the decoding unit 200 generates the output image 26 by decoding the encoded data.
  • the decoding control unit 240 controls various elements in the decoding unit 200.
  • the decoding control unit 240 controls the prediction unit 220 described later. Note that the scaling process and the inverse scaling process in the moving picture decoding apparatus in FIG. 2 are substantially the same as or similar to the scaling process and the inverse scaling process in the moving picture encoding apparatus in FIG.
  • the decoding unit 200 includes an entropy decoding unit 201, an inverse quantization / inverse conversion unit 202, an addition unit 203, a loop filter unit 210, a reference image buffer unit 204, a scaling information buffer unit 205, a prediction unit 220, and a bit length normalization. Part 230.
  • the entropy decoding unit 201 performs entropy decoding according to syntax information on encoded data generated by, for example, the moving image encoding apparatus in FIG.
  • The entropy decoding unit 201 supplies the decoded quantized transform coefficients to the inverse quantization / inverse transform unit 202, supplies the decoded motion vector information to the prediction unit 220, and supplies the decoded loop filter information 23 to the loop filter unit 210.
  • The inverse quantization / inverse transform unit 202 and the addition unit 203 are substantially the same as or similar to the inverse quantization / inverse transform unit 104 and the addition unit 105 described above. That is, the inverse quantization / inverse transform unit 202 performs inverse quantization and an inverse transform (for example, an inverse discrete cosine transform (IDCT)) on the quantized transform coefficients from the entropy decoding unit 201 to restore the prediction error. The addition unit 203 adds the prediction error restored by the inverse quantization / inverse transform unit 202 and the corresponding prediction image from the prediction unit 220 to generate a decoded image 22.
  • The loop filter unit 210 is substantially the same as or similar to the loop filter unit 110 described above. In other words, the loop filter unit 210 performs either a filtering process or a bypass process that skips the filtering on the target pixel group in the decoded image 22 from the addition unit 203, according to the loop filter information 23 from the entropy decoding unit 201. Then, the loop filter unit 210 performs the above-described scaling process on the filter processing result group or the bypass processing result group (that is, the target pixel group) to reduce the pixel bit length. The loop filter unit 210 supplies the scaling processing result group to the reference image buffer unit 204 as the scaled reference pixel group 24. In addition, the loop filter unit 210 supplies the scaling information 25 regarding the scaling process to the scaling information buffer unit 205.
  • the loop filter unit 210 is substantially the same as or similar to the loop filter unit 110 described above, and a detailed description thereof will be omitted.
  • the reference image buffer unit 204 stores the scaled reference pixel group 24 from the loop filter unit 210.
  • the scaling information buffer unit 205 accumulates scaling information 25 corresponding to the scaled reference pixel group 24 while synchronizing with the reference image buffer unit 204.
  • the reference pixel group 24 accumulated in the reference image buffer unit 204 and the scaling information 25 accumulated in the scaling information buffer unit 205 are read by the prediction unit 220 or the bit length normalization unit 230 as necessary. For example, in order to generate the output image 26, the bit length normalization unit 230 reads out a desired reference pixel group (that is, necessary for generating the output image 26) and corresponding scaling information according to the display order.
  • the prediction unit 220 is substantially the same or similar element as the prediction unit 120 described above. That is, the prediction unit 220 reads the scaled reference pixel group and scaling information from the reference image buffer unit 204 and the scaling information buffer unit 205, respectively, as necessary. The prediction unit 220 restores the reference image by applying inverse scaling that extends the pixel bit length to the scaled reference pixel group according to the scaling information. The prediction unit 220 generates a prediction image based on the motion vector information from the entropy decoding unit 201 and the restored reference image. The prediction unit 220 supplies the predicted image to the addition unit 203.
  • the bit length normalization unit 230 reads the scaled reference pixel group and scaling information from the reference image buffer unit 204 and the scaling information buffer unit 205, respectively, as necessary.
  • The bit length normalization unit 230 applies inverse scaling that extends the pixel bit length to the scaled reference pixel group according to the scaling information to obtain a desired pixel bit length U (the pixel bit length U related to the operation of the bit length normalization unit 230 does not necessarily match the pixel bit length U related to the operation of the prediction unit 220).
  • the bit length normalization unit 230 supplies the output image 26 to the outside.
  • The pixel bit length of the output image 26 may be, for example, the same as or different from the pixel bit length of the input image 11 in the moving image encoding apparatus of FIG. 1.
  • Note that the bit length normalization unit 230 can also be removed.
  • As described above, the moving picture decoding apparatus according to the present embodiment performs the scaling process and the inverse scaling process before and after the reference image buffer unit, so that the pixel bit length applied to the reference image buffer unit is smaller than the pixel bit length applied to the other internal processing (prediction processing, filter processing, and the like). Therefore, according to the video decoding apparatus according to the present embodiment, the pixel bit length applied to the reference image buffer unit can be kept small while highly accurate prediction processing, filter processing, and the like are realized by applying a larger pixel bit length.
  • a plurality of scaling processes in the loop filter unit 110 can be realized in a common scaling processing unit.
  • the common scaling processing unit sets a parameter K according to filter switching information regarding each target pixel group, and applies scaling to the target pixel group or the filter processing result group.
  • inverse scaling processing in the prediction unit 120 and the motion vector generation unit 130 can be implemented in a common inverse scaling processing unit.
  • the inverse scaling processing in the prediction unit 220 and the bit length normalization unit 230 can be realized in a common inverse scaling processing unit.
  • These common inverse scaling processing units set a pixel bit length U according to the output destination, and apply inverse scaling to the scaled reference pixel group.
  • the moving image encoding apparatus includes an encoding unit 300 and an encoding control unit 140.
  • parts that are the same as those in FIG. 1 are given the same reference numerals, and in the following description, different parts between FIG. 9 and FIG. 1 will be mainly described.
  • Note that the scaling process and the inverse scaling process in the moving picture encoding apparatus in FIG. 9 are substantially the same as or similar to the scaling process and the inverse scaling process in the moving picture encoding apparatus in FIG. 1.
  • the encoding unit 300 encodes the input image 11 to generate encoded data.
  • the encoding unit 300 includes a bit length extension unit 309, a subtraction unit 101, a transform / quantization unit 102, an entropy encoding unit 303, an inverse quantization / inverse transform unit 104, an addition unit 105, a loop filter setting unit 106, a loop filter Unit 110, reference image buffer unit 107, scaling information buffer unit 108, prediction unit 120, and motion vector generation unit 130.
  • the bit length extension unit 309 extends the pixel bit length of the input image 11 and supplies it to the subtraction unit 101, the loop filter setting unit 106, and the motion vector generation unit 130. As a result of the operation of the bit length extension unit 309, for example, the pixel bit length applied to the internal processing by the loop filter unit 110, the prediction unit 120, and the like is larger than the original pixel bit length of the input image 11.
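  • A minimal sketch of the bit length extension, assuming a plain left shift from the input bit depth to the internal bit depth (the extension method is not limited to this), is:

        #include <stddef.h>
        #include <stdint.h>

        static void extend_bit_length(const uint16_t *src, uint16_t *dst, size_t n,
                                      int input_bits, int internal_bits)
        {
            int delta = internal_bits - input_bits;   /* e.g. 8-bit input, 12-bit internal */
            for (size_t i = 0; i < n; i++)
                dst[i] = (uint16_t)(src[i] << delta); /* extended pixel bit length         */
        }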
  • the bit length extension unit 309 notifies the internal bit length information 37 to the entropy encoding unit 303.
  • The internal bit length information 37 may be, for example, information indicating the extension amount of the pixel bit length applied by the bit length extension unit 309, or information (for example, a flag) indicating that an extension amount determined in advance between the encoding side and the decoding side is used. If the extension amount of the pixel bit length is determined in advance between the encoding side and the decoding side, the bit length extension unit 309 need not notify the entropy encoding unit 303 of the internal bit length information 37.
  • the entropy encoding unit 303 further performs entropy encoding of the internal bit length information 37 as necessary, in addition to the operation of the entropy encoding unit 103 described above.
  • When the entropy encoding unit 303 performs entropy encoding of the internal bit length information 37, the encoded internal bit length information is also included in the encoded data.
  • As described above, the moving image encoding apparatus according to the present embodiment employs the same scaling process / inverse scaling process as the first embodiment while extending the internal pixel bit length relative to the input image. Therefore, according to the video encoding apparatus according to the present embodiment, the pixel bit length applied to the reference image buffer unit can be kept small while even more accurate prediction processing, filter processing, and the like are realized by extending the internal pixel bit length.
  • the moving picture decoding apparatus includes a decoding unit 400 and a decoding control unit 240.
  • parts that are the same as those in FIG. 2 are given the same reference numerals, and in the following description, different parts between FIG. 10 and FIG. 2 will be mainly described.
  • the scaling process and the inverse scaling process in the moving picture decoding apparatus in FIG. 10 are substantially the same as or similar to the scaling process and the inverse scaling process in the moving picture encoding apparatus in FIG.
  • the decoding unit 400 generates the output image 26 by decoding the encoded data.
  • the decoding unit 400 includes an entropy decoding unit 401, an inverse quantization / inverse conversion unit 202, an addition unit 203, a loop filter unit 210, a reference image buffer unit 204, a scaling information buffer unit 205, a prediction unit 220, and a bit length normalization. Part 430.
  • the entropy decoding unit 401 further performs entropy decoding of the encoded internal bit length information as necessary in addition to the operation of the entropy decoding unit 201 described above.
  • the entropy decoding unit 401 notifies the bit length normalization unit 430 of the decoded internal bit length information 47.
  • When the internal bit length information 47 is notified, the bit length normalization unit 430 performs the same operation as the above-described bit length normalization unit 230 while taking the internal bit length information 47 into account as necessary, and thereby generates the output image 26 normalized to a desired pixel bit length. As an example, the bit length normalization unit 430 checks the pixel bit length of the input image on the encoding side by referring to the internal bit length information 47, and generates the output image 26 normalized to the pixel bit length of the input image.
  • As described above, the moving picture decoding apparatus according to the present embodiment extends the internal pixel bit length relative to the input image on the encoding side, and employs the same scaling process / inverse scaling process as the first embodiment. Therefore, according to the video decoding apparatus according to the present embodiment, the pixel bit length applied to the reference image buffer unit can be kept small while even more accurate prediction processing, filter processing, and the like are realized by extending the internal pixel bit length.
  • the video encoding apparatus according to the third embodiment has substantially similar elements to the video encoding apparatus according to the first embodiment described above, but differs in the details of the scaling process / inverse scaling process. In the following description, details of the scaling process / inverse scaling process according to the present embodiment will be described with reference to FIGS. 5, 6, and 8.
  • The filter processing / scaling processing unit 112 performs a convolution operation (filter operation) on the target pixel group according to the filter coefficient information included in the loop filter information 13 (step S112-1).
  • the filter processing / scaling processing unit 112 searches for the maximum value Max and the minimum value Min of the convolution calculation result group of the target pixel group as in the first embodiment (step S112-2).
  • the filter processing / scaling processing unit 112 derives the scaling information 15 of the target pixel group (step S112-3). Specifically, the filter processing / scaling processing unit 112 derives the minimum reference value MinPoint according to Equation (2) and Equation (3). Further, the filter processing / scaling processing unit 112 derives the maximum reference value MaxPoint according to the following calculation (13) using the conditional operator (ternary operator) “?:”.
  • The upper limit of MaxPoint is 2^L − 1.
  • the minimum reference value MinPoint and the maximum reference value MaxPoint derived as described above are supplied to the scaling information buffer unit 108 as scaling information 15.
  • As in the first embodiment, the filter switching information or information similar thereto is required for the inverse scaling processing, and such information may also be included in the scaling information 15.
  • the loop filter information 13 may be notified to an element that performs inverse scaling processing. In the following description, it is assumed that the filter switching information is included in the scaling information 15.
  • The filter processing / scaling processing unit 112 applies scaling to each convolution calculation result according to the derived scaling information 15 (step S112-4). Specifically, the filter processing / scaling processing unit 112 generates each pixel value D[m] of the scaled reference pixel group according to the following formula (14).
  • Dr represents the dynamic range of the scaled reference pixel group as shown in the following equation (16).
  • Offset [Dr] represents a rounding offset value.
  • N represents the bit length of EncTable [Dr].
  • Since formula (14) includes division, it is assumed that the value of EncTable[Dr] is calculated in advance for each value of Dr and stored, for example, in a table format.
  • EncTable[Dr] and Offset[Dr] corresponding to the dynamic range Dr may be set so that, when B[m] and the later-described G[m] are rounded to the same number of bits X (X ≤ (K + T) and X ≤ U), the rounded values of B[m] and G[m] become equal.
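  • To illustrate why precomputing EncTable[Dr] removes the run-time division, a C sketch under explicit assumptions follows; the table contents (a rounded reciprocal of Dr scaled by 2^N) and the mapping itself are assumptions, since formulas (14) to (16) are given only in the figures:

        #include <stdint.h>

        enum { N_BITS = 16 };   /* assumed bit length N of EncTable[Dr] */

        /* Precompute EncTable once for every possible dynamic range Dr. */
        static void build_enc_table(uint32_t *EncTable, int max_dr)
        {
            for (int dr = 1; dr <= max_dr; dr++)
                EncTable[dr] = (uint32_t)((((uint64_t)1 << N_BITS) + dr / 2) / dr);
        }

        /* Assumed reading of formula (14): map v in [0, Dr] onto [0, 2^L - 1]
           with a multiply and a shift instead of a division by Dr. */
        static int32_t scale_with_table(int32_t v, int32_t Dr, int L,
                                        const uint32_t *EncTable,
                                        const uint32_t *Offset)
        {
            int64_t t = (int64_t)v * ((1 << L) - 1) * EncTable[Dr] + Offset[Dr];
            return (int32_t)(t >> N_BITS);
        }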
  • ExpandFlag shown in FIGS. 11 to 15 indicates whether or not the scaling process / inverse scaling process described in this embodiment is performed.
  • When ExpandFlag is 0, it indicates that the scaling process / inverse scaling process described in the first or second embodiment is performed.
  • In that case, the scaling amount Q is derived using the calculation (4), and each pixel value D[m] of the scaled reference pixel group is generated using the equation (6).
  • When ExpandFlag is 1, it indicates that the scaling process / inverse scaling process described in this embodiment is performed.
  • MaxPoint and MinPoint may be expressed by (K + T) bits. That is, the upper limit of the values of MaxPoint and MinPoint is 2^(K+T) − 1. Therefore, Dr calculated by equation (16) is expressed by (K + T) bits.
  • Offset[Dr] may be fixed to 1 << (K + T − 1). In that case, entries of EncTable[Dr] for which Offset[Dr] differs from 1 << (K + T − 1) are not used, and values of Dr that are not used are integrated with the contents of the next row of the table.
  • the scaled reference pixel group generated as described above is supplied to the reference image buffer unit 107.
  • the scaling processing unit 113 searches for the maximum value Max and the minimum value Min of the target pixel group as in the first embodiment (step S113-1).
  • the scaling processing unit 113 derives the scaling information 15 of the target pixel group (step S113-2).
  • The scaling processing unit 113 derives the minimum reference value MinPoint according to equation (2).
  • The scaling processing unit 113 derives the maximum reference value MaxPoint according to equation (13).
  • Here, the parameter K is handled as 0. That is, S is derived according to equation (8).
  • the minimum reference value MinPoint and the maximum reference value MaxPoint derived as described above are supplied to the scaling information buffer unit 108 as scaling information 15.
  • The scaling processing unit 113 applies scaling to each pixel value of the target pixel group in accordance with the derived scaling information 15 (step S113-3). Specifically, the scaling processing unit 113 generates each pixel value D[m] of the scaled reference pixel group according to the following formula (17).
  • The value of EncTable[Dr] corresponding to the dynamic range Dr may be set so that, when B[m] and G[m] are rounded to the same number of bits X (X ≤ T and X ≤ U), the rounded values of B[m] and G[m] become equal.
  • MaxPoint and MinPoint may be expressed by T bits. That is, the upper limit of the values of MaxPoint and MinPoint is 2^T − 1. Therefore, Dr calculated by equation (16) is expressed by T bits.
  • The inverse scaling processing unit 121 obtains a desired reference pixel group (that is, one necessary for generating a predicted image) and the corresponding scaling information from the reference image buffer unit 107 and the scaling information buffer unit 108, respectively (step S121-1). Specifically, the inverse scaling processing unit 121 acquires each pixel value D[m] of the scaled reference pixel group, the minimum reference value MinPoint, the maximum reference value MaxPoint, and the filter switching information. The inverse scaling processing unit 121 refers to the filter switching information, sets the parameter K to a predetermined value corresponding to the filter processing if filter application is valid, and sets the parameter K to 0 if filter application is invalid.
  • The inverse scaling processing unit 121 applies inverse scaling that extends the pixel bit length to the reference pixel group according to the scaling information (step S121-2). Specifically, if the prescribed relationship between K + T + L and U is established, the inverse scaling processing unit 121 applies inverse scaling according to the following equation (18).
  • DecTable [Dr] corresponding to the dynamic range Dr is set to correspond to Dr used in the filter processing / scaling processing unit 112 or the scaling processing unit 113.
  • MaxPoint and MinPoint may be expressed by (K + T) bits. That is, the upper limit of the values of MaxPoint and MinPoint is 2^(K+T) − 1. Therefore, Dr calculated by equation (16) is expressed by (K + T) bits.
  • According to the present embodiment, adaptive scaling / inverse scaling processing based on the distribution of the target pixel group is performed. As a result, it is guaranteed that the value B[m] or P[m] before scaling, rounded to L bits, is the same as the value G[m] after inverse scaling, rounded to L bits. Moreover, the value G[m] after inverse scaling obtained by the processing of this embodiment has high accuracy.
  • As described above, the moving picture encoding apparatus according to the present embodiment performs the scaling process and the inverse scaling process before and after the reference image buffer unit, so that the pixel bit length applied to the reference image buffer unit is smaller than the pixel bit length applied to the other internal processing (prediction processing, filter processing, and the like). Therefore, according to the video encoding apparatus according to the present embodiment, the pixel bit length applied to the reference image buffer unit can be kept small while highly accurate prediction processing, filter processing, and the like are realized by applying a larger pixel bit length.
  • The moving picture decoding apparatus can perform scaling processing / inverse scaling processing that is the same as or similar to that of the moving picture encoding apparatus according to the present embodiment, and can obtain the same effects.
  • the various processes described in the first to third embodiments may be realized by executing a program (software).
  • A general-purpose computer system reads a program for realizing the processing according to each embodiment from a storage medium storing the program and executes it with a CPU or the like; the computer system thereby operates as the above-described moving image encoding apparatus or moving image decoding apparatus and provides the same effects.
  • The program may be stored in a magnetic disk (flexible disk, hard disk, etc.), an optical disk (CD-ROM, CD-R, CD-RW, DVD-ROM, DVD±R, DVD±RW, etc.), a semiconductor memory, or another storage medium. Any type of storage medium may be used as long as it can be read by the computer or the embedded system that reads the program.
  • the computer or the embedded system may acquire or read the program via a communication medium such as a network. That is, a medium that downloads a program via a communication medium such as a LAN (local area network) or the Internet and stores (including temporary storage) the program is also included in the category of “storage medium”.
  • the term “storage medium” can also refer to a plurality of storage media comprehensively.
  • the computer or the embedded system may be a single device such as a personal computer or a microcontroller, or may be a system in which a plurality of devices are connected to a network.
  • the term “computer” is not limited to a so-called personal computer, and can comprehensively refer to an apparatus capable of executing a program, including an arithmetic processing unit, a microcontroller, and the like included in an information processing apparatus.
  • a part of the processing according to each embodiment may be executed using a function such as an OS (operating system) operating on a computer, database management software, MW (middleware) such as a network.
  • DESCRIPTION OF SYMBOLS: 121 ... inverse scaling processing unit, 122 ... predicted image generation unit, 130 ... motion vector generation unit, 140 ... encoding control unit, 200, 400 ... decoding unit, 201, 401 ... entropy decoding unit, 202 ... inverse quantization / inverse transform unit, 203 ... addition unit, 204 ... reference image buffer unit, 205 ... scaling information buffer unit, 210 ... loop filter unit, 220 ... prediction unit, 230, 430 ... bit length normalization unit, 240 ... decoding control unit, 309 ... bit length extension unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A video encoding device includes a loop filter unit (110) which, (a) if filtering is enabled for a target pixel group in a local decoded image (12), applies filter processing to the target pixel group, derives scaling information (15) based on the distribution of the filter processing result group, and generates a scaled reference pixel group (14) by applying, in accordance with the scaling information (15), scaling that reduces the pixel bit length of the filter processing result group, and which, (b) if filtering is not enabled for the target pixel group, derives the scaling information (15) based on the distribution of the target pixel group, and generates a scaled reference pixel group (14) by applying, in accordance with the scaling information (15), scaling that reduces the pixel bit length of the target pixel group. The video encoding device also includes a prediction unit (120) which restores the reference image by applying, in accordance with the scaling information, inverse scaling that extends the pixel bit length of the scaled reference pixel group.
PCT/JP2010/061350 2010-07-02 2010-07-02 Video encoding device and video decoding device WO2012001818A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/JP2010/061350 WO2012001818A1 (fr) 2010-07-02 2010-07-02 Video encoding device and video decoding device
PCT/JP2010/073604 WO2012001833A1 (fr) 2010-07-02 2010-12-27 Moving image encoding apparatus, moving image decoding apparatus, and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2010/061350 WO2012001818A1 (fr) 2010-07-02 2010-07-02 Video encoding device and video decoding device

Publications (1)

Publication Number Publication Date
WO2012001818A1 (fr) 2012-01-05

Family

ID=45401571

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/061350 WO2012001818A1 (fr) 2010-07-02 2010-07-02 Video encoding device and video decoding device

Country Status (1)

Country Link
WO (1) WO2012001818A1 (fr)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007114368A1 (fr) * 2006-03-30 2007-10-11 Kabushiki Kaisha Toshiba Image encoding apparatus and method, and image decoding apparatus and method

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007114368A1 (fr) * 2006-03-30 2007-10-11 Kabushiki Kaisha Toshiba Image encoding apparatus and method, and image decoding apparatus and method

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
TAKESHI CHUJOH ET AL.: "Internal bit depth increase except frame memory", ITU - TELECOMMUNICATIONS STANDARDIZATION SECTOR STUDY GROUP 16 QUESTION 6, DOCUMENT VCEG-AF07, April 2007 (2007-04-01), SAN JOSE, USA, pages 1 - 4 *
TAKESHI CHUJOH ET AL.: "Internal bit depth increase for coding efficiency", ITU - TELECOMMUNICATIONS STANDARDIZATION SECTOR STUDY GROUP 16 QUESTION 6, DOCUMENT VCEG-AE13, January 2007 (2007-01-01), MARRAKECH, MA, pages 1 - 6 *
TAKESHI CHUJOH ET AL.: "Specification and experimental results of Quadtree-based Adaptive Loop Filter", ITU - TELECOMMUNICATIONS STANDARDIZATION SECTOR STUDY GROUP 16 QUESTION 6, DOCUMENT: VCEG-AK22 (R1), April 2009 (2009-04-01), YOKOHAMA, JAPAN, pages 1 - 11 *

Similar Documents

Publication Publication Date Title
KR102426721B1 (ko) Method for determining a chroma component quantization parameter and apparatus using such a method
JP5907941B2 (ja) Method and apparatus for cropping video images
JP2009260977A (ja) Video data compression using a combination of lossy compression and lossless compression
JP5871628B2 (ja) Image encoding device, image encoding method and program, image decoding device, image decoding method and program
JP7343817B2 (ja) Encoding device, encoding method, and encoding program
JP2005318297A (ja) Moving image encoding / decoding method and apparatus
JP2006135376A (ja) Moving image encoding device, moving image encoding method, moving image decoding device, and moving image decoding method
JP5571542B2 (ja) Video encoding method and video decoding method
KR20220021471A (ko) Image processing device and image processing method
JP5197428B2 (ja) Image encoding device and image encoding method
JP2008271039A (ja) Image encoding device and image decoding device
JP6145965B2 (ja) Image encoding device, image decoding device, and program
WO2018043256A1 (fr) Image encoding device and image decoding device
US20230009580A1 (en) Image processing device and image processing method
WO2012001833A1 (fr) Moving image encoding apparatus, moving image decoding apparatus, and method
WO2012001818A1 (fr) Video encoding device and video decoding device
JP6875566B2 (ja) Moving image predictive encoding device, moving image predictive decoding device, moving image predictive encoding method, moving image predictive decoding method, and moving image predictive decoding program
JP2012134632A (ja) Image decoding device, image decoding method, and program
JP6016488B2 (ja) Video compression format conversion device, video compression format conversion method, and program
WO2013145174A1 (fr) Video encoding method, video decoding method, video encoding device, and video decoding device
US20120147972A1 (en) Image decoding apparatus, image decoding method, image encoding apparatus, image encoding method, and program
WO2011105230A1 (fr) Filter coefficient encoding device, filter coefficient decoding device, video encoding device, video decoding device, and data structure
WO2011161823A1 (fr) Video encoding method and video decoding method
US20230007311A1 (en) Image encoding device, image encoding method and storage medium, image decoding device, and image decoding method and storage medium
JP2012119969A (ja) Image coding system conversion device and image coding system conversion program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10854115

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10854115

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP