WO2012001833A1 - Moving image encoding apparatus, moving image decoding apparatus, and method - Google Patents

Info

Publication number
WO2012001833A1
WO2012001833A1 (application PCT/JP2010/073604)
Authority
WO
WIPO (PCT)
Prior art keywords
scaling
value
pixel group
unit
information
Prior art date
Application number
PCT/JP2010/073604
Other languages
English (en)
Japanese (ja)
Inventor
中條 健 (Takeshi Chujoh)
山影 朋夫 (Tomoo Yamakage)
太一郎 塩寺 (Taichiro Shiodera)
Original Assignee
株式会社 東芝 (Toshiba Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from PCT/JP2010/061350 external-priority patent/WO2012001818A1/fr
Priority claimed from PCT/JP2010/067108 external-priority patent/WO2012042645A1/fr
Application filed by 株式会社 東芝 (Toshiba Corporation)
Publication of WO2012001833A1 publication Critical patent/WO2012001833A1/fr

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/82 — Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation, involving filtering within a prediction loop
    • H04N19/117 — Adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding: filters, e.g. for pre-processing or post-processing
    • H04N19/176 — Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/182 — Adaptive coding characterised by the coding unit, the unit being a pixel
    • H04N19/184 — Adaptive coding characterised by the coding unit, the unit being bits, e.g. of the compressed video stream
    • H04N19/196 — Adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used, being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
    • H04N19/463 — Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
    • H04N19/96 — Tree coding, e.g. quad-tree coding

Definitions

  • the embodiment relates to encoding and decoding of moving images.
  • H.264 / MPEG-4 AVC is one of the international standards for video coding. It was jointly established by ITU-T (International Telecommunication Union - Telecommunication Standardization Sector) and ISO (International Organization for Standardization) / IEC (International Electrotechnical Commission).
  • Video coding standards such as H.264 / MPEG-4 AVC usually store locally decoded images of already encoded images (on the encoding side) or decoded images (on the decoding side) in an image buffer, and refer to it to generate predicted images.
  • Since the image buffer stores a large number of reference images, a main memory with a large storage capacity is required on both the encoding side and the decoding side. In addition, since generating a predicted image involves a large number of accesses to the image buffer, a wide memory bandwidth is also required on both sides. These hardware requirements concerning the image buffer become more pronounced as the pixel bit length increases.
  • Various internal processes on the encoding side and the decoding side, such as the motion estimation process, the predicted image generation process (prediction process), and filter processes (for example, the loop filter process), are generally easier to perform with high accuracy as the pixel bit length increases. Therefore, increasing the pixel bit length helps improve the coding efficiency.
  • Improved coding efficiency can therefore be expected by applying a large pixel bit length to the various internal processes, including the prediction process.
  • Conversely, by applying a small pixel bit length to the image buffer, a reduction in the hardware requirements related to the image buffer can be expected.
  • The embodiments aim to apply a larger pixel bit length to the various internal processes, including the prediction process, while applying a smaller pixel bit length to the image buffer.
  • the moving image encoding apparatus derives scaling information based on the maximum value and the minimum value of the target pixel group in the locally decoded image. Scaling for reducing the pixel bit length is applied to the target pixel group according to the scaling information.
  • The scaling processing unit generates a scaled reference pixel group by limiting (clipping) the value of each pixel to be scaled with respect to a specific value.
  • Either the first scaling information (when the specific value is included) or the second scaling information (when the specific value is not included), together with the reference pixel group scaled according to the corresponding scaling information, is expressed with a fixed bit length.
  • the reference image is restored by inverse scaling, and a predicted image is generated. Information indicating the difference between the input image and the predicted image is encoded by the encoding unit.
  • FIG. 1 is a block diagram showing a moving image encoding apparatus according to a first embodiment.
  • A block diagram showing the moving image decoding apparatus according to the first embodiment.
  • A block diagram showing the loop filter unit of FIG. 1. An explanatory drawing of a target pixel group.
  • A flowchart showing the operation of the filter processing / scaling processing unit in FIG. 3.
  • A flowchart showing the operation of the scaling processing unit in FIG. 3.
  • A block diagram showing the prediction unit of FIG. 1. A flowchart showing the operation of the inverse scaling processing unit.
  • A block diagram showing the moving image encoding apparatus according to a second embodiment.
  • A block diagram showing the moving image decoding apparatus according to the second embodiment.
  • Table figures showing examples of the dynamic range Dr, EncTable[Dr], and Offset[Dr] according to a third embodiment.
  • A block diagram showing the moving image decoding apparatus according to a fifth embodiment. A flowchart showing the operation of a pixel accuracy control unit according to the fifth embodiment.
  • the moving image encoding apparatus includes an encoding unit 100 and an encoding control unit 140.
  • the encoding unit 100 encodes the input image 11 to generate encoded data.
  • the encoding control unit 140 controls various elements in the encoding unit 100. For example, the encoding control unit 140 controls a loop filter setting unit 106, a prediction unit 120, and the like which will be described later.
  • The encoding unit 100 includes a subtraction unit 101, a transform / quantization unit 102, an entropy encoding unit 103, an inverse quantization / inverse transform unit 104, an addition unit 105, a loop filter setting unit 106, a reference image buffer unit 107, a scaling information buffer unit 108, a loop filter unit 110, a prediction unit 120, and a motion vector generation unit 130.
  • The subtraction unit 101 subtracts the predicted image supplied by the prediction unit 120 from the input image 11 to obtain a prediction error.
  • The transform / quantization unit 102 performs a transform (for example, discrete cosine transform (DCT)) and quantization on the prediction error from the subtraction unit 101 to obtain information on quantized transform coefficients (hereinafter simply referred to as quantized transform coefficients).
  • The entropy encoding unit 103 performs entropy coding on the quantized transform coefficients from the transform / quantization unit 102, the loop filter information 13 from the loop filter setting unit 106, and the motion vector information from the motion vector generation unit 130.
  • the entropy encoding unit 103 may further entropy encode information other than these (for example, prediction mode information).
  • the type of entropy encoding is, for example, variable length encoding or arithmetic encoding.
  • the entropy encoding unit 103 outputs encoded data obtained by entropy encoding to the outside.
  • The inverse quantization / inverse transform unit 104 restores the prediction error by performing inverse quantization and an inverse transform (for example, inverse discrete cosine transform (IDCT)) on the quantized transform coefficients from the transform / quantization unit 102.
  • The addition unit 105 adds the prediction error restored by the inverse quantization / inverse transform unit 104 and the corresponding predicted image from the prediction unit 120 to generate the locally decoded image 12.
  • the loop filter setting unit 106 sets the loop filter information 13 based on the input image 11 and the corresponding local decoded image 12 from the addition unit 105 and notifies the loop filter unit 110 and the entropy encoding unit 103.
  • the loop filter information 13 includes at least filter coefficient information and filter switching information.
  • the filter coefficient information includes information indicating the filter coefficient.
  • the filter coefficient information may further include information indicating an offset coefficient described later.
  • the filter switching information includes information indicating validity / invalidity of filter application.
  • the loop filter unit 110 performs a filtering process or a bypass process that does not go through the filtering process on the target pixel group in the local decoded image 12 from the adding unit 105 according to the loop filter information 13 from the loop filter setting unit 106. Then, the loop filter unit 110 reduces (or maintains) the pixel bit length by performing a scaling process described later on the filter processing result group or the bypass processing result group (that is, the target pixel group itself). The loop filter unit 110 supplies the scaling processing result group to the reference image buffer unit 107 as the scaled reference pixel group 14. Further, the loop filter unit 110 supplies the scaling information 15 regarding the scaling process to the scaling information buffer unit 108. Details of the loop filter unit 110 will be described later.
  • the reference image buffer unit 107 stores the scaled reference pixel group 14 from the loop filter unit 110.
  • the scaling information buffer unit 108 accumulates scaling information 15 corresponding to the scaled reference pixel group 14 in synchronization with the reference image buffer unit 107.
  • the reference pixel group 14 accumulated in the reference image buffer unit 107 and the scaling information 15 accumulated in the scaling information buffer unit 108 are read by the prediction unit 120 or the motion vector generation unit 130 as necessary.
  • the motion vector generation unit 130 reads the scaled reference pixel group and the scaling information from the reference image buffer unit 107 and the scaling information buffer unit 108 as necessary.
  • the motion vector generation unit 130 applies inverse scaling that extends (or maintains) the pixel bit length to the scaled reference pixel group according to the scaling information to restore the reference image.
  • the motion vector generation unit 130 generates motion vector information based on the input image 11 and the restored reference image.
  • the motion vector generation unit 130 notifies the prediction unit 120 and the entropy encoding unit 103 of the motion vector information. Details of the inverse scaling process will be described later.
  • the prediction unit 120 reads the scaled reference pixel group and the scaling information from the reference image buffer unit 107 and the scaling information buffer unit 108 as necessary.
  • the prediction unit 120 restores the reference image by applying inverse scaling that extends the pixel bit length to the scaled reference pixel group according to the scaling information.
  • the prediction unit 120 generates a prediction image based on the motion vector information from the motion vector generation unit 130 and the restored reference image.
  • the prediction unit 120 supplies the predicted image to the subtraction unit 101 and the addition unit 105.
  • The loop filter unit 110 includes a switch 111, a filter processing / scaling processing unit 112, and a scaling processing unit 113. Note that FIG. 3 merely shows one example of the loop filter unit 110.
  • the loop filter unit 110 may include one or a plurality of filter processing / scaling processing units (not shown) different from the filter processing / scaling processing unit 112.
  • The number of output destinations selectable by the switch 111 may be three or more, depending on the configuration of the loop filter unit 110.
  • the switch 111 selects the output destination of the target pixel group included in the local decoded image 12 according to the loop filter information 13.
  • the switch 111 guides the target pixel group to the filter processing / scaling processing unit 112 if the loop filter information 13 indicates that the filter application of the target pixel group is valid.
  • If the loop filter information 13 indicates that filter application of the target pixel group is invalid, the switch 111 guides the target pixel group to the scaling processing unit 113.
  • Filter switching information indicating whether filter application is valid (On) or invalid (Off) is set for each pixel group (for example, a block) of variable (or fixed) size.
  • These pixel groups are all shown as rectangles in FIG. 4, but their shapes may be changed depending on the design.
  • The filter switching information of each pixel group is set by the loop filter setting unit 106 described above and can be referred to via the loop filter information 13.
  • The filter processing / scaling processing unit 112 performs filter processing on the target pixel group according to the loop filter information 13. Then, the filter processing / scaling processing unit 112 derives scaling information 15 based on the distribution (for example, the dynamic range) of the filter processing results, and applies scaling that reduces the pixel bit length to the filter processing results according to the scaling information 15 to generate the scaled reference pixel group 14. Details of the operation of the filter processing / scaling processing unit 112 will be described later.
  • The scaling processing unit 113 derives scaling information 15 based on the distribution (for example, the dynamic range) of the target pixel group, and generates the scaled reference pixel group 14 by applying scaling that reduces the pixel bit length to the target pixel group according to the scaling information 15. Details of the operation of the scaling processing unit 113 will be described later.
  • The filter processing / scaling processing unit 112 performs a convolution operation (filter operation) on the target pixel group in accordance with the filter coefficient information included in the loop filter information 13 (step S112-1). Specifically, when the filter coefficients are represented by F[n], the pixel values of the target pixel group by P[m], and the convolution results by B[m], the filter processing / scaling processing unit 112 performs the convolution operation according to the following equation (1).
  • O represents an offset coefficient.
  • the offset coefficient can be referred to through the filter coefficient information.
  • The sum of the filter coefficients F[n] is assumed to be designed to be substantially equal to 2^K. Further, the pixel bit length of the target pixel group is assumed to be T bits.
  • the filter processing / scaling processing unit 112 performs such a convolution operation on each pixel of the target pixel group to obtain a convolution operation result group.
  • the filter processing / scaling processing unit 112 searches for the maximum value Max and the minimum value Min of the convolution calculation result group of the target pixel group (step S112-2).
  • The upper limit of the maximum value Max is 2^(K+T) − 1. That is, if the maximum value Max exceeds 2^(K+T) − 1, it is handled as 2^(K+T) − 1.
  • Next, the filter processing / scaling processing unit 112 derives the scaling information 15 of the target pixel group (step S112-3). Specifically, the filter processing / scaling processing unit 112 derives the minimum reference value MinPoint by arithmetically shifting the minimum value Min to the right by S bits according to the following equation (2): MinPoint = Min >> S.
  • S is represented by the following formula (3): S = K + T − L.
  • L represents a pixel bit length applied to the reference image buffer unit 107. It is assumed that the pixel bit length L of the reference image buffer unit 107 is equal to or less than the pixel bit length T of the target pixel group. That is, the minimum reference value MinPoint is a value obtained by rounding the minimum value Min to L bits.
  • the filter processing / scaling processing unit 112 derives the scaling amount Q by executing the following calculation (4).
  • the operation (4) is described according to the C language, but an operation having the same content can be described according to other programming languages.
  • the scaling amount Q can take any integer value from 0 to S.
  • the minimum reference value MinPoint and the scaling amount Q derived as described above are supplied to the scaling information buffer unit 108 as scaling information 15.
  • An example of a method for efficiently describing the scaling information 15 will be described.
  • the scaling information 15 has 1-bit flag information indicating whether Q is equal to S or not. When Q and S are not equal (that is, the scaling flag information is OFF), it further has a value of scaling amount Q (1 or more and S or less) and a value of minimum reference value MinPoint. If Q is equal to S (ie, the scaling flag information is ON), the minimum reference value MinPoint is interpreted as 0.
  • Operation (5) is a process of scaling the T-bit target pixel group to L bits when Q is equal to S.
  • When Q and S are not equal, operation (5) is a process of scaling the T-bit target pixel group to L − 1 bits. Since the reference pixel group is then L − 1 bits, the increase in the scaling information 15 can be canceled out.
  • The minimum reference value MinPoint can be replaced with a maximum reference value MaxPoint based on the maximum value Max; in that case, the various formulas and calculations in the present embodiment should be read accordingly.
  • For the inverse scaling process, the filter switching information or information similar thereto (for example, the value of the parameter K itself) is required, and such information may be included in the scaling information 15. Alternatively, since this information can be referred to via the loop filter information 13, the loop filter information 13 may be supplied to each element that performs the inverse scaling process. In the following description, it is assumed that the filter switching information is included in the scaling information 15.
  • the unit for performing the scaling process and the inverse scaling process only needs to be a common unit on the encoding side and the decoding side. In the present embodiment, the unit of scaling processing and filtering processing corresponds to the unit in which filter switching information is set.
  • the unit of the scaling process and the filtering process may be the unit of a plurality of target pixel groups that are the same as or smaller than the unit of the target pixel group.
  • the smallest block size in processing units may be used as the unit of all target pixel groups.
  • The filter processing / scaling processing unit 112 applies scaling to each convolution result according to the derived scaling information 15 (step S112-4). Specifically, the filter processing / scaling processing unit 112 generates each pixel value D[m] of the scaled reference pixel group according to the following formula (6).
  • Clip1(x) represents a clipping function that limits x to a value between 0 and 2^L − 1.
  • the offset in the equation (6) is obtained by the following calculation (7) using the conditional operator (ternary operator) “?:”.
  • the scaled reference pixel group generated as described above is supplied to the reference image buffer unit 107.
  • the scaling processing unit 113 searches for the maximum value Max and the minimum value Min of the target pixel group (step S113-1).
  • the scaling processing unit 113 derives the scaling information 15 of the target pixel group (step S113-2).
  • Specifically, the scaling processing unit 113 derives the minimum reference value MinPoint by arithmetically shifting the minimum value Min to the right by S bits according to equation (2).
  • Since no filter processing is applied on this path, the parameter K is treated as 0. That is, S is derived according to the following formula (8): S = T − L.
  • the scaling processing unit 113 derives the scaling amount Q by executing the calculation (4) or the calculation (5).
  • the minimum reference value MinPoint and the scaling amount Q derived as described above are supplied to the scaling information buffer unit 108 as scaling information 15.
  • The scaling processing unit 113 applies scaling to each pixel value of the target pixel group in accordance with the derived scaling information 15 (step S113-3). Specifically, the scaling processing unit 113 generates each pixel value D[m] of the scaled reference pixel group according to the following formula (9).
  • Offset in formula (9) is obtained by calculation (7).
  • the scaled reference pixel group generated as described above is supplied to the reference image buffer unit 107.
  • the prediction unit 120 includes an inverse scaling processing unit 121 and a predicted image generation unit 122.
  • the inverse scaling processing unit 121 restores the reference image by applying inverse scaling that extends the pixel bit length to the scaled reference pixel group according to the scaling information.
  • The predicted image generation unit 122 generates a predicted image based on the motion vector information and the restored reference image.
  • The inverse scaling processing unit 121 obtains a desired reference pixel group (that is, one necessary for generating a predicted image) and the corresponding scaling information from the reference image buffer unit 107 and the scaling information buffer unit 108, respectively (step S121-1). Specifically, the inverse scaling processing unit 121 acquires each pixel value D[m] of the scaled reference pixel group, the minimum reference value MinPoint, the scaling amount Q, and the filter switching information. The inverse scaling processing unit 121 refers to the filter switching information, sets the parameter K to a predetermined value corresponding to the filter processing if filter application is valid, and sets K to 0 if filter application is invalid.
  • Next, the inverse scaling processing unit 121 applies inverse scaling that extends the pixel bit length to the reference pixel group according to the scaling information (step S121-2). Specifically, letting U bits be the pixel bit length after the inverse scaling process, if Q − K − T + U ≥ 0 is satisfied, the inverse scaling processing unit 121 applies inverse scaling according to the following equation (10).
  • Otherwise, the inverse scaling processing unit 121 applies inverse scaling according to the following equation (11).
  • offset2 is calculated by the following calculation (12).
  • G [m] represents each pixel value of the restored reference pixel group.
  • In this way, adaptive scaling / inverse scaling processing based on the distribution of the target pixel group is performed. The value obtained by rounding the pre-scaling value B[m] or P[m] to L bits is guaranteed to be the same as the value obtained by rounding the post-inverse-scaling value G[m] to L bits. Moreover, the value G[m] obtained by the inverse scaling of this embodiment has high accuracy.
  • By performing the scaling process and the inverse scaling process before and after the reference image buffer unit, respectively, the video encoding apparatus according to the present embodiment can make the pixel bit length applied to the reference image buffer unit smaller than the pixel bit length applied to the other internal processes (prediction processing, filter processing, and so on). Therefore, according to the video encoding apparatus of the present embodiment, the pixel bit length applied to the reference image buffer unit can be kept small while highly accurate prediction processing, filter processing, and so on are realized by applying a larger pixel bit length.
  • the moving picture decoding apparatus includes a decoding unit 200 and a decoding control unit 240.
  • the decoding unit 200 generates the output image 26 by decoding the encoded data.
  • the decoding control unit 240 controls various elements in the decoding unit 200.
  • the decoding control unit 240 controls the prediction unit 220 described later. Note that the scaling process and the inverse scaling process in the moving picture decoding apparatus in FIG. 2 are substantially the same as or similar to the scaling process and the inverse scaling process in the moving picture encoding apparatus in FIG.
  • The decoding unit 200 includes an entropy decoding unit 201, an inverse quantization / inverse transform unit 202, an addition unit 203, a loop filter unit 210, a reference image buffer unit 204, a scaling information buffer unit 205, a prediction unit 220, and a bit length normalization unit 230.
  • the entropy decoding unit 201 performs entropy decoding according to syntax information on encoded data generated by, for example, the moving image encoding apparatus in FIG.
  • The entropy decoding unit 201 supplies the decoded quantized transform coefficients to the inverse quantization / inverse transform unit 202, supplies the decoded motion vector information to the prediction unit 220, and supplies the decoded loop filter information 23 to the loop filter unit 210.
  • the inverse quantization / inverse transform unit 202 and the addition unit 203 are substantially the same as or similar to the inverse quantization / inverse transform unit 104 and the addition unit 105 described above. That is, the inverse quantization / inverse transform unit 202 performs inverse quantization and inverse transform (for example, inverse discrete cosine transform (IDCT)) on the quantized transform coefficient from the entropy decoding unit 201 to restore the prediction error. The adding unit 203 adds the prediction error restored by the inverse quantization / inverse transform unit 202 and the corresponding prediction image from the prediction unit 220 to generate a decoded image 22.
  • the loop filter unit 210 is substantially the same as or similar to the loop filter unit 110 described above. In other words, the loop filter unit 210 performs filter processing, or bypass processing that does not go through the filter processing, on the target pixel group in the decoded image 22 from the adding unit 203 according to the loop filter information 23 from the entropy decoding unit 201. Then, the loop filter unit 210 performs the above-described scaling processing on the filter processing result group or the bypass processing result group (that is, the target pixel group itself) to reduce the pixel bit length. The loop filter unit 210 supplies the scaling processing result group to the reference image buffer unit 204 as the scaled reference pixel group 24. In addition, the loop filter unit 210 supplies the scaling information 25 regarding the scaling processing to the scaling information buffer unit 205.
  • the reference image buffer unit 204 stores the scaled reference pixel group 24 from the loop filter unit 210.
  • the scaling information buffer unit 205 accumulates scaling information 25 corresponding to the scaled reference pixel group 24 while synchronizing with the reference image buffer unit 204.
  • the reference pixel group 24 accumulated in the reference image buffer unit 204 and the scaling information 25 accumulated in the scaling information buffer unit 205 are read by the prediction unit 220 or the bit length normalization unit 230 as necessary. For example, in order to generate the output image 26, the bit length normalization unit 230 reads out a desired reference pixel group (that is, necessary for generating the output image 26) and corresponding scaling information according to the display order.
  • the prediction unit 220 is substantially the same or similar element as the prediction unit 120 described above. That is, the prediction unit 220 reads the scaled reference pixel group and scaling information from the reference image buffer unit 204 and the scaling information buffer unit 205, respectively, as necessary. The prediction unit 220 restores the reference image by applying inverse scaling that extends the pixel bit length to the scaled reference pixel group according to the scaling information. The prediction unit 220 generates a prediction image based on the motion vector information from the entropy decoding unit 201 and the restored reference image. The prediction unit 220 supplies the predicted image to the addition unit 203.
  • the bit length normalization unit 230 reads the scaled reference pixel group and scaling information from the reference image buffer unit 204 and the scaling information buffer unit 205, respectively, as necessary.
  • the bit length normalization unit 230 applies inverse scaling that extends the pixel bit length to the scaled reference pixel group according to the scaling information, and obtains a desired pixel bit length U (the pixel bit length U related to the operation of the bit length normalization unit 230 does not necessarily match the pixel bit length U related to the operation of the prediction unit 220).
  • the bit length normalization unit 230 supplies the output image 26 to the outside.
  • the pixel bit length of the output image 26 may be, for example, the same as or different from the pixel bit length of the input image 11 in the moving image encoding apparatus of FIG. 1.
  • the bit length normalization unit 230 can be removed.
  • the moving picture decoding apparatus performs the scaling process and the inverse scaling process before and after the reference image buffer unit, respectively, so that the pixel bit length applied to the reference image buffer unit is smaller than the pixel bit length applied to other internal processing (prediction processing, filter processing, etc.). Therefore, the video decoding device according to the present embodiment can keep the pixel bit length applied to the reference image buffer unit small while realizing highly accurate prediction processing, filter processing, and the like by applying a larger pixel bit length.
  • a plurality of scaling processes in the loop filter unit 110 can be realized in a common scaling processing unit.
  • the common scaling processing unit sets a parameter K according to filter switching information regarding each target pixel group, and applies scaling to the target pixel group or the filter processing result group.
  • inverse scaling processing in the prediction unit 120 and the motion vector generation unit 130 can be implemented in a common inverse scaling processing unit.
  • the inverse scaling processing in the prediction unit 220 and the bit length normalization unit 230 can be realized in a common inverse scaling processing unit.
  • These common inverse scaling processing units set a pixel bit length U according to the output destination, and apply inverse scaling to the scaled reference pixel group.
  • the moving image encoding apparatus includes an encoding unit 300 and an encoding control unit 140.
  • parts that are the same as those in FIG. 1 are given the same reference numerals, and in the following description, different parts between FIG. 9 and FIG. 1 will be mainly described.
  • the scaling process and the inverse scaling process in the moving picture encoding apparatus in FIG. 9 are substantially the same as or similar to the scaling process and the inverse scaling process in the moving picture encoding apparatus in FIG. 1.
  • the encoding unit 300 encodes the input image 11 to generate encoded data.
  • the encoding unit 300 includes a bit length extension unit 309, a subtraction unit 101, a transform / quantization unit 102, an entropy encoding unit 303, an inverse quantization / inverse transform unit 104, an addition unit 105, a loop filter setting unit 106, a loop filter unit 110, a reference image buffer unit 107, a scaling information buffer unit 108, a prediction unit 120, and a motion vector generation unit 130.
  • the bit length extension unit 309 extends the pixel bit length of the input image 11 and supplies it to the subtraction unit 101, the loop filter setting unit 106, and the motion vector generation unit 130. As a result of the operation of the bit length extension unit 309, for example, the pixel bit length applied to the internal processing by the loop filter unit 110, the prediction unit 120, and the like is larger than the original pixel bit length of the input image 11.
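As a minimal illustration (function names are assumptions, not from the patent), the bit length extension can be pictured as a left shift of each input pixel, so that internal processing runs at a precision higher than the input:

```python
def extend_bit_length(pixels, extension_bits):
    """Extend the pixel bit length by a left shift, e.g. 8-bit input pixels
    become 12-bit internal pixels when extension_bits is 4."""
    return [p << extension_bits for p in pixels]

def internal_bit_length(input_bit_length, extension_bits):
    """Pixel bit length used by internal processing after extension."""
    return input_bit_length + extension_bits
```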
  • the bit length extension unit 309 notifies the internal bit length information 37 to the entropy encoding unit 303.
  • the internal bit length information 37 may be, for example, information indicating the extension amount of the pixel bit length by the bit length extension unit 309, or information (for example, a flag) indicating that the extension amount of the pixel bit length is determined in advance between the encoding side and the decoding side.
  • when the extension amount is determined in advance, the bit length extension unit 309 may not notify the entropy encoding unit 303 of the internal bit length information 37.
  • the entropy encoding unit 303 further performs entropy encoding of the internal bit length information 37 as necessary, in addition to the operation of the entropy encoding unit 103 described above.
  • when the entropy encoding unit 303 performs entropy encoding of the internal bit length information 37, the encoded internal bit length information is also included in the encoded data.
  • the moving image encoding apparatus employs the same scaling process / inverse scaling process as the first embodiment while extending the internal pixel bit length compared to the input image. Therefore, the video encoding device according to the present embodiment can keep the pixel bit length applied to the reference image buffer unit small while realizing even more accurate prediction processing, filter processing, and the like by extending the internal pixel bit length.
  • the moving picture decoding apparatus includes a decoding unit 400 and a decoding control unit 240.
  • parts that are the same as those in FIG. 2 are given the same reference numerals, and in the following description, different parts between FIG. 10 and FIG. 2 will be mainly described.
  • the scaling process and the inverse scaling process in the moving picture decoding apparatus in FIG. 10 are substantially the same as or similar to the scaling process and the inverse scaling process in the moving picture encoding apparatus in FIG. 9.
  • the decoding unit 400 generates the output image 26 by decoding the encoded data.
  • the decoding unit 400 includes an entropy decoding unit 401, an inverse quantization / inverse transform unit 202, an addition unit 203, a loop filter unit 210, a reference image buffer unit 204, a scaling information buffer unit 205, a prediction unit 220, and a bit length normalization unit 430.
  • the entropy decoding unit 401 further performs entropy decoding of the encoded internal bit length information as necessary in addition to the operation of the entropy decoding unit 201 described above.
  • the entropy decoding unit 401 notifies the bit length normalization unit 430 of the decoded internal bit length information 47.
  • when the internal bit length information 47 is notified, the bit length normalization unit 430 performs the same operation as the above-described bit length normalization unit 230 while considering the internal bit length information 47 as necessary, and thereby generates the output image 26 normalized to a desired pixel bit length. As an example, the bit length normalization unit 430 checks the pixel bit length of the input image on the encoding side by referring to the internal bit length information 47, and generates the output image 26 normalized to the pixel bit length of the input image.
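A hedged sketch of this normalization step (the exact operation is not reproduced here; a rounding right shift back to the input bit length, with clipping, is one plausible realization):

```python
def normalize_bit_length(pixels, internal_bits, output_bits):
    """Normalize internal-precision pixels to the desired output bit length
    with normal rounding and clipping to the output range."""
    shift = internal_bits - output_bits
    if shift <= 0:
        return [p << -shift for p in pixels]
    half = 1 << (shift - 1)                  # rounding offset
    max_out = (1 << output_bits) - 1
    return [min((p + half) >> shift, max_out) for p in pixels]
```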
  • the moving picture decoding apparatus extends the internal pixel bit length as compared with the input image on the encoding side, and adopts the same scaling process / inverse scaling process as in the first embodiment. Therefore, the video decoding device according to the present embodiment can keep the pixel bit length applied to the reference image buffer unit small while realizing even more accurate prediction processing, filter processing, and the like by extending the internal pixel bit length.
  • the video encoding apparatus according to the third embodiment has substantially similar elements to the video encoding apparatus according to the first embodiment described above, but differs in the details of the scaling process / inverse scaling process. In the following description, details of the scaling process / inverse scaling process according to the present embodiment will be described with reference to FIGS. 5, 6, and 8.
  • the filter processing / scaling processing unit 112 performs a convolution operation (filter operation) on the target pixel group according to the filter coefficient information included in the loop filter information 13 (step S112-1). ).
  • the filter processing / scaling processing unit 112 searches for the maximum value Max and the minimum value Min of the convolution calculation result group of the target pixel group as in the first embodiment (step S112-2).
  • the filter processing / scaling processing unit 112 derives the scaling information 15 of the target pixel group (step S112-3). Specifically, the filter processing / scaling processing unit 112 derives the minimum reference value MinPoint according to Equation (2) and Equation (3). Further, the filter processing / scaling processing unit 112 derives the maximum reference value MaxPoint according to the following calculation (13) using the conditional operator (ternary operator) “?:”.
  • that is, the upper limit of MaxPoint is 2^L − 1.
  • the minimum reference value MinPoint and the maximum reference value MaxPoint derived as described above are supplied to the scaling information buffer unit 108 as scaling information 15.
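The derivation of the two reference values can be sketched as follows (a hedged illustration: the actual Equations (2), (3) and (13) are not reproduced, only the described behaviour of shifting Min and clamping Max with a conditional (ternary) expression):

```python
def derive_scaling_info(values, shift, upper_limit):
    """Derive (MinPoint, MaxPoint) for a target pixel group: MinPoint from an
    arithmetic right shift of Min, MaxPoint clamped to an assumed upper limit."""
    min_v, max_v = min(values), max(values)
    min_point = min_v >> shift                                # shift of Min
    max_point = upper_limit if max_v > upper_limit else max_v  # ternary clamp
    return min_point, max_point
```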
  • filter switching information or information similar thereto is required, and such information may also be included in the scaling information 15.
  • the loop filter information 13 may be notified to an element that performs inverse scaling processing. In the following description, it is assumed that the filter switching information is included in the scaling information 15.
  • the filter processing / scaling processing unit 112 applies scaling to each convolution calculation result according to the derived scaling information 15 (step S112-4). Specifically, the filter processing / scaling processing unit 112 generates each pixel value D [m] of the scaled reference pixel group according to the following formula (14).
  • Dr represents the dynamic range of the scaled reference pixel group as shown in the following equation (16).
  • Offset [Dr] represents a rounding offset value.
  • N represents the bit length of EncTable [Dr].
  • since Formula (14) includes division, it is assumed that the value of EncTable [Dr] is calculated in advance for each value of Dr and stored in a table format, for example.
  • EncTable [Dr] and Offset [Dr] corresponding to the dynamic range Dr are set so that, when B [m] and G [m] described later are rounded to the same number of bits X (X ≤ (K + T) and X ≤ U), the values of B [m] and G [m] after rounding are equal.
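Because the scaling formula contains a division by the dynamic range, the table lookup removes the runtime division. A sketch of that idea, with illustrative (assumed) values for the table precision N and the stored output bit length, replacing the division by Dr with a multiply and shift:

```python
N = 16                                   # assumed bit length of EncTable entries
L_OUT = 8                                # assumed output pixel bit length

def make_enc_table(max_dr):
    """Precompute EncTable[Dr] ~ (2**N * (2**L_OUT - 1)) / Dr so that scaling
    needs no runtime division (Dr = MaxPoint - MinPoint analogue)."""
    table = [0] * (max_dr + 1)
    for dr in range(1, max_dr + 1):
        table[dr] = ((1 << N) * ((1 << L_OUT) - 1) + dr // 2) // dr
    return table

def scale_with_table(values, enc_table):
    """Map a pixel group onto the stored L_OUT-bit range via multiply/shift."""
    lo, hi = min(values), max(values)
    dr = hi - lo
    if dr == 0:
        return [0] * len(values), (lo, dr)
    offset = 1 << (N - 1)                # rounding offset (Offset[Dr] analogue)
    return [((v - lo) * enc_table[dr] + offset) >> N for v in values], (lo, dr)
```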
  • ExpandFlag in FIGS. 11 and 12 to 15 indicates whether or not the scaling process / inverse scaling process described in this embodiment is performed.
  • when ExpandFlag is 0, it indicates that the scaling process / inverse scaling process described in the first or second embodiment is performed.
  • in this case, the scaling amount Q is derived using the calculation (4), and each pixel value D [m] of the scaled reference pixel group is generated using the equation (6).
  • when ExpandFlag is 1, it indicates that the scaling process / inverse scaling process described in this embodiment is performed.
  • MaxPoint and MinPoint may be expressed by (K + T) bits. That is, the upper limit of the values of MaxPoint and MinPoint is 2^(K+T) − 1. Therefore, Dr calculated by Expression (16) is expressed by (K + T) bits.
  • Offset [Dr] may be fixed to 1 << (K + T − 1).
  • EncTable [Dr] for which Offset [Dr] is different from 1 << (K + T − 1) is not used.
  • Dr that is not used is integrated with the contents one line below.
  • the scaled reference pixel group generated as described above is supplied to the reference image buffer unit 107.
  • the scaling processing unit 113 searches for the maximum value Max and the minimum value Min of the target pixel group as in the first embodiment (step S113-1).
  • the scaling processing unit 113 derives the scaling information 15 of the target pixel group (step S113-2).
  • the scaling processing unit 113 derives the minimum reference value MinPoint according to Equation (2).
  • the scaling processing unit 113 derives the maximum reference value MaxPoint according to Equation (13).
  • here, the case of K = 0 is handled. That is, S is derived according to Equation (8).
  • the minimum reference value MinPoint and the maximum reference value MaxPoint derived as described above are supplied to the scaling information buffer unit 108 as scaling information 15.
  • the scaling processing unit 113 applies scaling to each pixel value of the target pixel group in accordance with the derived scaling information 15 (step S113-3). Specifically, the scaling processing unit 113 generates each pixel value D [m] of the scaled reference pixel group according to the following formula (17).
  • as in the filter processing / scaling processing unit 112, the value of EncTable [Dr] corresponding to the dynamic range Dr is set so that, when P [m] and G [m] described later are rounded to the same number of bits X (X ≤ T and X ≤ U), the values of P [m] and G [m] after rounding are equal.
  • MaxPoint and MinPoint may be expressed by T bits. That is, the upper limit of the values of MaxPoint and MinPoint is 2^T − 1. Therefore, Dr calculated by Expression (16) is expressed by T bits.
  • the inverse scaling processing unit 121 obtains a desired reference pixel group (that is, one necessary for generating a predicted image) and corresponding scaling information from the reference image buffer unit 107 and the scaling information buffer unit 108, respectively (step S121-1). Specifically, the inverse scaling processing unit 121 acquires each pixel value D [m] of the scaled reference pixel group, the minimum reference value MinPoint, the maximum reference value MaxPoint, and filter switching information. The inverse scaling processing unit 121 refers to the filter switching information, sets a predetermined value corresponding to the filter processing to the parameter K if the filter application is valid, and sets 0 to the parameter K if the filter application is invalid.
  • the inverse scaling processing unit 121 applies inverse scaling for extending the pixel bit length to the reference pixel group according to the scaling information (step S121-2). Specifically, if the relationship K + T + L ≤ U holds, the inverse scaling processing unit 121 applies inverse scaling according to the following equation (18).
  • DecTable [Dr] corresponding to the dynamic range Dr is set to correspond to Dr used in the filter processing / scaling processing unit 112 or the scaling processing unit 113.
  • MaxPoint and MinPoint may be expressed by (K + T) bits. That is, the upper limit of the values of MaxPoint and MinPoint is 2^(K+T) − 1. Therefore, Dr calculated by Expression (16) is expressed by (K + T) bits.
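Equation (18) is likewise realized with a precomputed table. The following sketch (assumed table contents and precision, mirroring the encoder-side EncTable sketch) shows a DecTable-based inverse scaling that extends stored values back to the internal range:

```python
N = 16                                    # assumed table precision
L_STORED = 8                              # assumed stored pixel bit length

def make_dec_table(max_dr):
    """Precompute DecTable[Dr] ~ (Dr * 2**N) / (2**L_STORED - 1), the inverse
    of the encoder-side EncTable, indexed by the same dynamic range Dr."""
    denom = (1 << L_STORED) - 1
    return [((dr << N) + denom // 2) // denom for dr in range(max_dr + 1)]

def inverse_scale_with_table(scaled, min_base, dr, dec_table):
    """Extend stored L_STORED-bit values back onto [min_base, min_base + dr]."""
    offset = 1 << (N - 1)                 # rounding offset
    return [((d * dec_table[dr] + offset) >> N) + min_base for d in scaled]
```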
  • adaptive scaling / inverse scaling processing based on the distribution of the target pixel group is performed.
  • it is guaranteed that the value B [m] or P [m] before scaling, rounded to L bits, and the value G [m] after inverse scaling, rounded to L bits, are the same value.
  • when the value after inverse scaling obtained by the processing of this embodiment is used, the value G [m] has high accuracy.
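This round-trip property can be demonstrated with a simplified check (an illustration only: it uses plain truncation, whereas the embodiment's tables achieve the analogous property under normal rounding to L bits):

```python
def scale_trunc(b, q):
    """Simplified scaling: drop the q low-order bits."""
    return b >> q

def inverse_scale_trunc(d, q):
    """Simplified inverse scaling: restore the original magnitude."""
    return d << q

def round_trip_preserves_high_bits(b, q):
    """The value before scaling and the value after inverse scaling agree
    once both are reduced to the same precision."""
    g = inverse_scale_trunc(scale_trunc(b, q), q)
    return (b >> q) == (g >> q)
```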
  • the moving picture encoding apparatus performs the scaling process and the inverse scaling process before and after the reference image buffer unit, respectively, so that the pixel bit length applied to the reference image buffer unit is smaller than the pixel bit length applied to other internal processing (prediction processing, filter processing, etc.). Therefore, the video encoding device according to the present embodiment can keep the pixel bit length applied to the reference image buffer unit small while realizing highly accurate prediction processing, filter processing, and the like by applying a larger pixel bit length.
  • the moving picture decoding apparatus can perform the same or similar scaling process / inverse scaling process as the moving picture encoding apparatus according to the present embodiment, and can obtain the same effect.
  • the moving image encoding apparatus includes an encoding unit 500 and an encoding control unit 140.
  • the encoding unit 500 encodes the input image 11 to generate encoded data.
  • the encoding control unit 140 controls various elements in the encoding unit 500. For example, the encoding control unit 140 controls a loop filter setting unit 106, a prediction unit 520, and the like which will be described later.
  • the encoding unit 500 includes a bit length extension unit 309, a subtraction unit 101, a transform / quantization unit 102, an entropy encoding unit 303, an inverse quantization / inverse transform unit 104, an addition unit 105, a loop filter setting unit 106, a reference image buffer unit 507, a loop filter unit 510, a prediction unit 520, and a motion vector generation unit 130.
  • the bit length extension unit 309 is the same as that already described in the second embodiment.
  • the part in which the pixel bit length in the fourth embodiment is extended does not have to be the entire encoding unit and decoding unit as shown in FIGS. 16 and 17. It is sufficient that the pixel bit length is extended at least in the loop filter unit 510 (610: decoding device) and the prediction unit 520 (620: decoding device) described in this embodiment.
  • the subtraction unit 101 subtracts the prediction image from the prediction unit 520 from the input image 11 to obtain a prediction error.
  • the transform / quantization unit 102 performs transform (for example, discrete cosine transform (DCT)) and quantization on the prediction error from the subtraction unit 101 to generate quantized information on transform coefficients (hereinafter simply referred to as quantized transform coefficients).
  • the entropy encoding unit 303 is the same as that already described in the second embodiment.
  • the inverse quantization / inverse transform unit 104 performs inverse quantization and inverse transform (for example, inverse discrete cosine transform (IDCT)) on the quantized transform coefficient from the transform / quantization unit 102, thereby restoring the prediction error.
  • the adding unit 105 adds the prediction error restored by the inverse quantization / inverse transform unit 104 and the corresponding prediction image from the prediction unit 520 to generate the local decoded image 12.
  • the loop filter setting unit 106 sets the loop filter information 13 based on the input image 11 and the corresponding local decoded image 12 from the addition unit 105 and notifies the loop filter unit 510 and the entropy encoding unit 303 of the loop filter information 13.
  • the loop filter information 13 includes at least filter coefficient information and filter switching information.
  • the filter coefficient information includes information indicating the filter coefficient.
  • the filter coefficient information may further include information indicating an offset coefficient described later.
  • the filter switching information includes information indicating validity / invalidity of filter application.
  • the loop filter unit 510 performs filter processing or bypass processing that does not pass the filter processing on the target pixel group in the local decoded image 12 from the addition unit 105 according to the loop filter information 13 from the loop filter setting unit 106. Do. Then, the loop filter unit 510 performs scaling processing described later on the filter processing result group or the bypass processing result group (that is, the target pixel group itself) to reduce (or maintain) the pixel bit length.
  • the loop filter unit 510 supplies the scaled reference pixel group 14 as the scaling processing result group and the scaling information 15 regarding the scaling processing to the reference image buffer unit 507. Details of the loop filter unit 510 will be described later.
  • the reference image buffer unit 507 stores the scaled reference pixel group 14 from the loop filter unit 510 and the scaling information 15 related to the scaling process.
  • the reference pixel group 14 and the scaling information 15 accumulated in the reference image buffer unit 507 are read by the prediction unit 520 or the motion vector generation unit 130 as necessary. Details of the reference image buffer unit 507 will be described later.
  • the motion vector generation unit 130 reads the scaled reference pixel group and the scaling information from the reference image buffer unit 507 as necessary.
  • the motion vector generation unit 130 applies inverse scaling that extends (or maintains) the pixel bit length to the scaled reference pixel group according to the scaling information to restore the reference image.
  • the motion vector generation unit 130 generates motion vector information based on the input image 11 and the restored reference image.
  • the motion vector generation unit 130 notifies the prediction unit 520 and the entropy encoding unit 303 of the motion vector information. Details of the inverse scaling process will be described later.
  • the prediction unit 520 reads the scaled reference pixel group and scaling information from the reference image buffer unit 507 as necessary.
  • the prediction unit 520 restores the reference image by applying inverse scaling that extends the pixel bit length to the scaled reference pixel group according to the scaling information.
  • the prediction unit 520 generates a prediction image based on the motion vector information from the motion vector generation unit 130 and the restored reference image.
  • the prediction unit 520 supplies the predicted image to the subtraction unit 101 and the addition unit 105.
  • the loop filter unit 510 will be described on the assumption that it has the same configuration as that of FIG. 3 already described in the first embodiment.
  • the loop filter unit 510 does not have to have the same configuration as that of the first embodiment.
  • a filter process, or a bypass process that does not go through the filter process, is performed on the target pixel group in the local decoded image 12 from the addition unit 105.
  • a configuration may be adopted in which scaling processing is performed by the scaling processing unit 113 described later on the target pixel group of the filter processing result group or the bypass processing result group.
  • the filter processing unit 1001 is not limited to a filter for which application or non-application of a filter is determined on a block basis as shown in FIG.
  • the filter processing unit 1001 may include a deblocking filter that applies filter processing to block boundaries.
  • the operation of the filter processing / scaling processing unit 112 and the operation of the scaling processing unit 113 are different from those in the first embodiment. Details of the operation of the filter processing / scaling processing unit 112 will be described below with reference to FIG.
  • the filter processing / scaling processing unit 112 performs a convolution operation (filter operation) on the target pixel group in accordance with the filter coefficient information included in the loop filter information 13 (step S112-1). Specifically, when the filter coefficient is represented by F [n], the pixel value of the target pixel group is represented by P [m], and the convolution calculation result is represented by B [m], the filter processing / scaling processing unit 112 performs the convolution operation according to the following Equation (19).
  • in Equation (19), O represents an offset coefficient.
  • the offset coefficient can be referred to through the filter coefficient information.
  • assuming that the unit of the scaling processing is set to 16 pixels, the sum of the filter coefficients F [n] is described as being designed to be substantially equal to 2^K.
  • the 16-pixel unit is, for example, a 4 ⁇ 4 pixel block unit, but there is no restriction on the shape.
  • the pixel bit length of the target pixel group is 8 bits, and the number of extension bits at the time of input to the filter / scaling processing unit is 4 bits. Note that the pixel bit length and the number of extension bits of the target pixel group are not limited to these.
  • the filter processing / scaling processing unit 112 performs such a convolution operation on each pixel of the target pixel group to obtain a convolution operation result group.
  • the filter processing / scaling processing unit 112 searches for the maximum value Max and the minimum value Min of the convolution calculation result group of the target pixel group (step S112-2).
  • the upper limit of the maximum value Max is, for example, 2^(K+12) − 1. That is, if the maximum value Max is larger than 2^(K+12) − 1, the maximum value Max is treated as 2^(K+12) − 1.
  • the filter processing / scaling processing unit 112 derives the scaling information 15 of the target pixel group (step S112-3). Specifically, the filter processing / scaling processing unit 112 derives the minimum reference value MinPoint by arithmetically shifting the minimum value Min to the right by (K + 6) bits according to the following formula (20).
  • MinPoint is a 6-bit value.
  • the filter processing / scaling processing unit 112 derives the scaling amount Q by executing the following calculation (21).
  • the scaling amount Q can take any integer value from 0 to 4.
  • the minimum reference value MinPoint and the scaling amount Q derived as described above are supplied to the reference image buffer unit 507 as scaling information 15.
  • the filter processing / scaling processing unit 112 applies scaling to each convolution operation result according to the derived scaling information 15 (step S112-4). Specifically, the filter processing / scaling processing unit 112 generates each pixel value D [m] of the scaled reference pixel group according to the following calculation (22) and calculation (23).
  • the scaled reference pixel group is generated by executing the calculation (22).
  • for the first pixel B [0] of the target pixel group, the scaled pixel value D [0] is set to 1 when B [0] is smaller than 2^(K+3), so that D [0] does not become 0.
  • rounding is performed by normal rounding.
  • D [0] is limited to a value from 1 to 255, but D [m] in other cases is limited to a value from 0 to 255. In this example, 0 does not appear in D [0], but this value may be other than 0. For example, there is a method of using 255. Further, regarding the limitation of the value of D [m], for example, a method may be used in which the luminance signal is limited to a range of 16 to 235 and the color difference signal is limited to a value of 16 to 240. In this case, the value of D [0] does not need to be specifically limited.
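The clipping rules above can be summarized in a small helper (illustrative; the function and parameter names are assumptions):

```python
def clip_pixel(value, index, video_range=False, is_chroma=False):
    """Clip a scaled pixel value D[index]. By default D[0] is limited to 1..255
    (so that 0 does not appear in D[0]) and the other pixels to 0..255; when a
    video range is used instead (luma 16-235, chroma 16-240), D[0] needs no
    special limit."""
    if video_range:
        lo, hi = (16, 240) if is_chroma else (16, 235)
    else:
        lo, hi = (1, 255) if index == 0 else (0, 255)
    return max(lo, min(hi, value))
```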
  • the scaled reference pixel group generated as described above is supplied to the reference image buffer unit 507.
  • (K + 12) bits that are the pixel bit length of the filter processing result group of the target pixel group are reduced to 8 bits after the filter processing.
  • each pixel value D [m] of the scaled reference pixel group is generated by shifting right by Q + K bits.
  • the scaled reference pixel group generated as described above is supplied to the reference image buffer unit 507.
  • (K + 12) bits that are the pixel bit length of the filter processing result group of the target pixel group are reduced to 7 bits after the filter processing.
  • the unit of scaling processing is 16 pixels; the 16-pixel unit is, for example, a 4 × 4 pixel block, but there is no restriction on its shape.
  • the pixel bit length of the target pixel group is 8 bits, and the number of extension bits at the time of input to the scaling processing unit is 4 bits.
  • the scaling processing unit 113 generates scaling information for the target pixel group and a scaled reference pixel group.
  • the scaling processing unit 113 searches for the maximum value Max and the minimum value Min of the target pixel group (step S113-1).
  • the upper limit of the maximum value Max is set to 2^12 − 1, for example. That is, if the maximum value Max is larger than 2^12 − 1, it is handled as 2^12 − 1.
  • the scaling processing unit 113 derives the scaling amount Q by executing the calculation (25).
  • the minimum reference value MinPoint and the scaling amount Q derived as described above are supplied to the reference image buffer unit 507 as scaling information 15.
  • the scaling processing unit 113 applies scaling to each pixel value of the target pixel group in accordance with the derived scaling information 15 (step S113-3). Specifically, the scaling processing unit 113 generates each pixel value D [m] of the scaled reference pixel group according to the following formula (26).
  • a scaled reference pixel group is generated by executing the calculation (26).
  • for the first pixel B [0] of the target pixel group, D [0] is set to 1 when B [0] is a value smaller than 8, so that the scaled pixel value D [0] does not become 0.
  • rounding is performed by normal rounding.
  • D [0] is limited to a value from 1 to 255, while D [m] for the other pixels is limited to a value from 0 to 255. In this example, 0 does not appear in D [0], but the excluded value may be other than 0; for example, there is a method of using 255. Further, regarding the limitation of the value of D [m], a method may be used in which, for example, the luminance signal is limited to the range of 16 to 235 and the color difference signal to the range of 16 to 240. In that case, the value of D [0] does not need to be specifically limited.
  • the scaled reference pixel group generated as described above is supplied to the reference image buffer unit 507.
  • by the operation of the scaling processing unit 113, the 12-bit pixel bit length of the target pixel group is reduced to 8 bits.
  • the minimum reference value MinPoint is arithmetically shifted left by 6 bits to return it to the original bit length, and this value is subtracted from each value of the target pixel group.
  • the pixel values D [m] of the scaled reference pixel group are then generated by arithmetically shifting right by Q bits.
  • the scaled reference pixel group generated as described above is supplied to the reference image buffer unit 507.
  • the 12-bit pixel bit length of the target pixel group is thus reduced to 7 bits.
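The flow of steps S113-1 through S113-3 above can be sketched as follows. Because calculations (25) and (26) are not reproduced in this text, the derivation of Q and the rounding below are plausible reconstructions from the surrounding description (12-bit pixels, MinPoint = Min >> 6, Q at most 4, a 7-bit budget for the scaled values when Q < 4), not the exact formulas of the embodiment.

```python
def scale_block(B, pix_bits=12, shift=6, q_max=4):
    # Step S113-1: search the maximum and minimum of the target pixel group,
    # clamping Max at 2^12 - 1 as described in the text.
    max_v = min(max(B), (1 << pix_bits) - 1)
    base = (min(B) >> shift) << shift      # MinPoint << 6
    min_point = base >> shift
    # Step S113-2 (assumed form of calculation (25)): the smallest Q that
    # fits the offset range into 7 bits; Q = 4 uses the full 8-bit range.
    q = 0
    while q < q_max and (max_v - base) >> q > 127:
        q += 1
    # Step S113-3 (assumed form of calculation (26)): subtract MinPoint << 6,
    # shift right by Q with round-to-nearest, and keep D[0] nonzero.
    D = []
    for m, b in enumerate(B):
        d = (b - base + (1 << q >> 1)) >> q
        d = max(0, min(255, d))
        if m == 0 and d == 0:
            d = 1                          # D[0] must not become 0
        D.append(d)
    return D, min_point, q
```

For a flat block such as sixteen 12-bit values of 100, this yields Q = 0 and small offsets relative to MinPoint << 6; a full-range block forces Q = 4.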
  • a scaling block (or adaptive scaling block) is a representation that includes scaling information and a group of scaled pixels.
  • description elements for expressing a scaling block with a fixed length will be described; the order of writing the description elements of the scaling block to the reference image buffer, and the order of reading them from the reference image buffer, are arbitrary.
  • the description of the scaling information can be switched by limiting the scaled leading pixel value so that it does not take a specific value.
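The sentinel rule above can be illustrated with a fixed-length record. In the sketch below, one 16-pixel scaling block always occupies 16 bytes: when Q = 4 the sixteen 8-bit pixels are stored directly with D[0] kept nonzero, and otherwise a zero marker byte is followed by a 2-bit Q, a 6-bit MinPoint, and sixteen 7-bit pixels (8 + 2 + 6 + 112 = 128 bits). This bit layout is only an assumed arrangement consistent with the bit budgets stated in the text, not the normative description element format.

```python
def pack_scaling_block(D, min_point, q):
    # Q == 4: sixteen 8-bit pixels; the nonzero first byte is the sentinel.
    if q == 4:
        assert D[0] != 0
        return bytes(D)
    # Q < 4: zero marker, then 2-bit Q, 6-bit MinPoint, sixteen 7-bit pixels.
    bits = (q << 6) | min_point
    for d in D:
        bits = (bits << 7) | d
    return bytes([0]) + bits.to_bytes(15, "big")

def unpack_scaling_block(rec):
    if rec[0] != 0:                        # D[0] != 0 implies Q = 4
        return list(rec), None, 4
    bits = int.from_bytes(rec[1:], "big")  # 120 payload bits
    q = (bits >> 118) & 0x3
    min_point = (bits >> 112) & 0x3F
    D = [(bits >> (7 * (15 - m))) & 0x7F for m in range(16)]
    return D, min_point, q
```

Because every record is exactly 16 bytes, the buffer can be addressed in fixed-length byte units regardless of which form of the scaling information is present.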
  • the prediction unit 520 includes an inverse scaling processing unit 121 and a predicted image generation unit 122.
  • the inverse scaling processing unit 121 restores the reference image by applying inverse scaling for extending the pixel bit length to the scaled reference pixel group according to the scaling information.
  • the predicted image generation unit 122 generates a predicted image based on the motion vector information and the restored reference image.
  • the inverse scaling processing unit 121 obtains a desired scaled reference pixel group (that is, one necessary for generating a predicted image) and the corresponding scaling information from the reference image buffer unit 507 (step S121-1). Specifically, the inverse scaling processing unit 121 acquires each pixel value D [m] of the scaled reference pixel group, the minimum reference value MinPoint, the scaling amount Q, and the filter switching information. If the first pixel value D [0] of the scaled reference pixel group is other than 0, the scaling amount Q is set to 4; if D [0] is 0, the minimum reference value MinPoint and the scaling amount Q are loaded from the described data.
  • the inverse scaling processing unit 121 applies inverse scaling for extending the pixel bit length to the scaled reference pixel group according to the scaling information (step S121-2).
  • when the scaling amount Q is 4, the pixel values of the scaled reference pixel group are arithmetically shifted left by 4 bits using the calculation (28), and the inverse scaling processing is performed.
  • when the scaling amount Q is less than 4, the scaled reference pixels are arithmetically shifted left by Q bits using the calculation (29), and the minimum reference value MinPoint, arithmetically shifted left by 6 bits, is added back to restore the values.
  • G [m] represents each pixel value of the restored reference pixel group.
  • the pixel bit length (here, 8 bits) of the scaled reference pixel group is expanded to 12 bits. Note that the value of G [m] is limited to a range that can be expressed by 12 bits.
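Under the same assumptions as before, the inverse scaling of step S121-2 can be sketched as below; the two branches mirror the assumed forms of calculations (28) and (29), and G[m] is clipped to the 12-bit range as the text requires.

```python
def inverse_scale_block(D, min_point, q, pix_bits=12, shift=6):
    G = []
    for d in D:
        if q == 4:
            g = d << 4                             # assumed form of (28)
        else:
            g = (d << q) + (min_point << shift)    # assumed form of (29)
        G.append(max(0, min((1 << pix_bits) - 1, g)))  # keep within 12 bits
    return G
```

Applied to a block scaled with Q = 0 and MinPoint = 1, the offsets are shifted back and MinPoint << 6 is added, restoring the original values exactly.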
  • adaptive scaling / inverse scaling processing based on the difference between the maximum value and the minimum value of the target pixel group is performed.
  • consider, as a comparison, the value D [m] obtained by rounding the value B [m] before scaling to 8 bits by round-off, which corresponds to D [m] in the case of fixed 4-bit scaling.
  • the value G [m] after inverse scaling is then obtained by expanding this rounded 8-bit value back to the original bit length.
  • compared with this, the value G [m] after inverse scaling obtained by the processing of this embodiment has higher accuracy.
  • the pixel value 0 and the pixel value 255 are reserved as timing references in various standards and may not be used as pixel values; therefore, limiting D [0] within encoding and decoding causes no substantial loss.
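The accuracy comparison can be checked numerically. The sketch below contrasts a fixed 4-bit round-off round trip with an adaptive round trip in the style of this embodiment (the rule for choosing Q is the same assumption as above); on a block with a small dynamic range the adaptive path reproduces the 12-bit values exactly, while fixed 4-bit scaling leaves a rounding error of up to 8.

```python
def fixed_roundtrip(b):
    # Fixed 4-bit scaling: round off to 8 bits, then expand back.
    return min(255, (b + 8) >> 4) << 4

def adaptive_roundtrip(B, shift=6):
    # Adaptive scaling / inverse scaling round trip (assumed Q rule).
    base = (min(B) >> shift) << shift      # MinPoint << 6
    q = 0
    while q < 4 and (max(B) - base) >> q > 127:
        q += 1
    out = []
    for b in B:
        if q == 4:
            d = min(255, (b + 8) >> 4)     # MinPoint unused at maximum Q
            out.append(d << 4)
        else:
            d = min(127, (b - base + (1 << q >> 1)) >> q)
            out.append((d << q) + base)
    return out

B = [820 + i for i in range(16)]           # low-dynamic-range 12-bit block
err_fixed = max(abs(b - fixed_roundtrip(b)) for b in B)
err_adaptive = max(abs(b - a) for b, a in zip(B, adaptive_roundtrip(B)))
```

The flat block fits entirely within the 7-bit offset budget, so Q = 0 and the adaptive round trip is lossless, while the fixed path quantizes away the lower 4 bits.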
  • in the above description, the pixel value of the 8-bit input image is expanded by 4 bits, the target pixel group for the scaling and inverse scaling processing is 16 pixels, the maximum value of the scaling amount Q is 4, the shift amount at the time of generating the minimum reference value MinPoint is 6, the pixel value scaled with the maximum scaling amount is 8 bits, and the scaled pixel value for the other scaling amounts is 7 bits; however, configurations with other values are also possible.
  • for example, it is also possible to adopt a configuration in which the pixel value of the 10-bit input image is not expanded, the target pixel group subjected to the scaling and inverse scaling processing is set to 16-pixel units, the maximum value of the scaling amount Q is set to 2, the shift amount at the time of generating the minimum reference value MinPoint is set to 7, the bit length of the pixel value scaled with the maximum scaling amount is set to 8 bits, and the bit length of the pixel value scaled with the other scaling amounts is set to 7 bits.
  • as described above, the video encoding apparatus according to the present embodiment performs the scaling process and the inverse scaling process before and after the reference image buffer unit, respectively, so that the pixel bit length applied to the reference image buffer unit is smaller than the pixel bit length applied to other internal processing (prediction processing, filter processing, etc.). Therefore, the video encoding device according to the present embodiment can keep the pixel bit length applied to the reference image buffer unit small while realizing highly accurate prediction processing, filter processing, and the like by applying a larger pixel bit length. Further, since the scaling information and the scaled reference pixel group data can be managed with a fixed length in units of bytes, data access in the reference image buffer is facilitated.
  • the moving picture decoding apparatus includes a decoding unit 600 and a decoding control unit 240.
  • the decoding unit 600 generates the output image 26 by decoding the encoded data.
  • the decoding control unit 240 controls various elements in the decoding unit 600.
  • the decoding control unit 240 controls the prediction unit 620 and the like described later. Note that the scaling process and the inverse scaling process in the moving picture decoding apparatus in FIG. 17 are substantially the same as or similar to the scaling process and the inverse scaling process in the moving picture encoding apparatus in FIG.
  • the decoding unit 600 includes an entropy decoding unit 401, an inverse quantization / inverse transformation unit 202, an addition unit 203, a loop filter unit 610, a reference image buffer unit 604, a prediction unit 620, and a bit length normalization unit 430.
  • the entropy decoding unit 401 is the same as that already described in the second embodiment.
  • the inverse quantization / inverse transformation unit 202 and the addition unit 203 are substantially the same as or similar to the inverse quantization / inverse transformation unit 104 and the addition unit 105 described above. That is, the inverse quantization / inverse transform unit 202 performs inverse quantization and inverse transform (for example, inverse discrete cosine transform (IDCT)) on the quantized transform coefficient from the entropy decoding unit 401 to restore the prediction error. The adding unit 203 adds the prediction error restored by the inverse quantization / inverse transform unit 202 and the corresponding prediction image from the prediction unit 620 to generate a decoded image 22.
  • the loop filter unit 610 is an element that is substantially the same as or similar to the loop filter unit 510 described above. That is, the loop filter unit 610 performs a filtering process, or a bypass process that skips the filtering, on the target pixel group in the decoded image 22 from the adding unit 203, according to the loop filter information 23 from the entropy decoding unit 401. Further, the loop filter unit 610 reduces the pixel bit length by performing the above-described scaling processing on the filter processing result group or the bypass processing result group (that is, the target pixel group). The loop filter unit 610 supplies the scaled reference pixel group 24 and the scaling information 25 related to the scaling process to the reference image buffer unit 604 as the scaling processing result group.
  • the reference image buffer unit 604 stores the scaled reference pixel group 24 and the scaling information 25 from the loop filter unit 610.
  • the reference pixel group 24 and the scaling information 25 accumulated in the reference image buffer unit 604 are read by the prediction unit 620 or the bit length normalization unit 430 as necessary.
  • the bit length normalization unit 430 reads out a desired reference pixel group (ie, necessary for generating the output image 26) and corresponding scaling information according to the display order.
  • the prediction unit 620 is an element that is substantially the same as or similar to the prediction unit 520 described above. That is, the prediction unit 620 reads the scaled reference pixel group and scaling information from the reference image buffer unit 604 as necessary. The prediction unit 620 restores the reference image by applying inverse scaling that extends the pixel bit length to the scaled reference pixel group according to the scaling information. The prediction unit 620 generates a prediction image based on the motion vector information from the entropy decoding unit 401 and the restored reference image. The prediction unit 620 supplies the predicted image to the addition unit 203.
  • the bit length normalization unit 430 reads the scaled reference pixel group and the scaling information from the reference image buffer unit 604 as necessary.
  • the bit length normalization unit 430 applies inverse scaling that extends the pixel bit length to the scaled reference pixel group according to the scaling information, and normalizes the result to a desired pixel bit length U (here, the pixel bit length U related to the operation of the bit length normalization unit 430 does not necessarily match the pixel bit length related to the operation of the prediction unit 620).
  • the bit length normalization unit 430 supplies the output image 26 to the outside. Note that the pixel bit length of the output image 26 may be the same as or different from the pixel bit length of the input image 11 in the moving image encoding device, for example.
  • depending on the configuration, the bit length normalization unit 430 can be omitted.
  • the bit length normalization unit 430 may perform bit length normalization of the output image 26 according to the internal pixel bit length information 47 from the entropy decoding unit 401.
  • as described above, the moving picture decoding apparatus according to the present embodiment makes the pixel bit length applied to the reference image buffer unit smaller than the pixel bit length applied to other internal processing (prediction processing, filter processing, etc.), and the reference pixel group and the scaling information can be stored in the reference image buffer unit at the same time. Therefore, the video decoding device according to the present embodiment can keep the pixel bit length applied to the reference image buffer unit small while realizing highly accurate prediction processing, filter processing, and the like by applying a larger pixel bit length. Further, since the scaling information and the scaled reference pixel group data can be managed with a fixed length in units of bytes, data access in the reference image buffer is facilitated.
  • the moving image encoding apparatus includes an encoding unit 700 and an encoding control unit 140.
  • the encoding unit 700 encodes the input image 11 to generate encoded data.
  • the encoding control unit 140 controls various elements in the encoding unit 700. For example, the encoding control unit 140 controls the prediction unit 720 and the like.
  • the encoding unit 700 includes a bit length extension unit 309, a subtraction unit 101, a transform / quantization unit 102, an entropy encoding unit 703, an inverse quantization / inverse transform unit 104, an addition unit 105, a reference image buffer unit 707, a loop filter unit 710, a pixel accuracy control unit 721, a prediction unit 720, and a motion vector generation unit 730.
  • the bit length extension unit 309 is the same as that already described in the second embodiment. However, the part in which the pixel bit length is extended in the fifth embodiment does not have to be the entire encoding unit and decoding unit as shown in FIGS. 20 and 21; in the present embodiment, it suffices if the pixel bit length is extended up to the output of the loop filter units 710 and 810 and the input of the prediction units 720 and 820 described later.
  • the subtraction unit 101 subtracts the prediction image from the prediction unit 720 from the input image 11 to obtain a prediction error.
  • the transform / quantization unit 102 performs transform (for example, discrete cosine transform (DCT)) and quantization on the prediction error from the subtraction unit 101, and generates quantized transform coefficient information (hereinafter simply referred to as quantized transform coefficients).
  • the entropy encoding unit 703 performs entropy encoding on the quantized transform coefficient from the transform / quantization unit 102, the motion vector information from the motion vector generation unit 730, and the pixel bit length extension information. Note that the entropy encoding unit 703 may further entropy encode information other than these (for example, prediction mode information).
  • the type of entropy encoding is, for example, variable length encoding or arithmetic encoding.
  • the entropy encoding unit 703 outputs encoded data obtained by entropy encoding to the outside.
  • the inverse quantization / inverse transform unit 104 performs inverse quantization and inverse transform (for example, inverse discrete cosine transform (IDCT)) on the quantized transform coefficient from the transform / quantization unit 102, thereby restoring the prediction error.
  • the adding unit 105 adds the prediction error restored by the inverse quantization / inverse transform unit 104 and the corresponding prediction image from the prediction unit 720 to generate the local decoded image 12.
  • the loop filter unit 710 receives the local decoded image 12 from the adding unit 105, performs a loop filter process such as a deblocking process or an image restoration process on the local decoded image 12, and generates a decoded image signal.
  • the reference image buffer unit 707 stores the decoded image signal from the loop filter unit 710. The detailed operation of the reference image buffer unit 707 will be described later with reference to FIGS. 23 and 24.
  • the pixel accuracy control unit 721 receives a decoded image signal used for the prediction processing (hereinafter referred to as a reference image signal) among the decoded image signals stored in the reference image buffer unit 707, and the pixel bit length extension information from the bit length extension unit 309.
  • the pixel accuracy control unit 721 performs processing for controlling the degree of deterioration in pixel accuracy caused by scaling. Detailed operation of the pixel accuracy control unit 721 will be described later with reference to FIG.
  • the motion vector generation unit 730 receives the image signal from the bit length extension unit 309 and the reference image signal from the pixel accuracy control unit 721, calculates a motion vector, and generates motion vector information.
  • the motion vector generation unit 730 notifies the motion vector information to the prediction unit 720 and the entropy encoding unit 703.
  • the unit of pixel accuracy control processing is 16 pixels.
  • the 16-pixel unit is, for example, a 4 ⁇ 4 pixel block unit, but there is no restriction on the shape.
  • the pixel bit length of the target pixel group is 8 bits
  • the number of pixel extension bits is 4 bits. Note that the pixel bit length and the pixel extension bit number of the target pixel group are not limited to these specific bit numbers.
  • the pixel accuracy control unit 721 searches for the maximum value Max and the minimum value Min of the pixel values of the target pixel group (step S712-1).
  • the upper limit of the maximum value Max is set to 2^12 − 1, for example. That is, if the maximum value Max exceeds 2^12 − 1, it is handled as 2^12 − 1.
  • alternatively, the upper limit of the maximum value Max is, for example, 255 × 2^4. In this case, if the maximum value Max exceeds 255 × 2^4, it is handled as 255 × 2^4.
  • the pixel accuracy control unit 721 derives the scaling amount Q of the target pixel group (step S712-2). Specifically, the scaling amount Q is derived by executing the following calculation (30).
  • the scaling amount Q can take any integer value from 0 to 4.
  • the pixel accuracy control unit 721 applies scaling to each pixel value according to the derived scaling amount Q (step S712-3). Specifically, the pixel accuracy control unit 721 generates each pixel value G [m] of the reference pixel group whose pixel accuracy is controlled according to the following calculations (31), (32), and (33).
  • the scaled reference pixel group is generated by executing the calculation (31). At this time, for the first pixel P [0] of the target pixel group, G [0] is set to 16 when P [0] is a value smaller than 8, so that the pixel value G [0] whose pixel accuracy is controlled does not become 0. Otherwise, after adding 8, zero padding is performed on the lower 4 bits.
  • when the scaling amount Q is 0, the input pixel value is used as the output value as it is, as in the following calculation (32).
  • when the scaling amount Q is a value other than the above, the offset value (1 << (Q − 1)) is added and then the lower Q bits are zero-padded, according to the following calculation (33).
  • G [0] is limited to a value from 16 to 255 × 2^4 when the scaling amount Q is 4; otherwise, it is limited to a value from 0 to 255 × 2^4.
  • since G [0] is set to 16 when P [0] is smaller than 8, 0 does not appear in G [0].
  • this excluded value may be other than 0.
  • for example, there is also a method of limiting the value of G [0] to at most 254 × 2^4.
  • regarding the limitation of the pixel value of the input image signal, for example, a method may be used in which the luminance signal is limited to the range from 16 × 2^4 to 235 × 2^4 and the color difference signal is limited to a value from 16 × 2^4 to 240 × 2^4. In this case, it is not necessary to limit the value of G [0] when the scaling amount Q is 4.
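Steps S712-1 through S712-3 can be sketched as follows. Calculation (30) is not reproduced in the text, so the Q rule below (fit the offset range above MinPoint << 6 into 7 bits, as in the fourth embodiment) is an assumption; the three branches follow the described forms of calculations (31), (32), and (33), including the G[0] handling and the clipping limits.

```python
def pixel_accuracy_control(P, shift=6):
    # Step S712-1: search Max and Min, clamping Max at 255 x 2^4.
    max_v = min(max(P), 255 << 4)
    base = (min(P) >> shift) << shift
    # Step S712-2 (assumed form of calculation (30)): derive Q in 0..4.
    q = 0
    while q < 4 and (max_v - base) >> q > 127:
        q += 1
    # Step S712-3: zero-pad the lower bits that scaling would discard.
    G = []
    for m, p in enumerate(P):
        if q == 4:
            if m == 0 and p < 8:
                g = 16                       # keep G[0] away from 0
            else:
                g = ((p + 8) >> 4) << 4      # calculation (31)
        elif q == 0:
            g = p                            # calculation (32)
        else:
            g = ((p + (1 << (q - 1))) >> q) << q   # calculation (33)
        low = 16 if (q == 4 and m == 0) else 0
        G.append(max(low, min(255 << 4, g)))
    return G, q
```

Because only the lower bits are zero-padded, the output keeps the input bit length; it merely reproduces the precision loss that a scaling / inverse-scaling round trip would have caused, which is why the buffer-side scaling can be omitted without changing the result.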
  • the pixel accuracy control method corresponding to the scaling process and the inverse scaling process of the fourth embodiment has been described.
  • as another scaling method, for example, the processing of Expression (34) may be performed on the entire screen with the bit length fixed at 12 bits.
  • FIG. 23 shows an example in which the scaling processing unit 113 and the inverse scaling processing unit 121 exist before and after the reference frame unit.
  • the scaling processing unit 113 and the inverse scaling processing unit 121 are the same as those shown in the fourth embodiment, and perform the scaling processing of FIG. 6 and the corresponding inverse scaling processing based on the pixel bit length extension information.
  • the reference image buffer unit in FIG. 24 shows an example in which the scaling processing unit 113 and the inverse scaling processing unit 121 do not exist before and after the reference frame unit.
  • the configuration of the reference image buffer unit 707 may be either the configuration shown in FIG. 23 or the configuration shown in FIG. 24; in either case, the output results from the pixel accuracy control unit 721 are the same. That is, the same result can be obtained regardless of the presence or absence of the scaling process and the inverse scaling in the reference image buffer unit, and the introduction of the scaling process and the inverse scaling process can be selected depending on the mounting method.
  • the pixel accuracy control unit 721 is positioned after the reference image buffer 707, but it may be positioned before the reference image buffer 707.
  • the same encoding and decoding results can be obtained even if the pixel accuracy control unit 721 does not exist.
  • in the present embodiment, the pixel value of the 8-bit input image is expanded by 4 bits, the target pixel group subjected to the pixel accuracy control processing is set to 16 pixels, and the maximum value of the scaling amount Q is set to 4; however, configurations with some other values are also possible.
  • for example, a configuration is also possible in which the pixel value of the 10-bit input image is not expanded, the unit of the target pixel group subjected to the scaling and inverse scaling processing is 16 pixels, and the maximum value of the scaling amount Q is 2.
  • as described above, the moving picture coding apparatus according to the present embodiment does not store the scaling information and the scaled reference pixel group in the reference image buffer, but instead controls the degree of deterioration in pixel accuracy caused by scaling. Regardless of the presence or absence of the scaling process and the inverse scaling, the encoding result and the decoding result can be matched, and the introduction of the scaling process and the inverse scaling process can be selected according to the mounting method.
  • the moving picture decoding apparatus includes a decoding unit 800 and a decoding control unit 240.
  • the decoding unit 800 generates the output image 26 by decoding the encoded data.
  • the decoding control unit 240 controls various elements in the decoding unit 800.
  • the decoding control unit 240 controls the prediction unit 820 and the like. Note that the pixel accuracy control unit 806 in the moving picture decoding apparatus in FIG. 21 is the same as the pixel accuracy control unit 721 in the moving image encoding device in FIG. 20.
  • the decoding unit 800 includes an entropy decoding unit 801, an inverse quantization / inverse transform unit 202, an addition unit 203, a loop filter unit 810, a reference image buffer unit 804, and a prediction unit 820.
  • the entropy decoding unit 801 performs entropy decoding according to the syntax information, for example, on the encoded data generated by the moving image encoding device in FIG.
  • the entropy decoding unit 801 supplies the decoded quantized transform coefficient to the inverse quantization / inverse transform unit 202, supplies the decoded motion vector information to the prediction unit 820, and supplies the decoded pixel bit length extension information to the pixel accuracy control unit 806.
  • the inverse quantization / inverse transformation unit 202 and the addition unit 203 are substantially the same as or similar to the inverse quantization / inverse transformation unit 104 and the addition unit 105 described above. That is, the inverse quantization / inverse transform unit 202 performs inverse quantization and inverse transform (for example, inverse discrete cosine transform (IDCT)) on the quantized transform coefficient from the entropy decoding unit 801 to restore the prediction error. The adding unit 203 adds the prediction error restored by the inverse quantization / inverse transform unit 202 and the corresponding prediction image from the prediction unit 820 to generate a decoded image 22.
  • the loop filter unit 810 receives the decoded image signal from the adding unit 203 and performs loop filter processing on the decoded image signal.
  • the reference image buffer unit 804 is the same as the reference image buffer unit 707, and can adopt either the configuration of FIG. 23 or FIG.
  • the reference image buffer unit 804 stores the decoded image signal after the filter processing from the loop filter unit 810. Also, in response to an external request, the decoded image signal is extracted as a reference image signal from the reference image buffer unit 804, and the reference image signal is output in accordance with the display order.
  • the pixel accuracy control unit 806 has the same configuration as the pixel accuracy control unit 721 described above, receives a reference image signal from the reference image buffer unit 804, and receives pixel bit length extension information from the entropy decoding unit 801. Detailed processing contents are the same as those of the pixel accuracy control unit 721.
  • the pixel accuracy control unit 806 in FIG. 21 is positioned after the reference image buffer unit 804, but it may be positioned before the reference image buffer unit 804. Further, when the reference image buffer unit 804 has the same configuration as the reference image buffer unit of FIG. 23, the encoding result and the decoding result are the same even if the pixel accuracy control unit 806 does not exist.
  • the prediction unit 820 receives a reference image signal from the pixel accuracy control unit 806, receives motion vector information from the entropy decoding unit 801, performs prediction processing, and generates a predicted image signal.
  • the prediction unit 820 supplies the predicted image to the addition unit 203.
  • as described above, the moving picture decoding apparatus according to the present embodiment does not store the scaling information and the scaled reference pixel group in the reference image buffer, but instead controls the degree of deterioration in pixel accuracy caused by scaling. Regardless of the presence or absence of the scaling processing and the inverse scaling, the results of encoding and decoding can be matched, and the introduction of the scaling processing and the inverse scaling processing can be selected depending on the implementation method.
  • the various processes described in the first to fifth embodiments may be realized by executing a program (software).
  • a general-purpose computer system reads the program from a storage medium storing a program for realizing the processing according to each embodiment and executes it with a CPU or the like, whereby it operates as the moving picture encoding device or moving picture decoding device of each embodiment and brings about the same effects.
  • programs may be stored in magnetic disks (flexible disks, hard disks, etc.), optical disks (CD-ROM, CD-R, CD-RW, DVD-ROM, DVD±R, DVD±RW, etc.), semiconductor memory, or other storage media. These storage media may be of any type as long as they can be read by the computer or embedded system that reads the program.
  • the computer or the embedded system may acquire or read the program via a communication medium such as a network. That is, a medium that downloads a program via a communication medium such as a LAN (local area network) or the Internet and stores (including temporary storage) the program is also included in the category of “storage medium”.
  • the term “storage medium” can also refer to a plurality of storage media comprehensively.
  • the computer or the embedded system may be a single device such as a personal computer or a microcontroller, or may be a system in which a plurality of devices are connected to a network.
  • the term “computer” is not limited to a so-called personal computer, and can comprehensively refer to an apparatus capable of executing a program, including an arithmetic processing unit, a microcontroller, and the like included in an information processing apparatus.
  • a part of the processing according to each embodiment may be executed using functions such as an OS (operating system) running on the computer, database management software, or MW (middleware) such as network software.


Abstract

According to one embodiment, a moving image encoding apparatus derives scaling information according to the maximum and minimum values of a target pixel group in a locally decoded image. Scaling that reduces the pixel bit length is applied to the target pixel group according to the scaling information. A scaling unit limits particular pixel values to be scaled with respect to particular values, thereby generating a scaled reference pixel group. A fixed bit length is used to express either the description of first scaling information, in the case where the particular values are included, or the description of second scaling information, in the case where the particular values are excluded, together with the reference pixel group as scaled according to the corresponding scaling information. A reference image is reconstructed by applying inverse scaling, and a predicted image is generated. Information indicating the difference between an input image and the predicted image is encoded by an encoding unit.
PCT/JP2010/073604 2010-07-02 2010-12-27 Moving image encoding apparatus and moving image decoding apparatus, and method WO2012001833A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
PCT/JP2010/061350 WO2012001818A1 (fr) 2010-07-02 2010-07-02 Video encoding device and video decoding device
JPPCT/JP2010/061350 2010-07-02
PCT/JP2010/067108 WO2012042645A1 (fr) 2010-09-30 2010-09-30 Moving image encoding device and moving image decoding device
JPPCT/JP2010/067108 2010-09-30

Publications (1)

Publication Number Publication Date
WO2012001833A1 (fr) 2012-01-05

Family

ID=45401585

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/073604 WO2012001833A1 (fr) 2010-07-02 2010-12-27 Moving image encoding apparatus and moving image decoding apparatus, and method

Country Status (1)

Country Link
WO (1) WO2012001833A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014203351A1 (fr) * 2013-06-19 2014-12-24 Kabushiki Kaisha Toshiba Image processing device and image processing method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007114368A1 (fr) * 2006-03-30 2007-10-11 Kabushiki Kaisha Toshiba Image encoding apparatus and method, and image decoding apparatus and method
JP2010087984A (ja) * 2008-10-01 2010-04-15 Ntt Docomo Inc Moving image encoding device, moving image decoding device, moving image encoding method, moving image decoding method, moving image encoding program, moving image decoding program, and moving image encoding/decoding system


Similar Documents

Publication Publication Date Title
KR102426721B1 (ko) Method for determining a chroma-component quantization parameter and apparatus using such a method
JP5854439B2 (ja) Video encoding system and method using adaptive segmentation
JP5586139B2 (ja) Method and apparatus for efficiently encoding/decoding video through adjustment of video resolution
KR101446771B1 (ko) Image encoding apparatus and image decoding apparatus
US9167245B2 (en) Method of determining binary codewords for transform coefficients
JP2009260977A (ja) Video data compression using a combination of lossy and lossless compression
JP5871628B2 (ja) Image encoding device, image encoding method and program, and image decoding device, image decoding method and program
US20130188694A1 (en) Method of determining binary codewords for transform coefficients
TW201313031A (zh) Variable-length coding of coefficients for large chroma blocks
KR100968371B1 (ko) Method and apparatus for decoding an image
US6804299B2 (en) Methods and systems for reducing requantization-originated generational error in predictive video streams using motion compensation
US8655088B2 (en) Image encoder, image decoder and method for encoding original image data
JP2008271039A (ja) Image encoding device and image decoding device
JP5197428B2 (ja) Image encoding device and image encoding method
WO2012001833A1 (fr) Moving image encoding apparatus and moving image decoding apparatus, and method
JP6145965B2 (ja) Image encoding device, image decoding device, and program
JP6796463B2 (ja) Video encoding device, video decoding device, and program
US20230009580A1 (en) Image processing device and image processing method
WO2018043256A1 (fr) Image encoding device and image decoding device
JP6875566B2 (ja) Moving image predictive encoding device, moving image predictive decoding device, moving image predictive encoding method, moving image predictive decoding method, and moving image predictive decoding program
JP2008160402A (ja) Encoding device and method, and image encoding device
JP2012134632A (ja) Image decoding device, image decoding method, and program
WO2012001818A1 (fr) Video encoding device and video decoding device
US20120147972A1 (en) Image decoding apparatus, image decoding method, image encoding apparatus, image encoding method, and program
US20230007311A1 (en) Image encoding device, image encoding method and storage medium, image decoding device, and image decoding method and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 10854129; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 10854129; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)