US20210006796A1 - Image processing device and image processing method - Google Patents

Image processing device and image processing method Download PDF

Info

Publication number
US20210006796A1
Authority
US
United States
Prior art keywords
quantizing
size
matrix
unit
quantizing matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/980,422
Other languages
English (en)
Inventor
Takeshi Tsukuba
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp filed Critical Sony Corp
Assigned to SONY CORPORATION (assignment of assignors interest; see document for details). Assignors: TSUKUBA, TAKESHI
Publication of US20210006796A1

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/124 — Quantisation
    • H04N19/126 — Details of normalisation or weighting functions, e.g. normalisation matrices or variable uniform quantisers
    • H04N19/132 — Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N19/134 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136 — Incoming video signal characteristics or properties
    • H04N19/169 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N19/176 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/18 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, the unit being a set of transform coefficients
    • H04N19/46 — Embedding additional information in the video signal during the compression process
    • H04N19/463 — Embedding additional information in the video signal during the compression process by compressing encoding parameters before transmission
    • H04N19/60 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/61 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
    • H04N19/70 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
    • H04N19/90 — Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
    • H04N19/96 — Tree coding, e.g. quad-tree coding

Definitions

  • the present disclosure relates to an image processing device and an image processing method.
  • JVET: Joint Video Experts Team
  • FVC: future video coding
  • The FVC reference software, which is being developed based on an HEVC model, is referred to as the joint exploration model (JEM), and various technical elements incorporated in JEM are described in non-patent literature 1.
  • Existing video coding methods involve various techniques, such as prediction (intra-prediction/inter-prediction), orthogonal transformation, quantization, and entropy coding.
  • A quantization process, which is one of the above techniques, quantizes high-frequency components of transform coefficients in the frequency domain after orthogonal transformation more roughly than low-frequency components. This achieves an intended data rate while suppressing a deterioration in subjective image quality.
  • H.265/HEVC (hereinafter referred to simply as "HEVC")
  • In HEVC, orthogonal transformation and quantization are executed for each block called a transform unit (TU).
  • Candidates for TU sizes include 4×4, 8×8, 16×16, and 32×32, and quantizing matrices corresponding to some TU sizes can be signaled from an encoder to a decoder.
  • a quantizing matrix affects quantizing steps of quantizing respective frequency components of transform coefficients of each block.
  • FVC allows an expanded maximum TU size of 128×128 and also allows non-square TUs.
  • Patent literatures 1 and 2 propose a technique by which only some of the quantizing matrices used, rather than all of them, are signaled, and the rest are generated from the signaled quantizing matrices, so that an increase in overhead is avoided.
  • Non Patent Literature 1 J. Chen, E. Alshina, G. J. Sullivan, J. R. Ohm and J. Boyce, “Algorithm Description of Joint Exploration Test Model (JEM7)”, JVET-G1001, Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 7th Meeting: Torino, IT, 13-21 Jul. 2017
  • A drop in coding efficiency caused by signaling of quantizing matrices and the effect that generating a different quantizing matrix from a certain quantizing matrix has on device performance trade off against each other.
  • The process cost required for generating quantizing matrices includes, for example, occupation of hardware resources, processing delays, and increased power consumption.
  • an image processing device includes a decoding unit that decodes scaling list data to generate a first quantizing matrix of a first size, a generating unit that generates a second quantizing matrix for a transform block of a second size to which zeroing of a high-frequency component is applied, by referring to only a partial matrix of the first quantizing matrix generated by the decoding unit, and an inverse quantizing unit that inversely quantizes a quantized transform coefficient of the transform block of the second size, using the second quantizing matrix generated by the generating unit.
  • an image processing method executed by an image processing device includes decoding scaling list data to generate a first quantizing matrix of a first size, generating a second quantizing matrix for a transform block of a second size to which zeroing of a high-frequency component is applied, by referring to only a partial matrix of the first quantizing matrix generated, and inversely quantizing a quantized transform coefficient of the transform block of the second size, using the second quantizing matrix generated.
  • an image processing device includes a generating unit that generates a second quantizing matrix for a transform block of a second size to which zeroing of a high-frequency component is applied, by referring to only a partial matrix of a first quantizing matrix of a first size, a quantizing unit that quantizes a transform coefficient of the transform block of the second size in an image to be coded, using the second quantizing matrix generated by the generating unit, to generate a quantized transform coefficient, and a coding unit that codes a scaling list expressing the quantized transform coefficient and the first quantizing matrix, to generate a coded stream.
  • an image processing method executed by an image processing device includes generating a second quantizing matrix for a transform block of a second size to which zeroing of a high-frequency component is applied, by referring to only a partial matrix of a first quantizing matrix of a first size, quantizing a transform coefficient of the transform block of the second size in an image to be coded, using the second quantizing matrix generated, to generate a quantized transform coefficient, and coding a scaling list expressing the quantized transform coefficient and the first quantizing matrix, to generate a coded stream.
  • quantizing matrices can be generated or signaled efficiently.
  • FIG. 1 is an explanatory view for explaining types of quantizing matrices usable in HEVC.
  • FIG. 2 is an explanatory view illustrating an example of QTBT block division in FVC.
  • FIG. 3A is an explanatory view for explaining zeroing of transform coefficients of a square transform block in FVC.
  • FIG. 3B is an explanatory view for explaining zeroing of transform coefficients of a non-square transform block in FVC.
  • FIG. 4 is an explanatory diagram for explaining an example of basic implementation of a technique according to the present disclosure on a decoder side.
  • FIG. 5 is an explanatory diagram for explaining generation of a quantizing matrix for a transform block to which zeroing is not applied.
  • FIG. 6A is a first explanatory diagram for explaining generation of a quantizing matrix for a transform block to which zeroing is applied.
  • FIG. 6B is a second explanatory diagram for explaining generation of a quantizing matrix for a transform block to which zeroing is applied.
  • FIG. 6C is a third explanatory diagram for explaining generation of a quantizing matrix for a transform block to which zeroing is applied.
  • FIG. 7 is an explanatory diagram for explaining an example of basic implementation of the technique according to the present disclosure on an encoder side.
  • FIG. 8 is a block diagram illustrating an example of a configuration of an encoder according to a first embodiment.
  • FIG. 9 is a flowchart illustrating an example of a flow of quantization-related processes executed by the encoder of FIG. 8 .
  • FIG. 10 is a flowchart illustrating an example of a flow of a quantizing matrix generating process.
  • FIG. 11 is a flowchart illustrating an example of a flow of a scaling list coding process.
  • FIG. 12 is a block diagram illustrating an example of a configuration of a decoder according to the first embodiment.
  • FIG. 13 is a flowchart illustrating an example of a flow of inverse-quantization-related processes executed by the decoder of FIG. 12 .
  • FIG. 14 is a flowchart illustrating an example of a flow of a scaling list data decoding process.
  • FIG. 15 is a block diagram illustrating an example of a configuration of an encoder according to a second embodiment.
  • FIG. 16 is a flowchart illustrating an example of a flow of quantization-related processes executed by the encoder of FIG. 15 .
  • FIG. 17 is a block diagram illustrating an example of a configuration of a decoder according to the second embodiment.
  • FIG. 18 is a flowchart illustrating an example of a flow of inverse-quantization-related processes executed by the decoder of FIG. 17 .
  • FIG. 19 is a block diagram illustrating an example of a hardware configuration.
  • Coding units (CU), which are process units in a coding process, are set in quad-tree patterns in an image.
  • a CU for which inter-prediction is selected as a prediction type is directly divided to set one or more TUs.
  • each of prediction units (PU) making up the CU is divided to set one or more TUs.
  • TU: transform unit
  • the minimum size of the TU is 4×4, and the maximum size of the same is 32×32.
  • An encoder and a decoder perform orthogonal transformation/quantization and inverse orthogonal transformation/inverse quantization, respectively, using such TUs as process units.
  • quantizing steps may be uniform in a transform block or may be different depending on locations in the transform block (i.e., depending on frequency components of transform coefficients). For example, when coded streams run at the same bit rate, quantizing high-frequency components of transform coefficients more roughly than quantization of low-frequency components allows relative suppression of a deterioration in subjective image quality.
  • Quantizing steps that are different depending on locations in the transform block are expressed by elements of a quantizing matrix that is equal in size to the transform block.
  • Y, Cb, or Cr color component
  • the size of a quantizing matrix is identified with a size ID, and a combination of a prediction type and a color component of the same is identified with a matrix ID.
  • Types of quantizing matrices usable in HEVC are illustrated schematically in FIG. 1 .
  • An existing quantizing matrix of 16×16 in size is generated by up-sampling elements of an existing quantizing matrix of 8×8 in size by a nearest neighboring algorithm.
  • An existing quantizing matrix of 32×32 in size is generated by up-sampling elements of an existing quantizing matrix of 16×16 in size by the same nearest neighboring algorithm.
  • In HEVC, when use of a quantizing matrix different from the existing quantizing matrices is desirable, a specific quantizing matrix defined by a user can be signaled explicitly.
  • Quantizing matrices of 4×4 and 8×8 in size can be signaled as a whole by scanning all of their elements.
  • Signaling of quantizing matrices of 16×16 and 32×32 in size is achieved through signaling and up-sampling of a quantizing matrix of 8×8 in size.
  • Element values for DC components that make up specific quantizing matrices of 16×16 and 32×32 in size can be signaled separately.
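As an illustration of the HEVC behavior summarized above (a signaled 8×8 matrix is up-sampled by the nearest neighboring algorithm to 16×16 or 32×32, with the DC element optionally signaled separately), a minimal sketch follows. It is not the patent's implementation; the function name and the NumPy-based representation are illustrative assumptions.

```python
import numpy as np

def hevc_style_upsample(qm_8x8: np.ndarray, target_size: int, dc_value=None) -> np.ndarray:
    """Expand a signaled 8x8 scaling list to 16x16 or 32x32 by nearest-neighbor
    replication; optionally overwrite the DC element with a separately signaled
    value (cf. scaling_list_dc_coef_minus8 in HEVC)."""
    factor = target_size // 8                    # 2 for 16x16, 4 for 32x32
    qm = np.repeat(np.repeat(qm_8x8, factor, axis=0), factor, axis=1)
    if dc_value is not None:
        qm[0, 0] = dc_value                      # DC component signaled separately
    return qm

# Example: expand a flat 8x8 matrix of value 16 to 32x32 with DC value 16.
qm_32x32 = hevc_style_upsample(np.full((8, 8), 16, dtype=np.int32), 32, dc_value=16)
```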
  • FIG. 2 illustrates an example of QTBT block division in FVC.
  • An image Im 0 illustrated in FIG. 2 includes four CTUs each having a size of 128×128.
  • a CTU in an upper left section includes 13 CUs formed by four recursive QT divisions.
  • the minimum CU is 8×8 in size and the maximum CU is 64×64 in size.
  • a CTU in a lower left section includes five CUs formed by four recursive BT divisions.
  • a CTU in an upper right section includes nine CUs formed by multiple recursive QT and BT divisions.
  • a CTU in a lower right section is not divided and therefore includes one CU.
  • the size of the minimum CU is 8×8.
  • a square CU or non-square CU with the length of its one side being 2 or 4 is also permitted.
  • each of these CUs serves also as a transform block.
  • The upper limit of transform block sizes (i.e., TU sizes) permitted in HEVC is 32×32.
  • The upper limit of transform block sizes permitted in FVC is 128×128, a large increase from 32×32.
  • Such a large transform block may be used, for example, in an application where high-definition video images called “4K” are coded efficiently.
  • 4K: high-definition video images
  • FIGS. 3A and 3B schematically illustrate some examples of such zeroing performed in FVC.
  • FIG. 3A illustrates three square transform blocks B 01 , B 02 , and B 03 .
  • the size of the transform block B 01 is 32×32, and therefore zeroing is not applied to the transform block B 01 .
  • the size of the transform block B 02 is 64×64. In this case, transform coefficients of the transform block B 02 except the 32×32 transform coefficients in the upper left section of the transform block B 02 are rendered zero.
  • the size of the transform block B 03 is 128×128. In this case, transform coefficients of the transform block B 03 except the 32×32 transform coefficients in the upper left section of the transform block B 03 are rendered zero.
  • FIG. 3B illustrates nine non-square transform blocks of various sizes, in addition to a square transform block of 32×32 in size. As can be understood from FIG. 3B, according to FVC, along a side of 64 or more in length, the 32 frequency components on the low-frequency side are maintained while the rest of the frequency components (belonging to the high-frequency side) are rendered zero.
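The zeroing rule described above (along any side of 64 or more in length, only the 32 low-frequency coefficients are kept) can be summarized by a small helper. This is a hedged sketch, not code from the patent; since block sides are powers of two, the rule reduces to clamping each side to N TH = 32.

```python
def nonzero_region(width: int, height: int, n_th: int = 32) -> tuple[int, int]:
    """Return (kept_width, kept_height) of the non-zeroed low-frequency part.
    Along any side of 64 or more, only the n_th lowest-frequency coefficients
    are kept; because block sides are powers of two, this equals min(side, n_th)."""
    return min(width, n_th), min(height, n_th)

assert nonzero_region(32, 32) == (32, 32)    # no zeroing (block B01 in FIG. 3A)
assert nonzero_region(64, 64) == (32, 32)    # block B02 keeps its 32x32 corner
assert nonzero_region(64, 16) == (32, 16)    # non-square case as in FIG. 3B
```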
  • In HEVC, signaling of quantizing matrices of 16×16 and 32×32 in size is achieved through signaling and up-sampling of a quantizing matrix of 8×8 in size, in order to avoid a drop in coding efficiency that is caused by signaling of quantizing matrices.
  • A drop in coding efficiency caused by signaling of quantizing matrices and the effect that generating a different quantizing matrix from a certain quantizing matrix (through, for example, up-sampling) has on device performance trade off against each other.
  • a technique according to the present disclosure provides an improved system for efficiently generating or signaling quantizing matrices.
  • FIG. 4 is an explanatory diagram for explaining an example of basic implementation of the technique according to the present disclosure on a decoder side.
  • FIG. 4 illustrates process steps S 11 to S 16 related to inverse quantization that can be executed by the decoder.
  • the technique according to the present disclosure introduces a method by which, when an additional quantizing matrix of a size to which zeroing is applied is generated, not the whole of a reference quantizing matrix but only a partial matrix of it is referred to, the partial matrix covering the range that substantially contributes to quantization of non-zero coefficients.
  • step S 14 is included in step S 15 .
  • a quantizing matrix needed for each transform block may be generated in so-called “on the fly” mode at the time of inverse quantization (if the quantizing matrix is not generated yet).
  • the above step S 14 may be executed before processing on a plurality of transform blocks so that quantizing matrices of all size candidates are stored in advance in the QM memory M 11 .
  • the process of generating an additional quantizing matrix from a reference quantizing matrix is typically a combination of down-sampling or up-sampling of elements in the horizontal direction of a quantizing matrix and down-sampling or up-sampling of elements in the vertical direction of the same.
  • Up-sampling includes, for example, interpolating matrix elements by an interpolation method, such as a nearest neighboring algorithm, bilinear interpolation, and bicubic interpolation.
  • Down-sampling includes, for example, thinning out matrix elements.
  • FIG. 5 is an explanatory diagram for explaining generation of a quantizing matrix for a transform block to which zeroing is not applied.
  • a reference quantizing matrix B 10 is illustrated on an upper left part of FIG. 5 .
  • the reference quantizing matrix B 10 is of a square shape and is N 1 × N 1 in size. It is also assumed that zeroing of high-frequency components is not applied to a transform block of N 1 × N 1 in size.
  • Quantizing matrices B 11 , B 12 , B 13 , and B 14 illustrated on a lower part of FIG. 5 are additional quantizing matrices to be generated.
  • the size of the additional quantizing matrix B 11 in the horizontal direction is smaller than N 1 and the size of the same in the vertical direction is also smaller than N 1 .
  • the additional quantizing matrix B 11 corresponds to a transform block to which zeroing of high-frequency components is not applied.
  • the additional quantizing matrix B 11 is, therefore, generated by down-sampling the whole of the reference quantizing matrix B 10 in both horizontal and vertical directions.
  • the size of the additional quantizing matrix B 12 in the horizontal direction is smaller than N 1 but the size of the same in the vertical direction is larger than N 1 .
  • the additional quantizing matrix B 12 corresponds to a transform block to which zeroing of high-frequency components is not applied.
  • the additional quantizing matrix B 12 is, therefore, generated by down-sampling the whole of the reference quantizing matrix B 10 in the horizontal direction and up-sampling the same in the vertical direction.
  • the size of the additional quantizing matrix B 13 in the horizontal direction is larger than N 1 but the size of the same in the vertical direction is smaller than N 1 .
  • the additional quantizing matrix B 13 corresponds to a transform block not subjected to zeroing of high-frequency components.
  • the additional quantizing matrix B 13 is, therefore, generated by up-sampling the whole of the reference quantizing matrix B 10 in the horizontal direction and down-sampling the same in the vertical direction.
  • the size of the additional quantizing matrix B 14 in the horizontal direction is larger than N 1 and the size of the same in the vertical direction is also larger than N 1 .
  • the additional quantizing matrix B 14 corresponds to a transform block to which zeroing of high-frequency components is not applied.
  • the additional quantizing matrix B 14 is, therefore, generated by up-sampling the whole of the reference quantizing matrix B 10 in both horizontal and vertical directions.
  • re-sampling: down-sampling or up-sampling
  • re-sampling operations may be executed integrally as a single operation.
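The statement above, that the horizontal and vertical re-sampling operations may be executed integrally as a single operation, can be illustrated by mapping each output element directly to a source element with a floor-scaled index. This is a sketch under the assumption of a NumPy matrix representation; for ratios below 1 it behaves as nearest-neighbor up-sampling, and for ratios above 1 it thins out elements (down-sampling).

```python
import numpy as np

def resample_qm(q_ref: np.ndarray, out_height: int, out_width: int) -> np.ndarray:
    """Single-pass re-sampling: output element (j, i) is taken from source
    element (Floor(j * s_h), Floor(i * s_w)), combining the horizontal and
    vertical re-sampling steps into one operation."""
    in_height, in_width = q_ref.shape
    s_h, s_w = in_height / out_height, in_width / out_width
    j_idx = (np.arange(out_height) * s_h).astype(int)   # Floor()
    i_idx = (np.arange(out_width) * s_w).astype(int)
    return q_ref[np.ix_(j_idx, i_idx)]
```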
  • Quantizing matrices B 21 and B 22 illustrated on a lower part of FIG. 6A are additional quantizing matrices to be generated.
  • the size of the additional quantizing matrix B 21 in the horizontal direction is larger than N TH , which denotes an upper limit of sizes to which zeroing of high-frequency components is not applied, and the size of the same in the vertical direction is smaller than N 1 .
  • the additional quantizing matrix B 21 corresponds to a transform block to which zeroing of high-frequency components is applied.
  • the additional quantizing matrix B 21 is, therefore, generated by up-sampling a partial matrix of the reference quantizing matrix B 10 in the horizontal direction and down-sampling the same in the vertical direction.
  • a ratio of the size N PAR of the partial matrix to the size N 1 of the reference quantizing matrix B 10 is equal to a ratio of the size N TH of a non-zero part to the size N 2 of the additional quantizing matrix B 21 .
  • the size of the additional quantizing matrix B 22 in the horizontal direction is also larger than N TH , which denotes the upper limit of sizes to which zeroing of high-frequency components is not applied, and the size of the same in the vertical direction is larger than N 2 but is smaller than N TH .
  • the additional quantizing matrix B 22 corresponds to a transform block to which zeroing of high-frequency components is applied.
  • the additional quantizing matrix B 22 is, therefore, generated by up-sampling the partial matrix of the reference quantizing matrix B 10 in both horizontal and vertical directions.
  • Quantizing matrices B 31 and B 32 illustrated on a lower part of FIG. 6B are additional quantizing matrices to be generated.
  • the size of the additional quantizing matrix B 31 in the horizontal direction is smaller than N 1 , and the size of the same in the vertical direction is larger than N TH , which denotes the upper limit of sizes to which zeroing of high-frequency components is not applied.
  • the additional quantizing matrix B 31 corresponds to a transform block to which zeroing of high-frequency components is applied.
  • the additional quantizing matrix B 31 is, therefore, generated by down-sampling the partial matrix of the reference quantizing matrix B 10 in the horizontal direction and up-sampling the same in the vertical direction.
  • a ratio of the size N PAR of the partial matrix to the size N 1 of the reference quantizing matrix B 10 is equal to a ratio of the size N TH of a non-zero part to the size N 2 of the additional quantizing matrix B 31 .
  • the size of the additional quantizing matrix B 32 in the horizontal direction is larger than N 1 but is smaller than N TH , and the size of the same in the vertical direction is larger than N TH , which denotes the upper limit of sizes to which zeroing of high-frequency components is not applied.
  • the additional quantizing matrix B 32 corresponds to a transform block to which zeroing of high-frequency components is applied.
  • the additional quantizing matrix B 32 is, therefore, generated by up-sampling the partial matrix of the reference quantizing matrix B 10 in both horizontal and vertical directions.
  • a quantizing matrix B 41 illustrated on a lower part of FIG. 6C is an additional quantizing matrix to be generated.
  • the size of the additional quantizing matrix B 41 in the horizontal direction is larger than N TH , which denotes the upper limit of sizes to which zeroing of high-frequency components is not applied, and the size of the same in the vertical direction is also larger than N TH .
  • the additional quantizing matrix B 41 corresponds to a transform block to which zeroing of high-frequency components is applied.
  • the additional quantizing matrix B 41 is, therefore, generated by up-sampling the partial matrix of the reference quantizing matrix B 10 in both horizontal and vertical directions.
  • a ratio of the size N PAR_H of the partial matrix to the size N 1 of the reference quantizing matrix B 10 is equal to a ratio of the size N TH of a non-zero part to the size N 2_H of the additional quantizing matrix B 41 .
  • a ratio of the size N PAR_V of the partial matrix to the size N 1 of the reference quantizing matrix B 10 is equal to a ratio of the size N TH of the non-zero part to the size N 2_V of the additional quantizing matrix B 41 .
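Putting numbers on the ratio relationships described for FIGS. 6A to 6C: the partial-matrix size in each direction follows from N PAR / N 1 = N TH / N 2. The sketch below is illustrative only; the helper name is an assumption.

```python
def partial_matrix_size(n1: int, n2_h: int, n2_v: int, n_th: int = 32) -> tuple[int, int]:
    """Size (horizontal, vertical) of the part of an n1 x n1 reference matrix that
    is referred to when generating an additional matrix of size n2_h x n2_v,
    using N_PAR / N1 == N_TH / N2 per direction (N_TH clamps the non-zero part)."""
    n_par_h = n1 * min(n2_h, n_th) // n2_h
    n_par_v = n1 * min(n2_v, n_th) // n2_v
    return n_par_h, n_par_v

# With an 8x8 reference matrix and a 64x128 target (both sides exceed N_TH),
# only a 4x2 partial matrix of the reference needs to be read and up-sampled.
assert partial_matrix_size(8, 64, 128) == (4, 2)
```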
  • the encoder usually includes a local decoder, which executes inverse quantization.
  • Although FIG. 7 does not illustrate the inverse quantization executed by the local decoder, the same quantizing matrix used at step S 25 may be used in this inverse quantization as well.
  • In FVC, this upper limit N TH is equal to 32. In this case, it is unnecessary to code a control parameter indicating to which transform block zeroing is applied. However, to realize more flexible control of zeroing, for example, the following control parameters may be additionally coded.
  • control parameters for controlling zeroing may be coded, for example, for each sequence, picture, slice, tile, CTU, or transform block. In this manner, by dynamically determining application/non-application of zeroing or a size to which zeroing is applied, an image expressing even minute high-frequency components can be reproduced flexibly according to the user's needs or system requirements/constraints.
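As a purely illustrative sketch of such control parameters (the flag and field names below are hypothetical, not syntax defined by the patent or by FVC), the decision of how many coefficients to keep along one side might be driven as follows.

```python
from dataclasses import dataclass

@dataclass
class ZeroingControl:
    """Hypothetical zeroing control parameters, e.g. signaled per sequence,
    picture, slice, tile, CTU, or transform block."""
    zeroing_enabled_flag: bool = True   # whether zeroing is applied at all
    zeroing_size: int = 32              # upper limit N_TH of non-zeroed sides

def kept_coefficients(side_length: int, ctrl: ZeroingControl) -> int:
    """Number of low-frequency coefficients kept along one side of a block."""
    if not ctrl.zeroing_enabled_flag:
        return side_length
    return min(side_length, ctrl.zeroing_size)
```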
  • In HEVC, a quantizing matrix of a larger size is generated by up-sampling a quantizing matrix of a smaller size.
  • HEVC has a predetermined specification-based rule providing that quantizing matrices of up to 8×8 in size be signaled directly.
  • M and N each denote a power of 2.
  • size specifying information is coded, the size specifying information indicating the size of the quantizing matrix that is generated as a basic quantizing matrix from scaling list data.
  • the size specifying information thus specifies various sizes of basic quantizing matrices, thereby allowing flexible use of various types of quantizing matrices.
  • the technique according to the present disclosure can also be applied to a system in which, regardless of the largeness/smallness of a matrix size, the size of the quantizing matrix signaled directly is determined in advance based on specifications.
  • FIG. 8 is a block diagram illustrating an example of a configuration of an image processing device 10 a according to a first embodiment, the image processing device 10 a having a functionality of an encoder.
  • the image processing device 10 a includes a coding control unit 11 , a rearrangement buffer 12 , a deducting unit 13 , an orthogonal transformation unit 14 , a quantizing unit 15 a , a reversible coding unit 16 , an accumulation buffer 17 , a rate control unit 18 , an inverse quantizing unit 21 , an inverse orthogonal transformation unit 22 , an adding unit 23 , an in-loop filter 24 , a frame memory 25 , a switch 26 , an intra-prediction unit 30 , an inter-prediction unit 35 , a mode setting unit 40 , and a QM memory unit 115 a.
  • the coding control unit 11 controls the overall encoder functionality of the image processing device 10 a , which will be described in detail below.
  • the coding control unit 11 includes a block setting unit 111 and a basic QM setting unit 113 .
  • the block setting unit 111 is a module that executes the block setting process step S 21 , which has been described above referring to FIG. 7 .
  • the basic QM setting unit 113 is a module that executes the basic QM setting process step S 22 , which has been described above referring to FIG. 7 . These modules will be described further later on.
  • the rearrangement buffer 12 rearranges a series of images making up video to be coded, according to a given group-of-pictures (GOP) structure.
  • the rearrangement buffer 12 outputs rearranged images to the deducting unit 13 , to the intra-prediction unit 30 , and to the inter-prediction unit 35 .
  • the deducting unit 13 calculates predicted errors, which represent a difference between the incoming image (original image) from the rearrangement buffer 12 and a predicted image, and outputs the calculated predicted errors to the orthogonal transformation unit 14 .
  • the orthogonal transformation unit 14 executes orthogonal transformation of each of one or more transform blocks set in an image to be coded. This orthogonal transformation may be executed, for example, as discrete cosine transformation (DCT) or discrete sine transformation (DST). More specifically, the orthogonal transformation unit 14 orthogonally transforms a signal sample in the spatial domain for each transform block, the signal sample representing the incoming predicted errors from the deducting unit 13 , to generate transform coefficients in the frequency domain. In addition, under control by the coding control unit 11 , the orthogonal transformation unit 14 applies zeroing to high-frequency components of a transform block of a certain size to render the high-frequency components zero.
  • DCT: discrete cosine transformation
  • DST: discrete sine transformation
  • the 32nd frequency component and the frequency components that follow on the high-frequency side may be rendered zero.
  • the orthogonal transformation unit 14 outputs the generated transform coefficients to the quantizing unit 15 a.
  • the quantizing unit 15 a is supplied with the incoming transform coefficients from the orthogonal transformation unit 14 and with a rate control signal from the rate control unit 18 , which will be described later on. For each of one or more transform blocks in the image to be coded, the quantizing unit 15 a quantizes transform coefficients, using a quantizing matrix equal in size to the transform block, to generate quantized transform coefficients (which will hereinafter be referred to also as “quantized data”). Under control by the coding control unit 11 , the quantizing unit 15 a skips quantization of frequency components rendered zero that are included in the transform coefficients. The quantizing unit 15 a then outputs the generated quantized data to the reversible coding unit 16 and to the inverse quantizing unit 21 .
  • the quantizing unit 15 a switches a quantizing step, based on the rate control signal from the rate control unit 18 , thereby changing a bit rate of the quantized data.
  • the quantizing unit 15 a includes a QM generating unit 117 a .
  • the QM generating unit 117 a is a module that executes the QM generating process step S 23 , which has been described above referring to FIG. 7 .
  • the QM generating unit 117 a includes a reference memory M 22 not illustrated in FIG. 8 . This module will be described further later on.
  • the reversible coding unit 16 codes the incoming quantized data from the quantizing unit 15 a to generate a coded stream.
  • the reversible coding unit 16 codes also various control parameters, which the decoder refers to, and inserts the coded parameters into the coded stream.
  • the control parameters coded at this point include, for example, the above-mentioned block division data and scaling list (or scaling list data).
  • the reversible coding unit 16 outputs the generated coded stream to the accumulation buffer 17 .
  • the reversible coding unit 16 includes an SL coding unit 119 .
  • the SL coding unit 119 is a module that executes the QM transformation/SL data generating process step S 26 , which has been described above referring to FIG. 7 . This module will be described further later on.
  • the accumulation buffer 17 temporarily stores the incoming coded stream from the reversible coding unit 16 , using a memory medium.
  • the accumulation buffer 17 then outputs the accumulated coded stream to a transmission unit (not illustrated), which is, for example, a communication interface or an interface connecting to peripheral equipment, at a bit rate corresponding to a bandwidth in a transmission path.
  • the inverse quantizing unit 21 , the inverse orthogonal transformation unit 22 , and the adding unit 23 make up a local decoder.
  • the local decoder plays a role of decoding coded data to reconstruct an image.
  • For each transform block, the inverse quantizing unit 21 inversely quantizes quantized transform coefficients, using the same quantizing matrix as used by the quantizing unit 15 a , to restore transform coefficients. The inverse quantizing unit 21 skips inverse quantization of frequency components forcibly rendered zero that are included in the quantized transform coefficients. The inverse quantizing unit 21 then outputs the restored transform coefficients to the inverse orthogonal transformation unit 22 .
  • the inverse orthogonal transformation unit 22 executes inverse orthogonal transformation. More specifically, for each transform block, the inverse orthogonal transformation unit 22 subjects transform coefficients in the frequency domain, the transform coefficients coming from the inverse quantizing unit 21 , to inverse orthogonal transformation, thereby restoring predicted errors in the form of a signal sample in the spatial domain. The inverse orthogonal transformation unit 22 then outputs the restored predicted errors to the adding unit 23 .
  • the adding unit 23 adds up the incoming restored predicted errors from the inverse orthogonal transformation unit 22 and an incoming predicted image from the intra-prediction unit 30 or the inter-prediction unit 35 , to reconstruct a decoded image.
  • the adding unit 23 then outputs the reconstructed decoded image to the in-loop filter 24 and to the frame memory 25 .
  • the in-loop filter 24 is composed of a series of filters that are applied to the decoded image for the purpose of improving its quality.
  • the in-loop filter 24 may include one or more of, for example, a bilateral filter, a de-blocking filter, an adaptive offset filter, and an adaptive loop filter, which are described in the reference document REF3.
  • the in-loop filter 24 outputs the decoded image having been filtered through the series of filters, to the frame memory 25 .
  • the frame memory 25 stores the incoming pre-filtering decoded image from the adding unit 23 and the incoming post-filtering decoded image from the in-loop filter 24 .
  • the switch 26 reads the pre-filtering decoded image, which is used for intra-prediction, out of the frame memory 25 , and supplies the read decoded image as a reference image, to the intra-prediction unit 30 .
  • the switch 26 reads also the post-filtering decoded image, which is used for inter-prediction, out of the frame memory 25 , and supplies the read decoded image as a reference image, to the inter-prediction unit 35 .
  • the intra-prediction unit 30 executes an intra-prediction process, based on the original image and the decoded image. For example, the intra-prediction unit 30 evaluates cost based on predicted errors and the volume of codes generated, for each of prediction mode candidates included in a search range. The intra-prediction unit 30 then selects a prediction mode that makes the cost the minimum, as an optimum prediction mode. In addition, the intra-prediction unit 30 generates a predicted image according to the selected optimum prediction mode. The intra-prediction unit 30 then outputs the predicted image and a cost corresponding thereto, together with some control parameters containing prediction mode information, to the mode setting unit 40 .
  • the inter-prediction unit 35 executes an inter-prediction process (motion compensation), based on the original image and the decoded image. For example, the inter-prediction unit 35 evaluates cost based on predicted errors and the volume of codes generated, for each of prediction mode candidates included in a search range. The inter-prediction unit 35 then selects a prediction mode that makes the cost the minimum, as an optimum prediction mode. In addition, the inter-prediction unit 35 generates a predicted image according to the selected optimum prediction mode. The inter-prediction unit 35 then outputs the predicted image and a cost corresponding thereto, together with some control parameters containing prediction mode information, to the mode setting unit 40 .
  • inter-prediction process: motion compensation
  • Based on comparison between the incoming cost from the intra-prediction unit 30 and the incoming cost from the inter-prediction unit 35 , the mode setting unit 40 sets a prediction type of each block. For a block of which a prediction type is set as intra-prediction, the mode setting unit 40 outputs the predicted image generated by the intra-prediction unit 30 to the deducting unit 13 and to the adding unit 23 . For a block of which a prediction type is set as inter-prediction, the mode setting unit 40 outputs the predicted image generated by the inter-prediction unit 35 to the deducting unit 13 and to the adding unit 23 . In addition, the mode setting unit 40 outputs control parameters to be coded, to the reversible coding unit 16 .
  • The block setting unit 111 , the basic QM setting unit 113 , the QM memory unit 115 a , the QM generating unit 117 a , and the SL coding unit 119 are mainly involved in quantizing matrix generation performed by the encoder.
  • the block setting unit 111 divides each image into a plurality of transform blocks through QTBT block division, thus setting a plurality of transform blocks in each of a series of images.
  • the block setting unit 111 generates block division data that defines the block structures of set transform blocks, and outputs the generated block division data to the reversible coding unit 16 .
  • the size of a transform block set by the block setting unit 111 may range, for example, from 2×2 to 128×128.
  • the shape of the transform block may be square or non-square. Some examples of the shapes and sizes of transform blocks are illustrated in FIG. 2 .
  • the basic QM setting unit 113 sets basic quantizing matrices of one or more sizes, as quantizing matrices used by the image processing device 10 a .
  • a basic quantizing matrix has at least one element different in value from an element of an existing quantizing matrix defined by FVC specifications.
  • the value of an element of the basic quantizing matrix can be determined, for example, as a result of a preliminary image analysis or parameter tuning.
  • the basic QM setting unit 113 can set a plurality of types of quantizing matrices different in combination of prediction type and color component from each other.
  • a quantizing matrix of a certain type may be identical with a quantizing matrix of another type.
  • In one example, basic quantizing matrices include square quantizing matrices only.
  • In another example, basic quantizing matrices include both square quantizing matrices and non-square quantizing matrices.
  • The size of a quantizing matrix is identified with a size ID, and the type of the same is identified with a matrix ID.
  • the QM memory unit 115 a is a memory module that stores various types of quantizing matrices having various sizes, the quantizing matrices being used by the image processing device 10 a .
  • Quantizing matrices stored in the QM memory unit 115 a include basic quantizing matrices set by the basic QM setting unit 113 and additional quantizing matrices additionally generated by the QM generating unit 117 a , which will be described later on.
  • a basic quantizing matrix is set prior to orthogonal transformation and quantization performed across a plurality of transform blocks, and is stored in the QM memory unit 115 a through these processes of orthogonal transformation and quantization.
  • An additional quantizing matrix is generated according to a need when transform coefficients of each transform block are quantized, and is stored in the QM memory unit 115 a .
  • the QM memory unit 115 a may manage matrix management information that is internal control information indicating the size of the quantizing matrix present already.
  • the matrix management information is composed of, for example, a set of flags indicating whether a quantizing matrix identified with two size IDs corresponding respectively to a horizontal size and a vertical size is present (i.e., for example, is generated already).
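A minimal sketch of such matrix management information follows, assuming a presence flag per pair of horizontal and vertical size IDs; the class and method names are illustrative, and the matrix-ID (type) dimension is omitted for brevity.

```python
class QMMemory:
    """Sketch of a QM memory with matrix management information: a flag per
    (horizontal size ID, vertical size ID) pair indicating whether a quantizing
    matrix of that size is already present."""
    NUM_SIZE_IDS = 7   # size IDs 0..6 for side lengths 2..128

    def __init__(self) -> None:
        self.avail_flag = [[False] * self.NUM_SIZE_IDS for _ in range(self.NUM_SIZE_IDS)]
        self.matrices = {}

    def store(self, size_id_h: int, size_id_v: int, qm) -> None:
        self.matrices[(size_id_h, size_id_v)] = qm
        self.avail_flag[size_id_h][size_id_v] = True

    def is_present(self, size_id_h: int, size_id_v: int) -> bool:
        return self.avail_flag[size_id_h][size_id_v]
```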
  • the QM generating unit 117 a determines whether a quantizing matrix equal in size to the transform block is already generated, by referring to the above matrix management information provided by the QM memory unit 115 a .
  • the QM generating unit 117 a reads that quantizing matrix already generated, out of the QM memory unit 115 a .
  • the QM generating unit 117 a selects one of basic quantizing matrices already generated, as a reference quantizing matrix, and re-samples the selected reference quantizing matrix to generate an additional quantizing matrix.
  • the QM generating unit 117 a includes a memory in which a reference quantizing matrix to be re-sampled or its partial matrix is stored temporarily.
  • When zeroing of high-frequency components is not applied to a transform block of a subject size, the QM generating unit 117 a generates a quantizing matrix for the transform block by referring to the whole of a reference quantizing matrix.
  • When zeroing of high-frequency components is applied to a transform block of another subject size, on the other hand, the QM generating unit 117 a generates a quantizing matrix for the transform block by referring to only the partial matrix of a reference quantizing matrix.
  • a ratio of the size of the partial matrix referred to (i.e., the partial matrix stored temporarily in the memory of the QM generating unit 117 a ) to the size of the reference quantizing matrix is equal to a ratio of the size of a non-zero part to the size of an additional quantizing matrix generated.
  • the quantizing unit 15 a quantizes transform coefficients of each transform block, using one of the various quantizing matrices that are generated in this resource-saving manner.
  • the QM generating unit 117 a may determine whether zeroing is applied to a transform block, according to a specification-based rule that predetermines the size of the transform block to which zeroing is to be applied. In another example, the QM generating unit 117 a may determine that zeroing is applied to a certain transform block and is not applied to another transform block, according to control by the coding control unit 11 . In the latter example, one or both of the above-described zeroing flags and zeroing size information, the zeroing flags indicating whether zeroing is applied to a transform block and the zeroing size information indicating the size of the transform block to which zeroing is to be applied, can be coded as control parameters and inserted in a coded stream.
  • the SL coding unit 119 codes a scaling list expressing the above-described basic quantizing matrix set by the basic QM setting unit 113 to generate scaling list data.
  • the scaling list data is inserted in a coded stream generated by the reversible coding unit 16 .
  • the SL coding unit 119 includes size specifying information in the scaling list data, the size specifying information indicating the size of the quantizing matrix signaled explicitly as a basic quantizing matrix, via the scaling list data.
  • the SL coding unit 119 may include also size count information in the scaling list data, the size count information indicating the number of sizes that is to be signaled.
  • a bit stream constraint may be imposed to provide that when two or more sizes are signaled, they should be different from each other (i.e., should be identified with different size IDs). Such a bit stream constraint prevents the encoder from redundantly encoding the size specifying information, thus reducing coding volume overhead to avoid a waste of resources.
  • In another example, the size of a basic quantizing matrix signaled explicitly via the scaling list data is predetermined as a specification-based rule.
  • In that case, the SL coding unit 119 does not include the above size specifying information and size count information in the scaling list data.
  • the following table 1 shows the syntax of HEVC scaling list data described in the reference document REF2.
  • HEVC scaling list data includes one or more of the following control parameters for each of combinations of four sizes, which are identified with size IDs (“sizeId”), and six types, which are identified with matrix IDs (“matrixId”).
  • “scaling_list_pred_mode_flag[sizeId][matrixId]” is a control flag for switching a coding method for the scaling list.
  • a quantizing matrix of a type for which this flag is set false is coded simply, by referring to a quantizing matrix of another type specified by “scaling_list_pred_matrix_id_delta[sizeId][matrixId]”.
  • a quantizing matrix for which the above control flag is set true is coded differentially, using “scaling_list_dc_coef_minus8[sizeId − 2][matrixId]” and a plurality of “scaling_list_delta_coef”. The number of “scaling_list_delta_coef” is indicated by an intermediate variable “coefNum”.
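Since table 1 itself is not reproduced here, the following is a hedged Python transcription of the HEVC scaling_list_data( ) parsing logic as summarized above (four sizes, six types except for size ID 3, copy mode versus differential coding), based on the HEVC specification referenced as REF2. The bitstream reader `r` is an assumed object exposing u1( ) for a single flag bit and ue( )/se( ) for Exp-Golomb codes.

```python
def parse_scaling_list_data(r):
    """Sketch of HEVC scaling_list_data() parsing (cf. reference document REF2)."""
    scaling_list = {}
    for size_id in range(4):                          # 4x4, 8x8, 16x16, 32x32
        for matrix_id in range(2 if size_id == 3 else 6):
            if not r.u1():                            # scaling_list_pred_mode_flag
                r.ue()                                # scaling_list_pred_matrix_id_delta
                continue                              # copy from another matrix
            next_coef = 8
            coef_num = min(64, 1 << (4 + (size_id << 1)))
            if size_id > 1:
                next_coef = r.se() + 8                # scaling_list_dc_coef_minus8 + 8
            coefs = []
            for _ in range(coef_num):
                next_coef = (next_coef + r.se() + 256) % 256   # scaling_list_delta_coef
                coefs.append(next_coef)
            scaling_list[(size_id, matrix_id)] = coefs
    return scaling_list
```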
  • the following table 2 shows an example of the syntax of scaling list data that may be revised in this embodiment.
  • the example of table 2 includes the size specifying information indicating the size of the quantizing matrix generated as a basic quantizing matrix, the size specifying information having been described in [1-5. Controlling Size of Basic Quantizing Matrix].
  • a parameter “size_id_minusX” on the second line of table 2 represents the size specifying information.
  • a value for the parameter “size_id_minusX” is given by deducting a preset offset value X from an actual size ID.
  • the following table 3 shows an example of size ID definitions that may be revised from size ID definitions in HEVC.
  • Indexes of 0 to 6, which serve as size IDs, are assigned respectively to the candidate values (2 to 128) for the size of one side of a square quantizing matrix, in ascending order of size with the smallest index at the top.
  • a relationship between a candidate value N for the size of one side and a value “sizeId” for a size ID is given by the following equation.
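Although the equation itself is not reproduced in this text, the index assignment above (size IDs 0 through 6 for side lengths 2 through 128) implies the relationship sizeId = log2(N) − 1, or equivalently N = 2^(sizeId + 1).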
  • the relationship between the size candidate value and the size ID is not limited to the relationship defined by the above equation.
  • the size of a non-square quantizing matrix can be determined by specifying a size ID in the horizontal direction and a size ID in the vertical direction as well.
  • the size specifying information includes only one parameter “size_id_minusX”. This means that only the square quantizing matrix is signaled explicitly as the basic quantizing matrix. It also implies that a quantizing matrix of a size identified with a size ID smaller than the offset value X is not signaled.
  • the size specifying information may include two parameters that indicate two size IDs for identifying a non-square basic quantizing matrix, respectively. Further, deduction of the offset value may be omitted, in which case a size ID from which no offset value is deducted is coded directly.
  • the following table 4 shows an example of matrix ID definitions that may be revised from matrix ID definitions in HEVC.
  • the definition of the matrix ID remains common regardless of the value of the size ID. This fact leads to a difference between a matrix ID loop on the third line of the syntax of table 1 and a matrix ID loop on the fourth line of the syntax of table 2.
  • the maximum number of element values to be coded differentially is 64 in both of the syntax of table 1 and the syntax of table 2. To allow more flexible quantizing matrix designing, however, this maximum number may be changed or set variably.
  • Which quantizing matrix should be referred to at generation of an additional quantizing matrix may be determined according to any given rule.
  • a quantizing matrix of a largest size may be selected out of square quantizing matrices available, as a reference quantizing matrix.
  • a size ID for the quantizing matrix of the largest size to be signaled explicitly is defined as “maxSignaledSizeId”, and whether a quantizing matrix identified with a certain combination of a size ID and a matrix ID is present is indicated by matrix management information “QMAvailFlag[sizeId][matrixId]” (in which case the quantizing matrix identified with the combination is present when “QMAvailFlag[sizeId][matrixId]” is true, and is not present when the same is false).
  • a reference size ID “refSizeId” which indicates the size of a reference quantizing matrix, can be determined by the following pseudo codes.
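The pseudo code referred to here is not reproduced in this text; the following is a hedged sketch consistent with the rule described above (select the largest signaled square size for which a matrix of the given type is already present), assuming QMAvailFlag and maxSignaledSizeId as defined above.

```python
def derive_ref_size_id(qm_avail_flag, matrix_id: int, max_signaled_size_id: int) -> int:
    """Return the largest size ID, up to maxSignaledSizeId, for which a square
    quantizing matrix of the given matrix ID is present (QMAvailFlag is true)."""
    for size_id in range(max_signaled_size_id, -1, -1):
        if qm_avail_flag[size_id][matrix_id]:
            return size_id
    raise ValueError("no quantizing matrix available for matrix_id %d" % matrix_id)
```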
  • a quantizing matrix with the smallest size difference from the additional quantizing matrix to be generated may be selected, out of the quantizing matrices available, as a reference quantizing matrix. Further, reference quantizing matrix information indicating which quantizing matrix should be referred to may be additionally coded.
  • a first quantizing matrix of a first size is selected as a reference quantizing matrix for generating a second quantizing matrix of a second size.
  • the first size is W 1 × H 1
  • the second size is W 2 × H 2 .
  • a flag “zoFlag” can be set, the flag indicating whether zeroing of high-frequency components is applied to a transform block of the second size.
  • the flag “zoFlag” is set to 1 when zeroing of high-frequency components is applied to the transform block of the second size (W 2 × H 2 ), and is set to 0 when the zeroing is not applied to the same.
  • Ranges of elements of the second quantizing matrix that are actually generated through re-sampling are defined such that the range in the horizontal direction is R WIDTH2 and the range in the vertical direction is R HEIGHT2 ; these ranges are given by the following equations.
  • W R2 = min( W 2 , N TH )
  • H R2 = min( H 2 , N TH )
  • R WIDTH2 = [0, W R2 − 1], R HEIGHT2 = [0, H R2 − 1]
  • W R2 and H R2 denote the number of elements included in the range in the horizontal direction and the number of elements included in the range in the vertical direction, respectively.
  • a ratio r WIDTH2 of the size of the non-zero part (the part to which zeroing is not applied) to the second size (W 2 × H 2 ) in the horizontal direction and a ratio r HEIGHT2 of the same in the vertical direction can be given by the following equations.
  • r WIDTH2 = W R2 / W 2 , r HEIGHT2 = H R2 / H 2
  • the range in the horizontal direction R WIDTH1 of the part of the first quantizing matrix that is referred to at generation of the non-zero part of the second quantizing matrix and the range in the vertical direction R HEIGHT1 of the same can be derived as follows.
  • W R1 = W 1 × r WIDTH2 , H R1 = H 1 × r HEIGHT2
  • R WIDTH1 = [0, W R1 − 1], R HEIGHT1 = [0, H R1 − 1]
  • W R1 and H R1 denote the number of elements included in the range in the horizontal direction and the number of elements included in the range in the vertical direction, respectively.
  • the second quantizing matrix for quantizing transform coefficients (or inversely quantizing quantized transform coefficients) of the subject transform block can be generated by referring to only the partial matrix of the first quantizing matrix of the first size W 1 × H 1 .
  • When zeroing is not applied to the subject transform block, the whole of the first quantizing matrix is referred to.
  • the QM generating unit 117 a reads, for re-sampling, only the elements that are included in reference ranges R WIDTH1 and R HEIGHT1 among the entire elements of the reference quantizing matrix, out of the QM memory unit 115 a and buffers the read elements.
  • When zeroing is not applied, the QM generating unit 117 a reads all the elements of the reference quantizing matrix out of the QM memory unit 115 a and buffers the read elements.
  • s_WIDTH: a ratio of the first size to the second size in the horizontal direction
  • s_HEIGHT: a ratio of the same in the vertical direction
  • a process of up-sampling the first quantizing matrix Q_REF by the nearest neighboring algorithm to derive elements Q_ADD[j][i] of the second quantizing matrix Q_ADD can be expressed by the equations shown below, using the size ratios s_WIDTH and s_HEIGHT. Note that j and i are indexes denoting a row and a column, respectively. It is assumed that all the elements of the second quantizing matrix are reset to 0 before execution of re-sampling.
  • i′ = Floor(i × s_WIDTH), j′ = Floor(j × s_HEIGHT), Q_ADD[j][i] = Q_REF[j′][i′]
  • Floor(x) denotes a function that returns the largest integer equal to or smaller than the argument x.
  • a process of down-sampling the first quantizing matrix Q_REF to derive the elements Q_ADD[j][i] of the second quantizing matrix Q_ADD can be expressed by an equation shown below. It is assumed that all the elements of the second quantizing matrix are reset to 0 before execution of re-sampling.
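A minimal nearest-neighbor re-sampling sketch along the lines described above, assuming the size ratios s_WIDTH = W1 / W2 and s_HEIGHT = H1 / H2. The down-sampling equation is not reproduced in the text above; this sketch simply applies the same index mapping in both directions, which is one possible reading rather than a confirmed formula:

```python
def resample_nearest(q_ref, w2, h2, w_r2=None, h_r2=None):
    """Nearest-neighbor re-sampling of a reference quantizing matrix.

    q_ref      : 2-D list, the first (reference) quantizing matrix (H1 rows, W1 columns)
    w2, h2     : size of the second (additional) quantizing matrix
    w_r2, h_r2 : extent of the non-zero part actually generated
                 (defaults to the full size when zeroing is not applied)
    """
    h1, w1 = len(q_ref), len(q_ref[0])
    s_width, s_height = w1 / w2, h1 / h2          # size ratios (first / second)
    w_r2 = w2 if w_r2 is None else w_r2
    h_r2 = h2 if h_r2 is None else h_r2
    q_add = [[0] * w2 for _ in range(h2)]         # all elements reset to 0 first
    for j in range(h_r2):                         # rows of the non-zero part
        for i in range(w_r2):                     # columns of the non-zero part
            j_ref = int(j * s_height)             # Floor(j × s_HEIGHT)
            i_ref = int(i * s_width)              # Floor(i × s_WIDTH)
            q_add[j][i] = q_ref[j_ref][i_ref]     # nearest-neighbor copy
    return q_add
```

Because the loops never read beyond row Floor((H_R2 − 1) × s_HEIGHT) or column Floor((W_R2 − 1) × s_WIDTH), only the partial matrix of the reference within R_WIDTH1 and R_HEIGHT1 is actually accessed when zeroing is applied.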
  • FIG. 9 is a flowchart illustrating an example of a flow of quantization-related processes executed by the image processing device 10 a of FIG. 8 .
  • process steps not related to quantization are omitted for simplicity of description.
  • the basic QM setting unit 113 sets one or more basic quantizing matrices, which include a first quantizing matrix of a first size (step S 111 ). These basic quantizing matrices are stored in the QM memory unit 115 a.
  • a series of process steps S 113 to S 116 to follow are repeated for each of a plurality of transform blocks that are set in an image by the block setting unit 111 through QTBT block division (step S 112 ).
  • Each transform block for which these process steps are repeated is referred to as subject transform block.
  • the QM generating unit 117 a first determines whether a quantizing matrix of the size corresponding to the block size of the subject transform block is present, by, for example, referring to matrix management information provided by the QM memory unit 115 a (step S 113 ). When such a quantizing matrix is not present, the QM generating unit 117 a executes a quantizing matrix generating process, which will be described later on, to generate an additional quantizing matrix from a reference quantizing matrix (step S 114 ). The additional quantizing matrix generated at this step is stored in the QM memory unit 115 a .
  • When such a quantizing matrix is present, the QM generating unit 117 a reads that quantizing matrix of the size corresponding to the block size of the subject transform block, out of the QM memory unit 115 a (step S 115 ). Subsequently, the quantizing unit 15 a quantizes transform coefficients of the subject transform block, using the additionally generated quantizing matrix or the quantizing matrix read out of the QM memory unit 115 a (step S 116 ).
  • the SL coding unit 119 turns each of one or more basic quantizing matrices set at step S 111 into one-dimensional codes, thus transforming each of the basic quantizing matrices into a scaling list (step S 117 ).
  • the SL coding unit 119 then executes a scaling list coding process, which will be described later on, to generate scaling list data (step S 118 ).
  • the quantized transform coefficients of each transform block, the quantized transform coefficients being generated at step S 116 are coded by the reversible coding unit 16 so that the coded quantized transform coefficients, together with the scaling list data, become part of a coded stream. This process is not illustrated in FIG. 9 .
  • the scaling list data may be updated in any kind of updating unit, such as sequence, picture, slice, and tile.
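The per-block part of FIG. 9 (steps S 112 to S 116 ) can be summarized by the sketch below. The qm_memory mapping, the block attributes, and the two callables standing in for steps S 114 and S 116 are hypothetical names introduced only for illustration:

```python
def quantize_blocks(transform_blocks, qm_memory, generate_additional_qm, quantize):
    """Sketch of the loop over transform blocks in FIG. 9."""
    for block in transform_blocks:                          # step S 112
        size = (block.width, block.height)
        if size not in qm_memory:                           # step S 113
            qm_memory[size] = generate_additional_qm(size)  # step S 114
        qm = qm_memory[size]                                # step S 115
        quantize(block, qm)                                 # step S 116
```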
  • FIG. 10 is a flowchart illustrating an example of a flow of the quantizing matrix generating process that can be executed at step S 114 of FIG. 9 .
  • the QM generating unit 117 a selects a reference quantizing matrix that should be referred to at generation of the quantizing matrix for the subject transform block (step S 121 ).
  • the reference quantizing matrix may be selected according to a predetermined specification-based rule (e.g., a rule to select a quantizing matrix of a maximum size or a size closest to the size of the subject transform block, out of quantizing matrices available).
  • the reference quantizing matrix may be selected dynamically.
  • the QM generating unit 117 a determines whether zeroing of high-frequency components is applied to the subject transform block (step S 122 ).
  • the QM generating unit 117 a may determine whether zeroing of high-frequency components is applied to the subject transform block, according to a predetermined specification-based rule (e.g., a rule to make a determination depending on whether the length of at least one side of the subject transform block is larger than a certain threshold).
  • a determination on whether zeroing of high-frequency components is applied to the subject transform block may be changed dynamically.
  • When zeroing is not applied to the subject transform block, the QM generating unit 117 a reads the whole of the reference quantizing matrix selected at step S 121 , out of the QM memory unit 115 a , and buffers the read reference quantizing matrix by storing it in an internal memory (step S 123 ). The QM generating unit 117 a then re-samples the read reference quantizing matrix to generate an additional quantizing matrix (step S 124 ).
  • When zeroing is applied to the subject transform block, the QM generating unit 117 a calculates a ratio of the size of a non-zero part to the size of the subject transform block (e.g., the above-described ratios r_WIDTH2 and r_HEIGHT2 ) (step S 125 ). Subsequently, according to the calculated ratio, the QM generating unit 117 a reads a partial matrix of the selected reference quantizing matrix, out of the QM memory unit 115 a , and buffers the read partial matrix by storing it in the internal memory (step S 126 ). The QM generating unit 117 a then re-samples the read partial matrix to generate a non-zero part of the additional quantizing matrix (step S 127 ).
  • the QM generating unit 117 a then stores the generated additional quantizing matrix in the QM memory unit 115 a (step S 168 ).
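Putting the branches of FIG. 10 together, the following sketch shows one way the generating process could look. The reference-selection rule (largest stored size), the zeroing rule, and the threshold value of 32 are assumptions made here for illustration; resample_nearest() is the sketch shown earlier:

```python
def generate_additional_qm(target_w, target_h, qm_memory, zeroing_threshold=32):
    """Sketch of the quantizing matrix generating process of FIG. 10."""
    # Step S 121: select a reference quantizing matrix
    # (here simply the largest size stored in qm_memory, keyed by (w, h)).
    ref_size = max(qm_memory.keys())
    q_ref = qm_memory[ref_size]
    # Step S 122: decide whether zeroing applies to the subject block (assumed rule).
    zeroing = target_w > zeroing_threshold or target_h > zeroing_threshold
    if not zeroing:
        # Steps S 123 to S 124: re-sample the whole reference matrix.
        return resample_nearest(q_ref, target_w, target_h)
    # Steps S 125 to S 127: compute the non-zero extent and re-sample only the
    # corresponding partial matrix of the reference quantizing matrix.
    w_r2 = min(target_w, zeroing_threshold)
    h_r2 = min(target_h, zeroing_threshold)
    return resample_nearest(q_ref, target_w, target_h, w_r2, h_r2)
```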
  • FIG. 11 is a flowchart illustrating an example of a flow of the scaling list coding process that can be executed at step S 118 of FIG. 9 .
  • the SL coding unit 119 codes size IDs for identifying sizes of basic quantizing matrices to generate size specifying information (step S 131 ).
  • When the basic quantizing matrices are always square matrices, one size ID is coded for one size, as shown in table 3.
  • When the basic quantizing matrices include non-square matrices, however, two size IDs corresponding to two directions may be coded for one size.
  • the size specifying information may be generated by deducting a preset offset value from a size ID.
  • a series of process steps S 133 to S 136 to follow are repeated for each of combinations of prediction types and color components, that is, each of quantizing matrix types identified by matrix IDs (step S 132 ).
  • a matrix ID for which the process steps are repeated is referred to as subject matrix ID.
  • the SL coding unit 119 determines whether or not to explicitly code a series of element values of a scaling list associated with the subject matrix ID (step S 133 ). In other words, the SL coding unit 119 determines a coding method for the scaling list. For example, if the scaling list associated with the subject matrix ID is identical with a scaling list associated with a different matrix ID (for a matrix with the same size ID), the SL coding unit 119 can select a simpler method of coding reference scaling information only, instead of coding the element values.
  • When the element values are not to be coded explicitly, the SL coding unit 119 determines a reference scaling list (step S 134 ), and codes reference scaling list information indicating the determined reference scaling list (step S 135 ).
  • When the element values are to be coded explicitly, the SL coding unit 119 codes a series of element values of the scaling list derived at step S 117 of FIG. 9 by, for example, differential pulse code modulation (DPCM) to generate scaling list data (step S 136 ).
  • the flowchart illustrated in FIG. 11 depicts an example in which size IDs for basic quantizing matrices are coded explicitly. In another example, coding of the sizes of the basic quantizing matrices may be omitted. In still another example, the number of sizes to be signaled may also be coded as size count information.
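Step S 136 mentions DPCM as one way to code the series of element values. A minimal DPCM sketch is shown below; the initial predictor value of 8 and the wrap of each difference into the signed range [−128, 127] are assumptions borrowed from common scaling-list practice, not details confirmed by the text above:

```python
def dpcm_encode_scaling_list(scaling_list):
    """DPCM coding of a scaling list (sketch): each element is coded as a
    difference with the preceding element, starting from a predictor of 8."""
    deltas, pred = [], 8
    for value in scaling_list:                     # element values in scan order
        delta = (value - pred + 128) % 256 - 128   # keep the delta in [-128, 127]
        deltas.append(delta)
        pred = value
    return deltas
```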
  • FIG. 12 is a block diagram illustrating an example of a configuration of an image processing device 60 a according to the first embodiment, the image processing device 60 a having a functionality of a decoder.
  • the image processing device 60 a includes a decoding control unit 61 , a reversible decoding unit 62 , an inverse quantizing unit 63 a , an inverse orthogonal transformation unit 64 , an adding unit 65 , an in-loop filter 66 , a rearrangement buffer 67 , a frame memory 68 , selectors 70 and 71 , an intra-prediction unit 80 , an inter-prediction unit 85 , and a QM memory unit 165 a.
  • the decoding control unit 61 controls the overall decoder functionality of the image processing device 60 a , which will be described in detail below.
  • the decoding control unit 61 includes a block setting unit 161 .
  • the block setting unit 161 is a module that executes the block setting process step S 12 , which has been described above referring to FIG. 4 . This module will be described further later on.
  • the reversible decoding unit 62 parses control parameters included in an incoming coded stream from the transmission unit (not illustrated), such as a communication interface and an interface connecting to peripheral equipment.
  • the control parameters parsed by the reversible decoding unit 62 include, for example, the above-mentioned block division data and scaling list data.
  • the block division data is output to the decoding control unit 61 .
  • the reversible decoding unit 62 includes an SL decoding unit 163 .
  • the SL decoding unit 163 is a module that executes the scaling list decoding process step S 13 , which has been described above referring to FIG. 4 . This module will be described further later on.
  • the reversible decoding unit 62 decodes the coded stream to generate quantized data on each of one or more transform blocks.
  • the reversible decoding unit 62 outputs the generated quantized data to the inverse quantizing unit 63 a.
  • For each of one or more transform blocks set in an image, the inverse quantizing unit 63 a inversely quantizes the incoming quantized data, i.e., quantized transform coefficients from the reversible decoding unit 62 , to restore transform coefficients.
  • the inverse quantizing unit 63 a selects a quantizing matrix equal in size to a transform block out of a plurality of quantizing matrices stored in the QM memory unit 165 a and uses the selected quantizing matrix to inversely quantize quantized transform coefficients of the transform block. Under control by the decoding control unit 61 , the inverse quantizing unit 63 a skips inverse quantization of frequency components forcibly rendered zero.
  • the inverse quantizing unit 63 a then outputs the restored transform coefficients to the inverse orthogonal transformation unit 64 .
  • the inverse quantizing unit 63 a includes a QM generating unit 167 a .
  • the QM generating unit 167 a is a module that executes the QM generating process step S 14 , which has been described above referring to FIG. 4 . This module will be described further later on.
  • the inverse orthogonal transformation unit 64 executes inverse orthogonal transformation.
  • This inverse orthogonal transformation may be executed, for example, as inverse discrete cosine transformation or inverse discrete sine transformation. More specifically, for each transform block, the inverse orthogonal transformation unit 64 subjects transform coefficients in the frequency domain, the transform coefficients coming from the inverse quantizing unit 63 a , to inverse orthogonal transformation, thereby generating predicted errors, which represent a signal sample in the spatial domain. The inverse orthogonal transformation unit 64 then outputs the generated predicted errors to the adding unit 65 .
  • the adding unit 65 adds up the incoming predicted errors from the inverse orthogonal transformation unit 64 and an incoming predicted image from the selector 71 , to generate a decoded image.
  • the adding unit 65 then outputs the generated decoded image to the in-loop filter 66 and to the frame memory 68 .
  • the in-loop filter 66 is composed of a series of filters that are applied to the decoded image for the purpose of improving its quality.
  • the in-loop filter 66 may include one or more of, for example, a bilateral filter, a de-blocking filter, an adaptive offset filter, and an adaptive loop filter, which are described in the reference document REF3.
  • the in-loop filter 66 outputs the decoded image having been filtered through the series of filters, to the rearrangement buffer 67 and to the frame memory 68 .
  • the rearrangement buffer 67 rearranges incoming images from the in-loop filter 66 to generate a time-based sequence of images making up a video.
  • the rearrangement buffer 67 then outputs the generated sequence of images to external equipment (e.g., a display connected to the image processing device 60 a ).
  • the frame memory 68 stores the incoming pre-filtering decoded image from the adding unit 65 and the incoming post-filtering decoded image from the in-loop filter 66 .
  • the selector 70 switches a destination to which an image from the frame memory 68 is sent, between the intra-prediction unit 80 and the inter-prediction unit 85 .
  • When intra-prediction is performed, the selector 70 outputs the pre-filtering decoded image supplied from the frame memory 68 , as a reference image, to the intra-prediction unit 80 .
  • When inter-prediction is performed, the selector 70 outputs the post-filtering decoded image as a reference image, to the inter-prediction unit 85 .
  • the intra-prediction unit 80 performs intra-prediction, based on information on intra-prediction obtained by parsing the coded stream and on a reference image from the frame memory 68 , to generate a predicted image.
  • the intra-prediction unit 80 then outputs the generated predicted image to the selector 71 .
  • the inter-prediction unit 85 performs inter-prediction, based on information on inter-prediction obtained by parsing the coded stream and on a reference image from the frame memory 68 , to generate a predicted image.
  • the inter-prediction unit 85 then outputs the generated predicted image to the selector 71 .
  • the block setting unit 161 , the SL decoding unit 163 , the QM memory unit 165 a , and the QM generating unit 167 a are mainly involved in quantizing matrix generation performed by the decoder.
  • the block setting unit 161 sets a plurality of transform blocks in each image through QTBT block division, which is executed according to block division data.
  • the size of a transform block set by the block setting unit 161 may range, for example, from 2 × 2 to 128 × 128.
  • the shape of the transform block may be square or non-square. Some examples of the shapes and sizes of transform blocks are illustrated in FIG. 2 .
  • the SL decoding unit 163 decodes scaling list data to generate basic quantizing matrices of one or more sizes.
  • the SL decoding unit 163 decodes size specifying information indicating the sizes of quantizing matrices generated from the scaling list data.
  • the SL decoding unit 163 recognizes the size of the quantizing matrix signaled explicitly as a basic quantizing matrix via the scaling list data.
  • the SL decoding unit 163 may also decode size count information indicating the number of sizes to be signaled. In this case, a bit stream constraint may be imposed to provide that when two or more sizes are signaled, they should be different from each other (i.e., should be identified with different size IDs).
  • In another example, the size of a basic quantizing matrix signaled explicitly via the scaling list data is predetermined as a specification-based rule.
  • In this case, the scaling list data does not include the above size specifying information and size count information, and the SL decoding unit 163 decodes the scaling list data on each of one or more predetermined sizes to generate quantizing matrices of the one or more sizes.
  • the SL decoding unit 163 stores basic quantizing matrices generated based on the scaling list data in the QM memory unit 165 a .
  • a basic quantizing matrix may be generated by decoding a series of differentially coded element values or by referring to a basic quantizing matrix of a different type.
  • In one example, basic quantizing matrices include square quantizing matrices only.
  • In another example, basic quantizing matrices include both square quantizing matrices and non-square quantizing matrices. Examples of the syntax of scaling list data have been described above in [2-2. Examples of Syntax and Semantics].
  • the QM memory unit 165 a is a memory module that stores various types of quantizing matrices having various sizes, the quantizing matrices being used by the image processing device 60 a .
  • Quantizing matrices stored in the QM memory unit 165 a include basic quantizing matrices generated by the SL decoding unit 163 and additional quantizing matrices additionally generated by the QM generating unit 167 a , which will be described later on.
  • a basic quantizing matrix is generated prior to inverse quantization and inverse orthogonal transformation performed across a plurality of transform blocks, and is stored in the QM memory unit 165 a through these processes of inverse orthogonal transformation and inverse quantization.
  • An additional quantizing matrix is generated as needed when quantized transform coefficients of each transform block are inversely quantized, and is stored in the QM memory unit 165 a .
  • the QM memory unit 165 a may manage matrix management information indicating the size of the quantizing matrix present already, similarly to the QM memory unit 115 a on the encoder side.
  • the matrix management information is composed of, for example, a set of flags indicating whether a quantizing matrix identified with two size IDs corresponding respectively to a horizontal size and a vertical size is present.
  • the QM generating unit 167 a determines whether a quantizing matrix equal in size to the transform block is already generated based on scaling list data, by referring to the matrix management information provided by the QM memory unit 165 a .
  • When such a quantizing matrix is already generated, the QM generating unit 167 a reads that quantizing matrix out of the QM memory unit 165 a .
  • When such a quantizing matrix is not generated yet, the QM generating unit 167 a re-samples one of the basic quantizing matrices already generated, or a partial matrix thereof, to generate an additional quantizing matrix.
  • the QM generating unit 167 a includes a memory that temporarily stores a reference quantizing matrix or its partial matrix to be re-sampled.
  • When zeroing of high-frequency components is not applied to a transform block of a subject size, for example, the QM generating unit 167 a generates a quantizing matrix for the transform block by referring to the whole of a reference quantizing matrix. When zeroing of high-frequency components is applied to a transform block of another subject size, on the other hand, the QM generating unit 167 a generates a quantizing matrix for the transform block by referring to only the partial matrix of a reference quantizing matrix. As described above, a ratio of the size of the partial matrix referred to, to the size of the reference quantizing matrix is equal to a ratio of the size of a non-zero part to the size of the additional quantizing matrix generated.
  • the inverse quantizing unit 63 a inversely quantizes quantized transform coefficients of each transform block, using one of the various quantizing matrices that are generated while saving memory resources.
  • the QM generating unit 167 a may determine whether zeroing is applied to a transform block, according to a specification-based rule that predetermines the size of the transform block to which zeroing is to be applied. In another example, the QM generating unit 167 a may determine whether zeroing is applied to a transform block, based on one or more control parameters that can be additionally obtained by parsing the coded stream, such as a zeroing flag indicating whether zeroing is applied to a transform block and zeroing size information indicating the size of the transform block to which zeroing is to be applied. Examples of these control parameters have been described above in [1-4. Zeroing Control]
  • FIG. 13 is a flowchart illustrating an example of a flow of inverse-quantization-related processes executed by the image processing device 60 a of FIG. 12 .
  • process steps not related to inverse quantization are omitted for simplicity of description.
  • the SL decoding unit 163 executes a scaling list data decoding process, which will be described later on, to generate scaling lists expressing basic quantizing matrices of one or more sizes (step S 161 ). Subsequently, the SL decoding unit 163 maps each scaling list, which is an array of one-dimensional element values, into a two-dimensional array of element values through a certain scan sequence, thus transforming the scaling list into a basic quantizing matrix (step S 162 ). The QM memory unit 165 a stores the basic quantizing matrix generated in this manner (step S 163 ).
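Step S 162 maps a one-dimensional scaling list onto a two-dimensional matrix through a certain scan sequence. The scan itself is not specified in the text above, so the following sketch uses a plain raster scan purely as a placeholder:

```python
def scaling_list_to_matrix(scaling_list, size):
    """Map a 1-D scaling list onto a size x size matrix (sketch of step S 162).

    A raster scan is assumed here; the actual scan sequence may differ."""
    return [scaling_list[row * size:(row + 1) * size] for row in range(size)]
```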
  • a series of process steps S 165 to S 168 to follow are repeated for each of a plurality of transform blocks that are set in an image by the block setting unit 161 through QTBT block division (step S 164 ).
  • Each transform block for which these process steps are repeated is referred to as subject transform block.
  • the QM generating unit 167 a first determines whether a quantizing matrix of the size corresponding to the block size of the subject transform block is present, by, for example, referring to matrix management information provided by the QM memory unit 165 a (step S 165 ). When such a quantizing matrix is not present, the QM generating unit 167 a executes the quantizing matrix generating process, which has been described above referring to FIG. 10 , to generate an additional quantizing matrix from a reference quantizing matrix (step S 166 ). The additional quantizing matrix generated at this step is stored in the QM memory unit 165 a .
  • When such a quantizing matrix is present, the QM generating unit 167 a reads that quantizing matrix of the size corresponding to the block size of the subject transform block, out of the QM memory unit 165 a (step S 167 ). Subsequently, the inverse quantizing unit 63 a inversely quantizes quantized transform coefficients of the subject transform block, using the additionally generated quantizing matrix or the quantizing matrix read out of the QM memory unit 165 a (step S 168 ).
  • Transform coefficients in the frequency domain, which are generated as a result of the inverse quantization at step S 168 , are transformed by the inverse orthogonal transformation unit 64 into predicted errors, which represent a signal sample in the spatial domain. This process is not depicted in FIG. 13 .
  • the quantizing matrix may be updated in any kind of updating unit, such as sequence, picture, slice, and tile.
  • FIG. 14 is a flowchart illustrating an example of a flow of the scaling list data decoding process that can be executed at step S 161 of FIG. 13 .
  • the SL decoding unit 163 decodes size specifying information to set a size ID for identifying the size of a quantizing matrix corresponding to a scaling list to be generated (step S 171 ).
  • When the basic quantizing matrices are square matrices, one size ID is set for one size, as shown in table 3.
  • When the basic quantizing matrices include non-square matrices, two size IDs corresponding to two directions may be set for one size, based on the size specifying information.
  • the size ID may be derived by adding a preset offset value to a value indicated by the size specifying information.
  • a series of process steps S 173 to S 177 to follow are repeated for each of combinations of prediction types and color components, that is, each of quantizing matrix types identified by matrix IDs (step S 172 ).
  • a matrix ID for which the process steps are repeated is referred to as subject matrix ID.
  • the SL decoding unit 163 determines whether a series of element values of a scaling list associated with the subject matrix ID are explicitly coded (step S 173 ). For example, the SL decoding unit 163 can determine whether the series of element values are coded or only the reference scaling list information is coded, based on the size ID set at step S 171 and on a control flag associated with the subject matrix ID (e.g., “scaling_list_pred_mode_flag[sizeId][matrixId]” on table 1).
  • When the series of element values are not coded explicitly, the SL decoding unit 163 parses the reference scaling list information to derive a matrix ID for a basic quantizing matrix to be referred to (step S 174 ). The SL decoding unit 163 then generates the scaling list for the subject matrix ID, based on a reference scaling list that is referred to using the derived matrix ID as a key (step S 175 ).
  • When the series of element values are coded explicitly, the SL decoding unit 163 parses difference values of the series of element values, the difference values being differentially coded in the scaling list data (step S 176 ). The SL decoding unit 163 then decodes those difference values by differential pulse-code modulation (DPCM) to generate the scaling list for the subject matrix ID (step S 177 ).
  • the flowchart illustrated in FIG. 14 depicts an example in which the size specifying information indicating the size ID for the basic quantizing matrix is decoded.
  • In another example, the size of the basic quantizing matrix may be predetermined based on specifications so that decoding of the size specifying information is skipped.
  • In still another example, size count information indicating the number of sizes to be signaled may also be decoded.
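For steps S 176 and S 177 , a decoder-side counterpart of the DPCM sketch given for FIG. 11 might look as follows; the initial predictor of 8 and the modulo-256 wrap mirror the assumptions made there:

```python
def dpcm_decode_scaling_list(deltas):
    """Reconstruct a scaling list from DPCM-coded differences (sketch)."""
    values, pred = [], 8
    for delta in deltas:
        pred = (pred + delta + 256) % 256   # undo the wrapped difference
        values.append(pred)
    return values
```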
  • FIG. 15 is a block diagram illustrating an example of a configuration of an image processing device 10 b according to a second embodiment, the image processing device 10 b having the functionality of the encoder.
  • the image processing device 10 b includes the coding control unit 11 , the rearrangement buffer 12 , the deducting unit 13 , the orthogonal transformation unit 14 , a quantizing unit 15 b , the reversible coding unit 16 , the accumulation buffer 17 , the rate control unit 18 , the inverse quantizing unit 21 , the inverse orthogonal transformation unit 22 , the adding unit 23 , the in-loop filter 24 , the frame memory 25 , the switch 26 , the intra-prediction unit 30 , the inter-prediction unit 35 , the mode setting unit 40 , a QM memory unit 115 b , and a QM generating unit 117 b .
  • the coding control unit 11 includes the block setting unit 111 and the basic QM setting unit 113 , similarly to the first embodiment.
  • the quantizing unit 15 b is supplied with incoming transform coefficients from the orthogonal transformation unit 14 and with a rate control signal from the rate control unit 18 . For each of one or more transform blocks in an image to be coded, the quantizing unit 15 b quantizes the transform coefficients, using a quantizing matrix equal in size to the transform block, to generate quantized transform coefficients (quantized data). Under control by the coding control unit 11 , the quantizing unit 15 b skips quantization of frequency components forcibly rendered zero that are included in the transform coefficients. The quantizing unit 15 b then outputs the generated quantized data to the reversible coding unit 16 and to the inverse quantizing unit 21 . The quantizing unit 15 b may change a bit rate of the quantized data by switching a quantizing step, based on the rate control signal.
  • the block setting unit 111 , the basic QM setting unit 113 , the QM memory unit 115 b , the QM generating unit 117 b , and the SL coding unit 119 are mainly involved in quantizing matrix generation performed by the encoder.
  • the functions of these components have parts different from the functions of the corresponding components according to the first embodiment. Those different parts will hereinafter be described.
  • the QM memory unit 115 b is a memory module that stores various types of quantizing matrices having various sizes, the quantizing matrices being used by the image processing device 10 b .
  • Quantizing matrices stored in the QM memory unit 115 b include basic quantizing matrices set by the basic QM setting unit 113 and additional quantizing matrices additionally generated by the QM generating unit 117 b , which will be described later on.
  • both the basic quantizing matrices and the additional quantizing matrices are generated prior to orthogonal transformation and quantization performed across a plurality of transform blocks, and are stored in the QM memory unit 115 b through these processes of orthogonal transformation and quantization.
  • the QM memory unit 115 b may manage matrix management information that is internal control information indicating the size of the quantizing matrix present already.
  • Before quantization of transform coefficients of the plurality of transform blocks, the QM generating unit 117 b generates an additional quantizing matrix corresponding to each of a plurality of size candidates for the transform blocks. For example, for each of size candidates for quantizing matrices that are judged to be quantizing matrices not generated yet, based on the matrix management information provided by the QM memory unit 115 b , the QM generating unit 117 b selects one of already generated basic quantizing matrices, as a reference quantizing matrix, and re-samples the selected reference quantizing matrix, thereby generating an additional quantizing matrix corresponding to the size candidate.
  • the QM generating unit 117 b then stores the generated additional quantizing matrix in the QM memory unit 115 b .
  • the QM generating unit 117 b includes a memory that temporarily stores the reference quantizing matrix or its partial matrix to be re-sampled.
  • When zeroing is not applied to a transform block of the subject size candidate, the QM generating unit 117 b refers to the whole of the reference quantizing matrix, similarly to the QM generating unit 117 a according to the first embodiment.
  • When zeroing is applied to a transform block of the subject size candidate, the QM generating unit 117 b refers to only the partial matrix of the reference quantizing matrix. In the latter case, as described above, a ratio of the size of the partial matrix referred to, to the size of the reference quantizing matrix is equal to a ratio of the size of a non-zero part to the size of the additional quantizing matrix generated.
  • FIG. 16 is a flowchart illustrating an example of a flow of quantization-related processes executed by the image processing device 10 b of FIG. 15 .
  • process steps not related to quantization are omitted for simplicity of description.
  • the basic QM setting unit 113 sets one or more basic quantizing matrices, which include a first quantizing matrix of a first size (step S 211 ). These basic quantizing matrices are stored in the QM memory unit 115 b.
  • Process steps S 213 and S 214 to follow are repeated for each of one or more size candidates, that is, for each of combinations of size IDs in the horizontal direction and size IDs in the vertical direction (step S 212 a ).
  • steps S 213 and S 214 are repeated for each of a plurality of matrix IDs corresponding to combinations of prediction types and color components (step S 212 b ).
  • a size candidate and a matrix ID for which the process steps are repeated are referred to respectively as the subject size candidate and the subject matrix ID.
  • the QM generating unit 117 b first determines whether a quantizing matrix corresponding to the subject size candidate and the subject matrix ID is present, by, for example, referring to matrix management information provided by the QM memory unit 115 b (step S 213 ). When such a quantizing matrix is not present, the QM generating unit 117 b executes the quantizing matrix generating process, which has been described above referring to FIG. 10 , to generate an additional quantizing matrix from a reference quantizing matrix (step S 214 ). The additional quantizing matrix generated at this step is stored in the QM memory unit 115 b.
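A compact sketch of the pre-generation loop of steps S 212 a to S 214 is given below. The keying of qm_memory by (size candidate, matrix ID) pairs and the generate_qm callable are hypothetical names used only for illustration:

```python
def pregenerate_quantizing_matrices(size_candidates, matrix_ids, qm_memory, generate_qm):
    """Sketch of steps S 212 a to S 214 of FIG. 16: generate every missing
    quantizing matrix before any transform block is quantized."""
    for size in size_candidates:                            # step S 212 a
        for matrix_id in matrix_ids:                        # step S 212 b
            if (size, matrix_id) not in qm_memory:          # step S 213
                qm_memory[(size, matrix_id)] = generate_qm(size, matrix_id)  # step S 214
```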
  • the quantizing unit 15 b reads a quantizing matrix corresponding to the block size of the subject transform block, out of the QM memory unit 115 b (step S 216 ). The quantizing unit 15 b then quantizes transform coefficients of the subject transform block, using the read quantizing matrix (step S 217 ).
  • the SL coding unit 119 turns each of one or more basic quantizing matrices set at step S 211 into one-dimensional codes, thus transforming each of the basic quantizing matrices into a scaling list (step S 218 ).
  • the SL coding unit 119 then executes the scaling list coding process, which has been described above referring to FIG. 11 , to generate scaling list data (step S 219 ).
  • the quantized transform coefficients of each transform block, the quantized transform coefficients being generated at step S 217 , are coded by the reversible coding unit 16 so that the coded quantized transform coefficients, together with the scaling list data, become part of a coded stream. This process is not illustrated in FIG. 16 .
  • the scaling list data may be updated in any kind of updating unit, such as sequence, picture, slice, and tile.
  • FIG. 17 is a block diagram illustrating an example of a configuration of the image processing device 60 b according to the second embodiment, the image processing device 60 b having a functionality of the decoder.
  • the image processing device 60 b includes the decoding control unit 61 , the reversible decoding unit 62 , an inverse quantizing unit 63 b , the inverse orthogonal transformation unit 64 , the adding unit 65 , the in-loop filter 66 , the rearrangement buffer 67 , the frame memory 68 , the selectors 70 and 71 , the intra-prediction unit 80 , the inter-prediction unit 85 , a QM memory unit 165 b , and a QM generating unit 167 b .
  • the decoding control unit 61 includes the block setting unit 161 , similarly to the first embodiment.
  • the reversible decoding unit 62 includes the SL decoding unit 163 , similarly to the first embodiment.
  • For each of one or more transform blocks set in an image, the inverse quantizing unit 63 b inversely quantizes incoming quantized data, i.e., quantized transform coefficients from the reversible decoding unit 62 , to restore transform coefficients.
  • the inverse quantizing unit 63 b selects a quantizing matrix equal in size to a transform block out of a plurality of quantizing matrices stored in the QM memory unit 165 b and uses the selected quantizing matrix to inversely quantize quantized transform coefficients of the transform block. Under control by the decoding control unit 61 , the inverse quantizing unit 63 b skips inverse quantization of frequency components forcibly rendered zero.
  • the inverse quantizing unit 63 b then outputs the restored transform coefficients to the inverse orthogonal transformation unit 64 .
  • the block setting unit 161 , the SL decoding unit 163 , the QM memory unit 165 b , and the QM generating unit 167 b are mainly involved in quantizing matrix generation performed by the decoder.
  • the functions of these components have parts different from the functions of the corresponding components according to the first embodiment. Those different parts will hereinafter be described.
  • the QM memory unit 165 b is a memory module that stores various types of quantizing matrices having various sizes, the quantizing matrices being used by the image processing device 60 b .
  • Quantizing matrices stored in the QM memory unit 165 b include basic quantizing matrices generated by the SL decoding unit 163 and additional quantizing matrices additionally generated by the QM generating unit 167 b , which will be described later on.
  • both the basic quantizing matrices and the additional quantizing matrices are generated prior to inverse quantization and inverse orthogonal transformation performed across a plurality of transform blocks, and are stored in the QM memory unit 165 b through these processes of inverse quantization and inverse orthogonal transformation.
  • the QM memory unit 165 b may manage matrix management information that is internal control information indicating the size of the quantizing matrix present already.
  • Before inverse quantization of transform coefficients of the plurality of transform blocks, the QM generating unit 167 b generates an additional quantizing matrix corresponding to each of a plurality of size candidates for the transform blocks. For example, for each of size candidates for quantizing matrices that are judged to be quantizing matrices not generated yet, based on the matrix management information provided by the QM memory unit 165 b , the QM generating unit 167 b selects one of already generated basic quantizing matrices, as a reference quantizing matrix, and re-samples the selected reference quantizing matrix, thereby generating an additional quantizing matrix corresponding to the size candidate.
  • the QM generating unit 167 b then stores the generated additional quantizing matrix in the QM memory unit 165 b .
  • the QM generating unit 167 b includes a memory that temporarily stores the reference quantizing matrix or its partial matrix to be re-sampled.
  • When zeroing is not applied to a transform block of the subject size candidate, the QM generating unit 167 b refers to the whole of the reference quantizing matrix, similarly to the QM generating unit 167 a according to the first embodiment.
  • When zeroing is applied to a transform block of the subject size candidate, the QM generating unit 167 b refers to only the partial matrix of the reference quantizing matrix. In the latter case, as described above, a ratio of the size of the partial matrix referred to, to the size of the reference quantizing matrix is equal to a ratio of the size of a non-zero part to the size of the additional quantizing matrix generated.
  • FIG. 18 is a flowchart illustrating an example of a flow of inverse-quantization-related processes executed by the image processing device 60 b of FIG. 17 .
  • process steps not related to inverse quantization are omitted for simplicity of description.
  • the SL decoding unit 163 executes the scaling list data decoding process, which has been described above referring to FIG. 14 , to generate scaling lists expressing basic quantizing matrices of one or more sizes (step S 261 ). Subsequently, the SL decoding unit 163 maps each scaling list, which is an array of one-dimensional element values, into a two-dimensional array of element values through a certain scan sequence, thus transforming the scaling list into a basic quantizing matrix (step S 262 ). The QM memory unit 165 b stores the basic quantizing matrix generated in this manner (step S 263 ).
  • Process steps S 265 and S 266 to follow are repeated for each of one or more size candidates, that is, for each of combinations of size IDs in the horizontal direction and size IDs in the vertical direction (step S 264 a ).
  • steps S 265 and S 266 are repeated for each of a plurality of matrix IDs corresponding to combinations of prediction types and color components (step S 264 b ).
  • a size candidate and a matrix ID for which the process steps are repeated are referred to respectively as the subject size candidate and the subject matrix ID.
  • the QM generating unit 167 b first determines whether a quantizing matrix corresponding to the subject size candidate and the subject matrix ID is present, by, for example, referring to matrix management information provided by the QM memory unit 165 b (step S 265 ). When such a quantizing matrix is not present, the QM generating unit 167 b executes the quantizing matrix generating process, which has been described above referring to FIG. 10 , to generate an additional quantizing matrix from a reference quantizing matrix (step S 266 ). The additional quantizing matrix generated at this step is stored in the QM memory unit 165 b.
  • process steps S 268 and S 269 are repeated for each of a plurality of transform blocks that are set in an image by the block setting unit 161 (step S 267 ).
  • Each transform block for which these process steps are repeated is referred to as subject transform block.
  • the inverse quantizing unit 63 b reads a quantizing matrix corresponding to the block size of the subject transform block, out of the QM memory unit 165 b (step S 268 ). The inverse quantizing unit 63 b then inversely quantizes quantized transform coefficients of the subject transform block, using the read quantizing matrix (step S 269 ).
  • Transform coefficients in the frequency domain, which are generated as a result of the inverse quantization at step S 269 , are transformed by the inverse orthogonal transformation unit 64 into predicted errors, which represent a signal sample in the spatial domain. This process is not depicted in FIG. 18 .
  • the quantizing matrix may be updated in any kind of updating unit, such as sequence, picture, slice, and tile.
  • a number of quantizing matrices that may possibly be needed for the quantization or inverse quantization process are generated in advance before execution of processing on a plurality of transform blocks.
  • Such a configuration eliminates a need of calculations for additionally generating quantizing matrices in shortage after the start of processing for each block in the image, thus improving processing performance.
  • the above embodiments may be achieved using any one of these means: software, hardware, or a combination of software and hardware.
  • When the image processing devices 10 a , 10 b , 60 a , and 60 b use software, computer programs making up the software are stored beforehand in, for example, computer-readable media (non-transitory media) incorporated in the devices or disposed outside the devices.
  • the program is loaded onto a random access memory (RAM) and is executed by a processor, such as a central processing unit (CPU).
  • FIG. 19 is a block diagram illustrating an example of a hardware configuration of a device to which the above embodiments can be applied.
  • a device 800 includes a system bus 810 , an image processing chip 820 , and an off-chip memory 890 .
  • the image processing chip 820 includes n processing circuits 830 - 1 , 830 - 2 , . . . 830 - n (n denotes an integer of 1 or larger), a reference buffer 840 , a system bus interface 850 , and a local bus interface 860 .
  • the system bus 810 provides a communication path between the image processing chip 820 and an external module (e.g., a central control function, an application function, a communication interface, a user interface, or the like).
  • the processing circuits 830 - 1 , 830 - 2 , . . . 830 - n are connected to the system bus 810 via the system bus interface 850 and to the off-chip memory 890 via the local bus interface 860 .
  • the processing circuits 830 - 1 , 830 - 2 , . . . 830 - n are also accessible to the reference buffer 840 , which is equivalent to an on-chip memory (e.g., a static random-access memory (SRAM)).
  • the on-chip memory may include, for example, the internal memory M 12 illustrated in FIG. 4 or the internal memory M 22 illustrated in FIG. 7 .
  • the off-chip memory 890 may include, for example, the QM memory M 11 illustrated in FIG. 4 or the QM memory M 21 illustrated in FIG. 7 .
  • the off-chip memory 890 may further include a frame memory that stores image data processed by the image processing chip 820 .
  • the ratio of the size of the partial matrix, which is referred to at generation of the second quantizing matrix, to the first size is equal to the ratio of the size of the non-zero part to the second size.
  • the size specifying information indicating the size of a basic quantizing matrix generated from scaling list data is coded or decoded explicitly, as a control parameter. This means that the size of the basic quantizing matrix, on which various quantizing matrices are based, can be signaled as a variable size, allowing the user to use quantizing matrices flexibly in various intended forms.
  • In another example, scaling list data on each of one or more predetermined sizes is decoded, and the one or more sizes include the size of the basic quantizing matrix. In this case, the necessity of coding the size specifying information is eliminated.
  • the rule to determine the size of the transform block to which zeroing of high-frequency components is to be applied is set in advance, based on the specifications. This case does not require coding of control parameters for determining whether or not to apply zeroing of high-frequency components to a transform block.
  • the zeroing flag associated with each transform block, the zeroing flag indicating whether zeroing of high-frequency components is applied to the transform block, is coded or decoded explicitly, as a control parameter. This allows dynamically controlling application or non-application of zeroing.
  • the zeroing size information indicating the size of the transform block to which zeroing of high-frequency components is applied is coded or decoded explicitly, as a control parameter. This allows dynamically controlling a size to which zeroing is to be applied. According to these examples, a system capable of flexibly reproducing an image expressing even minute high-frequency components is provided according to the user's needs or system requirements or constraints.
  • the partial matrix is loaded onto the memory and is stored temporarily therein.
  • a sufficient amount of memory resources is secured, which optimizes the device performance, such as the encoder function and decoder function.
  • the technique according to the present disclosure can be applied to any given video coding method (video decoding method).
  • specifications of processes related to coding and decoding such as transformation (inverse transformation), quantization (inverse quantization), coding (decoding), prediction, and filtering, do not put limitations on the technical scope of the present disclosure unless such specifications clearly lead to any inconsistency.
  • some of those processes may be omitted unless the omission clearly leads to any inconsistency.
  • block refers to any given partial area of an image (such as a picture, a slice, or a tile) (in an exceptional case, this term may refer to a functional block that exerts some functionality).
  • the sizes and shapes of blocks do not put limitations on the scope of the technique according to the present disclosure.
  • the concept of “block” encompasses various partial areas (process units) mentioned in the reference documents REF1 to REF3, such as a TB (transform block), TU (transform unit), PB (prediction block), PU (prediction unit), SCU (smallest coding unit), CU (coding unit), LCU (largest coding unit), CTB (coding tree block), CTU (coding tree unit), sub-block, macro block, tile, and slice.
  • Ranges to which various parameters or pieces of data or information mentioned in the present specification are applied are not limited to the ranges described in the above examples but may be ranges of any form.
  • ranges in which various processes mentioned in the present specification are executed are not limited to the ranges described in the above examples but may be ranges of any form.
  • those ranges may be set in units (data unit/process unit) of at least one of TB, TU, PB, PU, SCU, CU, LCU, CTB, CTU, sub-block, block, tile, slice, picture, sequence, and component.
  • data unit/process unit may be set for each of parameters, pieces of data and information, or processes, and may be common or not common to all of them.
  • Parameters, data, or information may be stored in or transmitted from any given place. For example, they may be held in the header of the above unit or in a parameter set. In another case, parameters, data, or information may be dispersedly stored in or transmitted from a plurality of places.
  • control information related to the technique according to the present disclosure may be transmitted from the encoding side to the decoding side.
  • control information (e.g., an enabled flag) that gives an instruction to enable or disable any given part of various functionalities described above may be transmitted.
  • control information that specifies a range to which any given part of various functionalities described above can be applied (or a range to which the same cannot be applied) may be transmitted.
  • control information specifying block sizes (an upper limit block size or lower limit block size or both of them), frames, components, or layers to which the technique according to the present disclosure is applied may be transmitted.
  • a block size which is specified in the form of control information, may not be directly expressed as a size value but may be expressed indirectly as an identifier or index that is mapped onto the size value.
  • a size value or an identifier or index corresponding to the size value may be expressed in the form of a ratio to or a difference with a certain reference value (e.g., the size of an LCU or SCU).
  • size information included in a syntax element may indirectly specify a size value according to the method described above. This approach raises a possibility that a volume of information coded, transmitted, or stored may be reduced to improve coding efficiency.
  • the above-described method of specifying a block size may be used also as a method of specifying a range of a block size.
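As an illustration of indirect size signaling, the sketch below maps a block size to an index as the difference between its log2 and a reference log2 value; the specific mapping and the reference value of 2 are assumptions made here, not values taken from the text above:

```python
import math

def size_to_id(size, reference_log2=2):
    """Express a power-of-two block size as an index relative to a reference."""
    return int(math.log2(size)) - reference_log2

def id_to_size(size_id, reference_log2=2):
    """Recover the block size from the index."""
    return 1 << (size_id + reference_log2)
```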
  • flag is a piece of information identifying a plurality of states. It identifies not only the two states of “true (1)” and “false (0)” but may also identify three or more states. In other words, a flag may take, for example, either of two values: “0” and “1”, or may take one of three or more values. One flag, therefore, may be composed of any number of bits, that is, could be composed of a single bit or a plurality of bits. Control information itself, which could include a flag and other identification information, may be included in a bit stream, or control information indicating a difference with some form of reference information may be included in the bit stream.
  • Various data and metadata on coded data, such as a coded stream or a coded bit stream, may be transmitted or recorded in any form as long as they are associated with the coded data.
  • the phrase “ . . . data are associated with . . . data” used in this context means, for example, a case where one piece of data is made available for processing another piece of data (e.g., both pieces of data are linked or mapped to each other).
  • pieces of data associated with each other may be handled integrally as a single piece of data or may be handled as separate pieces of data.
  • information associated with coded data (coded image) may be transmitted through a transmission path different from a transmission path through which the coded data is transmitted.
  • information associated with coded data may be recorded on a recording medium different from a recording medium having the coded data recorded thereon (or in a different recording area of the same recording medium on which both information and coded data are recorded).
  • not only data as a whole but also different parts of data may be “associated with each other” in the above manner.
  • an image and information on the image may be associated with each other in any given unit, such as a plurality of frames, one frame, and a part of a frame.
  • the technique according to the present disclosure may be implemented by any kind of component making up a device or system (e.g., a processor, such as a system large-scale integration (LSI), a module containing a plurality of processors, a unit containing a plurality of modules, a device set constructed by adding an extra function to a module or unit, or the like).
  • a system refers to a set of a plurality of elements (e.g., devices, units, modules, components, or the like). It should be noted, however, that all of these elements do not always need to be placed in the same enclosure.
  • the concept of system encompasses, for example, a set of a plurality of modules housed in separate enclosures and interconnected through a network and a set of a plurality of modules housed in a single housing.
  • a component described as a single component may be configured to be divided into a plurality of components.
  • a plurality of components described in the present specification may be configured to be a single component.
  • a constituent element different from the above-described constituent elements may be added to a component described in the present specification.
  • a part of the configuration of a certain device may be included in another device if a system as a whole substantially offers the same functionality or performs the same operation.
  • the technique according to the present disclosure may be achieved through a cloud computing technology which allows a plurality of devices interconnected through a network to exert a single or a plurality of functions cooperatively or dispersedly.
  • One or more steps depicted in a certain flowchart may not be executed by one device but may be executed dispersedly by a plurality of devices.
  • a plurality of operations making up one step may not be executed by one device but may be executed dispersedly by a plurality of devices.
  • a program instruction making up a program run by a computer may cause the computer to execute two or more process steps described in the present specification in the order described above, or to execute the process steps through parallel processing, or to execute the process steps one by one in response to a trigger event, such as the occurrence of a certain event or an external call.
  • the process steps described in the present specification may be executed in an order different from the order described above if doing so does not lead to obvious inconsistency.
  • a process step that is to be executed based on a certain program or program instruction may be executed in parallel or simultaneously with a process step that is to be executed based on a different program or program instruction.
  • a decoding unit that decodes scaling list data to generate a first quantizing matrix of a first size
  • a generating unit that generates a second quantizing matrix for a transform block of a second size to which zeroing of a high-frequency component is applied, by referring to only a partial matrix of the first quantizing matrix generated by the decoding unit;
  • an inverse quantizing unit that inversely quantizes a quantized transform coefficient of the transform block of the second size, using the second quantizing matrix generated by the generating unit.
  • the decoding unit decodes size specifying information indicating a size of a quantizing matrix generated from the scaling list data
  • the size indicated by the size specifying information includes the first size but does not include the second size.
  • the decoding unit decodes scaling list data on each of one or more predetermined sizes to generate quantizing matrices of the one or more sizes
  • predetermined one or more sizes include the first size but do not include the second size.
  • the image processing device determines whether the zeroing is applied to the transform block of the second size, based on a zeroing flag associated with each transform block, the zeroing flag indicating whether the zeroing is applied to the transform block.
  • the generating unit determines whether the zeroing is applied to the transform block of the second size, based on zeroing size information indicating the size of the transform block to which the zeroing is to be applied.
  • the generating unit includes a memory that temporarily stores the partial matrix at generation of the second quantizing matrix.
  • the image processing device according to any one of (1) to (9), further comprising
  • a memory unit that stores the first quantizing matrix through processing on a plurality of transform blocks
  • the generating unit generates the second quantizing matrix at inverse quantization of a quantized transform coefficient of each transform block of the second size.
  • the generating unit generates the second quantizing matrix before processing on a plurality of transform blocks
  • the image processing device further comprises a memory unit that stores a plurality of quantizing matrices including the first quantizing matrix and the second quantizing matrix, through the processing on the plurality of transform blocks.
  • decoding scaling list data to generate a first quantizing matrix of a first size
  • An image processing device comprising:
  • a generating unit that generates a second quantizing matrix for a transform block of a second size to which zeroing of a high-frequency component is applied, by referring to only a partial matrix of a first quantizing matrix of a first size;
  • a quantizing unit that quantizes a transform coefficient of the transform block of the second size in an image to be coded, using the second quantizing matrix generated by the generating unit, to generate a quantized transform coefficient
  • a coding unit that codes the quantized transform coefficient and a scaling list expressing the first quantizing matrix, to generate a coded stream.
  • An image processing method executed by an image processing device comprising:
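
The items above describe deriving the quantizing matrix for a zeroed transform block by referring only to a partial matrix of the signalled quantizing matrix. The following Python sketch illustrates one way such a generating unit could work; the concrete sizes (an 8×8 signalled matrix, a 64×64 transform block whose coefficients outside its top-left 32×32 region are zeroed), the replication-based upsampling, and all function and variable names are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def derive_second_quantizing_matrix(first_qm, second_size=64, kept_size=32):
    """Sketch: derive the quantizing matrix for a transform block of
    second_size to which high-frequency zeroing is applied, referring
    only to the top-left partial matrix of the signalled first_qm.

    Only the top-left kept_size x kept_size coefficients of the block
    survive zeroing, so only the corresponding partial matrix of
    first_qm is read and upsampled (here by nearest-neighbour
    replication); the zeroed-out high-frequency region needs no
    matrix elements at all.
    """
    first_size = first_qm.shape[0]      # e.g. 8 for an 8x8 scaling list
    ratio = second_size // first_size   # e.g. 64 // 8 = 8
    part = kept_size // ratio           # e.g. 32 // 8 = 4
    partial = first_qm[:part, :part]    # the only sub-matrix that is referenced
    # replicate each referenced element into a ratio x ratio patch
    return np.repeat(np.repeat(partial, ratio, axis=0), ratio, axis=1)

# Example: flat 8x8 scaling list, 64x64 transform block whose
# coefficients outside the top-left 32x32 region are zeroed.
first_qm = np.full((8, 8), 16, dtype=np.int32)
second_qm = derive_second_quantizing_matrix(first_qm)
assert second_qm.shape == (32, 32)

# Inverse quantization of the surviving coefficients, using the common
# convention that a scaling-list value of 16 corresponds to a weight of 1.
quantized = np.ones((32, 32), dtype=np.int32)
dequantized = quantized * second_qm // 16
```

Generating the matrix this way touches only the referenced partial matrix, so a memory that temporarily stores it at generation time can be sized to the non-zeroed region rather than to the full second size.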

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Compression Of Band Width Or Redundancy In Fax (AREA)
  • Editing Of Facsimile Originals (AREA)
US16/980,422 2018-03-28 2019-03-07 Image processing device and image processing method Abandoned US20210006796A1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2018062704 2018-03-28
JP2018-062704 2018-03-28
PCT/JP2019/009177 WO2019188097A1 (ja) 2018-03-28 2019-03-07 Image processing device and image processing method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/009177 A-371-Of-International WO2019188097A1 (ja) 2018-03-28 2019-03-07 Image processing device and image processing method

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/859,469 Continuation US20220345708A1 (en) 2018-03-28 2022-07-07 Image processing device and image processing method

Publications (1)

Publication Number Publication Date
US20210006796A1 true US20210006796A1 (en) 2021-01-07

Family

ID=68059867

Family Applications (2)

Application Number Title Priority Date Filing Date
US16/980,422 Abandoned US20210006796A1 (en) 2018-03-28 2019-03-07 Image processing device and image processing method
US17/859,469 Pending US20220345708A1 (en) 2018-03-28 2022-07-07 Image processing device and image processing method

Family Applications After (1)

Application Number Title Priority Date Filing Date
US17/859,469 Pending US20220345708A1 (en) 2018-03-28 2022-07-07 Image processing device and image processing method

Country Status (13)

Country Link
US (2) US20210006796A1 (zh)
EP (1) EP3780621A4 (zh)
JP (2) JP7334726B2 (zh)
KR (1) KR20200136390A (zh)
CN (3) CN111886871B (zh)
AU (1) AU2019246538B2 (zh)
BR (1) BR112020018986A2 (zh)
CA (1) CA3094608A1 (zh)
MX (1) MX2020009803A (zh)
PH (1) PH12020551515A1 (zh)
RU (1) RU2020131045A (zh)
SG (1) SG11202009041WA (zh)
WO (1) WO2019188097A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11170481B2 (en) * 2018-08-14 2021-11-09 Etron Technology, Inc. Digital filter for filtering signals
US20220132127A1 (en) * 2013-09-30 2022-04-28 Nippon Hoso Kyokai Image encoding device, image decoding device, and the programs thereof
US20220182629A1 (en) * 2019-03-10 2022-06-09 Mediatek Inc. Method and Apparatus of the Quantization Matrix Computation and Representation for Video Coding
US11582454B2 (en) * 2019-03-25 2023-02-14 Hfi Innovation Inc. Method and apparatus of the quantization matrix computation and representation for video coding

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020150338A (ja) * 2019-03-11 2020-09-17 キヤノン株式会社 画像復号装置、画像復号方法、及びプログラム
JP2020150340A (ja) * 2019-03-11 2020-09-17 キヤノン株式会社 画像符号化装置、画像符号化方法、及びプログラム
JP7267785B2 (ja) * 2019-03-11 2023-05-02 キヤノン株式会社 画像復号装置、画像復号方法、及びプログラム
TWI762889B (zh) * 2019-03-21 2022-05-01 聯發科技股份有限公司 用於視頻編解碼的量化矩陣計算和表示的方法和裝置
TW202106017A (zh) * 2019-06-21 2021-02-01 法商內數位Vc控股法國公司 用於視訊編碼及解碼的單一索引量化矩陣設計
CN115720265A (zh) * 2019-12-18 2023-02-28 腾讯科技(深圳)有限公司 视频编解码方法、装置、设备及存储介质
CN111083475B (zh) * 2019-12-31 2022-04-01 上海富瀚微电子股份有限公司 量化变换系数管理装置及适用于hevc标准的编码器
KR20220143843A (ko) 2020-02-29 2022-10-25 베이징 바이트댄스 네트워크 테크놀로지 컴퍼니, 리미티드 하이 레벨 신택스 엘리먼트들에 대한 제약들
WO2021233450A1 (en) * 2020-05-22 2021-11-25 Beijing Bytedance Network Technology Co., Ltd. Signalling for color component
WO2021233449A1 (en) 2020-05-22 2021-11-25 Beijing Bytedance Network Technology Co., Ltd. Reserved bits in general constraint information of a video
CN113259667B (zh) * 2021-05-17 2023-01-31 北京百度网讯科技有限公司 视频量化方法、装置、电子设备和计算机可读存储介质

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000236547A (ja) * 1998-12-15 2000-08-29 Sony Corp 画像情報変換装置及び画像情報変換方法
CN100571389C (zh) * 2004-06-29 2009-12-16 奥林巴斯株式会社 用于图像编码/解码和扩展图像压缩解压缩的方法和设备
US8902988B2 (en) * 2010-10-01 2014-12-02 Qualcomm Incorporated Zero-out of high frequency coefficients and entropy coding retained coefficients using a joint context model
JP5741729B2 (ja) * 2010-12-09 2015-07-01 ソニー株式会社 画像処理装置及び画像処理方法
JP5741076B2 (ja) * 2010-12-09 2015-07-01 ソニー株式会社 画像処理装置及び画像処理方法
WO2012160890A1 (ja) * 2011-05-20 2012-11-29 ソニー株式会社 画像処理装置及び画像処理方法
CN109194958B (zh) * 2012-01-20 2021-08-20 韩国电子通信研究院 使用量化矩阵的视频编解码装置
US20140192862A1 (en) * 2013-01-07 2014-07-10 Research In Motion Limited Methods and systems for prediction filtering in video coding
CN105580368B (zh) * 2013-09-30 2018-10-19 日本放送协会 图像编码装置和方法以及图像解码装置和方法
US9432696B2 (en) * 2014-03-17 2016-08-30 Qualcomm Incorporated Systems and methods for low complexity forward transforms using zeroed-out coefficients
WO2016044842A1 (en) * 2014-09-19 2016-03-24 Futurewei Technologies, Inc. Method and apparatus for non-uniform mapping for quantization matrix coefficients between different sizes of matrices
US9787987B2 (en) * 2015-04-27 2017-10-10 Harmonic, Inc. Adaptive pre-filtering based on video complexity and output bit rate
JP6911856B2 (ja) * 2016-07-04 2021-07-28 ソニーグループ株式会社 画像処理装置および方法

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220132127A1 (en) * 2013-09-30 2022-04-28 Nippon Hoso Kyokai Image encoding device, image decoding device, and the programs thereof
US11647195B2 (en) * 2013-09-30 2023-05-09 Nippon Hoso Kyokai Image encoding device, image decoding device, and the programs thereof
US11170481B2 (en) * 2018-08-14 2021-11-09 Etron Technology, Inc. Digital filter for filtering signals
US20220182629A1 (en) * 2019-03-10 2022-06-09 Mediatek Inc. Method and Apparatus of the Quantization Matrix Computation and Representation for Video Coding
US11582454B2 (en) * 2019-03-25 2023-02-14 Hfi Innovation Inc. Method and apparatus of the quantization matrix computation and representation for video coding

Also Published As

Publication number Publication date
SG11202009041WA (en) 2020-10-29
MX2020009803A (es) 2020-10-28
US20220345708A1 (en) 2022-10-27
AU2019246538A1 (en) 2020-10-22
KR20200136390A (ko) 2020-12-07
EP3780621A4 (en) 2021-05-26
PH12020551515A1 (en) 2021-05-17
CN116744021A (zh) 2023-09-12
JPWO2019188097A1 (ja) 2021-04-30
BR112020018986A2 (pt) 2020-12-29
WO2019188097A1 (ja) 2019-10-03
JP7334726B2 (ja) 2023-08-29
CA3094608A1 (en) 2019-10-03
JP2023133553A (ja) 2023-09-22
CN111886871A (zh) 2020-11-03
CN111886871B (zh) 2023-07-04
EP3780621A1 (en) 2021-02-17
CN116647705A (zh) 2023-08-25
RU2020131045A (ru) 2022-03-21
AU2019246538B2 (en) 2022-11-10

Similar Documents

Publication Publication Date Title
US20220345708A1 (en) Image processing device and image processing method
CN111869219B (zh) 对图像进行编码或解码的方法和装置
KR20190090865A (ko) 영상 처리 방법 및 이를 위한 장치
US20160080753A1 (en) Method and apparatus for processing video signal
TW202041007A (zh) 用於視頻編解碼的量化矩陣計算和表示的方法和裝置
WO2020008909A1 (ja) 画像処理装置および方法
US20180302620A1 (en) METHOD FOR ENCODING AND DECODING VIDEO SIGNAL, AND APPARATUS THEREFOR (As Amended)
KR20230058033A (ko) 영상을 부호화 또는 복호화하는 방법 및 장치
JP2023101812A (ja) 画像処理装置、及び画像処理方法
KR20180009048A (ko) 영상의 부호화/복호화 방법 및 이를 위한 장치
WO2019123768A1 (ja) 画像処理装置及び画像処理方法
CN113170204B (zh) 编码工具设置方法和图像解码设备
US20240129461A1 (en) Systems and methods for cross-component sample offset filter information signaling
US10432975B2 (en) Method for encoding/decoding image and device for same
CN118250478A (zh) 视频编码/解码设备执行的方法和提供视频数据的方法
CN118250477A (zh) 视频编码/解码设备和提供视频数据的设备
CN113574877A (zh) 用于有效地对残差块编码的方法和装置
CN118250476A (zh) 视频编码/解码设备执行的方法和提供视频数据的方法
CN118250479A (zh) 视频编码/解码设备和提供视频数据的设备

Legal Events

Date Code Title Description
AS Assignment

Owner name: SONY CORPORATION, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TSUKUBA, TAKESHI;REEL/FRAME:053767/0060

Effective date: 20200824

STPP Information on status: patent application and granting procedure in general

Free format text: APPLICATION DISPATCHED FROM PREEXAM, NOT YET DOCKETED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION