WO2014038153A1 - Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device - Google Patents
Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device
- Publication number
- WO2014038153A1 (PCT/JP2013/005051)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- size
- block
- color difference
- coefficient
- processed
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T9/00—Image coding
- G06T9/007—Transform coding, e.g. discrete cosine transform
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/12—Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
- H04N19/122—Selection of transform size, e.g. 8x8 or 2x4x8 DCT; Selection of sub-band transforms of varying structure or type
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
- H04N19/137—Motion inside a coding unit, e.g. average field, frame or block difference
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/157—Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/96—Tree coding, e.g. quad-tree coding
Definitions
- the present invention relates to an image encoding method for encoding an image or an image decoding method for decoding an image.
- As a technique related to an image encoding method for encoding an image (including a moving image) or an image decoding method for decoding an image, there is, for example, the technique described in Non-Patent Document 1.
- However, the image encoding method and the image decoding method according to the related art have the problem that their encoding efficiency is insufficient.
- the present invention provides an image encoding method and an image decoding method that can improve encoding efficiency.
- An image encoding method according to one aspect is an image encoding method for encoding an input image, wherein the input image includes one or a plurality of transform blocks each having a luminance component and a color difference component.
- The size of the luminance component block in the transform block to be processed is the same as the size of that transform block, and the size of the color difference component block in the transform block to be processed is smaller than the size of the luminance component block.
- The image encoding method includes: a derivation step of performing transform processing on the luminance component and the color difference component to derive coefficients of the luminance component and coefficients of the color difference component; and an encoding step of encoding the coefficients of the luminance component and the coefficients of the color difference component.
- In the derivation step, when the size of the transform block to be processed is a predetermined first minimum size, a plurality of blocks of the color difference component are combined, the transform processing for the color difference component is executed in a block having the same size as the luminance component, and the coefficients of the color difference component are derived.
- In the encoding step, when the size of the transform block to be processed is the first minimum size, a flag indicating whether or not a non-zero coefficient is included in the coefficients of the color difference component is not encoded; when the size of the transform block to be processed is different from the first minimum size, the flag is encoded.
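The signalling rule above can be sketched in Python. This is a minimal illustration with invented names (the patent states the rule only in prose), and the first minimum size is taken as 4×4 as in the embodiments:

```python
FIRST_MIN_TU_SIZE = 4  # first minimum size: 4x4, as assumed in the embodiments

def encode_chroma_cbf(tu_size, has_nonzero_coeff, bitstream):
    """Append the chroma CBF (e.g. CBF_Cb) to the bitstream, unless the
    transform block has the first minimum size, in which case the flag
    is skipped entirely."""
    if tu_size == FIRST_MIN_TU_SIZE:
        return  # minimum-size transform block: no flag is encoded
    bitstream.append(1 if has_nonzero_coeff else 0)
```

For a 4×4 transform block nothing is appended; any larger transform block contributes one flag.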
- An image decoding method according to one aspect is an image decoding method for decoding an image from an encoded bitstream, wherein the image includes one or a plurality of transform blocks each having a color difference component and a luminance component.
- The size of the luminance component block in the transform block to be processed is the same as the size of that transform block, and the size of the color difference component block in the transform block to be processed is smaller than the size of the luminance component block. The image decoding method includes a decoding step of decoding the encoded coefficients of the luminance component and the coefficients of the color difference component included in the encoded bitstream, and a derivation step of performing transform processing on those coefficients to derive the luminance component and the color difference component.
- In the derivation step, when the size of the transform block to be processed is the first minimum size, a plurality of blocks of the color difference component are combined, and the color difference component is derived by performing the transform processing on the coefficients of the color difference component in a block having the same size as the luminance component.
- In the decoding step, when the size of the transform block to be processed is the first minimum size, a flag indicating whether or not a non-zero coefficient is included in the coefficients of the color difference component is not decoded.
- the image encoding method and the image decoding method of the present invention can improve encoding efficiency (reduce the amount of codes).
- FIG. 1 is a block diagram illustrating an example of a configuration of an image encoding device according to the first embodiment.
- FIG. 2 is a block diagram showing an example of a configuration of the quadtree encoding unit in the first embodiment.
- FIG. 3 is a flowchart illustrating an example of a processing procedure of the image encoding method according to the first embodiment.
- FIG. 4 is a diagram illustrating an example of dividing an LCU (Large Coding Unit) into CUs (Coding Units).
- FIG. 5 is a diagram illustrating an example of a CU split flag corresponding to the LCU of FIG.
- FIG. 6 is a diagram illustrating an example of a division method into TUs corresponding to the LCU of FIG.
- FIG. 7 is a diagram illustrating an example of a TU split flag corresponding to the LCU of FIG.
- FIG. 8 is a flowchart illustrating an example of the operation of the quadtree encoding unit.
- FIG. 9 is a flowchart illustrating an example of the operation of the CU encoding unit of the quadtree encoding unit illustrated in FIG.
- FIG. 10 is a flowchart illustrating an example of the operation of the quadtree conversion unit of the CU encoding unit illustrated in FIG.
- FIG. 11 is a flowchart illustrating an example of the operation of the TU encoding unit of the quadtree conversion unit illustrated in FIG. 3.
- FIG. 12 is a diagram illustrating an example of CBF_Cb corresponding to the TUs of FIG. 6.
- FIG. 13 is a block diagram illustrating an example of a configuration of the image decoding apparatus according to the second embodiment.
- FIG. 14 is a block diagram illustrating an example of a configuration of the quadtree decoding unit in the second embodiment.
- FIG. 15 is a flowchart illustrating an example of a processing procedure of the image decoding method according to the second embodiment.
- FIG. 16 is a flowchart showing an example of the operation of the quadtree decoding unit shown in FIG.
- FIG. 17 is a flowchart illustrating an example of the operation of the CU decoding unit illustrated in FIG.
- FIG. 18 is a flowchart illustrating an example of the operation of the quadtree conversion unit illustrated in FIG.
- FIG. 19 is a flowchart showing an example of the TU decoding operation shown in FIG. 18.
- FIG. 20 is an overall configuration diagram of a content supply system that implements a content distribution service.
- FIG. 21 is an overall configuration diagram of a digital broadcasting system.
- FIG. 22 is a block diagram illustrating a configuration example of a television.
- FIG. 23 is a block diagram illustrating a configuration example of an information reproducing / recording unit that reads and writes information from and on a recording medium that is an optical disk.
- FIG. 24 is a diagram illustrating a structure example of a recording medium that is an optical disk.
- FIG. 25A is a diagram illustrating an example of a mobile phone.
- FIG. 25B is a block diagram illustrating a configuration example of a mobile phone.
- FIG. 26 is a diagram showing a structure of multiplexed data.
- FIG. 27 is a diagram schematically showing how each stream is multiplexed in the multiplexed data.
- FIG. 28 is a diagram showing in more detail how the video stream is stored in the PES packet sequence.
- FIG. 29 is a diagram showing the structure of TS packets and source packets in multiplexed data.
- FIG. 30 is a diagram illustrating a data structure of the PMT.
- FIG. 31 shows the internal structure of multiplexed data information.
- FIG. 32 shows the internal structure of the stream attribute information.
- FIG. 33 is a diagram showing steps for identifying video data.
- FIG. 34 is a block diagram illustrating a configuration example of an integrated circuit that implements the moving picture coding method and the moving picture decoding method according to each embodiment.
- FIG. 35 is a diagram showing a configuration for switching the driving frequency.
- FIG. 36 is a diagram illustrating steps for identifying video data and switching between driving frequencies.
- FIG. 37 is a diagram showing an example of a look-up table in which video data standards are associated with drive frequencies.
- FIG. 38A is a diagram illustrating an example of a configuration for sharing a module of a signal processing unit.
- FIG. 38B is a diagram illustrating another example of a configuration for sharing a module of a signal processing unit.
- An image encoding method for such a video signal includes a step of generating a predicted image of an encoding target image, a step of obtaining a difference image between the predicted image and the encoding target image, a step of obtaining frequency coefficients (coefficients) by transforming the difference image from the image domain into the frequency domain, and a step of arithmetically encoding the frequency coefficients.
- a coding target image is divided into one or a plurality of coding target blocks.
- the encoding target block is divided into one or a plurality of transform blocks.
- the step of obtaining the frequency coefficient described above is executed for each transform block.
- The parameters that are arithmetically encoded in the above-described arithmetic encoding step include a flag indicating, for each transform block, whether or not a non-zero frequency coefficient exists. This flag is called CBP (Coded Block Pattern) in H.264/AVC, and CBF (Coded Block Flag) in HEVC.
- the conventional method has a problem that the amount of codes increases because it is necessary to encode a flag for each transform block.
- In order to solve the above problem, an image encoding method according to one aspect of the present invention is an image encoding method for encoding an input image, wherein the input image includes one or a plurality of transform blocks having a luminance component and a color difference component.
- The size of the luminance component block in the transform block to be processed is the same as the size of that transform block, and the size of the color difference component block in the transform block to be processed is smaller than the size of the luminance component block.
- The image encoding method includes a derivation step of performing transform processing on the luminance component and the color difference component to derive coefficients of the luminance component and coefficients of the color difference component, and an encoding step of encoding the coefficients of the luminance component and the coefficients of the color difference component.
- In the derivation step, when the size of the transform block to be processed is a predetermined first minimum size, a plurality of blocks of the color difference component are combined so that the transform processing for the color difference component is performed in a block having the same size as the luminance component, and the coefficients of the color difference component are derived. In the encoding step, if the size of the transform block to be processed is the first minimum size, a flag indicating whether or not the coefficients of the color difference component include a non-zero coefficient is not encoded; if the size of the transform block to be processed is different from the first minimum size, the flag is encoded.
- According to this, when the size of the transform block to be processed is the first minimum size, the blocks of the color difference component are combined and processed in a block having the same size as the luminance component. This eliminates the need for a processing circuit and software module for encoding in units smaller than the first minimum size.
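As a rough illustration of the block-combining idea, assuming 4:2:0 sampling (chroma is half the luma size in each dimension), the effective chroma transform size could be derived as below. This is a hedged sketch, not the patent's exact procedure:

```python
def chroma_transform_size(tu_size, first_min_size=4):
    """Return the chroma transform size for a given luma TU size (sketch).

    Ordinarily the chroma block is half the luma block size (4:2:0).
    At the first minimum size, the four sibling chroma blocks are
    combined so the chroma transform runs at the luma block size."""
    if tu_size == first_min_size:
        return first_min_size   # combined block: same size as luma
    return tu_size // 2         # 4:2:0: half the luma size
```

So a 4×4 TU yields one combined 4×4 chroma transform instead of four 2×2 ones, while an 8×8 TU keeps the usual 4×4 chroma transform.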
- That is, the image encoding method having the above configuration does not encode CBF_Cb and CBF_Cr of the color difference components for a transform block of the first minimum size.
- Since CBF_Cb and CBF_Cr of the layer one level above are referred to, CBF_Cb and CBF_Cr of a transform block having the first minimum size are not referred to. That is, since it is not necessary to encode CBF_Cb and CBF_Cr of a transform block of the first minimum size, processing efficiency can be improved by not encoding them.
- Also, the transform block may be a block obtained by dividing the coding block using a quadtree structure, and a second minimum size of the coding block may be limited to a size larger than the first minimum size. In the encoding step, the flag may be encoded when (1) the size of the transform block to be processed is larger than the first minimum size, and (2) the transform block to be processed is at the highest layer of the quadtree structure, or the value of the flag in the layer one level above the layer of the transform block to be processed in the quadtree structure is 1.
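The optional condition above can be written as a small predicate; `layer` and `parent_cbf` are illustrative names for the quadtree depth and the flag value one layer above, which the patent describes only in prose:

```python
def should_encode_cbf(tu_size, layer, parent_cbf, first_min_size=4):
    """Sketch: encode the chroma CBF only when (1) the transform block is
    larger than the first minimum size, and (2) it is at the top of the
    quadtree (layer 0) or the flag one layer above has the value 1."""
    if tu_size <= first_min_size:
        return False
    return layer == 0 or parent_cbf == 1
```

At the top layer there is no parent flag, so `parent_cbf` may be passed as `None` there.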
- An image decoding method according to one aspect of the present invention is an image decoding method for decoding an image from an encoded bitstream, wherein the image includes one or a plurality of transform blocks having a color difference component and a luminance component.
- The size of the luminance component block in the transform block to be processed is the same as the size of that transform block, and the size of the color difference component block in the transform block to be processed is smaller than the size of the luminance component block.
- The image decoding method includes a decoding step of decoding the encoded coefficients of the luminance component and the coefficients of the color difference component included in the encoded bitstream, and a derivation step of performing transform processing on the coefficients of the luminance component and the coefficients of the color difference component to derive the luminance component and the color difference component.
- In the derivation step, when the size of the transform block to be processed is a predetermined first minimum size, a plurality of blocks of the color difference component are combined and processed in a block having the same size as the luminance component.
- This eliminates the need for a processing circuit and software module for decoding in units smaller than the first minimum size.
- That is, the image decoding method having the above configuration does not decode CBF_Cb and CBF_Cr of the color difference components for a transform block of the first minimum size.
- Since CBF_Cb and CBF_Cr of the layer one level above are referred to, CBF_Cb and CBF_Cr of a transform block having the first minimum size are not referred to. That is, since it is not necessary to encode CBF_Cb and CBF_Cr of a transform block of the first minimum size, processing efficiency can be increased by not decoding these flags even if they are included in the encoded bitstream.
- Also, the transform block may be a block obtained by dividing the decoding block using a quadtree structure, and a second minimum size of the decoding block may be limited to a size larger than the first minimum size. In the decoding step, the flag may be decoded when (1) the size of the transform block to be processed is larger than the first minimum size, and (2) the transform block to be processed is at the highest layer of the quadtree structure, or the value of the flag in the layer one level above the layer of the transform block to be processed in the quadtree structure is 1.
- An image encoding device according to one aspect of the present invention includes a processing circuit and a storage device accessible by the processing circuit, and performs transform processing on the color difference components and luminance components of an input image.
- The input image includes one or a plurality of transform blocks having a luminance component and a color difference component; the size of the luminance component block in the transform block to be processed is the same as the size of that transform block, and the size of the color difference component block in the transform block to be processed is smaller than the size of the luminance component block.
- The processing circuit executes a derivation step of performing transform processing on the luminance component and the color difference component to derive coefficients of the luminance component and coefficients of the color difference component, and an encoding step of encoding the coefficients of the luminance component and the coefficients of the color difference component. In the derivation step, when the size of the transform block to be processed is a predetermined first minimum size, a plurality of blocks are combined and the transform processing is performed on the color difference component in a block having the same size as the luminance component to derive the coefficients of the color difference component. In the encoding step, when the size of the transform block to be processed is the first minimum size, a flag indicating whether a non-zero coefficient is included in the coefficients of the color difference component is not encoded; when the size of the transform block to be processed is different from the first minimum size, the flag is encoded.
- An image decoding device according to one aspect of the present invention includes a processing circuit and a storage device accessible by the processing circuit, and decodes an image from an encoded bitstream.
- The image includes one or a plurality of transform blocks having a luminance component and a color difference component; the size of the luminance component block in the transform block to be processed is the same as the size of that transform block, and the size of the color difference component block in the transform block to be processed is smaller than the size of the luminance component block.
- The processing circuit decodes the encoded coefficients of the luminance component and the coefficients of the color difference component included in the encoded bitstream, and, when the size of the transform block to be processed is the first minimum size, derives the color difference component by performing an inverse transform process on the coefficients of the color difference component in a block having the same size as the luminance component.
- When the size of the transform block to be processed is different from the first minimum size, a flag indicating whether or not a non-zero coefficient is included in the coefficients of the color difference component is decoded.
- an image encoding / decoding device includes the image encoding device and the image decoding device.
- coding may be used to mean encoding.
- FIG. 1 is a block diagram showing an example of a part of the configuration of the image coding apparatus according to the present embodiment.
- the image encoding device 100 is a device that performs frequency conversion on the color difference component and the luminance component of the input image.
- the image coding apparatus 100 divides an input image into one or a plurality of coding blocks using a tree structure, and divides the coding block into one or a plurality of transform blocks using a tree structure.
- a quadtree structure that is an example of a tree structure is used.
- The image encoding device 100 includes an LCU division unit 101, a CU division size determination unit 102, a TU division size determination unit 103, a CBF_CbCr determination unit 104, a quadtree encoding unit 105, and a frame memory 106 (an example of a storage device).
- The detailed configurations and operations of the LCU division unit 101, the CU division size determination unit 102, the TU division size determination unit 103, the CBF_CbCr determination unit 104, and the frame memory 106 will be described in Section 1-3.
- FIG. 2 is a block diagram illustrating an example of the configuration of the quadtree encoding unit 105.
- the quadtree encoding unit 105 includes a CU split flag encoding unit 110 and a CU encoding unit 120.
- the CU encoding unit 120 includes a prediction unit 121, a subtraction unit 122, an addition unit 123, and a quadtree conversion unit 130.
- the quadtree conversion unit 130 includes a TU split flag encoding unit 131, a CBF encoding unit 132, and a TU encoding unit 140.
- the TU encoding unit 140 includes a conversion unit 141, a frequency coefficient encoding unit 142, and an inverse conversion unit 143.
- FIG. 3 is a flowchart illustrating an example of the overall operation of the image encoding device 100.
- First, the LCU division unit 101 divides the input image into, for example, 64×64-size blocks (LCUs: Large Coding Units), and sequentially outputs the LCUs to the CU division size determination unit 102, the TU division size determination unit 103, the CBF_CbCr determination unit 104, and the quadtree encoding unit 105 (step S101). The subsequent processing for an LCU (S102 to S106) is executed for all LCUs included in one picture (the input image).
- the CU partition size determination unit 102 divides the LCU into one or a plurality of CUs (Coding Units, coding units) (S102).
- the size of the CU is variable. Also, it is not necessary for all CUs to be the same size.
- FIG. 4 is a diagram illustrating an example in which an LCU is divided into one or a plurality of CUs.
- In FIG. 4, the whole, that is, the block including all of 1 to 16, is the LCU.
- each of the square blocks with numbers 1 to 16 indicates a CU.
- Numerical values 1 to 16 in the block indicate the order of encoding.
- the CU partition size determination unit 102 determines the CU partition size using the encoded image or the characteristics of the input image.
- the minimum size (second minimum size) of the CU is 8 horizontal pixels ⁇ 8 vertical pixels
- the maximum size is 64 horizontal pixels ⁇ 64 vertical pixels. Note that the maximum size and the minimum size of the CU may be other sizes.
- the CU partition size determining unit 102 determines the value of the CU split flag indicating the CU partitioning method, and outputs the value to the quadtree encoding unit 105.
- the CU split flag is a flag indicating whether to divide a block.
- FIG. 5 is a diagram showing an example of the value of the CU split flag corresponding to the LCU shown in FIG.
- the numerical value in the square is the value of the CU split flag.
- a block with a CU split flag of 1 indicates that the block is divided into four, and a block of 0 indicates that the division is stopped.
- a CU split flag is also present in each of the four divided blocks. In other words, it is possible to divide until the CU split flag becomes 0 or until the CU size becomes 8 ⁇ 8.
- CULayer is a parameter indicating a division hierarchy (number of divisions). In other words, the larger the CULayer value, the smaller the CU size.
- The block in which the CU split flag is set to 1 in the layer where CULayer is 2 is divided into four.
- The size of each divided CU is 8×8.
- Since this CU size is the minimum size, the CU is never divided further, and the CU split flag is always 0. Therefore, the CU split flag of the layer in which the CU has the minimum size does not need to be encoded. For this reason, in FIG. 5, the CU split flag of the layer whose CULayer is 3 is indicated in parentheses as "(0)".
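The recursion described above (split flags are emitted until a flag is 0 or the 8×8 minimum CU size is reached, where the flag is implicit) can be sketched as follows; `split_decision` is an illustrative stand-in for the encoder's size decision:

```python
MIN_CU_SIZE = 8  # second minimum size (8x8), as in the embodiment

def emit_cu_split_flags(size, split_decision, flags):
    """Recursively emit CU split flags for a quadtree (sketch).

    split_decision(size) -> bool decides whether a block of the given
    size is divided into four. At the minimum CU size the flag is
    always 0 and is therefore not emitted at all."""
    if size == MIN_CU_SIZE:
        return  # implicit 0: not encoded
    split = split_decision(size)
    flags.append(1 if split else 0)
    if split:
        for _ in range(4):  # four quadtree children, half the size each
            emit_cu_split_flags(size // 2, split_decision, flags)
```

Starting from a 64×64 LCU, an 8×8 block never contributes a flag, mirroring the parenthesized "(0)" entries of FIG. 5.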
- The encoding method of the CU split flag will be described later in Section 1-4 (quadtree coding).
- the TU partition size determination unit 103 divides a CU into one or a plurality of TUs (conversion units) (step S103).
- the size of the TU is variable. Also, it is not necessary for all TUs to be the same size.
- FIG. 6 is a diagram illustrating an example of dividing the LCU illustrated in FIG. 4 into one or a plurality of TUs.
- the bold square indicates CU
- the thin square indicates TU.
- the numerical value in each TU indicates the order of conversion processing.
- the TU partition size determination unit 103 determines the TU partition size using the encoded image or the characteristics of the input image.
- the minimum size (first minimum size) of a TU is 4 horizontal pixels × 4 vertical pixels, and the maximum size is 64 horizontal pixels × 64 vertical pixels. Note that the maximum and minimum TU sizes may be other sizes.
- the TU partition size determination unit 103 determines the value of the TU split flag indicating the TU partition method, and outputs the value to the quadtree encoding unit 105.
- the TU split flag is a flag indicating whether to divide a block.
- FIG. 7 is a diagram showing the values of the TU split flag when the CU with encoding order 12 shown in FIG. 4 is divided into a plurality of TUs by the method shown in FIG. 6.
- the numerical value in the square is the value of the TU split flag.
- a block with a TU split flag of 1 is divided into four, and a block with a flag of 0 is not divided further.
- each of the four divided blocks also has its own TU split flag. In other words, division can continue until the TU split flag becomes 0 or until the TU size reaches 4×4.
- TULayer is a parameter indicating the division hierarchy (number of divisions). In other words, the larger the TULayer value, the smaller the TU size.
- since the CU size is 32×32 and the minimum TU size is 4×4, TULayer ranges from 0 to 3.
- a block whose TU split flag is set to 1 in the layer where TULayer is 2 is divided into four.
- the size of each divided TU is 4×4.
- this is the minimum TU size, so the TU is never divided further. For this reason, the TU split flag is always 0, and it is unnecessary to encode the TU split flag at the layer where the TU has the minimum size. In FIG. 7, the TU split flag of the hierarchy whose TULayer is 3 is therefore shown in parentheses as “(0)”.
- the encoding method of the TU split flag is described later in detail in 1-6. Quadtree conversion flow.
- since TULayer starts from the CU size, the block at the hierarchy where TULayer is 0 has the size of its CU: 32×32 for a 32×32 CU, 8×8 for an 8×8 CU, and 16×16 for a 16×16 CU.
- the CBF_CbCr determination unit 104 determines CBF_Cb and CBF_Cr of each TU (S104).
- CBF_Cb and CBF_Cr are flags indicating whether or not there is a frequency coefficient to be encoded in the color difference components (Cb, Cr) of the image.
- the values of CBF_Cb and CBF_Cr are 1 when the TU contains at least one non-zero coefficient to be encoded, and 0 when there is no non-zero coefficient (when all frequency coefficients are 0).
- the values of CBF_Cb and CBF_Cr are set from the frequency coefficients obtained by actually transforming the difference from the predicted image from the image domain to the frequency domain.
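The CBF rule above reduces to a one-line predicate. A minimal sketch (the helper name `cbf` is an assumption for illustration):

```python
# The flag is 1 when the block of frequency coefficients contains at least
# one non-zero value, and 0 when all coefficients are 0.
def cbf(coefficients):
    """Return 1 if any coefficient to be encoded is non-zero, else 0."""
    return 1 if any(c != 0 for c in coefficients) else 0

print(cbf([0, 0, 0, 0]))   # -> 0 (all-zero block: no coefficient coding needed)
print(cbf([0, -3, 0, 7]))  # -> 1 (non-zero coefficients present)
```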
- the quadtree encoding unit 105 performs quadtree encoding (S105). Details will be described later.
- FIG. 8 is a flowchart illustrating an example of a quadtree encoding process.
- the CU split flag encoding unit 110 of the quadtree encoding unit 105 encodes the CU split flag when the value of the CULayer in FIG. 5 is smaller than 3 (“Yes” in S111) (S112). Further, when the value of the CULayer is 3 (“No” in S111), the CU split flag encoding unit 110 sets the CU split flag to 0 without encoding the CU split flag (S113).
- when the value of CULayer is smaller than 3, the CU size is 16×16 or larger.
- when the value of CULayer is 3, the CU size is 8×8. Since the minimum CU size is 8×8, the value of CULayer is never 4 or more. That is, when the CU size is 8×8, the CU is never divided, so the CU split flag encoding unit 110 sets the flag to 0 without encoding it.
- the CU encoding unit 120 executes a CU encoding process (S119). Details will be described later.
- when the CU split flag is 1 (“Yes” in S114), the CU encoding unit 120 divides the block into four. Then, the CU encoding unit 120 recursively performs this quadtree encoding process on each of the four divided blocks (S115 to S118).
- FIG. 9 is a flowchart illustrating an example of a processing procedure of CU encoding.
- the prediction unit 121 of the CU encoding unit 120 generates a prediction block using the current encoding target CU (coding target CU) and the decoded image stored in the frame memory 106 (S121).
- the subtraction unit 122 generates a difference block between the encoding target CU and the prediction block generated by the prediction unit 121 (S122).
- the quadtree conversion unit 130 performs frequency conversion processing, frequency coefficient encoding, and inverse frequency conversion on the difference block generated by the subtraction unit 122 (S123).
- the inverse frequency transform reconstructs the difference block (a reconstructed difference block is generated). Details will be described later.
- the adding unit 123 adds the reconstructed difference block reconstructed by the inverse frequency transform of the quadtree transform unit 130 and the prediction block generated by the prediction unit 121 to generate a reconstructed block, and stores the reconstructed block in the frame memory 106 (S124).
- FIG. 10 is a flowchart illustrating an example of a quadtree conversion processing procedure.
- the TU split flag encoding unit 131 of the quadtree conversion unit 130 encodes the TU split flag when the sum of the CULayer and TULayer values is smaller than 4 (“Yes” in S131) (S132). When the sum is 4 (“No” in S131), the TU split flag encoding unit 131 sets the TU split flag to 0 without encoding it (S133).
- in other words, the TU split flag encoding unit 131 determines whether the TU size of the current TULayer to be processed is 4×4, and does not encode the TU split flag if it is. Since the minimum TU size is 4×4, a 4×4 TU is never divided. Accordingly, when the TU size is 4×4, the TU split flag encoding unit 131 sets the flag to 0 without encoding it.
- the CU size can be determined by the value of CULayer.
- the TU size cannot be determined from the value of TULayer alone; it is determined together with the value of CULayer. Since TULayer starts from the CU size, the TU size can be determined from the sum of the CULayer and TULayer values. When the sum is 0, the TU size is 64×64; when the sum is 4, the TU size is 4×4. Since the minimum TU size is 4×4, the sum never exceeds 4.
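The size rule above can be written out explicitly. An illustrative sketch (function names are assumptions): the block size halves once per layer, so the TU size follows directly from the sum of the CULayer and TULayer values, starting from the 64×64 LCU.

```python
LCU_SIZE = 64
MIN_TU_SIZE = 4

def tu_size(cu_layer, tu_layer):
    """Sum 0 -> 64x64, sum 4 -> 4x4; the minimum TU size caps the depth at 4."""
    depth = cu_layer + tu_layer
    assert 0 <= depth <= 4
    return LCU_SIZE >> depth   # halve the size once per layer

def tu_split_flag_is_coded(cu_layer, tu_layer):
    # the split flag is coded only while the TU is larger than the minimum
    return tu_size(cu_layer, tu_layer) > MIN_TU_SIZE
```

For example, `tu_size(2, 2)` gives 4, so at that depth the TU split flag is no longer encoded.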
- when the sum of the CULayer and TULayer values is smaller than 4 (the TU size of the current TULayer is larger than 4×4) (“Yes” in S134), the CBF encoding unit 132 performs the encoding processing of CBF_Cb and CBF_Cr (S135 to S142). The CBF encoding unit 132 does not encode CBF_Cb and CBF_Cr when the sum is 4 (the TU size of the current TULayer to be processed is 4×4). The reason why CBF_Cb and CBF_Cr are not encoded when the TU size is 4×4 is explained in the TU encoding flow below.
- the CBF encoding unit 132 encodes CBF_Cb (S137) when TULayer is 0 (“Yes” in S135), or when TULayer is not 0 and the value of CBF_Cb of the TULayer one layer above is 1 (“No” in S135 and “Yes” in S136).
- the CBF encoding unit 132 sets CBF_Cb to 0 (S138) when TULayer is not 0 and the value of CBF_Cb of the TULayer one layer above is not 1 (“No” in S135 and “No” in S136).
- the CBF encoding unit 132 encodes CBF_Cr (S141) when TULayer is 0 (“Yes” in S139), or when TULayer is not 0 and the value of CBF_Cr of the TULayer one layer above is 1 (“No” in S139 and “Yes” in S140).
- the CBF encoding unit 132 sets CBF_Cr to 0 (S142) when TULayer is not 0 and the value of CBF_Cr of the TULayer one layer above is not 1 (“No” in S139 and “No” in S140).
- in other words, the CBF encoding unit 132 encodes CBF_Cb (Cr) only when the value of TULayer is 0 (the highest TULayer) or when the value of CBF_Cb (Cr) of the TULayer above is 1. Otherwise, it sets CBF_Cb (Cr) to 0 and performs the subsequent processing. This means that the CBF is encoded hierarchically: when the CBF of the upper-TULayer block formed by combining the four quadtree blocks is 0, all the CBFs of the lower TULayer are assumed to be 0.
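The hierarchical signalling rule can be sketched as a single helper. This is an assumed illustration (function name and calling convention are not from the source); `parent_cbf` stands for the CBF_Cb or CBF_Cr of the TULayer one layer above:

```python
# CBF_Cb / CBF_Cr is written to the bitstream only at the top TULayer or
# when the same flag of the layer above is 1; otherwise it is inferred as 0.
def code_chroma_cbf(tu_layer, parent_cbf, this_cbf, bitstream):
    if tu_layer == 0 or parent_cbf == 1:
        bitstream.append(this_cbf)   # encoded explicitly
        return this_cbf
    return 0                         # inferred: parent was 0, so children are 0

bits = []
top = code_chroma_cbf(0, None, 1, bits)          # top layer: always coded
child = code_chroma_cbf(1, top, 0, bits)         # parent flag is 1: coded
grandchild = code_chroma_cbf(2, child, 0, bits)  # parent flag is 0: inferred
```

Only two flags reach the bitstream here; the grandchild's flag costs no bits, which is exactly the code-amount saving the text describes.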
- the TU encoding unit 140 executes the TU encoding process (S148). Details will be described later.
- when the TU split flag is 1, the TU encoding unit 140 divides the block into four. Then, the TU encoding unit 140 recursively performs this quadtree transformation process on each of the four divided blocks (S144 to S147).
- FIG. 11 is a flowchart illustrating an example of a processing procedure of TU encoding (part of the image encoding method).
- the TU encoding unit 140 first performs TU encoding processing on the Luma (luminance) component of the image (S151 to S153).
- the conversion unit 141 of the TU encoding unit 140 performs frequency conversion to obtain a frequency coefficient by converting the pixel of the Luma component in the TU from the image region to the frequency region (S151). Further, the frequency coefficient encoding unit 142 encodes the frequency coefficient converted by the conversion unit 141 and outputs a code string (S152). The inverse transform unit 143 performs inverse frequency transform for transforming the frequency coefficient converted by the transform unit 141 from the frequency domain to the image domain (S153).
- the TU encoding unit 140 performs a TU encoding process on the chroma (color difference) component of the image (S154 to S171).
- the converting unit 141 proceeds to step S155.
- in step S155, when CBF_Cb is 1 (“Yes” in S155), the conversion unit 141 performs frequency conversion on the Cb component pixels of the TU (S156).
- the frequency coefficient encoding unit 142 encodes the frequency coefficients converted by the conversion unit 141 and outputs a code string (S157).
- the inverse conversion unit 143 performs inverse frequency conversion on the frequency coefficients converted by the conversion unit 141, and the process proceeds to step S159 (S158).
- in step S155, when CBF_Cb is 0 (“No” in S155), the conversion unit 141 proceeds to step S159 because the TU contains no non-zero coefficient to be encoded.
- in step S159, when CBF_Cr is 1 (“Yes” in S159), the conversion unit 141 performs frequency conversion on the Cr component pixels of the TU (S160).
- the frequency coefficient encoding unit 142 encodes the frequency coefficients converted by the conversion unit 141 and outputs a code string (S161).
- the inverse transform unit 143 performs inverse frequency transform on the frequency coefficients converted by the conversion unit 141 (S162). It is assumed that the format is 4:2:0, so Cb and Cr each have a quarter of the number of pixels of Luma.
- in step S159, when CBF_Cr is 0 (“No” in S159), the conversion unit 141 ends the processing for Cr because the TU contains no non-zero coefficient to be encoded.
- when the sum of the CULayer and TULayer values is 4 (the TU size is 4×4), the frequency conversion and encoding processes are performed only when the target TU is the lower-right block of the four divided blocks (“Yes” in S163) (S164 to S171).
- in the examples of FIGS. 6 and 7, the TUs with conversion orders 24 to 27 are of 4×4 size.
- the TU encoding unit 140 does not perform Cb and Cr frequency conversion processing and encoding processing when the TU to be processed is a TU in the conversion order 24 to 26. Instead, the TU encoding unit 140 collectively processes Cb and Cr of the conversion orders 24 to 27 when the TU to be processed is a TU of the conversion order 27.
- in other words, when the TU to be processed is the TU with conversion order 27, the TU encoding unit 140 combines the Cb or Cr pixels of the conversion orders 24 to 27 into a 4×4 block, and executes the frequency conversion and encoding processes on the combined block.
- in this way, by combining the four quadtree blocks, the frequency conversion and encoding processes for the Chroma component are executed with the same number of pixels as the Luma component.
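The combining step above can be illustrated concretely. A hypothetical sketch (the function name and block layout are assumptions): in 4:2:0 each 4×4 luma TU carries only 2×2 chroma samples, so the Cb (or Cr) samples of the four sibling TUs (conversion orders 24 to 27 in the example) are merged into one 4×4 block before the transform and coefficient coding.

```python
# Merge four 2x2 blocks (lists of rows, in Z order: top-left, top-right,
# bottom-left, bottom-right) into a single 4x4 block of chroma samples.
def combine_chroma(tl, tr, bl, br):
    top = [tl[r] + tr[r] for r in range(2)]      # upper two rows
    bottom = [bl[r] + br[r] for r in range(2)]   # lower two rows
    return top + bottom
```

The merged 4×4 block then goes through the same transform path as a minimum-size luma block, which is why no 2×2 transform circuit is needed.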
- the TU encoding unit 140 determines whether or not to perform the frequency conversion and related processing by referring to the CBF_Cb of the TULayer one layer above, since the frequency conversion is performed on the four quadtree blocks combined (S164). For example, in the examples of FIGS. 6 and 7, the TUs with conversion orders 24 to 27 refer to the CBF_Cb of TULayer 2.
- FIG. 12 is a diagram illustrating an example of CBF_Cb corresponding to the TU of FIG.
- the value of CBF_Cb of TULayer2 is 1. In this case, it is determined to perform frequency conversion or the like (“Yes” in S164).
- if the TU encoding unit 140 determines in step S164 that frequency conversion is to be performed, the frequency conversion for Cb by the conversion unit 141 (S165), the frequency coefficient encoding by the frequency coefficient encoding unit 142 (S166), and the inverse frequency conversion by the inverse conversion unit 143 (S167) are executed.
- the TU encoding unit 140 determines whether or not to perform frequency conversion or the like with reference to CBF_Cr of the next higher TULayer (S168). When it is determined in step S168 that frequency conversion or the like is performed, the TU encoding unit 140 performs frequency conversion processing on Cr by the conversion unit 141 (S169), frequency coefficient encoding by the frequency coefficient encoding unit 142 (S170), Then, inverse frequency conversion (S171) by the inverse conversion unit 143 is executed.
- in this combined processing, the CBF_Cb and CBF_Cr of the upper TULayer are referred to, so the CBF_Cb and CBF_Cr of the lowest TULayer (the TULayer whose TU size is 4×4) are never referred to. Therefore, in the quadtree transformation shown in FIG. 10, when the sum of the CULayer and TULayer values is 4 (the TULayer whose TU size is 4×4) (“No” in S134), CBF_Cb and CBF_Cr are not encoded.
- as described above, CBF_Cb and CBF_Cr are encoded only when the CBF_Cb and CBF_Cr of the upper TULayer are 1, which reduces the code amount and the processing amount. In other words, once CBF_Cb and CBF_Cr become 0 at a certain TULayer, it is unnecessary to encode CBF_Cb and CBF_Cr at any lower TULayer no matter how finely the TU is subdivided, so the code amount and the processing amount can be reduced.
- furthermore, when the TU size is 4×4, Cb and Cr are frequency-converted with the four blocks combined, and CBF_Cb and CBF_Cr are not encoded in a TULayer whose TU size is 4×4.
- the minimum size of the frequency conversion process can thus be kept at 4×4 even for Chroma (Cb and Cr) by performing the frequency conversion on the four blocks combined. For this reason, no frequency conversion circuit for the 2×2 size is required, and the circuit scale can be reduced.
- moreover, combining the four blocks makes the CBF_Cb and CBF_Cr of the TULayer whose TU size is 4×4 unnecessary, which reduces the code amount.
- normally, CBF_Cb and CBF_Cr are encoded in the highest TULayer; however, as described above, CBF_Cb and CBF_Cr are not encoded in a TULayer whose TU size is 4×4. By making the minimum CU size larger than 4×4, the TU of the highest TULayer is always larger than 4×4, so this contradiction is resolved. That is, CBF_Cb and CBF_Cr are encoded only when both of the following two conditions are satisfied:
- (1) the TULayer's TU size is larger than the minimum size;
- (2) the TULayer is the highest TULayer, or the value of CBF_Cb or CBF_Cr of the TULayer above is 1.
- note that condition (1) is realized by limiting the minimum CU size to 8×8.
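The two conditions above combine into a single predicate. A sketch with assumed names (`parent_cbf` is the CBF_Cb or CBF_Cr of the TULayer one layer above, or irrelevant at the top layer):

```python
MIN_TU_SIZE = 4

def should_code_cbf(tu_size, tu_layer, parent_cbf):
    cond1 = tu_size > MIN_TU_SIZE                # (1) larger than the minimum
    cond2 = tu_layer == 0 or parent_cbf == 1     # (2) top layer, or parent flag set
    return cond1 and cond2
```

Because the minimum CU size is 8×8, a top-layer TU is always at least 8×8, so condition (1) can never fail at `tu_layer == 0`.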
- in the present embodiment, the TU size of a TULayer is determined from the sum of the CULayer and TULayer values, but other methods may be used as long as the TU size is known. For example, the TU size may be determined by another parameter, or by counting the number of times the quadtree encoding process or quadtree conversion process is recursively executed.
- for the Luma component, a CBF is not used in the present embodiment, but whether or not the encoding processing (frequency conversion processing, etc.) is performed may be switched using a CBF in the same manner as for Cb and Cr.
- in the present embodiment, the TU split flag is not encoded and is set to 0 (not divided) when the TU size is the minimum size, but the present invention is not limited to this. For example, a maximum TU size may be set, and when the TU size is larger than the maximum size, the TU split flag may be set to 1 (an example of a value meaning “divide”) without being encoded.
- the present invention is also not limited to combining the four Cb and Cr blocks only when the TU size is 4×4. For example, when the minimum TU size is 8×8, the four Cb and Cr blocks may likewise be combined and processed. It suffices to set adaptively, according to the minimum TU size, the cases in which blocks are combined for processing. In other words, the blocks may be combined even at sizes larger than 8×8, and the minimum TU size may be changed so that the combining is performed whenever the TU has the minimum size.
- in the present embodiment, the frequency conversion is performed in the quadtree encoding unit 105 (1-7. step S151 in FIG. 11), but the present invention is not limited to this. For example, the result of the frequency conversion performed when the CBF_CbCr determination unit 104 determines CBF_Cb and CBF_Cr (1-3. step S104 in FIG. 3) may be stored in a memory and read from the memory at the time of frequency coefficient encoding in the quadtree encoding unit 105 (frequency coefficient encoding unit 142, step S152 in FIG. 11).
- in the present embodiment, the determination of the CU partition size (CU partition size determination unit 102) and of the TU partition size (TU partition size determination unit 103) is performed using the characteristics of the encoded image or the input image, but the present invention is not limited to this. For example, the frequency coefficients may be obtained by actually frequency-converting the difference from the predicted image, and the partition sizes may be determined from the resulting generated code amount. In that case, the prediction information, the difference block, and the frequency conversion result may be reused in the CBF_CbCr determination unit 104 and the quadtree encoding unit 105.
- in the present embodiment, the generation of the reconstructed block, which adds the difference block reconstructed by the inverse transform unit 143 and the prediction block, is processed separately from the quadtree transformation (1-6), but the present invention is not limited to this. The generation of the reconstructed block and the quadtree transformation may be processed simultaneously; for example, the reconstructed difference block and the prediction block may be added immediately after the inverse frequency transform in the quadtree transformation to generate the reconstructed block.
- in the present embodiment, the LCU is a 64×64 block, but it is not limited to this and may be 32×32 or 128×128, or larger or smaller than these.
- in the present embodiment, the maximum CU size is 64×64 and the minimum CU size is 8×8, but the minimum CU size only needs to be larger than the minimum TU size (for example, 4×4), and other larger or smaller sizes may be used. The sizes may also be varied depending on the LCU size.
- similarly, in the present embodiment the maximum TU size is 64×64 and the minimum TU size is 4×4, but other larger or smaller sizes may be used, and they may be varied depending on the LCU size.
- processing in the present embodiment may be realized by software.
- the software may be distributed by downloading or the like. Further, this software may be recorded on a recording medium such as a CD-ROM and distributed. This also applies to other embodiments in this specification.
- FIG. 13 is a block diagram showing an example of a configuration (part) of the image decoding apparatus according to the present embodiment.
- the image decoding apparatus 200 includes a quadtree decoding unit 201 and a frame memory 202.
- FIG. 14 is a block diagram illustrating an example of the configuration of the quadtree decoding unit 201.
- the quadtree decoding unit 201 includes a CU split flag decoding unit 211 and a CU decoding unit 220.
- the CU decoding unit 220 includes an addition unit 221 and a quadtree conversion unit 230.
- the quadtree conversion unit 230 includes a TU split flag decoding unit 231, a CBF decoding unit 232, and a TU decoding unit 240.
- the TU decoding unit 240 includes a frequency coefficient decoding unit 241 and an inverse conversion unit 242.
- FIG. 15 is a flowchart showing an example of the overall operation of the image decoding apparatus 200.
- the quadtree decoding unit 201 performs quadtree decoding on the code string as shown in FIG. 15 (S201). Details will be described later.
- This process is a process for the LCU, and is performed for all the LCUs in one picture, so the process is repeated for the number of LCUs in the picture (S202).
- the size of the LCU is 64×64 in the present embodiment.
- FIG. 16 is a flowchart illustrating an example of a quadtree decoding processing procedure.
- the configuration of the LCU, CU, and TU will be described by taking the case where the configuration is the same as that of the first embodiment (FIGS. 4, 6, 5, 7, and 12) as an example.
- the CU split flag decoding unit 211 of the quadtree decoding unit 201 decodes the CU split flag when the value of the CULayer is smaller than 3 (“Yes” in S211) (S212). Further, when the value of CULayer is 3 (“No” in S211), the quadtree decoding unit 201 does not decode the CU split flag and sets 0 to the CU split flag (S213).
- when the value of CULayer is smaller than 3, the CU size is 16×16 or larger.
- when the value of CULayer is 3, the CU size is 8×8. Since the minimum CU size is 8×8, the value of CULayer is never 4 or more. That is, when the CU size is 8×8, the CU is never divided, so the CU split flag decoding unit 211 sets the flag to 0 without decoding it.
- the CU decoding unit 220 executes a CU decoding process (S219). Details will be described later.
- when the CU split flag is 1, the CU decoding unit 220 divides the block into four. Then, the CU decoding unit 220 recursively performs this quadtree decoding process on each of the four divided blocks (S215 to S218).
- FIG. 17 is a flowchart illustrating an example of a processing procedure of CU decoding.
- the quadtree conversion unit 230 performs quadtree conversion (S221).
- the TU decoding unit 240 performs frequency coefficient decoding and inverse frequency transformation. Details will be described later.
- the adding unit 221 adds the difference block restored by the inverse frequency transform in the quadtree conversion unit 230 and the prediction block generated from the decoded image stored in the frame memory 202 to generate a decoded block, and stores the decoded block in the frame memory 202 (S222).
- the prediction block may be generated by a prediction unit provided between the frame memory 202 and the addition unit 221.
- FIG. 18 is a flowchart illustrating an example of a quadtree conversion processing procedure.
- the TU split flag decoding unit 231 of the quadtree conversion unit 230 decodes the TU split flag when the sum of the CULayer and TULayer values is smaller than 4 (“Yes” in S231) (S232). When the sum is 4 (“No” in S231), the TU split flag decoding unit 231 sets the TU split flag to 0 without decoding it (S233).
- in other words, the TU split flag decoding unit 231 determines whether the TU size of the current TULayer to be processed is 4×4, and does not decode the TU split flag if it is. Since the minimum TU size is 4×4, a 4×4 TU is never divided. Accordingly, when the TU size is 4×4, the TU split flag decoding unit 231 sets the flag to 0 without decoding it.
- the CU size can be determined by the value of CULayer.
- the TU size cannot be determined from the value of TULayer alone; it is determined together with the value of CULayer. Since TULayer starts from the CU size, the TU size can be determined from the sum of the CULayer and TULayer values. When the sum is 0, the TU size is 64×64; when the sum is 4, the TU size is 4×4. Since the minimum TU size is 4×4, the sum never exceeds 4.
- the CBF decoding unit 232 performs the decoding processing of CBF_Cb and CBF_Cr (S235 to S242) when the sum of the CULayer and TULayer values is smaller than 4 (the TU size of the current TULayer is larger than 4×4) (“Yes” in S234).
- the CBF decoding unit 232 does not decode CBF_Cb and CBF_Cr when the sum is 4 (the TU size of the TULayer to be processed is 4×4) (“No” in S234). The reason why CBF_Cb and CBF_Cr are not decoded when the TU size is 4×4 is explained in the TU decoding flow below.
- the CBF decoding unit 232 decodes CBF_Cb (S237) when TULayer is 0 (“Yes” in S235), or when TULayer is not 0 and the value of CBF_Cb of the TULayer one layer above is 1 (“No” in S235 and “Yes” in S236).
- the CBF decoding unit 232 sets CBF_Cb to 0 (S238) when TULayer is not 0 and the value of CBF_Cb of the TULayer one layer above is not 1 (“No” in S235 and “No” in S236).
- the CBF decoding unit 232 decodes CBF_Cr (S241) when TULayer is 0 (“Yes” in S239), or when TULayer is not 0 and the value of CBF_Cr of the TULayer one layer above is 1 (“No” in S239 and “Yes” in S240).
- the CBF decoding unit 232 sets CBF_Cr to 0 (S242) when TULayer is not 0 and the value of CBF_Cr of the TULayer one layer above is not 1 (“No” in S239 and “No” in S240).
- in other words, the CBF decoding unit 232 decodes CBF_Cb (Cr) only when the value of TULayer is 0 (the highest TULayer) or when the value of CBF_Cb (Cr) of the TULayer above is 1. Otherwise, it sets CBF_Cb (Cr) to 0 and performs the subsequent processing. This means that the CBF is decoded hierarchically: when the CBF of the upper-TULayer block formed by combining the four quadtree blocks is 0, all the CBFs of the lower TULayer are assumed to be 0.
- the TU decoding unit 240 executes the TU decoding process (S248). Details will be described later.
- when the TU split flag is 1, the TU decoding unit 240 divides the block into four and recursively performs this quadtree transformation process on each of the four divided blocks (S244 to S247).
- FIG. 19 is a flowchart illustrating an example of a processing procedure of TU decoding (part of the image decoding method).
- the TU decoding unit 240 first performs a TU decoding process on the Luma (luminance) component of the image (S251 to S252).
- the frequency coefficient decoding unit 241 of the TU decoding unit 240 decodes the frequency coefficient of the Luma component in the TU (S251). Further, the inverse transform unit 242 of the TU decoding unit 240 performs inverse frequency transform of the decoded frequency coefficient (S252).
- the TU decoding unit 240 performs a TU decoding process on the chroma (color difference) component of the image (S253 to S266).
- the frequency coefficient decoding unit 241 proceeds to Step S254.
- in step S254, when CBF_Cb is 1 (“Yes” in S254), the frequency coefficient decoding unit 241 performs frequency coefficient decoding on the Cb component of the TU (S255).
- the inverse transform unit 242 performs inverse frequency transform on the decoded frequency coefficients (S256).
- in step S257, when CBF_Cr is 1 (“Yes” in S257), the frequency coefficient decoding unit 241 performs frequency coefficient decoding on the Cr component of the TU (S258).
- the inverse transform unit 242 performs inverse frequency transform on the decoded frequency coefficients (S259). Since the format is 4:2:0, Cb and Cr each have a quarter of the number of pixels of Luma.
- in step S253, when the TU decoding unit 240 determines that the sum of the CULayer and TULayer values is 4 (the TU size is 4×4) (“No” in S253), the decoding and inverse frequency conversion processes are performed only when the target TU is the lower-right block of the four divided blocks (“Yes” in S260) (S261 to S266). In these processes, the four quadtree blocks are combined and processed with the same number of pixels as Luma.
- in the examples of FIGS. 6 and 7, the TUs with conversion orders 24 to 27 are of 4×4 size.
- the TU decoding unit 240 does not perform Cb and Cr decoding processing and inverse conversion processing when the TU to be processed is a TU in the conversion order 24 to 26. Instead, the TU decoding unit 240 collectively processes Cb and Cr in the conversion orders 24 to 27 when the TU to be processed is a TU in the conversion order 27. In other words, when the TU to be processed is a TU with the conversion order 27, the TU decoding unit 240 combines the Cb or Cr pixels with the conversion orders 24 to 27 and restores the difference block in units of the combined blocks.
- the inverse frequency transform process and the decoding process for the Chroma component are executed with the same number of pixels as the Luma component by combining four blocks of the quadtree.
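The decoder-side counterpart of the combining step can be illustrated as follows. This is an assumed sketch (the function name and Z-order layout are illustrative): after the combined 4×4 chroma block is inverse-transformed, its samples are split back into the four 2×2 blocks belonging to the sibling TUs (conversion orders 24 to 27 in the example).

```python
# Split one decoded 4x4 chroma block back into four 2x2 blocks (lists of
# rows) in Z order: top-left, top-right, bottom-left, bottom-right.
def split_chroma(block_4x4):
    tl = [row[:2] for row in block_4x4[:2]]
    tr = [row[2:] for row in block_4x4[:2]]
    bl = [row[:2] for row in block_4x4[2:]]
    br = [row[2:] for row in block_4x4[2:]]
    return tl, tr, bl, br
```

This is the exact inverse of the encoder's merging, so the round trip restores each sibling TU's 2×2 chroma difference block.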
- in other words, the TU decoding unit 240 determines whether or not to perform the inverse frequency conversion and related processing by referring to the CBF_Cb of the TULayer one layer above, since the inverse frequency conversion is performed on the four quadtree blocks combined (S261).
- the TUs in the conversion orders 24 to 27 refer to CBF_Cb of TULayer2.
- the value of CBF_Cb of TULayer2 is 1. In this case, it is determined that reverse frequency conversion or the like is performed (“Yes” in S261).
- in this case, the frequency coefficient decoding unit 241 decodes the frequency coefficients for Cb (S262), and the inverse transform unit 242 executes the inverse frequency transform (S263).
- the TU decoding unit 240 determines whether to perform inverse frequency conversion or the like with reference to CBF_Cr of the next higher TULayer (S264). When it is determined in step S264 that the inverse frequency transform or the like is performed, the TU decoding unit 240 decodes the frequency coefficient by the frequency coefficient decoding unit 241 (S265), and the inverse frequency transform by the inverse transform unit 242 (S266). ).
- in this combined processing, the CBF_Cb and CBF_Cr of the upper TULayer are referred to, so the CBF_Cb and CBF_Cr of the lowest TULayer (the TULayer whose TU size is 4×4) are never referred to. Therefore, in the quadtree transformation shown in FIG. 18, when the sum of the CULayer and TULayer values is 4 (the TULayer whose TU size is 4×4) (“No” in S234), CBF_Cb and CBF_Cr are not decoded.
- as described above, CBF_Cb and CBF_Cr are decoded only when the CBF_Cb and CBF_Cr of the upper TULayer are 1, which reduces the code amount and the processing amount. That is, once CBF_Cb and CBF_Cr become 0 at a certain TULayer, it is unnecessary to decode CBF_Cb and CBF_Cr at any lower TULayer, so the code amount and the processing amount can be reduced.
- furthermore, when the TU size is 4×4, Cb and Cr are inverse-frequency-converted with the four blocks combined, and CBF_Cb and CBF_Cr are not decoded in a TULayer whose TU size is 4×4.
- the minimum size of the inverse frequency transform process can thus be kept at 4×4 even for Chroma (Cb and Cr). For this reason, no inverse frequency transform circuit for the 2×2 size is required, and the circuit scale can be reduced.
- moreover, combining the four blocks makes the CBF_Cb and CBF_Cr of the TULayer whose TU size is 4×4 unnecessary, which reduces the code amount.
- normally, CBF_Cb and CBF_Cr are decoded in the highest TULayer; however, as described above, CBF_Cb and CBF_Cr are not decoded in a TULayer whose TU size is 4×4. That is, CBF_Cb and CBF_Cr are decoded only when both of the following two conditions are satisfied:
- (1) the TULayer's TU size is larger than the minimum size;
- (2) the TULayer is the highest TULayer, or the value of CBF_Cb or CBF_Cr of the TULayer above is 1.
- Condition (1) can be realized by limiting the minimum CU size to 8×8.
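- The two conditions above can be sketched as a single predicate. This is an illustrative Python sketch; the function and parameter names are ours, and the minimum TU size of 4 is assumed from the description.

```python
MIN_TU_SIZE = 4  # minimum TU size assumed from the description

def should_decode_chroma_cbf(tu_size, upper_cbf):
    """Return True when CBF_Cb / CBF_Cr should be decoded at this TULayer.

    Condition (1): the TU size is larger than the minimum size.
    Condition (2): this is the highest TULayer (upper_cbf is None),
                   or the CBF value of a higher TULayer is 1.
    """
    cond1 = tu_size > MIN_TU_SIZE
    cond2 = upper_cbf is None or upper_cbf == 1
    return cond1 and cond2
```

- Under this predicate, a 4×4 TULayer never decodes the chroma CBF, and lower layers are skipped once an upper layer signals 0.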
- In the above description, the TU size of a TULayer is determined from the sum of the CULayer value and the TULayer value, but other methods may be used as long as the TU size can be determined. Other conceivable methods for determining the TU size include, for example, determination from another parameter, or determination by counting the number of times the quadtree decoding process or the quadtree transform process is recursively executed.
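- The depth-sum rule can be sketched as follows, assuming a 64×64 LCU as in the description (each quadtree split halves the block side); the function name is ours.

```python
LCU_SIZE = 64  # LCU assumed to be 64x64 as in the description

def tu_size_from_depths(cu_layer, tu_layer):
    # Each level of CU or TU quadtree splitting halves the block side,
    # so the TU size depends only on the sum of the two depths.
    return LCU_SIZE >> (cu_layer + tu_layer)
```

- A depth sum of 4 yields the 4×4 minimum TU size mentioned above, regardless of how the sum is split between the CULayer and the TULayer.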
- Although CBF is not used here, execution of the decoding process (inverse frequency transform processing and the like) may be switched using CBF in the same manner as for Cb and Cr.
- In the above description, the TU split flag is set to 0 (not split) without being decoded when the TU size is the minimum size, but the present invention is not limited to this.
- A maximum TU size may be set, and when the TU size is larger than the maximum size, the TU split flag may be set to 1 (an example of a value meaning "split") without being decoded.
- In the above description, the processing is performed with the four blocks of Cb and Cr combined.
- However, the present invention is not limited to this.
- For example, even when the minimum TU size is 8×8, the four blocks of Cb and Cr may be combined and processed. It suffices to set adaptively, according to the minimum TU size, the cases in which combined processing is performed. In other words, combined processing may be performed when the size is larger than 8×8. Further, the minimum TU size may be changed so that combined processing is performed when the TU has the minimum size.
- Although the generation of the decoded block, in which the difference block restored by the TU decoding unit 240 and the prediction block are added together, has been described as being processed separately from the quadtree transform, the present invention is not limited to this.
- Decoded-block generation and the quadtree transform may be processed simultaneously. For example, immediately after the inverse frequency transform in the quadtree transform, the decoded difference block and the prediction block may be added to generate the decoded block.
- Although the LCU has been described as a 64×64 block, the LCU is not limited to this and may be 32×32 or 128×128, or may be larger or smaller than these.
- Although the maximum CU size has been described as 64×64 and the minimum CU size as 8×8, the minimum CU size only needs to be larger than the minimum TU size (for example, 4×4), and other larger or smaller sizes may be used. The CU size may also be varied depending on the LCU size.
- Likewise, although the maximum TU size has been described as 64×64 and the minimum TU size as 4×4, other larger or smaller sizes may be used, and the TU size may be varied depending on the LCU size.
- Each of the above functional blocks can typically be realized by an MPU, a memory, and the like. The processing of each functional block can typically be realized by software (a program), and such software is recorded on a recording medium such as a ROM. The software may be distributed by downloading or the like, or may be recorded on a recording medium such as a CD-ROM and distributed. Naturally, each functional block may also be realized by hardware (a dedicated circuit).
- each embodiment may be realized by performing centralized processing using a single device (system), or may be realized by performing distributed processing using a plurality of devices.
- the computer that executes the program may be singular or plural. That is, centralized processing may be performed, or distributed processing may be performed.
- each component may be configured by dedicated hardware (processing circuit) or may be realized by executing a software program suitable for each component.
- Each component may be realized by a program execution unit such as a CPU or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory.
- the storage medium may be any medium that can record a program, such as a magnetic disk, an optical disk, a magneto-optical disk, an IC card, and a semiconductor memory.
- The system includes an image encoding / decoding device comprising an image encoding device that uses the image encoding method and an image decoding device that uses the image decoding method.
- Other configurations in the system can be appropriately changed according to circumstances.
- FIG. 20 is a diagram showing an overall configuration of a content supply system ex100 that realizes a content distribution service.
- a communication service providing area is divided into desired sizes, and base stations ex106, ex107, ex108, ex109, and ex110, which are fixed wireless stations, are installed in each cell.
- In the content supply system ex100, a computer ex111, a PDA (Personal Digital Assistant) ex112, a camera ex113, a mobile phone ex114, a game machine ex115, and the like are connected to the Internet ex101 via an Internet service provider ex102, a telephone network ex104, and the base stations ex106 to ex110.
- However, each device may be connected directly to the telephone network ex104 without going through the base stations ex106 to ex110, which are fixed wireless stations.
- the devices may be directly connected to each other via short-range wireless or the like.
- The camera ex113 is a device capable of shooting moving images, such as a digital video camera.
- The camera ex116 is a device capable of shooting still images and moving images, such as a digital still camera.
- The mobile phone ex114 may be a GSM (registered trademark) (Global System for Mobile Communications) phone, a CDMA (Code Division Multiple Access) phone, a W-CDMA (Wideband-Code Division Multiple Access) phone, an LTE (Long Term Evolution) phone, an HSPA (High Speed Packet Access) mobile phone, a PHS (Personal Handyphone System) phone, or the like; any of these systems may be used.
- the camera ex113 and the like are connected to the streaming server ex103 through the base station ex109 and the telephone network ex104, thereby enabling live distribution and the like.
- In live distribution, content shot by a user using the camera ex113 (for example, video of a live music performance) is encoded as described in each of the above embodiments (that is, the camera functions as the image encoding device according to one aspect of the present invention) and transmitted to the streaming server ex103.
- The streaming server ex103, in turn, distributes the transmitted content data as a stream to clients that have made requests. The clients include the computer ex111, the PDA ex112, the camera ex113, the mobile phone ex114, and the game machine ex115, each capable of decoding the encoded data.
- Each device that receives the distributed data decodes the received data and reproduces it (that is, functions as an image decoding device according to one embodiment of the present invention).
- The encoding of the captured data may be performed by the camera ex113, by the streaming server ex103 that performs the data transmission processing, or shared between them.
- Similarly, the decoding of the distributed data may be performed by the client, by the streaming server ex103, or shared between them.
- still images and / or moving image data captured by the camera ex116 may be transmitted to the streaming server ex103 via the computer ex111.
- the encoding process in this case may be performed by any of the camera ex116, the computer ex111, and the streaming server ex103, or may be performed in a shared manner.
- These encoding and decoding processes are generally performed in the LSI ex500 included in the computer ex111 or in each device.
- the LSI ex500 may be configured as a single chip or a plurality of chips.
- Software for video encoding and decoding may be incorporated into some type of recording medium (such as a CD-ROM, a flexible disk, or a hard disk) readable by the computer ex111 or the like, and the encoding and decoding processes may be performed using that software.
- When the mobile phone has a camera, the moving image data acquired by the camera may also be transmitted.
- The moving image data in this case is data encoded by the LSI ex500 included in the mobile phone ex114.
- the streaming server ex103 may be a plurality of servers or a plurality of computers, and may process, record, and distribute data in a distributed manner.
- the encoded data can be received and reproduced by the client.
- the information transmitted by the user can be received, decoded, and reproduced by the clients in real time, so that even a user who has no special rights or facilities can realize personal broadcasting.
- In addition to the example of the content supply system, at least the moving picture encoding device (image encoding device) or the moving picture decoding device (image decoding device) of each of the above embodiments can also be incorporated into the digital broadcasting system ex200.
- Specifically, in the broadcasting station ex201, multiplexed data obtained by multiplexing music data and the like onto video data is transmitted via radio waves to a communication or broadcasting satellite ex202.
- This video data is data encoded by the moving image encoding method described in each of the above embodiments (that is, data encoded by the image encoding apparatus according to one aspect of the present invention).
- the broadcasting satellite ex202 transmits a radio wave for broadcasting, and this radio wave is received by a home antenna ex204 capable of receiving satellite broadcasting.
- the received multiplexed data is decoded and reproduced by an apparatus such as the television (receiver) ex300 or the set top box (STB) ex217 (that is, functions as an image decoding apparatus according to one embodiment of the present invention).
- Further, the moving picture decoding device or the moving picture encoding device described in each of the above embodiments can be mounted in a reader/recorder ex218 that reads and decodes multiplexed data recorded on a recording medium ex215 such as a DVD or a BD, or that encodes a video signal onto the recording medium ex215 and, in some cases, multiplexes it with a music signal and writes the result. In this case, the reproduced video signal is displayed on a monitor ex219, and the video signal can be reproduced by another device or system using the recording medium ex215 on which the multiplexed data is recorded.
- a moving picture decoding apparatus may be mounted in a set-top box ex217 connected to a cable ex203 for cable television or an antenna ex204 for satellite / terrestrial broadcasting and displayed on the monitor ex219 of the television.
- the moving picture decoding apparatus may be incorporated in the television instead of the set top box.
- FIG. 22 is a diagram illustrating a television (receiver) ex300 that uses the video decoding method and the video encoding method described in each of the above embodiments.
- The television ex300 includes a modulation/demodulation unit ex302 that demodulates multiplexed data, in which audio data is multiplexed with video data, obtained or output via the antenna ex204 or the cable ex203 that receives the broadcast, or that modulates multiplexed data to be transmitted to the outside, and a multiplexing/demultiplexing unit ex303 that demultiplexes the demodulated multiplexed data into video data and audio data, or multiplexes the video data and audio data encoded by the signal processing unit ex306.
- The television ex300 also includes: a signal processing unit ex306 having an audio signal processing unit ex304 and a video signal processing unit ex305 that decode audio data and video data, respectively, or encode the respective information (the signal processing unit ex306 functions as the image encoding device or the image decoding device according to one aspect of the present invention); and an output unit ex309 having a speaker ex307 that outputs the decoded audio signal and a display unit ex308, such as a display, that displays the decoded video signal. The television ex300 further includes an interface unit ex317 having an operation input unit ex312 that receives user operations, a control unit ex310 that performs overall control of each unit, and a power supply circuit unit ex311 that supplies power to each unit.
- In addition to the operation input unit ex312, the interface unit ex317 may include a bridge ex313 connected to an external device such as the reader/recorder ex218, a slot for attaching a recording medium ex216 such as an SD card, a driver ex315 for connecting to an external recording medium such as a hard disk, a modem ex316 for connecting to a telephone network, and the like.
- Note that the recording medium ex216 can electrically record information by means of a nonvolatile/volatile semiconductor memory element stored in it.
- Each part of the television ex300 is connected to each other via a synchronous bus.
- the television ex300 receives a user operation from the remote controller ex220 or the like, and demultiplexes the multiplexed data demodulated by the modulation / demodulation unit ex302 by the multiplexing / demultiplexing unit ex303 based on the control of the control unit ex310 having a CPU or the like. Furthermore, in the television ex300, the separated audio data is decoded by the audio signal processing unit ex304, and the separated video data is decoded by the video signal processing unit ex305 using the decoding method described in each of the above embodiments.
- the decoded audio signal and video signal are output from the output unit ex309 to the outside. At the time of output, these signals may be temporarily stored in the buffers ex318, ex319, etc. so that the audio signal and the video signal are reproduced in synchronization. Also, the television ex300 may read multiplexed data from recording media ex215 and ex216 such as a magnetic / optical disk and an SD card, not from broadcasting. Next, a configuration in which the television ex300 encodes an audio signal or a video signal and transmits the signal to the outside or to a recording medium will be described.
- the television ex300 receives a user operation from the remote controller ex220 and the like, encodes an audio signal with the audio signal processing unit ex304, and converts the video signal with the video signal processing unit ex305 based on the control of the control unit ex310. Encoding is performed using the encoding method described in (1).
- the encoded audio signal and video signal are multiplexed by the multiplexing / demultiplexing unit ex303 and output to the outside. When multiplexing, these signals may be temporarily stored in the buffers ex320, ex321, etc. so that the audio signal and the video signal are synchronized.
- A plurality of buffers ex318, ex319, ex320, and ex321 may be provided as illustrated, or one or more buffers may be shared. Further, in addition to the illustrated example, data may be stored in a buffer serving as a cushion that prevents system overflow and underflow, for example, between the modulation/demodulation unit ex302 and the multiplexing/demultiplexing unit ex303.
- The television ex300 may also have a configuration for receiving AV input from a microphone or a camera, and may perform encoding processing on the data acquired from them.
- Although the television ex300 has been described here as being capable of the above encoding processing, multiplexing, and external output, it may be incapable of these processes and capable only of the above reception, decoding processing, and external output.
- The decoding processing or the encoding processing may be performed by either the television ex300 or the reader/recorder ex218, or the television ex300 and the reader/recorder ex218 may share the processing with each other.
- FIG. 23 shows a configuration of the information reproducing / recording unit ex400 when data is read from or written to an optical disk.
- the information reproducing / recording unit ex400 includes elements ex401, ex402, ex403, ex404, ex405, ex406, and ex407 described below.
- The optical head ex401 writes information by irradiating a laser spot onto the recording surface of the recording medium ex215, which is an optical disk, and reads information by detecting light reflected from the recording surface of the recording medium ex215.
- the modulation recording unit ex402 electrically drives a semiconductor laser built in the optical head ex401 and modulates the laser beam according to the recording data.
- The reproduction demodulation unit ex403 amplifies a reproduction signal obtained by electrically detecting, with a photodetector built into the optical head ex401, the light reflected from the recording surface, then separates and demodulates the signal components recorded on the recording medium ex215 to reproduce the necessary information.
- the buffer ex404 temporarily holds information to be recorded on the recording medium ex215 and information reproduced from the recording medium ex215.
- the disk motor ex405 rotates the recording medium ex215.
- the servo control unit ex406 moves the optical head ex401 to a predetermined information track while controlling the rotational drive of the disk motor ex405, and performs a laser spot tracking process.
- the system control unit ex407 controls the entire information reproduction / recording unit ex400.
- The system control unit ex407 uses the various types of information held in the buffer ex404, generates and adds new information as necessary, and performs recording and reproduction of information through the optical head ex401 while causing the modulation recording unit ex402, the reproduction demodulation unit ex403, and the servo control unit ex406 to operate in a coordinated manner.
- the system control unit ex407 includes, for example, a microprocessor, and executes these processes by executing a read / write program.
- In the above description, the optical head ex401 irradiates a laser spot, but a configuration that performs higher-density recording using near-field light may also be used.
- FIG. 24 shows a schematic diagram of a recording medium ex215 that is an optical disk.
- Guide grooves (grooves) are formed in a spiral shape on the recording surface of the recording medium ex215, and address information indicating the absolute position on the disc is recorded in advance on the information track ex230 by means of changes in the shape of the grooves.
- This address information includes information for specifying the position of the recording block ex231 that is a unit for recording data, and the recording block is specified by reproducing the information track ex230 and reading the address information in a recording or reproducing apparatus.
- the recording medium ex215 includes a data recording area ex233, an inner peripheral area ex232, and an outer peripheral area ex234.
- The area used for recording user data is the data recording area ex233; the inner circumference area ex232 and the outer circumference area ex234, arranged on the inner and outer circumferences of the data recording area ex233, are used for specific purposes other than recording user data.
- the information reproducing / recording unit ex400 reads / writes encoded audio data, video data, or multiplexed data obtained by multiplexing these data with respect to the data recording area ex233 of the recording medium ex215.
- In the above description, an optical disk such as a single-layer DVD or BD has been described as an example, but the optical disk is not limited to these.
- It may be an optical disc with a multi-dimensional recording/reproducing structure, for example one that records information at the same location on the disc using light of different wavelengths, or that records layers of different information from various angles.
- the car ex210 having the antenna ex205 can receive data from the satellite ex202 and the like, and the moving image can be reproduced on a display device such as the car navigation ex211 that the car ex210 has.
- the configuration of the car navigation ex211 may include a configuration including a GPS receiving unit in the configuration illustrated in FIG. 22, and the same may be applied to the computer ex111, the mobile phone ex114, and the like.
- FIG. 25A is a diagram showing the mobile phone ex114 using the moving picture decoding method and the moving picture encoding method described in the above embodiment.
- The mobile phone ex114 includes an antenna ex350 for transmitting and receiving radio waves to and from the base station ex110, a camera unit ex365 capable of capturing video and still images, and a display unit ex358, such as a liquid crystal display, that displays data obtained by decoding the video captured by the camera unit ex365, the video received by the antenna ex350, and the like.
- The mobile phone ex114 further includes: a main body having an operation key unit ex366; an audio output unit ex357 such as a speaker for outputting audio; an audio input unit ex356 such as a microphone for inputting audio; a memory unit ex367 for storing encoded or decoded data such as captured video, still images, recorded audio, received video, still images, and mail; and a slot unit ex364 serving as an interface with a recording medium that likewise stores data.
- In the mobile phone ex114, a main control unit ex360 that comprehensively controls each unit of the main body, including the display unit ex358 and the operation key unit ex366, is connected, via a bus ex370, to a power supply circuit unit ex361, an operation input control unit ex362, a video signal processing unit ex355, a camera interface unit ex363, an LCD (Liquid Crystal Display) control unit ex359, a modulation/demodulation unit ex352, a multiplexing/demultiplexing unit ex353, an audio signal processing unit ex354, the slot unit ex364, and the memory unit ex367.
- the power supply circuit unit ex361 starts up the mobile phone ex114 in an operable state by supplying power from the battery pack to each unit.
- the cellular phone ex114 converts the audio signal collected by the audio input unit ex356 in the voice call mode into a digital audio signal by the audio signal processing unit ex354 based on the control of the main control unit ex360 having a CPU, a ROM, a RAM, and the like. Then, this is subjected to spectrum spread processing by the modulation / demodulation unit ex352, digital-analog conversion processing and frequency conversion processing are performed by the transmission / reception unit ex351, and then transmitted via the antenna ex350.
- In the voice call mode, the mobile phone ex114 also amplifies the data received via the antenna ex350, performs frequency conversion processing and analog-to-digital conversion processing on it, performs spectrum despreading processing in the modulation/demodulation unit ex352, converts the result into an analog audio signal in the audio signal processing unit ex354, and then outputs it from the audio output unit ex357.
- the text data of the e-mail input by operating the operation key unit ex366 of the main unit is sent to the main control unit ex360 via the operation input control unit ex362.
- the main control unit ex360 performs spread spectrum processing on the text data in the modulation / demodulation unit ex352, performs digital analog conversion processing and frequency conversion processing in the transmission / reception unit ex351, and then transmits the text data to the base station ex110 via the antenna ex350.
- When an e-mail is received, substantially the reverse processing is performed on the received data, and the result is output to the display unit ex358.
- The video signal processing unit ex355 compresses and encodes the video signal supplied from the camera unit ex365 by the moving picture encoding method described in each of the above embodiments (that is, functions as the image encoding device according to one aspect of the present invention), and sends the encoded video data to the multiplexing/demultiplexing unit ex353.
- Meanwhile, the audio signal processing unit ex354 encodes the audio signal picked up by the audio input unit ex356 while the camera unit ex365 is capturing video, still images, or the like, and sends the encoded audio data to the multiplexing/demultiplexing unit ex353.
- The multiplexing/demultiplexing unit ex353 multiplexes the encoded video data supplied from the video signal processing unit ex355 and the encoded audio data supplied from the audio signal processing unit ex354 by a predetermined method; the resulting multiplexed data is subjected to spread spectrum processing in the modulation/demodulation unit (modulation/demodulation circuit unit) ex352 and to digital-to-analog conversion processing and frequency conversion processing in the transmission/reception unit ex351, and is then transmitted via the antenna ex350.
- The multiplexing/demultiplexing unit ex353 demultiplexes the multiplexed data into a video data bit stream and an audio data bit stream, supplies the encoded video data to the video signal processing unit ex355 via the synchronization bus ex370, and supplies the encoded audio data to the audio signal processing unit ex354.
- The video signal processing unit ex355 decodes the video signal using the moving picture decoding method corresponding to the moving picture encoding method described in each of the above embodiments (that is, functions as the image decoding device according to one aspect of the present invention), and the video and still images included in, for example, a moving image file linked to a home page are displayed on the display unit ex358 via the LCD control unit ex359.
- the audio signal processing unit ex354 decodes the audio signal, and the audio is output from the audio output unit ex357.
- In addition to a transmission/reception terminal having both an encoder and a decoder, possible implementation forms of a terminal such as the mobile phone ex114 include a transmission terminal having only an encoder and a receiving terminal having only a decoder.
- Furthermore, although the digital broadcasting system has been described as receiving and transmitting multiplexed data in which music data or the like is multiplexed with video data, the data may be data in which character data or the like related to the video is multiplexed in addition to the audio data, or may be the video data itself rather than multiplexed data.
- As described above, the moving picture encoding method or the moving picture decoding method described in each of the above embodiments can be used in any of the devices and systems described above, and by doing so, the effects described in each of the above embodiments can be obtained.
- (Embodiment 4) It is also possible to generate video data by appropriately switching, as necessary, between the moving picture encoding method or device described in each of the above embodiments and a moving picture encoding method or device compliant with a different standard such as MPEG-2, MPEG-4 AVC, or VC-1.
- Here, the multiplexed data obtained by multiplexing audio data or the like with the video data is configured to include identification information indicating which standard the video data conforms to.
- FIG. 26 is a diagram showing a structure of multiplexed data.
- multiplexed data is obtained by multiplexing one or more of a video stream, an audio stream, a presentation graphics stream (PG), and an interactive graphics stream.
- the video stream indicates the main video and sub-video of the movie
- the audio stream indicates the main audio part of the movie and the sub-audio to be mixed with the main audio, and
- the presentation graphics stream indicates the subtitles of the movie.
- the main video indicates a normal video displayed on the screen
- the sub-video is a video displayed on a small screen in the main video.
- the interactive graphics stream indicates an interactive screen created by arranging GUI components on the screen.
- The video stream is encoded by the moving picture encoding method or device described in each of the above embodiments, or by a moving picture encoding method or device compliant with a conventional standard such as MPEG-2, MPEG-4 AVC, or VC-1.
- the audio stream is encoded by a method such as Dolby AC-3, Dolby Digital Plus, MLP, DTS, DTS-HD, or linear PCM.
- Each stream included in the multiplexed data is identified by a PID. For example, 0x1011 is assigned to the video stream used for the pictures of a movie, 0x1100 to 0x111F to the audio streams, 0x1200 to 0x121F to the presentation graphics streams, 0x1400 to 0x141F to the interactive graphics streams, 0x1B00 to 0x1B1F to video streams used for sub-pictures, and 0x1A00 to 0x1A1F to audio streams used for sub-audio to be mixed with the main audio.
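- The example PID assignment above can be sketched as a simple classifier. This is an illustrative Python sketch of the ranges given in the description; the function name and return labels are ours.

```python
def stream_kind(pid):
    """Classify a PID according to the example assignment above."""
    if pid == 0x1011:
        return "video (main picture)"
    if 0x1100 <= pid <= 0x111F:
        return "audio"
    if 0x1200 <= pid <= 0x121F:
        return "presentation graphics"
    if 0x1400 <= pid <= 0x141F:
        return "interactive graphics"
    if 0x1B00 <= pid <= 0x1B1F:
        return "video (sub-picture)"
    if 0x1A00 <= pid <= 0x1A1F:
        return "audio (sub-audio)"
    return "other"
```

- For example, a TS packet carrying PID 0x1B05 would be classified as a sub-picture video stream under this assignment.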
- FIG. 27 is a diagram schematically showing how multiplexed data is multiplexed.
- a video stream ex235 composed of a plurality of video frames and an audio stream ex238 composed of a plurality of audio frames are converted into PES packet sequences ex236 and ex239, respectively, and converted into TS packets ex237 and ex240.
- the data of the presentation graphics stream ex241 and interactive graphics ex244 are converted into PES packet sequences ex242 and ex245, respectively, and further converted into TS packets ex243 and ex246.
- the multiplexed data ex247 is configured by multiplexing these TS packets into one stream.
- FIG. 28 shows in more detail how the video stream is stored in the PES packet sequence.
- the first row in FIG. 28 shows a video frame sequence of the video stream.
- The second row shows the PES packet sequence.
- I pictures, B pictures, and P pictures, which are the Video Presentation Units in the video stream, are each divided picture by picture and stored in the payloads of the PES packets.
- Each PES packet has a PES header, and the PES header stores a PTS (Presentation Time-Stamp), which is the display time of the picture, and a DTS (Decoding Time-Stamp), which is the decoding time of the picture.
- FIG. 29 shows the format of TS packets that are finally written in the multiplexed data.
- The TS packet is a 188-byte fixed-length packet consisting of a 4-byte TS header carrying information such as a PID for identifying the stream, and a 184-byte TS payload storing the data.
- the PES packet is divided and stored in the TS payload.
- A 4-byte TP_Extra_Header is further added to each TS packet to form a 192-byte source packet, which is written into the multiplexed data.
- In the TP_Extra_Header, information such as an ATS (Arrival_Time_Stamp) is described.
- ATS indicates the transfer start time of the TS packet to the PID filter of the decoder.
- Source packets are arranged in the multiplexed data as shown in the lower part of FIG. 29, and the number that is incremented from the head of the multiplexed data is called the SPN (source packet number).
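- The byte layout described above (a 4-byte TP_Extra_Header followed by a 188-byte TS packet, itself a 4-byte TS header plus a 184-byte payload) can be sketched as follows; the function name is ours.

```python
TS_PACKET_SIZE = 188       # 4-byte TS header + 184-byte TS payload
TS_HEADER_SIZE = 4
SOURCE_PACKET_SIZE = 192   # 4-byte TP_Extra_Header + 188-byte TS packet

def split_source_packet(data):
    """Split one 192-byte source packet into its constituent parts."""
    assert len(data) == SOURCE_PACKET_SIZE
    tp_extra_header = data[:4]            # carries the ATS, among others
    ts_packet = data[4:]
    ts_header = ts_packet[:TS_HEADER_SIZE]  # carries the PID, among others
    ts_payload = ts_packet[TS_HEADER_SIZE:]
    return tp_extra_header, ts_header, ts_payload
```

- Splitting any 192-byte source packet this way yields a 4-byte TP_Extra_Header, a 4-byte TS header, and a 184-byte TS payload.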
- TS packets included in the multiplexed data include PAT (Program Association Table), PMT (Program Map Table), PCR (Program Clock Reference), and the like in addition to each stream such as video / audio / caption.
- The PAT indicates the PID of the PMT used in the multiplexed data, and the PID of the PAT itself is registered as 0.
- the PMT has the PID of each stream such as video / audio / subtitles included in the multiplexed data and the attribute information of the stream corresponding to each PID, and has various descriptors related to the multiplexed data.
- the descriptor includes copy control information for instructing permission / non-permission of copying of multiplexed data.
- The PCR carries information on the STC time corresponding to the ATS at which the PCR packet is transferred to the decoder.
- FIG. 30 is a diagram for explaining the data structure of the PMT in detail.
- a PMT header describing the length of data included in the PMT is arranged at the head of the PMT.
- a plurality of descriptors related to multiplexed data are arranged.
- the copy control information and the like are described as descriptors.
- a plurality of pieces of stream information regarding each stream included in the multiplexed data are arranged.
- Each piece of stream information consists of stream descriptors describing, for identifying the compression codec of a stream, the stream type, the PID of the stream, and the attribute information of the stream (frame rate, aspect ratio, and the like).
- the multiplexed data is recorded together with the multiplexed data information file.
- the multiplexed data information file is management information of multiplexed data, has a one-to-one correspondence with the multiplexed data, and includes multiplexed data information, stream attribute information, and an entry map.
- the multiplexed data information includes a system rate, a reproduction start time, and a reproduction end time as shown in FIG.
- the system rate indicates a maximum transfer rate of multiplexed data to a PID filter of a system target decoder described later.
- the intervals between the ATSs included in the multiplexed data are set so that the transfer rate does not exceed the system rate.
- the playback start time is the PTS of the first video frame of the multiplexed data.
- the playback end time is set by adding the playback interval for one frame to the PTS of the video frame at the end of the multiplexed data.
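The end-time rule above (PTS of the last video frame plus one frame interval) can be illustrated numerically; the 90 kHz time base is an assumption commonly used for PTS values, not stated in the text:

```python
def playback_end_time(last_video_pts: int, frame_rate: float,
                      clock_hz: int = 90_000) -> int:
    """Playback end time: PTS of the last video frame plus the playback
    interval of one frame, in ticks of the assumed system clock."""
    return last_video_pts + round(clock_hz / frame_rate)

# e.g. last frame at PTS 900000 ticks of a 90 kHz clock, 30 fps video:
# end = 900000 + 3000 = 903000 ticks
```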
- attribute information about each stream included in the multiplexed data is registered for each PID.
- the attribute information has different information for each video stream, audio stream, presentation graphics stream, and interactive graphics stream.
- the video stream attribute information includes information such as the compression codec used to compress the video stream, the resolution of the individual pictures constituting the video stream, the aspect ratio, and the frame rate.
- the audio stream attribute information includes information such as the compression codec used to compress the audio stream, the number of channels in the audio stream, the supported languages, and the sampling frequency. These pieces of information are used to initialize the decoder before playback by the player.
- in the present embodiment, the stream type included in the PMT is used among the multiplexed data.
- when video stream attribute information is included in the multiplexed data information, the video stream attribute information is used.
- specifically, information indicating that the video data is generated by the moving picture encoding method or apparatus described in each of the above embodiments is set in the stream type included in the PMT or in the video stream attribute information.
- FIG. 33 shows steps of the moving picture decoding method according to the present embodiment.
- in step exS100, the stream type included in the PMT or the video stream attribute information included in the multiplexed data information is acquired from the multiplexed data.
- in step exS101, it is determined whether the stream type or the video stream attribute information indicates that the multiplexed data was generated by the moving picture encoding method or apparatus described in each of the above embodiments.
- when it is determined that the data was generated by the moving picture encoding method or apparatus described in each of the above embodiments, decoding is performed in step exS102 by the moving picture decoding method described in those embodiments.
- when the stream type or the video stream attribute information indicates conformance to a conventional standard, decoding is performed by a moving picture decoding method compliant with that conventional standard.
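The decision flow of steps exS100 to exS102 can be sketched as a simple dispatch; the marker value `EMBODIMENT_CODEC` and the return strings are illustrative assumptions, since the text only says that the stream type or attribute information identifies the encoding method:

```python
# Hypothetical marker values; the patent does not define concrete identifiers.
EMBODIMENT_CODEC = "embodiment-codec"
CONVENTIONAL_CODECS = {"MPEG-2", "MPEG4-AVC", "VC-1"}

def select_decoder(stream_type: str) -> str:
    """exS101: decide whether the data was generated by the encoding method
    of the embodiments, then dispatch to the matching decoding method."""
    if stream_type == EMBODIMENT_CODEC:
        return "decode with the moving picture decoding method of the embodiments"
    if stream_type in CONVENTIONAL_CODECS:
        return "decode with a decoder compliant with " + stream_type
    raise ValueError("unknown stream type: " + stream_type)
```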
- FIG. 34 shows a configuration of the LSI ex500 that is made into one chip.
- the LSI ex500 includes elements ex501, ex502, ex503, ex504, ex505, ex506, ex507, ex508, and ex509 described below, and each element is connected via a bus ex510.
- when the power is on, the power supply circuit unit ex505 supplies power to each unit, activating it into an operable state.
- under the control of the control unit ex501, which includes the CPU ex502, the memory controller ex503, the stream controller ex504, the drive frequency control unit ex512, and the like, the LSI ex500 receives an AV signal input from the microphone ex117, the camera ex113, and the like via the AV I/O ex509.
- the input AV signal is temporarily stored in an external memory ex511 such as SDRAM.
- the accumulated data is sent to the signal processing unit ex507 in portions, as appropriate according to the processing amount and the processing speed, and the signal processing unit ex507 encodes the audio signal and/or the video signal.
- the encoding process of the video signal is the encoding process described in the above embodiments.
- the signal processing unit ex507 further performs processing such as multiplexing the encoded audio data and the encoded video data as the case requires, and outputs the result from the stream I/O ex506 to the outside.
- the output multiplexed data is transmitted to the base station ex107 or written to the recording medium ex215. When multiplexing, the data should be temporarily stored in the buffer ex508 so that it is synchronized.
- although the memory ex511 is described here as being external to the LSI ex500, it may instead be included in the LSI ex500.
- the number of buffers ex508 is not limited to one, and a plurality of buffers may be provided.
- the LSI ex500 may be made into one chip or a plurality of chips.
- control unit ex501 includes the CPU ex502, the memory controller ex503, the stream controller ex504, the drive frequency control unit ex512, and the like, but the configuration of the control unit ex501 is not limited to this configuration.
- the signal processing unit ex507 may further include a CPU.
- for example, the CPU ex502 may be configured to include the signal processing unit ex507, or a part of it such as an audio signal processing unit.
- in that case, the control unit ex501 includes the CPU ex502, which in turn includes the signal processing unit ex507 or a part thereof.
- although referred to here as an LSI, it may also be called an IC, a system LSI, a super LSI, or an ultra LSI depending on the degree of integration.
- the method of circuit integration is not limited to LSI, and implementation with a dedicated circuit or a general-purpose processor is also possible.
- an FPGA (Field Programmable Gate Array) that can be programmed after the LSI is manufactured may also be used.
- such a programmable logic device can typically execute the moving picture encoding method or the moving picture decoding method described in each of the above embodiments by loading or reading, from a memory or the like, a program constituting software or firmware.
- FIG. 35 shows a configuration ex800 in the present embodiment.
- the drive frequency switching unit ex803 sets the drive frequency high when the video data is generated by the moving image encoding method or apparatus described in the above embodiments.
- the decoding processing unit ex801 that executes the moving picture decoding method described in each of the above embodiments is instructed to decode the video data.
- on the other hand, when the video data is compliant with a conventional standard, the drive frequency is set lower than when the video data is generated by the moving picture encoding method or apparatus described in the above embodiments, and the decoding processing unit ex802 compliant with the conventional standard is instructed to decode the video data.
- the drive frequency switching unit ex803 includes a CPU ex502 and a drive frequency control unit ex512 in FIG.
- the decoding processing unit ex801 that executes the moving picture decoding method shown in each of the above embodiments and the decoding processing unit ex802 that complies with the conventional standard correspond to the signal processing unit ex507 in FIG.
- the CPU ex502 identifies which standard the video data conforms to; based on a signal from the CPU ex502, the drive frequency control unit ex512 sets the drive frequency, and the signal processing unit ex507 decodes the video data.
- for identifying the video data, for example, the identification information described in the fourth embodiment may be used.
- the identification information is not limited to that described in the fourth embodiment; any information that can identify which standard the video data conforms to may be used. For example, when it is possible to identify which standard the video data conforms to based on an external signal indicating whether the video data is used for a television or for a disk, the identification may be based on such an external signal. The selection of the drive frequency by the CPU ex502 may also be performed based on, for example, a look-up table that associates video data standards with drive frequencies, as shown in FIG. By storing the look-up table in the buffer ex508 or in the internal memory of the LSI, the CPU ex502 can select the drive frequency by referring to it.
- FIG. 36 shows steps for executing the method of the present embodiment.
- the signal processing unit ex507 acquires identification information from the multiplexed data.
- the CPU ex502 identifies whether the video data is generated by the encoding method or apparatus described in each of the above embodiments based on the identification information.
- when the video data is generated by the encoding method or apparatus described in each of the above embodiments, the CPU ex502 sends a signal for setting the drive frequency high to the drive frequency control unit ex512, and the drive frequency control unit ex512 sets a high drive frequency.
- on the other hand, when the video data conforms to a conventional standard, in step exS203 the CPU ex502 sends a signal for setting the drive frequency low to the drive frequency control unit ex512, and the drive frequency control unit ex512 sets a drive frequency lower than that used when the video data is generated by the encoding method or apparatus described in the above embodiments.
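The frequency-selection flow (acquire identification information, then set a high or low drive frequency) can be sketched with a look-up table in the spirit of the one mentioned above; the MHz values and key strings are invented for illustration, since the text only distinguishes "high" from "low":

```python
# Hypothetical look-up table associating video data standards with drive
# frequencies. Decoding data from the embodiments is assumed to need more
# processing, hence the higher frequency.
DRIVE_FREQ_MHZ = {
    "embodiment": 500,
    "MPEG-2": 350,
    "MPEG4-AVC": 350,
    "VC-1": 350,
}

def set_drive_frequency(identification_info: str) -> int:
    """Select the drive frequency from the identification information
    acquired from the multiplexed data (steps exS200-exS203)."""
    return DRIVE_FREQ_MHZ[identification_info]
```

Coupling the supply voltage to the selected frequency, as the following paragraphs suggest, would be a straightforward extension of the same table.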
- the power saving effect can be further enhanced by changing the voltage applied to the LSI ex500 or the device including the LSI ex500 in conjunction with the switching of the driving frequency. For example, when the drive frequency is set low, it is conceivable that the voltage applied to the LSI ex500 or the device including the LSI ex500 is set low as compared with the case where the drive frequency is set high.
- the method of setting the drive frequency is not limited to the one described above; it suffices to set a high drive frequency when the decoding workload is large and a low drive frequency when the decoding workload is small.
- for example, when the workload for decoding video data compliant with the MPEG4-AVC standard is larger than the workload for decoding video data generated by the moving picture encoding method or apparatus described in the above embodiments, the drive frequency settings may be reversed from the case described above.
- the method for lowering power consumption is not limited to lowering the drive frequency.
- for example, when the identification information indicates that the video data is generated by the moving picture encoding method or apparatus described in the above embodiments, the voltage applied to the LSI ex500 or the apparatus including the LSI ex500 may be set high, and when it indicates video data conforming to a conventional standard, the voltage may be set low.
- as another example, when the identification information indicates video data conforming to a conventional standard, the driving of the CPU ex502 may be temporarily stopped because there is headroom in processing. Even when the identification information indicates video data generated by the moving picture encoding method or apparatus described in each of the above embodiments, the driving of the CPU ex502 may be temporarily stopped if there is processing headroom; in this case, the stop time may be set shorter than when the video data conforms to a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1.
- a plurality of video data that conforms to different standards may be input to the above-described devices and systems such as a television and a mobile phone.
- the signal processing unit ex507 of the LSI ex500 needs to support a plurality of standards in order to be able to decode even when a plurality of video data complying with different standards is input.
- however, if a signal processing unit ex507 corresponding to each standard is provided individually, the circuit scale of the LSI ex500 increases and the cost rises.
- to solve this, a configuration is adopted in which a decoding processing unit for executing the moving picture decoding method described in each of the above embodiments and a decoding processing unit compliant with a standard such as MPEG-2, MPEG4-AVC, or VC-1 are partly shared.
- An example of this configuration is shown as ex900 in FIG. 38A.
- the moving picture decoding method described in each of the above embodiments and a moving picture decoding method compliant with the MPEG4-AVC standard have some processing contents in common, such as entropy decoding, inverse quantization, deblocking filtering, and motion compensation.
- for the common processing contents, the decoding processing unit ex902 corresponding to the MPEG4-AVC standard is shared, and for other processing contents specific to one aspect of the present invention that are not covered by the MPEG4-AVC standard, a dedicated decoding processing unit ex901 is used.
- the decoding processing unit for executing the moving picture decoding method described in each of the above embodiments is shared, and the processing content specific to the MPEG4-AVC standard As for, a configuration using a dedicated decoding processing unit may be used.
- ex1000 in FIG. 38B shows another example in which processing is partially shared.
- this configuration includes a dedicated decoding processing unit ex1001 for processing contents specific to one aspect of the present invention, a dedicated decoding processing unit ex1002 for processing contents specific to another conventional standard, and a common decoding processing unit ex1003 for processing contents common to the moving picture decoding method according to one aspect of the present invention and the conventional moving picture decoding method.
- the dedicated decoding processing units ex1001 and ex1002 need not be specialized for processing contents specific to one aspect of the present invention or to another conventional standard, respectively, and may be capable of executing other general-purpose processing.
- the configuration of the present embodiment can be implemented by LSI ex500.
- by sharing a decoding processing unit for the processing contents common to the moving picture decoding method according to one aspect of the present invention and a conventional moving picture decoding method, the circuit scale of the LSI can be reduced and the cost can be lowered.
- the image encoding method and the image decoding method according to the present invention can be applied to any multimedia data.
- the image encoding method and the image decoding method according to the present invention are useful as an image encoding method and an image decoding method in storage, transmission, communication, and the like using, for example, a mobile phone, a DVD device, or a personal computer.
Abstract
Description
The inventor found that the following problems arise with the image encoding method for encoding an image and the image decoding method for decoding an image described in the "Background Art" section.
The image encoding apparatus and image encoding method of the present embodiment will be described with reference to FIGS. 1 to 12.
First, the overall configuration of the image encoding apparatus of the present embodiment will be described with reference to FIG. 1. FIG. 1 is a block diagram showing an example of (part of) the configuration of the image encoding apparatus in the present embodiment.
The configuration of the quadtree encoding unit 105 will be described with reference to FIG. 2. FIG. 2 is a block diagram showing an example of the configuration of the quadtree encoding unit 105.
The overall operation of the image encoding apparatus 100 will be described with reference to FIG. 3. FIG. 3 is a flowchart showing an example of the overall operation of the image encoding apparatus 100.
The operation of the quadtree encoding unit 105 (the detailed operation of step S105 in FIG. 3) will be described with reference to FIG. 8. FIG. 8 is a flowchart showing an example of the quadtree encoding procedure.
The operation of the CU encoding unit 120 (the detailed operation of step S119 in FIG. 8) will be described with reference to FIG. 9. FIG. 9 is a flowchart showing an example of the CU encoding procedure.
The operation of the quadtree transform unit 130 (the detailed operation of step S123 in FIG. 9) will be described with reference to FIG. 10. FIG. 10 is a flowchart showing an example of the quadtree transform procedure.
The operation of the TU encoding unit 140 (the detailed operation of step S148 in FIG. 10) will be described with reference to FIG. 11. FIG. 11 is a flowchart showing an example of the TU encoding procedure (part of the image encoding method).
As described above, according to the present embodiment, CBF_Cb and CBF_Cr are encoded only when CBF_Cb and CBF_Cr of the upper TULayer are 1. This reduces the code amount and the processing amount. That is, once CBF_Cb and CBF_Cr become 0 at a certain TULayer, there is no need to encode CBF_Cb and CBF_Cr at any lower TULayer no matter how finely the TU is subdivided, which reduces the code amount and the processing amount.
(1) the TU is at a TULayer whose size is larger than the minimum size; and
(2) the TU is at the highest TULayer, or the value of CBF_Cb or CBF_Cr of the upper TULayer is 1.
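The coding condition for CBF_Cb/CBF_Cr, that the TU layer's size exceeds the minimum TU size and that the TU is at the top TULayer or the upper layer's CBF is 1, can be sketched as a predicate (a hedged illustration; the parameter names are ours, not the patent's):

```python
def should_code_chroma_cbf(tu_size: int, min_tu_size: int,
                           is_top_layer: bool, parent_cbf: int) -> bool:
    """CBF_Cb / CBF_Cr is (de)coded only when:
    (1) the TU is at a TULayer whose size is larger than the minimum size, and
    (2) the TU is at the highest TULayer, or the upper TULayer's CBF value is 1."""
    return tu_size > min_tu_size and (is_top_layer or parent_cbf == 1)
```

Once the predicate is false for a layer (e.g. the parent CBF is 0), it stays false for every finer subdivision with a zero parent CBF, which is the source of the code-amount savings described above.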
The image decoding apparatus and image decoding method of the present embodiment will be described with reference to FIGS. 13 to 19.
First, the overall configuration of the image decoding apparatus of the present embodiment will be described with reference to FIG. 13. FIG. 13 is a block diagram showing an example of (part of) the configuration of the image decoding apparatus in the present embodiment.
The configuration of the quadtree decoding unit 201 will be described with reference to FIG. 14. FIG. 14 is a block diagram showing an example of the configuration of the quadtree decoding unit 201.
The overall operation of the image decoding apparatus 200 will be described with reference to FIG. 15. FIG. 15 is a flowchart showing an example of the overall operation of the image decoding apparatus 200.
The operation of the quadtree decoding unit 201 (the detailed operation of step S201 in FIG. 15) will be described with reference to FIG. 16. FIG. 16 is a flowchart showing an example of the quadtree decoding procedure. In the following description, the configurations of the LCU, CU, and TU are assumed to be the same as in Embodiment 1 (FIGS. 4, 6, 5, 7, and 12).
The operation of the CU decoding unit 220 (the detailed operation of step S219 in FIG. 16) will be described with reference to FIG. 17. FIG. 17 is a flowchart showing an example of the CU decoding procedure.
The operation of the quadtree transform unit 230 (the detailed operation of step S221 in FIG. 17) will be described with reference to FIG. 18. FIG. 18 is a flowchart showing an example of the quadtree transform procedure.
The operation of the TU decoding unit 240 (the detailed operation of step S248 in FIG. 18) will be described with reference to FIG. 19. FIG. 19 is a flowchart showing an example of the TU decoding procedure (part of the image decoding method).
As described above, according to the present embodiment, CBF_Cb and CBF_Cr are decoded only when CBF_Cb and CBF_Cr of the upper TULayer are 1. This reduces the code amount and the processing amount. That is, once CBF_Cb and CBF_Cr become 0 at a certain TULayer, there is no need to decode CBF_Cb and CBF_Cr at any lower TULayer no matter how finely the TU is subdivided, which reduces the code amount and the processing amount.
(1) the TU is at a TULayer whose size is larger than the minimum size; and
(2) the TU is at the highest TULayer, or the value of CBF_Cb or CBF_Cr of the upper TULayer is 1.
By recording a program for realizing the configuration of the moving picture encoding method (image encoding method) or the moving picture decoding method (image decoding method) described in each of the above embodiments on a storage medium, the processing described in each of the above embodiments can easily be implemented on an independent computer system. The storage medium may be any medium capable of recording the program, such as a magnetic disk, an optical disk, a magneto-optical disk, an IC card, or a semiconductor memory.
It is also possible to generate video data by switching, as necessary, between the moving picture encoding method or apparatus described in each of the above embodiments and a moving picture encoding method or apparatus compliant with a different standard such as MPEG-2, MPEG4-AVC, or VC-1.
The moving picture encoding method and apparatus and the moving picture decoding method and apparatus described in each of the above embodiments are typically realized as an LSI, which is an integrated circuit. As an example, FIG. 34 shows the configuration of the LSI ex500 implemented as a single chip. The LSI ex500 includes the elements ex501, ex502, ex503, ex504, ex505, ex506, ex507, ex508, and ex509 described below, and the elements are connected via a bus ex510. When the power is on, the power supply circuit unit ex505 supplies power to each unit, activating it into an operable state.
When decoding video data generated by the moving picture encoding method or apparatus described in each of the above embodiments, the processing amount is expected to be larger than when decoding video data compliant with a conventional standard such as MPEG-2, MPEG4-AVC, or VC-1. Therefore, in the LSI ex500, a drive frequency higher than the drive frequency of the CPU ex502 used when decoding video data compliant with a conventional standard needs to be set. However, raising the drive frequency raises the power consumption.
A plurality of video data conforming to different standards may be input to the above-described devices and systems, such as a television and a mobile phone. To enable decoding even when a plurality of video data conforming to different standards is input, the signal processing unit ex507 of the LSI ex500 needs to support the plurality of standards. However, if a signal processing unit ex507 is provided individually for each standard, the circuit scale of the LSI ex500 increases and the cost rises.
101 LCU division unit
102 CU division size determination unit
103 TU division size determination unit
104 CBF_CbCr determination unit
105 quadtree encoding unit
106 frame memory
110 CU split flag encoding unit
120 CU encoding unit
121 prediction unit
122 subtraction unit
123 addition unit
130 quadtree transform unit
131 TU split flag encoding unit
132 CBF encoding unit
140 TU encoding unit
141 transform unit
142 frequency coefficient encoding unit
143, 242 inverse transform unit
200 image decoding apparatus
201 quadtree decoding unit
202 frame memory
211 CU split flag decoding unit
220 CU decoding unit
221 addition unit
230 quadtree transform unit
231 TU split flag decoding unit
232 CBF decoding unit
240 TU decoding unit
241 frequency coefficient decoding unit
Claims (7)
- An image encoding method for encoding an input image, wherein
the input image includes one or more transform blocks having a luminance component and a chrominance component,
the size of the block of the luminance component in a transform block to be processed is the same as the size of the transform block to be processed, and
the size of the block of the chrominance component in the transform block to be processed is smaller than the size of the block of the luminance component,
the image encoding method comprising:
a deriving step of performing transform processing on the luminance component and the chrominance component to derive coefficients of the luminance component and coefficients of the chrominance component; and
an encoding step of encoding the coefficients of the luminance component and the coefficients of the chrominance component,
wherein, in the deriving step, when the size of the transform block to be processed is a predetermined first minimum size, a plurality of blocks of the chrominance component are combined, and the transform processing is performed on the chrominance component in a block of the same size as the luminance component to derive the coefficients of the chrominance component, and
in the encoding step, when the size of the transform block to be processed is the first minimum size, a flag indicating whether the coefficients of the chrominance component include a non-zero coefficient is not encoded, and when the size of the transform block to be processed is different from the first minimum size, the flag is encoded.
An image encoding method. - The transform block is a block obtained by dividing a coding block using a quadtree structure,
a second minimum size of the coding block is restricted to a size larger than the first minimum size, and
in the encoding step, the flag is encoded when (1) the size of the transform block to be processed is larger than the first minimum size, and (2) the transform block to be processed is at the highest layer of the quadtree structure, or the value of the flag at the layer one level above the layer of the transform block to be processed in the quadtree structure is 1.
The image encoding method according to claim 1. - An image decoding method for decoding an image from an encoded bitstream, wherein
the image includes one or more transform blocks having a chrominance component and a luminance component,
the size of the block of the luminance component in a transform block to be processed is the same as the size of the transform block to be processed, and
the size of the block of the chrominance component in the transform block to be processed is smaller than the size of the block of the luminance component,
the image decoding method comprising:
a decoding step of decoding the encoded coefficients of the luminance component and the encoded coefficients of the chrominance component included in the encoded bitstream; and
a deriving step of performing transform processing on the coefficients of the luminance component and the coefficients of the chrominance component to derive the luminance component and the chrominance component,
wherein, in the deriving step, when the size of the transform block to be processed is a predetermined first minimum size, a plurality of blocks of the chrominance component are combined, and the chrominance component is derived by performing the transform processing on the coefficients of the chrominance component in a block of the same size as the luminance component, and
in the decoding step, when the size of the transform block to be processed is different from the first minimum size, a flag indicating whether the coefficients of the chrominance component include a non-zero coefficient is decoded.
An image decoding method. - The transform block is a block obtained by dividing a decoded block using a quadtree structure,
a second minimum size of the decoded block is restricted to a size larger than the first minimum size, and
in the decoding step, the flag is decoded when (1) the size of the transform block to be processed is larger than the first minimum size, and (2) the transform block to be processed is at the highest layer of the quadtree structure, or the value of the flag at the layer one level above the layer of the transform block to be processed in the quadtree structure is 1.
The image decoding method according to claim 3. - An image encoding apparatus comprising processing circuitry and a storage device accessible from the processing circuitry, the apparatus performing transform on a chrominance component and a luminance component of an input image, wherein
the input image includes one or more transform blocks having a luminance component and a chrominance component,
the size of the block of the luminance component in a transform block to be processed is the same as the size of the transform block to be processed, and
the size of the block of the chrominance component in the transform block to be processed is smaller than the size of the block of the luminance component,
the processing circuitry executes:
a deriving step of performing transform processing on the luminance component and the chrominance component to derive coefficients of the luminance component and coefficients of the chrominance component; and
an encoding step of encoding the coefficients of the luminance component and the coefficients of the chrominance component,
wherein, in the deriving step, when the size of the transform block to be processed is a predetermined first minimum size, a plurality of blocks are combined, and the transform processing is performed on the chrominance component in a block of the same size as the luminance component to derive the coefficients of the chrominance component, and
in the encoding step, when the size of the transform block to be processed is the first minimum size, a flag indicating whether the coefficients of the chrominance component include a non-zero coefficient is not encoded, and when the size of the transform block to be processed is different from the first minimum size, the flag is encoded.
An image encoding apparatus. - An image decoding apparatus comprising processing circuitry and a storage device accessible from the processing circuitry, the apparatus decoding an image from an encoded bitstream, wherein
the image includes one or more transform blocks having a luminance component and a chrominance component,
the size of the block of the luminance component in a transform block to be processed is the same as the size of the transform block to be processed, and
the size of the block of the chrominance component in the transform block to be processed is smaller than the size of the block of the luminance component,
the processing circuitry executes:
a decoding step of decoding the encoded coefficients of the luminance component and the encoded coefficients of the chrominance component included in the encoded bitstream; and
a deriving step of performing transform processing on the coefficients of the luminance component and the coefficients of the chrominance component to derive the luminance component and the chrominance component,
wherein, in the deriving step, when the size of the transform block to be processed is a predetermined first minimum size, a plurality of blocks are combined, and the chrominance component is derived by performing inverse transform processing on the coefficients of the chrominance component in a block of the same size as the luminance component, and
in the decoding step, when the size of the transform block to be processed is different from the first minimum size, a flag indicating whether the coefficients of the chrominance component include a non-zero coefficient is decoded.
An image decoding apparatus. - An image encoding and decoding apparatus comprising:
the image encoding apparatus according to claim 5; and
the image decoding apparatus according to claim 6.
An image encoding and decoding apparatus.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014534172A JP6341426B2 (ja) | 2012-09-10 | 2013-08-27 | 画像復号化方法および画像復号化装置 |
MX2015001706A MX340434B (es) | 2012-09-10 | 2013-08-27 | Metodo de codificacion de imagenes, metodo de decodificacion de imagenes, aparato de codificacion de imagenes, aparato de decodificacion de imagenes y aparato de codificacion y decodificacion de imagenes. |
KR1020157004116A KR102134367B1 (ko) | 2012-09-10 | 2013-08-27 | 화상 부호화 방법, 화상 복호화 방법, 화상 부호화 장치, 화상 복호화 장치, 및 화상 부호화 복호화 장치 |
CN201380042457.2A CN104604225B (zh) | 2012-09-10 | 2013-08-27 | 图像编码方法、图像解码方法、图像编码装置、图像解码装置及图像编码解码装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261698765P | 2012-09-10 | 2012-09-10 | |
US61/698,765 | 2012-09-10 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014038153A1 true WO2014038153A1 (ja) | 2014-03-13 |
Family
ID=50233333
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/005051 WO2014038153A1 (ja) | 2012-09-10 | 2013-08-27 | 画像符号化方法、画像復号化方法、画像符号化装置、画像復号化装置、および、画像符号化復号化装置 |
Country Status (8)
Country | Link |
---|---|
US (7) | US9031334B2 (ja) |
JP (2) | JP6341426B2 (ja) |
KR (1) | KR102134367B1 (ja) |
CN (1) | CN104604225B (ja) |
AR (1) | AR092495A1 (ja) |
MX (1) | MX340434B (ja) |
TW (1) | TWI609585B (ja) |
WO (1) | WO2014038153A1 (ja) |
Families Citing this family (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9432592B2 (en) | 2011-10-25 | 2016-08-30 | Daylight Solutions, Inc. | Infrared imaging microscope using tunable laser radiation |
JP6341426B2 (ja) | 2012-09-10 | 2018-06-13 | サン パテント トラスト | 画像復号化方法および画像復号化装置 |
WO2014168893A1 (en) | 2013-04-08 | 2014-10-16 | General Instrument Corporation | Signaling for addition or removal of layers in video coding |
EP3117616A4 (en) * | 2014-03-13 | 2017-11-08 | Qualcomm Incorporated | Constrained depth intra mode coding for 3d video coding |
US10298927B2 (en) * | 2014-03-28 | 2019-05-21 | Sony Corporation | Image decoding device and method |
MX364550B (es) | 2014-05-21 | 2019-04-30 | Arris Entpr Llc | Señalización y selección para la mejora de capas en vídeo escalable. |
MX360655B (es) | 2014-05-21 | 2018-11-12 | Arris Entpr Llc | Gestión individual de memorias intermedias en transporte de video escalable. |
JP2021502771A (ja) * | 2018-05-03 | 2021-01-28 | エルジー エレクトロニクス インコーポレイティド | 画像コーディングシステムにおいてブロックサイズに応じた変換を使用する画像デコード方法およびその装置 |
US11616963B2 (en) | 2018-05-10 | 2023-03-28 | Samsung Electronics Co., Ltd. | Method and apparatus for image encoding, and method and apparatus for image decoding |
MX2020011422A (es) * | 2018-06-11 | 2020-11-24 | Panasonic Ip Corp America | Codificador, decodificador, metodo de codificacion y metodo de decodificacion. |
CN111327894B (zh) * | 2018-12-15 | 2022-05-17 | 华为技术有限公司 | 块划分方法、视频编解码方法、视频编解码器 |
PH12019000380A1 (en) * | 2018-12-17 | 2020-09-28 | Nokia Technologies Oy | An apparatus, a method and a computer program for video coding and decoding |
AU2019401811B2 (en) | 2018-12-21 | 2023-11-23 | Huawei Technologies Co., Ltd. | Method and apparatus of interpolation filtering for predictive coding |
CN113273203B (zh) * | 2018-12-22 | 2024-03-12 | 北京字节跳动网络技术有限公司 | 两步交叉分量预测模式 |
AU2019201653A1 (en) * | 2019-03-11 | 2020-10-01 | Canon Kabushiki Kaisha | Method, apparatus and system for encoding and decoding a tree of blocks of video samples |
CN113711591B (zh) | 2019-04-20 | 2023-10-27 | 北京字节跳动网络技术有限公司 | 用于色度残差的联合编解码的语法元素的信令 |
WO2020233514A1 (en) * | 2019-05-17 | 2020-11-26 | Beijing Bytedance Network Technology Co., Ltd. | Signaling of syntax elements according to chroma format |
CN115567707A (zh) | 2019-05-30 | 2023-01-03 | 抖音视界有限公司 | 色度分量的自适应环路滤波 |
CN114615506B (zh) * | 2019-06-13 | 2023-07-04 | 北京达佳互联信息技术有限公司 | 视频解码方法、计算设备、存储介质 |
JP7383119B2 (ja) | 2019-07-26 | 2023-11-17 | 北京字節跳動網絡技術有限公司 | 映像コーディングモードのブロックサイズ依存使用 |
WO2021018084A1 (en) * | 2019-07-26 | 2021-02-04 | Beijing Bytedance Network Technology Co., Ltd. | Interdependence of transform size and coding tree unit size in video coding |
Family Cites Families (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5057911A (en) * | 1989-10-19 | 1991-10-15 | Matsushita Electric Industrial Co., Ltd. | System and method for conversion of digital video signals |
NO319007B1 (no) * | 2003-05-22 | 2005-06-06 | Tandberg Telecom As | Fremgangsmate og apparat for videokomprimering |
US7327786B2 (en) | 2003-06-02 | 2008-02-05 | Lsi Logic Corporation | Method for improving rate-distortion performance of a video compression system through parallel coefficient cancellation in the transform |
KR101138392B1 (ko) * | 2004-12-30 | 2012-04-26 | 삼성전자주식회사 | 색차 성분의 상관관계를 이용한 컬러 영상의 부호화,복호화 방법 및 그 장치 |
EP1737240A3 (en) * | 2005-06-21 | 2007-03-14 | Thomson Licensing | Method for scalable image coding or decoding |
US20070147497A1 (en) * | 2005-07-21 | 2007-06-28 | Nokia Corporation | System and method for progressive quantization for scalable image and video coding |
US8848789B2 (en) * | 2006-03-27 | 2014-09-30 | Qualcomm Incorporated | Method and system for coding and decoding information associated with video compression |
US8275045B2 (en) * | 2006-07-12 | 2012-09-25 | Qualcomm Incorporated | Video compression using adaptive variable length codes |
US8165214B2 (en) * | 2007-05-08 | 2012-04-24 | Freescale Semiconductor, Inc. | Circuit and method for generating fixed point vector dot product and matrix vector values |
KR101391601B1 (ko) * | 2007-10-15 | 2014-05-07 | 삼성전자주식회사 | 최적의 임계치를 이용한 지수 골롬 이진화에 의한 영상부호화 방법 및 그 장치, 및 영상 복호화 방법 및 그 장치 |
AU2007231799B8 (en) * | 2007-10-31 | 2011-04-21 | Canon Kabushiki Kaisha | High-performance video transcoding method |
US8270472B2 (en) * | 2007-11-09 | 2012-09-18 | Thomson Licensing | Methods and apparatus for adaptive reference filtering (ARF) of bi-predictive pictures in multi-view coded video |
US8953673B2 (en) * | 2008-02-29 | 2015-02-10 | Microsoft Corporation | Scalable video coding and decoding with sample bit depth and chroma high-pass residual layers |
US8723891B2 (en) * | 2009-02-27 | 2014-05-13 | Ncomputing Inc. | System and method for efficiently processing digital video |
JP5158003B2 (ja) * | 2009-04-14 | 2013-03-06 | ソニー株式会社 | 画像符号化装置と画像符号化方法およびコンピュータ・プログラム |
US9635368B2 (en) | 2009-06-07 | 2017-04-25 | Lg Electronics Inc. | Method and apparatus for decoding a video signal |
US9100648B2 (en) | 2009-06-07 | 2015-08-04 | Lg Electronics Inc. | Method and apparatus for decoding a video signal |
KR101624649B1 (ko) * | 2009-08-14 | 2016-05-26 | 삼성전자주식회사 | 계층적인 부호화 블록 패턴 정보를 이용한 비디오 부호화 방법 및 장치, 비디오 복호화 방법 및 장치 |
US8885711B2 (en) * | 2009-12-17 | 2014-11-11 | Sk Telecom Co., Ltd. | Image encoding/decoding method and device |
WO2011121715A1 (ja) * | 2010-03-30 | 2011-10-06 | 株式会社 東芝 | 画像復号化方法 |
KR101503269B1 (ko) * | 2010-04-05 | 2015-03-17 | 삼성전자주식회사 | 영상 부호화 단위에 대한 인트라 예측 모드 결정 방법 및 장치, 및 영상 복호화 단위에 대한 인트라 예측 모드 결정 방법 및 장치 |
KR101791078B1 (ko) * | 2010-04-16 | 2017-10-30 | 에스케이텔레콤 주식회사 | 영상 부호화/복호화 장치 및 방법 |
RU2580073C2 (ru) * | 2010-07-15 | 2016-04-10 | Мицубиси Электрик Корпорейшн | Устройство кодирования движущихся изображений, устройство декодирования движущихся изображений, способ кодирования движущихся изображений и способ декодирования движущихся изображений |
HUE042742T2 (hu) * | 2010-08-17 | 2019-07-29 | Samsung Electronics Co Ltd | Videodekódolási eljárás variábilis faszerkezetû transzformációs egység felhasználásával |
CN105791856B (zh) * | 2010-11-23 | 2019-07-12 | Lg电子株式会社 | 由编码装置和解码装置执行的间预测方法 |
US8860785B2 (en) * | 2010-12-17 | 2014-10-14 | Microsoft Corporation | Stereo 3D video support in computing devices |
EP2661080A4 (en) * | 2010-12-31 | 2016-06-29 | Korea Electronics Telecomm | METHOD FOR CODING VIDEO INFORMATION AND METHOD FOR DECODING VIDEO INFORMATION AND DEVICE THEREFOR |
US9848197B2 (en) * | 2011-03-10 | 2017-12-19 | Qualcomm Incorporated | Transforms in video coding |
US20120294353A1 (en) | 2011-05-16 | 2012-11-22 | Mediatek Inc. | Apparatus and Method of Sample Adaptive Offset for Luma and Chroma Components |
US9807401B2 (en) * | 2011-11-01 | 2017-10-31 | Qualcomm Incorporated | Transform unit partitioning for chroma components in video coding |
KR20130049526A (ko) * | 2011-11-04 | 2013-05-14 | 오수미 | 복원 블록 생성 방법 |
US20130113880A1 (en) * | 2011-11-08 | 2013-05-09 | Jie Zhao | High Efficiency Video Coding (HEVC) Adaptive Loop Filter |
US9451287B2 (en) * | 2011-11-08 | 2016-09-20 | Qualcomm Incorporated | Context reduction for context adaptive binary arithmetic coding |
CA2773990C (en) * | 2011-11-19 | 2015-06-30 | Research In Motion Limited | Multi-level significance map scanning |
US20130128971A1 (en) * | 2011-11-23 | 2013-05-23 | Qualcomm Incorporated | Transforms in video coding |
KR20130058524A (ko) * | 2011-11-25 | 2013-06-04 | 오수미 | 색차 인트라 예측 블록 생성 방법 |
US20130188698A1 (en) * | 2012-01-19 | 2013-07-25 | Qualcomm Incorporated | Coefficient level coding |
US9185405B2 (en) * | 2012-03-23 | 2015-11-10 | Qualcomm Incorporated | Coded block flag inference in video coding |
US20130258052A1 (en) * | 2012-03-28 | 2013-10-03 | Qualcomm Incorporated | Inter-view residual prediction in 3d video coding |
WO2014000160A1 (en) * | 2012-06-26 | 2014-01-03 | Intel Corporation | Inter-layer coding unit quadtree pattern prediction |
JP6341426B2 (ja) * | 2012-09-10 | 2018-06-13 | サン パテント トラスト | 画像復号化方法および画像復号化装置 |
CN104704827B (zh) | 2012-11-13 | 2019-04-12 | 英特尔公司 | 用于下一代视频的内容自适应变换译码 |
US9948939B2 (en) * | 2012-12-07 | 2018-04-17 | Qualcomm Incorporated | Advanced residual prediction in scalable and multi-view video coding |
JP2014209175A (ja) * | 2013-03-27 | 2014-11-06 | キヤノン株式会社 | 画像表示装置 |
US10291934B2 (en) * | 2013-10-02 | 2019-05-14 | Arris Enterprises Llc | Modified HEVC transform tree syntax |
-
2013
- 2013-08-27 JP JP2014534172A patent/JP6341426B2/ja active Active
- 2013-08-27 CN CN201380042457.2A patent/CN104604225B/zh active Active
- 2013-08-27 MX MX2015001706A patent/MX340434B/es active IP Right Grant
- 2013-08-27 WO PCT/JP2013/005051 patent/WO2014038153A1/ja active Application Filing
- 2013-08-27 KR KR1020157004116A patent/KR102134367B1/ko active IP Right Grant
- 2013-09-05 TW TW102132010A patent/TWI609585B/zh active
- 2013-09-05 US US14/018,657 patent/US9031334B2/en active Active
- 2013-09-09 AR ARP130103209A patent/AR092495A1/es active IP Right Grant
-
2015
- 2015-01-30 US US14/609,811 patent/US9326005B2/en active Active
-
2016
- 2016-02-12 US US15/042,410 patent/US9781437B2/en active Active
-
2017
- 2017-07-18 US US15/652,510 patent/US9955175B2/en active Active
-
2018
- 2018-03-14 US US15/921,180 patent/US10063865B2/en active Active
- 2018-05-02 JP JP2018088617A patent/JP2018139438A/ja not_active Revoked
- 2018-07-26 US US16/046,348 patent/US10313688B2/en active Active
-
2019
- 2019-04-22 US US16/390,651 patent/US10616589B2/en active Active
Non-Patent Citations (3)
Title |
---|
BENJAMIN BROSS ET AL.: "High efficiency video coding (HEVC) text specification draft 7", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11 9TH MEETING, 27 April 2012 (2012-04-27) - 7 May 2012 (2012-05-07), GENEVA, CH, sections 7.3.8, 7.3.9 * |
LIWEI GUO ET AL.: "Unified CBFU and CBFV Coding in RQT", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11 9TH MEETING, 27 April 2012 (2012-04-27) - 7 May 2012 (2012-05-07), GENEVA, CH * |
TIM HELLMAN ET AL.: "Changing Luma/Chroma Coefficient Interleaving from CU to TU level", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11 7TH MEETING, 21 November 2011 (2011-11-21) - 30 November 2011 (2011-11-30), GENEVA, CH * |
Also Published As
Publication number | Publication date |
---|---|
US9955175B2 (en) | 2018-04-24 |
MX2015001706A (es) | 2015-04-14 |
TW201415899A (zh) | 2014-04-16 |
TWI609585B (zh) | 2017-12-21 |
JP2018139438A (ja) | 2018-09-06 |
JP6341426B2 (ja) | 2018-06-13 |
US20170318303A1 (en) | 2017-11-02 |
US10313688B2 (en) | 2019-06-04 |
KR20150056533A (ko) | 2015-05-26 |
JPWO2014038153A1 (ja) | 2016-08-08 |
US20160165240A1 (en) | 2016-06-09 |
US20180338150A1 (en) | 2018-11-22 |
KR102134367B1 (ko) | 2020-07-15 |
CN104604225A (zh) | 2015-05-06 |
US20180205959A1 (en) | 2018-07-19 |
US10063865B2 (en) | 2018-08-28 |
AR092495A1 (es) | 2015-04-22 |
US20190253723A1 (en) | 2019-08-15 |
MX340434B (es) | 2016-07-08 |
CN104604225B (zh) | 2018-01-26 |
US9031334B2 (en) | 2015-05-12 |
US9781437B2 (en) | 2017-10-03 |
US9326005B2 (en) | 2016-04-26 |
US20140072215A1 (en) | 2014-03-13 |
US20150156512A1 (en) | 2015-06-04 |
US10616589B2 (en) | 2020-04-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6341426B2 (ja) | Image decoding method and image decoding device | |
JP6305590B2 (ja) | Image decoding method and image decoding device | |
WO2013021619A1 (ja) | Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device | |
WO2012108205A1 (ja) | Image encoding method, image encoding device, image decoding method, image decoding device, and image encoding/decoding device | |
WO2013183268A1 (ja) | Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device | |
WO2013111593A1 (ja) | Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device | |
JP6327435B2 (ja) | Image encoding method, image decoding method, image encoding device, and image decoding device | |
JP6210368B2 (ja) | Image decoding method and image decoding device | |
WO2012096178A1 (ja) | Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device | |
WO2013094199A1 (ja) | Image encoding method, image encoding device, image decoding method, image decoding device, and image encoding/decoding device | |
WO2014002407A1 (ja) | Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device | |
JP2017201820A (ja) | Image encoding method and image encoding device | |
WO2013073184A1 (ja) | Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device | |
WO2014038130A1 (ja) | Image encoding/decoding method, device, and image encoding/decoding device | |
WO2013118485A1 (ja) | Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device | |
WO2012098868A1 (ja) | Image encoding method, image decoding method, image encoding device, image decoding device, and image encoding/decoding device | |
WO2012114693A1 (ja) | Arithmetic decoding method and arithmetic encoding method | |
WO2011132400A1 (ja) | Image encoding method and image decoding method | |
WO2015015681A1 (ja) | Image encoding method and image encoding device | |
JP2016146556A (ja) | Image encoding method, image decoding method, image encoding device, and image decoding device | |
WO2012095930A1 (ja) | Image encoding method, image decoding method, image encoding device, and image decoding device | |
WO2012077349A1 (ja) | Image encoding method and image decoding method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13836137 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2014534172 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: MX/A/2015/001706 Country of ref document: MX |
|
ENP | Entry into the national phase |
Ref document number: 20157004116 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 13836137 Country of ref document: EP Kind code of ref document: A1 |