WO2014007131A1 - Image decoding device and image encoding device

Info

Publication number: WO2014007131A1
Application number: PCT/JP2013/067618 (JP2013067618W)
Authority: WO (WIPO / PCT)
Prior art keywords: prediction, intra, unit, prediction mode, layer
Other languages: English (en), Japanese (ja)
Inventors: 将伸 八杉, 山本 智幸
Original assignee: シャープ株式会社 (Sharp Corporation)
Application filed by シャープ株式会社
Publication of WO2014007131A1

Classifications

    • H04N19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals (H: Electricity; H04: Electric communication technique; H04N: Pictorial communication, e.g. television)
    • H04N19/187: Adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a scalable video layer
    • H04N19/11: Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
    • H04N19/176: Adaptive coding characterised by the coding unit, the unit being an image region, e.g. a block or a macroblock
    • H04N19/186: Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N19/30: Coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
    • H04N19/70: Coding or decoding characterised by syntax aspects related to video coding, e.g. related to compression standards

Definitions

  • The present invention relates to an image decoding apparatus that decodes hierarchically encoded data in which images are hierarchically encoded, and an image encoding apparatus that generates hierarchically encoded data by hierarchically encoding images.
  • Images and moving images are representative examples of information transmitted in communication systems or recorded in storage devices. Techniques for encoding images (hereinafter including moving images) for such transmission and storage are conventionally known.
  • Known video coding schemes include H.264/MPEG-4 AVC and its successor codec HEVC (High-Efficiency Video Coding) (Non-Patent Document 1).
  • In such video coding schemes, a predicted image is usually generated based on a locally decoded image obtained by encoding/decoding the input image, and the prediction residual obtained by subtracting the predicted image from the input image (original image), sometimes referred to as the "difference image" or "residual image", is encoded.
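  • As an illustration only (not the patent's own method), the following minimal Python sketch shows this predict/subtract/reconstruct flow; the flat predictor and the block values are assumptions made for the example:

```python
import numpy as np

# Toy 4x4 block from the input image and a flat predictor (both assumptions).
original = np.arange(16, dtype=np.int16).reshape(4, 4)
predicted = np.full((4, 4), original.mean(), dtype=np.int16)  # predicted image

# Encoder side: only the prediction residual (difference image) is encoded.
residual = original - predicted

# Decoder side: the same predicted image plus the decoded residual restores
# the block (losslessly here, since quantization is skipped in this sketch).
reconstructed = predicted + residual
assert np.array_equal(reconstructed, original)
```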
  • Methods for generating a predicted image include inter-picture prediction (inter prediction) and intra-picture prediction (intra prediction).
  • In intra prediction, predicted images within a frame are generated sequentially based on the locally decoded image of that same frame.
  • In inter prediction, a predicted image is generated by motion compensation between frames.
  • Information relating to motion compensation (motion compensation parameters) is often not encoded directly, in order to reduce the code amount; instead, in inter prediction, the motion compensation parameters are estimated based on the decoding situation around the target block.
  • Hierarchical coding methods include H.264/AVC Annex G Scalable Video Coding (SVC), standardized by ISO/IEC and ITU-T.
  • SVC supports spatial scalability, temporal scalability, and SNR scalability.
  • In spatial scalability, an image obtained by down-sampling the original image to a desired resolution is encoded as the lower layer with H.264/AVC.
  • The upper layer then performs inter-layer prediction in order to remove redundancy between layers.
  • Examples of inter-layer prediction include motion information prediction, in which information related to motion prediction is predicted from the information of the lower layer at the same time, and texture prediction, in which prediction is performed from an image obtained by up-sampling the decoded image of the lower layer at the same time (Non-Patent Document 2).
  • In motion information prediction, motion information is encoded using the motion information of the reference layer as an estimated value.
  • FIG. 30 is a diagram illustrating syntax referred to in inter-layer prediction, in which (a) illustrates syntax included in a slice header and (b) illustrates syntax included in a macroblock layer.
  • The syntax adaptive_base_mode_flag shown in FIG. 30(a) is a flag that specifies whether the base mode flag (base_mode_flag) is encoded for each macroblock, and default_base_mode_flag is a flag that specifies the initial value of the base mode flag.
  • The base mode flag base_mode_flag shown in FIG. 30(b) is a flag that specifies whether inter-layer prediction is performed for each macroblock.
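  • To make the interaction of these flags concrete, here is a hedged Python sketch of how a decoder might derive base_mode_flag per macroblock; the bit-reader interface and function name are assumptions, not SVC's actual parsing code:

```python
def derive_base_mode_flag(bitreader, adaptive_base_mode_flag: int,
                          default_base_mode_flag: int) -> int:
    """Sketch: if the slice header says the flag is coded per macroblock,
    read it from the bitstream; otherwise fall back to the default value."""
    if adaptive_base_mode_flag:
        return bitreader.read_bit()   # assumed 1-bit reader interface
    return default_base_mode_flag

# base_mode_flag == 1 would then trigger inter-layer prediction for the MB.
```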
  • HEVC: High Efficiency Video Coding
  • JCT-VC: Joint Collaborative Team on Video Coding
  • The present invention has been made in view of the above problems, and an object of the present invention is to realize an image decoding apparatus and an image encoding apparatus that can improve coding efficiency more effectively in a hierarchical coding scheme.
  • In order to solve the above problems, an image decoding apparatus according to the present invention decodes upper-layer encoded data included in hierarchically encoded data and restores an upper-layer decoded image using an upper-layer predicted image generated with reference to a decoded image of the lower layer. The apparatus includes selection means for selecting, with reference to the encoded data, one prediction mode from a prediction mode group including at least a part of a plurality of predetermined intra prediction modes, and predicted image generation means for generating a predicted image of the target prediction unit in the upper layer based on the prediction mode selected by the selection means.
  • The prediction mode group includes an inter-layer intra prediction mode in which the predicted image of the target prediction unit in the upper layer is generated based on the decoded image of a reference prediction unit, that is, a prediction unit of the lower layer that is temporally located at the same time as the target prediction unit and spatially arranged at the position corresponding to the target prediction unit. The selection means selects one prediction mode by referring to syntax included in the encoded data.
  • According to the above configuration, the prediction mode group includes an inter-layer intra prediction mode in which the predicted image of the target prediction unit in the upper layer is generated based on the decoded image of a reference prediction unit, that is, a prediction unit of the lower layer that is temporally located at the same time as the target prediction unit and spatially arranged at the position corresponding to the target prediction unit, and the selection means selects one prediction mode from this prediction mode group.
  • Therefore, a predicted image in the upper layer can be generated based on the decoded image of the lower layer, so that high coding efficiency can be realized.
  • Further, the selection means selects one prediction mode by referring to a common syntax, included in the encoded data, that relates to both the inter-layer intra prediction mode and the plurality of intra prediction modes.
  • In other words, the selection means can select the inter-layer intra prediction mode without referring to an alternative flag that specifies whether or not to use the inter-layer intra prediction mode.
  • Here, the prediction unit refers, for example, to the unit called PU (Prediction Unit) in a hierarchical tree block structure, but is not limited thereto; it may be the unit called CU (Coding Unit) or the unit called TU (Transform Unit).
  • PU: Prediction Unit
  • CU: Coding Unit
  • TU: Transform Unit
  • In the image decoding apparatus according to the present invention, the prediction mode group preferably includes the inter-layer intra prediction mode in place of one of the predetermined plurality of intra prediction modes.
  • According to the above configuration, since the prediction mode group includes the inter-layer intra prediction mode in place of one of the predetermined intra prediction modes, a configuration capable of selecting the inter-layer intra prediction mode can be realized without increasing the total number of prediction modes.
  • In the image decoding apparatus according to the present invention, the prediction mode group preferably includes the inter-layer intra prediction mode in addition to the plurality of predetermined intra prediction modes.
  • According to the above configuration, since the prediction mode group includes the inter-layer intra prediction mode in addition to the plurality of predetermined intra prediction modes, the coding efficiency can be improved.
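  • The following Python sketch, offered only as an illustration under an assumed mode numbering, contrasts the two variants just described: replacing one conventional intra mode with the inter-layer intra mode versus appending it, with selection driven by one shared mode index (the common syntax) rather than a separate flag:

```python
INTRA_MODES = list(range(35))        # e.g. HEVC-style intra modes 0..34 (assumption)
INTER_LAYER_INTRA = "inter_layer"    # marker for the inter-layer intra prediction mode

def mode_group_replace(replaced_mode: int = 34) -> list:
    """Variant 1: the inter-layer intra mode takes the slot of one intra mode,
    so the total number of prediction modes stays the same."""
    group = list(INTRA_MODES)
    group[replaced_mode] = INTER_LAYER_INTRA
    return group

def mode_group_append() -> list:
    """Variant 2: the inter-layer intra mode is added on top of all intra modes."""
    return list(INTRA_MODES) + [INTER_LAYER_INTRA]

# One common syntax element (a mode index) selects any entry, so no
# separate on/off flag for inter-layer intra prediction is needed.
def select_mode(group: list, coded_mode_index: int):
    return group[coded_mode_index]
```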
  • In the image decoding apparatus according to the present invention, the prediction mode group preferably includes a plurality of types of inter-layer intra prediction modes, and the selection means preferably selects one of the plurality of types of inter-layer intra prediction modes by referring to a flag included in the encoded data.
  • According to the above configuration, since the selection means selects one of the plurality of types of inter-layer intra prediction modes by referring to a flag included in the encoded data, the prediction accuracy can be further improved.
  • In the image decoding apparatus according to the present invention, the prediction mode group preferably includes a plurality of types of inter-layer intra prediction modes, and the selection means preferably selects one prediction mode by referring to a common syntax, included in the encoded data, that relates to both the plurality of types of inter-layer intra prediction modes and the plurality of intra prediction modes.
  • According to the above configuration, since one prediction mode is selected by referring to the common syntax relating to the plurality of types of inter-layer intra prediction modes and the plurality of intra prediction modes, the prediction accuracy can be further improved.
  • In order to solve the above problems, another image decoding apparatus according to the present invention decodes upper-layer encoded data included in hierarchically encoded data and restores an upper-layer decoded image using an upper-layer predicted image generated with reference to a decoded image of the lower layer. The apparatus includes selection means for selecting, with reference to the encoded data, one prediction mode from a prediction mode group including at least a part of a plurality of predetermined intra prediction modes, and predicted image generation means for generating a predicted image of the target prediction unit in the upper layer based on the prediction mode selected by the selection means.
  • The prediction mode group includes an inter-layer intra prediction mode in which the predicted image of the target prediction unit in the upper layer is generated based on the decoded image of a reference prediction unit, that is, a prediction unit of the lower layer that is temporally located at the same time as the target prediction unit and spatially arranged at the position corresponding to the target prediction unit. The selection means selects one prediction mode by referring to a flag, included in the encoded data, that indicates whether or not to select the inter-layer intra prediction mode.
  • According to the above configuration, the prediction mode group includes an inter-layer intra prediction mode in which the predicted image of the target prediction unit in the upper layer is generated based on the decoded image of a reference prediction unit, that is, a prediction unit of the lower layer that is temporally located at the same time as the target prediction unit and spatially arranged at the position corresponding to the target prediction unit, and the selection means selects one prediction mode from this prediction mode group.
  • Therefore, a predicted image in the upper layer can be generated based on the decoded image of the lower layer, so that high coding efficiency can be realized.
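  • As a hedged sketch of this second signaling variant (the function, flag position, and helper are assumptions), the decoder first reads an explicit flag and only falls back to conventional intra mode decoding when the flag is zero:

```python
def select_prediction_mode(bitreader):
    """Sketch: an explicit flag gates the inter-layer intra prediction mode,
    unlike the common-syntax variant, which needs no such flag."""
    if bitreader.read_bit():          # assumed per-PU flag in the bitstream
        return "inter_layer_intra"
    return decode_conventional_intra_mode(bitreader)  # hypothetical helper
```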
  • In the image decoding apparatus according to the present invention, the prediction modes included in the prediction mode group are preferably prediction modes relating to luminance, and when the selection means selects the inter-layer intra prediction mode as the prediction mode relating to luminance, it preferably also selects, as the prediction mode relating to color difference, the inter-layer intra prediction mode in which the predicted image of the target prediction unit in the upper layer is generated based on the decoded image of the reference prediction unit, that is, a prediction unit of the lower layer that is temporally located at the same time as the target prediction unit and spatially arranged at the position corresponding to the target prediction unit.
  • According to the above configuration, when the selection means selects the inter-layer intra prediction mode as the prediction mode relating to luminance, the inter-layer intra prediction mode is also selected as the prediction mode relating to color difference, so that the prediction accuracy of the predicted image relating to color difference can be improved.
  • In the image decoding apparatus according to the present invention, the prediction modes included in the prediction mode group are preferably prediction modes relating to luminance, and when the DM mode, that is, a mode that uses the same prediction mode as the prediction mode selected for luminance, is provisionally selected as the prediction mode relating to color difference, the selection means preferably refers to the value of a flag included in the encoded data to decide whether to actually select, as the prediction mode relating to color difference, the DM mode or the inter-layer intra prediction mode in which the predicted image of the target prediction unit in the upper layer is generated based on the decoded image of the reference prediction unit, that is, a prediction unit of the lower layer that is temporally located at the same time as the target prediction unit and spatially arranged at the position corresponding to the target prediction unit.
  • According to the above configuration, it is determined by the flag whether the DM mode or the inter-layer intra prediction mode is selected as the prediction mode relating to color difference, so that the prediction accuracy of the predicted image relating to color difference can be improved.
  • In the image decoding apparatus according to the present invention, the prediction modes included in the prediction mode group are preferably prediction modes relating to luminance, and when the DM mode, that is, a mode that uses the same prediction mode as the prediction mode selected for luminance, is provisionally selected as the prediction mode relating to color difference, the selection means preferably selects, as the prediction mode relating to color difference, the inter-layer intra prediction mode in which the predicted image of the target prediction unit in the upper layer is generated based on the decoded image of the reference prediction unit, that is, a prediction unit of the lower layer that is temporally located at the same time as the target prediction unit and spatially arranged at the position corresponding to the target prediction unit.
  • According to the above configuration, when the DM mode is provisionally selected as the prediction mode relating to color difference, the inter-layer intra prediction mode is selected, so that the prediction accuracy of the predicted image relating to color difference can be improved.
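  • Purely as an illustration of the color difference variants above (the names and the DM convention are assumptions, not the patent's normative derivation), the sketch below derives the color difference mode from the luminance mode, optionally consulting a flag:

```python
def derive_chroma_mode(luma_mode, dm_selected: bool,
                       chroma_intra_layer_pred_flag: int = 0):
    """Sketch of the variants above: follow luma into inter-layer intra
    prediction, or let a flag arbitrate between DM and inter-layer intra."""
    if luma_mode == "inter_layer_intra":
        return "inter_layer_intra"          # variant 1: chroma follows luma
    if dm_selected:
        if chroma_intra_layer_pred_flag:    # variant 2: flag decides
            return "inter_layer_intra"
        return luma_mode                    # DM mode: reuse the luma mode
    return "explicit_chroma_mode"           # placeholder for other signaling
```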
  • In order to solve the above problems, still another image decoding apparatus according to the present invention decodes upper-layer encoded data included in hierarchically encoded data and restores an upper-layer decoded image using an upper-layer predicted image generated with reference to a decoded image of the lower layer. The apparatus includes selection means for selecting one prediction mode from a prediction mode group including a plurality of predetermined intra prediction modes, and predicted image generation means for generating a predicted image of the target prediction unit in the upper layer based on the prediction mode selected by the selection means.
  • The prediction mode group includes the intra prediction mode selected for a reference prediction unit, that is, a prediction unit of the lower layer that is located at the same time as the target prediction unit in the upper layer and spatially arranged at the position corresponding to the target prediction unit.
  • Note that, even if the resolutions differ, the target prediction unit in the upper layer and the reference prediction unit in the lower layer have the same prediction direction.
  • According to the above configuration, the prediction mode group includes the intra prediction mode selected for the reference prediction unit, and the selection means selects one prediction mode from this prediction mode group, so that the coding efficiency can be improved.
  • In the image decoding apparatus according to the present invention, the selection means preferably selects one prediction mode from an estimated prediction mode group, that is, a group set to include a part of the plurality of predetermined intra prediction modes and determined according to the prediction modes assigned to prediction units around the target prediction unit, and the estimated prediction mode group is preferably set to include the intra prediction mode selected for the reference prediction unit.
  • According to the above configuration, since the estimated prediction mode group is set to include the intra prediction mode selected for the reference prediction unit, that mode can be suitably used to generate the predicted image in the upper layer. Therefore, according to the above configuration, the coding efficiency can be improved.
  • In the image decoding apparatus according to the present invention, the estimated prediction modes included in the estimated prediction mode group are preferably distinguished from each other by an index, and the intra prediction mode selected for the reference prediction unit is preferably assigned a fixed, predetermined index within the estimated prediction mode group.
  • According to the above configuration, since the intra prediction mode selected for the reference prediction unit has a fixed index within the estimated prediction mode group, the frequency with which that mode is selected increases, and the coding efficiency can be improved.
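  • A minimal Python sketch of this estimated prediction mode (MPM) list construction, assuming HEVC-like mode numbers, a group size of three, and slot 0 as the fixed index for the reference layer's mode; the fallback modes and ordering are assumptions for the example:

```python
PLANAR, DC, VERTICAL = 0, 1, 26  # HEVC-style mode numbers (assumption)

def build_estimated_mode_group(base_layer_mode: int,
                               left_mode: int, above_mode: int) -> list:
    """Sketch: the co-located reference prediction unit's intra mode always
    occupies index 0, so short (cheap) mpm_idx codewords favor it."""
    mpms = [base_layer_mode]                 # fixed, predetermined index 0
    for candidate in (left_mode, above_mode, PLANAR, DC, VERTICAL):
        if candidate not in mpms:
            mpms.append(candidate)
        if len(mpms) == 3:                   # assumed group size of 3
            break
    return mpms

print(build_estimated_mode_group(base_layer_mode=10, left_mode=10, above_mode=1))
# [10, 1, 0] -> base-layer mode first, then neighbor and fallback modes
```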
  • In the image decoding apparatus according to the present invention, the prediction mode group preferably includes the intra prediction mode selected for the reference prediction unit only when the target prediction unit is the prediction unit at a specific position in the processing order within the coding unit.
  • According to the above configuration, since the prediction mode group includes the intra prediction mode selected for the reference prediction unit only in that case, the memory for holding the intra prediction modes selected for reference prediction units can be reduced.
  • In order to solve the above problems, an image encoding apparatus according to the present invention generates upper-layer encoded data by encoding the residual obtained by subtracting, from the original image, an upper-layer predicted image generated with reference to a decoded image of the lower layer. The apparatus includes selection means for selecting one prediction mode from a prediction mode group including at least a part of a plurality of predetermined intra prediction modes, and predicted image generation means for generating a predicted image of the target prediction unit in the upper layer based on the selected prediction mode.
  • The prediction mode group includes an inter-layer intra prediction mode in which the predicted image of the target prediction unit in the upper layer is generated based on the decoded image of a reference prediction unit, that is, a prediction unit of the lower layer that is temporally located at the same time as the target prediction unit and spatially arranged at the position corresponding to the target prediction unit, and the inter-layer intra prediction mode and the plurality of intra prediction modes are specified using a common syntax.
  • According to the above configuration, the prediction mode group includes an inter-layer intra prediction mode in which the predicted image of the target prediction unit in the upper layer is generated based on the decoded image of a reference prediction unit, that is, a prediction unit of the lower layer that is temporally located at the same time as the target prediction unit and spatially arranged at the position corresponding to the target prediction unit, and the selection means selects one prediction mode from this prediction mode group.
  • Therefore, a predicted image in the upper layer can be generated based on the decoded image of the lower layer, so that high coding efficiency can be realized.
  • Further, the selection means selects one prediction mode by referring to a common syntax, included in the encoded data, that relates to both the inter-layer intra prediction mode and the plurality of intra prediction modes. In other words, the selection means can select the inter-layer intra prediction mode without referring to an alternative flag that specifies whether or not to use the inter-layer intra prediction mode.
  • Therefore, an increase in the code amount included in the encoded data can be suppressed, so that the coding efficiency can be improved.
  • As described above, the image decoding apparatus according to the present invention decodes upper-layer encoded data included in hierarchically encoded data and restores an upper-layer decoded image using an upper-layer predicted image generated with reference to a decoded image of the lower layer. The apparatus includes selection means for selecting, with reference to the encoded data, one prediction mode from a prediction mode group including at least a part of a plurality of predetermined intra prediction modes, and predicted image generation means for generating a predicted image of the target prediction unit in the upper layer based on the prediction mode selected by the selection means. The prediction mode group includes an inter-layer intra prediction mode in which the predicted image of the target prediction unit is generated based on the decoded image of a reference prediction unit, that is, a prediction unit of the lower layer that is temporally located at the same time as the target prediction unit and spatially arranged at the position corresponding to the target prediction unit, and the selection means selects one prediction mode by referring to a common syntax relating to both the inter-layer intra prediction mode and the plurality of intra prediction modes.
  • FIG. 1 is a functional block diagram illustrating the configuration of the prediction parameter restoration unit provided in a hierarchical video decoding device according to an embodiment of the present invention.
  • FIG. 7 is a diagram illustrating PU partition type patterns, where (a) to (h) show the partition shapes for the PU partition types 2N×2N, 2N×N, 2N×nU, 2N×nD, N×2N, nL×2N, nR×2N, and N×N, respectively. Another drawing is a functional block diagram showing the schematic configuration of the hierarchical video decoding device.
  • Another drawing shows tables used to derive the color difference prediction mode, where (a) shows the table for the case where the LM mode is included in the color difference prediction modes and (b) shows the table for the case where the LM mode is not included.
  • Another drawing is a functional block diagram showing the schematic configuration of the texture restoration unit provided in the hierarchical video decoding device.
  • Another drawing shows (a) a recording device equipped with the hierarchical video encoding device and (b) a playback device equipped with the hierarchical video decoding device. FIG. 30 is a diagram showing syntax referred to in inter-layer prediction according to a prior art example, in which (a) shows the syntax included in a slice header and (b) shows the syntax included in a macroblock layer.
  • A hierarchical video decoding device 1 and a hierarchical video encoding device 2 according to an embodiment of the present invention are described below with reference to FIGS. 1 to 30. <Overview>
  • The hierarchical video decoding device (image decoding device) 1 according to the present embodiment decodes encoded data that has been scalably encoded (SVC: Scalable Video Coding) by the hierarchical video encoding device (image encoding device) 2.
  • Scalable video coding is a coding scheme that hierarchically encodes a moving image from low quality to high quality; it is standardized, for example, in H.264/AVC Annex G SVC.
  • The quality of a moving image here broadly means any element that affects the subjective and objective appearance of the moving image.
  • The quality of a moving image includes, for example, "resolution", "frame rate", "image quality", and "pixel representation accuracy". Therefore, in the following, differing moving image quality means, for example, differing "resolution", but is not limited thereto.
  • SVC is also classified, by the type of information that is layered, into (1) spatial scalability, (2) temporal scalability, and (3) SNR (Signal-to-Noise Ratio) scalability.
  • Spatial scalability is a technique for layering in terms of resolution and image size.
  • Temporal scalability is a technique for layering in terms of frame rate (the number of frames per unit time).
  • SNR scalability is a technique for layering in terms of coding noise.
  • Prior to the detailed description of the hierarchical video encoding device 2 and the hierarchical video decoding device 1 according to the present embodiment, (1) the layer structure of the hierarchically encoded data generated by the hierarchical video encoding device 2 and decoded by the hierarchical video decoding device 1 is described first, and then (2) specific examples of the data structure that can be adopted in each layer are described.
  • FIG. 2 is a diagram schematically illustrating a case where a moving image is hierarchically encoded/decoded in three layers: a lower layer L3, a middle layer L2, and an upper layer L1. That is, in the examples shown in FIGS. 2(a) and 2(b), the upper layer L1 is the highest of the three layers and the lower layer L3 is the lowest.
  • In the following, a decoded image corresponding to a specific quality that can be decoded from the hierarchically encoded data is referred to as a decoded image of a specific layer (or a decoded image corresponding to a specific layer), for example the decoded image POUT#A of the upper layer L1.
  • FIG. 2(a) shows hierarchical video encoding devices 2#A to 2#C that generate encoded data DATA#A to DATA#C by hierarchically encoding input images PIN#A to PIN#C, respectively.
  • FIG. 2(b) shows hierarchical video decoding devices 1#A to 1#C that generate decoded images POUT#A to POUT#C by decoding the hierarchically encoded data DATA#A to DATA#C, respectively.
  • The input images PIN#A, PIN#B, and PIN#C on the encoding device side derive from the same original image but differ in image quality (resolution, frame rate, image quality, and so on).
  • The image quality decreases in the order PIN#A, PIN#B, PIN#C.
  • The hierarchical video encoding device 2#C of the lower layer L3 encodes the input image PIN#C of the lower layer L3 to generate the encoded data DATA#C of the lower layer L3. The encoded data DATA#C includes the basic information necessary for decoding the decoded image POUT#C of the lower layer L3 (indicated by "C" in FIG. 2). Since the lower layer L3 is the lowest layer, the encoded data DATA#C of the lower layer L3 is also referred to as basic encoded data.
  • The hierarchical video encoding device 2#B of the middle layer L2 encodes the input image PIN#B of the middle layer L2 with reference to the encoded data DATA#C of the lower layer to generate the encoded data DATA#B of the middle layer L2. In addition to the basic information "C", the encoded data DATA#B of the middle layer L2 includes the additional information necessary for decoding the decoded image POUT#B of the middle layer (indicated by "B" in FIG. 2).
  • The hierarchical video encoding device 2#A of the upper layer L1 encodes the input image PIN#A of the upper layer L1 with reference to the encoded data DATA#B of the middle layer L2 to generate the encoded data DATA#A of the upper layer L1.
  • The encoded data DATA#A of the upper layer L1 includes the basic information "C" necessary for decoding the decoded image POUT#C of the lower layer L3, the additional information "B" necessary for decoding the decoded image POUT#B of the middle layer L2, and the additional information necessary for decoding the decoded image POUT#A of the upper layer (indicated by "A" in FIG. 2).
  • Thus, the encoded data DATA#A of the upper layer L1 includes information relating to decoded images of a plurality of different qualities.
  • Next, the decoding device side is described with reference to FIG. 2(b).
  • On the decoding device side, the decoding devices 1#A, 1#B, and 1#C corresponding to the upper layer L1, the middle layer L2, and the lower layer L3 decode the encoded data DATA#A, DATA#B, and DATA#C and output the decoded images POUT#A, POUT#B, and POUT#C, respectively.
  • For example, the hierarchy decoding device 1#B of the middle layer L2 may extract the information necessary for decoding the decoded image POUT#B (that is, "B" and "C" included in the hierarchically encoded data DATA#A) from the hierarchically encoded data DATA#A of the upper layer L1 and decode the decoded image POUT#B.
  • In other words, the decoded images POUT#A, POUT#B, and POUT#C can each be decoded based on information included in the hierarchically encoded data DATA#A of the upper layer L1.
  • The hierarchically encoded data is not limited to data encoded in three layers; it may be hierarchically encoded in two layers, or in more than three layers.
  • The hierarchically encoded data may also be configured so that a decoded image of a specific layer can be decoded without referring to other layers. For example, although it was described above that "C" and "B" are referred to for decoding the decoded image POUT#B, the present invention is not limited thereto; the hierarchically encoded data can also be configured so that the decoded image POUT#B can be decoded using only "B".
  • The hierarchically encoded data can also be generated so that the layers differ in quantization granularity; in that case, the lower-layer hierarchical video encoding device generates its hierarchically encoded data by quantizing the prediction residual with a larger quantization width than the upper-layer hierarchical video encoding device.
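  • To make the quantization-width idea concrete, here is a small illustrative sketch (the step sizes are arbitrary assumptions, not values from the patent):

```python
import numpy as np

def quantize(residual: np.ndarray, step: float) -> np.ndarray:
    """Uniform scalar quantization: a larger step gives a coarser, cheaper layer."""
    return np.round(residual / step).astype(np.int32)

def dequantize(levels: np.ndarray, step: float) -> np.ndarray:
    return levels * step

residual = np.array([[7.0, -3.5], [12.25, 0.5]])
base_levels = quantize(residual, step=8.0)         # lower layer: wide step
enhancement_levels = quantize(residual, step=2.0)  # upper layer: fine step
# The base layer reconstructs a coarser residual than the enhancement layer.
print(dequantize(base_levels, 8.0), dequantize(enhancement_levels, 2.0))
```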
  • Upper layer: A layer located above a given layer is referred to as an upper layer.
  • For example, the upper layers of the lower layer L3 are the middle layer L2 and the upper layer L1.
  • A decoded image of an upper layer means a decoded image of higher quality (for example, higher resolution, higher frame rate, higher image quality, and so on).
  • Lower layer: A layer located below a given layer is referred to as a lower layer.
  • For example, the lower layers of the upper layer L1 are the middle layer L2 and the lower layer L3.
  • A decoded image of a lower layer means a decoded image of lower quality.
  • Target layer: The layer that is the target of decoding or encoding.
  • Reference layer: A specific lower layer referred to in decoding the decoded image corresponding to the target layer is referred to as a reference layer.
  • For example, the reference layers of the upper layer L1 are the middle layer L2 and the lower layer L3.
  • However, the hierarchically encoded data can be configured so that not all lower layers need to be referred to in decoding a specific layer.
  • For example, the hierarchically encoded data can be configured so that the reference layer of the upper layer L1 is either the middle layer L2 or the lower layer L3.
  • Base layer: The layer located at the bottom of the hierarchy is referred to as the base layer.
  • A decoded image of the base layer is the decoded image of the lowest quality that can be decoded from the encoded data, and is referred to as the basic decoded image.
  • In other words, the basic decoded image is the decoded image corresponding to the lowest layer.
  • The partial encoded data of the hierarchically encoded data necessary for decoding the basic decoded image is referred to as basic encoded data.
  • For example, the basic information "C" included in the hierarchically encoded data DATA#A of the upper layer L1 is the basic encoded data.
  • Enhancement layer: An upper layer of the base layer is referred to as an enhancement layer.
  • Layer identifier: The layer identifier identifies a layer and corresponds one-to-one to a layer.
  • The hierarchically encoded data includes layer identifiers used for selecting the partial encoded data necessary for decoding the decoded image of a specific layer.
  • A subset of the hierarchically encoded data associated with the layer identifier corresponding to a specific layer is also referred to as a layer representation.
  • In decoding the decoded image of a specific layer, the layer representation of that layer and/or the layer representations corresponding to its lower layers are used. That is, in decoding the decoded image of the target layer, the layer representation of the target layer and/or the layer representations of one or more layers included among the lower layers of the target layer are used.
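  • As an illustrative sketch of selecting layer representations by layer identifier (the packet structure is an assumption made for the example; real bitstreams carry this in NAL-unit headers):

```python
from typing import List, Tuple

Packet = Tuple[int, bytes]  # (layer_id, payload): simplified stand-in for a NAL unit

def extract_for_target_layer(stream: List[Packet], target_layer_id: int) -> List[Packet]:
    """Keep the target layer's representation and those of its lower layers;
    everything above the target layer can be discarded before decoding."""
    return [(lid, payload) for lid, payload in stream if lid <= target_layer_id]

# Example: layer ids 0 (base), 1 (middle), 2 (upper); decode the middle layer.
stream = [(0, b"C"), (1, b"B"), (2, b"A")]
print(extract_for_target_layer(stream, 1))  # [(0, b'C'), (1, b'B')]
```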
  • Inter-layer prediction means predicting the syntax element values of the target layer, the coding parameters used for decoding the target layer, and the like, based on syntax element values included in the layer representation of a layer (reference layer) different from that of the target layer, on values derived from them, and on decoded images. Inter-layer prediction that predicts information related to motion prediction from reference layer information (at the same time) is sometimes referred to as motion information prediction. Inter-layer prediction that predicts from an up-sampled decoded image of a lower layer (at the same time) is sometimes referred to as texture prediction (or inter-layer intra prediction). The layer used for inter-layer prediction is, for example, a lower layer of the target layer. Performing prediction within the target layer without using a reference layer is sometimes referred to as intra-layer prediction.
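  • A hedged sketch of texture prediction (inter-layer intra prediction): the lower layer's decoded picture is up-sampled to the upper layer's resolution and the co-located region is used as the predicted image. Nearest-neighbor up-sampling and a 2x ratio are assumptions made for brevity; real codecs use interpolation filters.

```python
import numpy as np

def upsample_nearest(picture: np.ndarray, factor: int = 2) -> np.ndarray:
    """Toy nearest-neighbor up-sampler (real designs use interpolation filters)."""
    return picture.repeat(factor, axis=0).repeat(factor, axis=1)

def texture_prediction(base_decoded: np.ndarray, x: int, y: int,
                       w: int, h: int, factor: int = 2) -> np.ndarray:
    """Predicted image of the target PU = co-located block of the up-sampled
    lower-layer decoded picture (same time instant)."""
    up = upsample_nearest(base_decoded, factor)
    return up[y:y + h, x:x + w]

base = np.arange(16, dtype=np.uint8).reshape(4, 4)   # decoded lower-layer picture
pred = texture_prediction(base, x=2, y=2, w=4, h=4)  # 4x4 PU in the upper layer
```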
  • Note that the lower layer and the upper layer may be encoded by different encoding schemes.
  • The encoded data of each layer may be supplied to the hierarchical video decoding device 1 via different transmission paths, or via the same transmission path.
  • For example, when ultra-high-definition video (a moving image, 4K video data) is scalably encoded and transmitted with a base layer and one enhancement layer, the base layer may encode the 4K video data down-scaled and interlaced with MPEG-2 or H.264/AVC and transmit it over a television broadcast network, while the enhancement layer may encode the 4K video (progressive) with HEVC and transmit it over the Internet.
  • FIG. 3 is a diagram illustrating a data structure of encoded data that can be employed in the base layer (the hierarchically encoded data DATA#C in the example of FIG. 2).
  • The hierarchically encoded data DATA#C illustratively includes a sequence and a plurality of pictures constituting the sequence.
  • FIG. 3 shows the hierarchical structure of the data in the hierarchically encoded data DATA#C.
  • FIGS. 3(a) to 3(e) respectively show the sequence layer that defines a sequence SEQ, the picture layer that defines a picture PICT, the slice layer that defines a slice S, the tree block layer that defines a tree block TBLK, and the CU layer that defines a coding unit (CU) included in the tree block TBLK.
  • (Sequence layer) In the sequence layer, a set of data referred to by the hierarchical video decoding device 1 for decoding the sequence SEQ to be processed (hereinafter also referred to as the target sequence) is defined.
  • As shown in FIG. 3(a), the sequence SEQ includes a sequence parameter set SPS (Sequence Parameter Set), a picture parameter set PPS (Picture Parameter Set), an adaptation parameter set APS (Adaptation Parameter Set), pictures PICT_1 to PICT_NP (NP is the total number of pictures included in the sequence SEQ), and supplemental enhancement information (SEI).
  • In the sequence parameter set SPS, a set of coding parameters referred to by the hierarchical video decoding device 1 for decoding the target sequence is defined.
  • In the picture parameter set PPS, a set of coding parameters referred to by the hierarchical video decoding device 1 for decoding each picture in the target sequence is defined.
  • A plurality of PPSs may exist; in that case, one of the plurality of PPSs is selected for each picture in the target sequence.
  • In the adaptation parameter set APS, a set of coding parameters referred to by the hierarchical video decoding device 1 for decoding each slice in the target sequence is defined. A plurality of APSs may exist; in that case, one of the plurality of APSs is selected for each slice in the target sequence.
  • (Picture layer) In the picture layer, a set of data referred to by the hierarchical video decoding device 1 for decoding the picture PICT to be processed (hereinafter also referred to as the target picture) is defined. As shown in FIG. 3(b), the picture PICT includes a picture header PH and slices S_1 to S_NS (NS is the total number of slices included in the picture PICT).
  • The picture header PH includes a coding parameter group referred to by the hierarchical video decoding device 1 in order to determine the decoding method of the target picture.
  • Note that the coding parameter group need not be included directly in the picture header PH; it may be included indirectly, for example by including a reference to the picture parameter set PPS.
  • (Slice layer) In the slice layer, a set of data referred to by the hierarchical video decoding device 1 for decoding the slice S to be processed (also referred to as the target slice) is defined. As shown in FIG. 3(c), the slice S includes a slice header SH and a sequence of tree blocks TBLK_1 to TBLK_NC (NC is the total number of tree blocks included in the slice S).
  • The slice header SH includes a coding parameter group referred to by the hierarchical video decoding device 1 in order to determine the decoding method of the target slice.
  • Slice type designation information (slice_type) designating a slice type is an example of a coding parameter included in the slice header SH.
  • Slice types that can be designated by the slice type designation information include (1) I slices that use only intra prediction at the time of encoding, (2) P slices that use unidirectional prediction or intra prediction at the time of encoding, and (3) B slices that use unidirectional prediction, bidirectional prediction, or intra prediction at the time of encoding.
  • The slice header SH may include a reference to the picture parameter set PPS (pic_parameter_set_id) and a reference to the adaptation parameter set APS (aps_id) included in the sequence layer.
  • The slice header SH also includes filter parameters FP referred to by the adaptive filter provided in the hierarchical video decoding device 1.
  • The filter parameters FP include a filter coefficient group.
  • The filter coefficient group includes (1) tap number designation information designating the number of taps of the filter, (2) filter coefficients a_0 to a_(NT-1) (NT is the total number of filter coefficients included in the filter coefficient group), and (3) an offset.
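  • A minimal sketch of applying such a coefficient group, assuming a 1-D FIR interpretation purely for illustration (the actual adaptive filter shape and normalization are not specified here):

```python
import numpy as np

def apply_adaptive_filter(samples: np.ndarray, coeffs: np.ndarray,
                          offset: float) -> np.ndarray:
    """Sketch: filtered value = sum of tap coefficients times neighboring
    samples, plus the transmitted offset. 'valid' mode skips the borders."""
    return np.convolve(samples, coeffs, mode="valid") + offset

decoded_row = np.array([10., 12., 11., 13., 15., 14.])
coeffs = np.array([0.25, 0.5, 0.25])   # NT = 3 taps (assumed values)
print(apply_adaptive_filter(decoded_row, coeffs, offset=0.5))
```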
  • (Tree block layer) In the tree block layer, a set of data referred to by the hierarchical video decoding device 1 for decoding the tree block TBLK to be processed (hereinafter also referred to as the target tree block) is defined. Note that a tree block may also be referred to as a coding tree block (CTB) or a largest coding unit (LCU).
  • CTB: coding tree block
  • LCU: largest coding unit
  • The tree block TBLK includes a tree block header TBLKH and coding unit information CU_1 to CU_NL (NL is the total number of pieces of coding unit information included in the tree block TBLK).
  • The tree block TBLK is divided into partitions for specifying the block size for each process of intra prediction or inter prediction and transformation.
  • The above partitions of the tree block TBLK are obtained by recursive quadtree splitting.
  • The tree structure obtained by this recursive quadtree splitting is hereinafter referred to as a coding tree.
  • A partition corresponding to a leaf, that is, a node at the end of the coding tree, is referred to as a coding node.
  • A coding node is hereinafter also referred to as a coding unit (CU).
  • A coding node may also be called a coding block (CB: Coding Block).
  • The coding unit information (hereinafter referred to as CU information) CU_1 to CU_NL is information corresponding to each coding node (coding unit) obtained by recursively quadtree-splitting the tree block TBLK.
  • The root of the coding tree is associated with the tree block TBLK.
  • In other words, the tree block TBLK is associated with the highest node of the quadtree tree structure that recursively contains a plurality of coding nodes.
  • The size of each coding node is half, both vertically and horizontally, of the size of the coding node to which it directly belongs (that is, the partition of the node one level above that coding node).
  • The sizes that the tree block TBLK and each coding node can take depend on size designation information for the minimum coding node and on the difference in depth between the maximum and minimum coding nodes, which are included in the sequence parameter set SPS of the hierarchically encoded data DATA#C. For example, when the size of the minimum coding node is 8×8 pixels and the difference in depth between the maximum coding node and the minimum coding node is 3, the size of the tree block TBLK is 64×64 pixels, and the size of a coding node can take one of four sizes: 64×64, 32×32, 16×16, and 8×8 pixels.
  • (Tree block header) The tree block header TBLKH includes coding parameters referred to by the hierarchical video decoding device 1 in order to determine the decoding method of the target tree block. Specifically, as shown in FIG. 3(d), it includes tree block division information SP_TBLK that designates the division pattern of the target tree block into each CU, and a quantization parameter difference Δqp (qp_delta) that designates the size of the quantization step.
  • The tree block division information SP_TBLK is information representing the coding tree for dividing the tree block; specifically, it is information that designates the shape and size of each CU included in the target tree block and its position within the target tree block.
  • Note that the tree block division information SP_TBLK need not explicitly include the shapes or sizes of the CUs.
  • For example, the tree block division information SP_TBLK may be a set of flags indicating whether or not the whole target tree block or a partial region of the tree block is to be divided into four. In that case, the shape and size of each CU can be identified by using the shape and size of the tree block together with the flags.
  • The quantization parameter difference Δqp is the difference qp − qp′ between the quantization parameter qp of the target tree block and the quantization parameter qp′ of the tree block encoded immediately before the target tree block.
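  • The split-flag representation lends itself to a short recursive parse; the following Python sketch (the flag order and the helper reader are assumptions, not the normative syntax) recovers each CU's position and size from a tree block:

```python
def parse_coding_tree(read_split_flag, x: int, y: int, size: int,
                      min_cu_size: int, cus: list) -> list:
    """Sketch: recursively consume one split flag per node; leaves are CUs.
    read_split_flag() stands in for entropy-decoding a split flag."""
    if size > min_cu_size and read_split_flag():
        half = size // 2
        for dy in (0, half):
            for dx in (0, half):
                parse_coding_tree(read_split_flag, x + dx, y + dy,
                                  half, min_cu_size, cus)
    else:
        cus.append((x, y, size))            # leaf: one coding unit
    return cus

# Example: split the 64x64 tree block once, then keep all 32x32 CUs whole.
flags = iter([1, 0, 0, 0, 0])
print(parse_coding_tree(lambda: next(flags), 0, 0, 64, 8, []))
# [(0, 0, 32), (32, 0, 32), (0, 32, 32), (32, 32, 32)]
```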
  • (CU layer) In the CU layer, a set of data referred to by the hierarchical video decoding device 1 for decoding the CU to be processed (hereinafter also referred to as the target CU) is defined.
  • A coding node is the root node of a prediction tree (PT) and a transform tree (TT).
  • PT: prediction tree
  • TT: transform tree
  • In the prediction tree, the coding node is divided into one or more prediction blocks, and the position and size of each prediction block are defined.
  • In other words, a prediction block is one of one or more non-overlapping regions constituting the coding node.
  • The prediction tree includes the one or more prediction blocks obtained by the above division.
  • Prediction processing is performed for each prediction block.
  • Hereinafter, the prediction block, which is the unit of prediction, is also referred to as a prediction unit (PU).
  • (PU splitting) Broadly, there are two types of splitting in the prediction tree (hereinafter abbreviated as PU splitting): the case of intra prediction and the case of inter prediction.
  • The splitting methods include 2N×2N (the same size as the coding node), 2N×N, 2N×nU, 2N×nD, N×2N, nL×2N, nR×2N, and N×N.
  • The types of PU splitting are described later with reference to the drawings.
  • In the transform tree, the coding node is divided into one or more transform blocks, and the position and size of each transform block are defined.
  • In other words, a transform block is one of one or more non-overlapping regions constituting the coding node.
  • The transform tree includes the one or more transform blocks obtained by the above division.
  • The division in the transform tree includes the case in which a region of the same size as the coding node is assigned as a transform block, and the case of recursive quadtree splitting, as in the division of the tree block described above.
  • Transform processing is performed for each transform block.
  • Hereinafter, the transform block, which is the unit of transformation, is also referred to as a transform unit (TU).
  • Specifically, the CU information CU includes a skip flag SKIP, prediction tree information (hereinafter abbreviated as PT information) PTI, and transform tree information (hereinafter abbreviated as TT information) TTI.
  • PT information: prediction tree information
  • TT information: transform tree information
  • The skip flag SKIP is a flag indicating whether or not the skip mode is applied to the target CU.
  • When the value of the skip flag SKIP is 1, that is, when the skip mode is applied to the target CU, part of the PT information PTI and the TT information TTI in that CU information CU are omitted. Note that the skip flag SKIP is omitted in I slices.
  • The PT information PTI is information relating to the prediction tree (hereinafter abbreviated as PT) included in the CU.
  • In other words, the PT information PTI is a set of information relating to each of the one or more PUs included in the PT, and is referred to when the hierarchical video decoding device 1 generates a predicted image.
  • As shown in FIG. 3(e), the PT information PTI includes prediction type information PType and prediction information PInfo.
  • The prediction type information PType is information designating whether intra prediction or inter prediction is used as the predicted image generation method for the target PU.
  • The prediction information PInfo consists of the intra prediction information PP_Intra or the inter prediction information PP_Inter, depending on which prediction method the prediction type information PType designates.
  • Hereinafter, a PU to which intra prediction is applied is also referred to as an intra PU, and a PU to which inter prediction is applied is also referred to as an inter PU.
  • The inter prediction information PP_Inter includes coding parameters referred to when the hierarchical video decoding device 1 generates an inter predicted image by inter prediction. More specifically, the inter prediction information PP_Inter includes inter PU division information designating the division pattern of the target CU into inter PUs, and inter prediction parameters for each inter PU.
  • The intra prediction information PP_Intra includes coding parameters referred to when the hierarchical video decoding device 1 generates an intra predicted image by intra prediction. More specifically, the intra prediction information PP_Intra includes intra PU division information designating the division pattern of the target CU into intra PUs, and intra prediction parameters for each intra PU.
  • The intra prediction parameters are parameters for restoring the intra prediction method (prediction mode) for each intra PU.
  • The parameters for restoring the prediction mode include mpm_flag, which is a flag relating to the MPM (Most Probable Mode; the same applies hereinafter), mpm_idx, which is an index for selecting an MPM, and rem_idx, which is an index for designating a prediction mode other than the MPMs.
  • Here, the MPM is an estimated prediction mode that is highly likely to be selected in the target partition.
  • For example, the MPMs may include an estimated prediction mode estimated based on the prediction modes assigned to partitions around the target partition, and the DC mode or Planar mode, which generally have a high probability of occurrence.
  • The intra prediction parameters may be configured to further include a flag intra_layer_pred_flag designating whether or not to use the inter-layer intra prediction mode.
  • The intra prediction parameters may also be configured to further include a flag intra_layer_pred_mode for designating one of a plurality of types of inter-layer intra prediction modes.
  • The intra prediction parameters may further include a flag chroma_intra_layer_pred_flag for designating either the DM mode or the inter-layer intra prediction mode as the prediction mode relating to color difference when the DM mode is provisionally selected.
  • In the following, when simply "prediction mode" is written, it means the luminance prediction mode unless otherwise specified.
  • The color difference prediction mode is written as "color difference prediction mode" and is distinguished from the luminance prediction mode.
  • The parameters for restoring the prediction mode also include chroma_mode, which is a parameter for designating the color difference prediction mode.
  • Note that chroma_mode corresponds to "intra_chroma_pred_mode".
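  • Bringing these parameters together, here is a hedged Python sketch of restoring a luminance prediction mode from mpm_flag / mpm_idx / rem_idx, with the optional inter-layer flags folded in; the parsing order, codeword widths, and reader interface are assumptions, not the patent's normative syntax:

```python
def restore_luma_mode(reader, mpm_candidates: list, use_inter_layer_flags: bool):
    """Sketch: decode the luminance prediction mode for one intra PU."""
    if use_inter_layer_flags and reader.read_bit():   # intra_layer_pred_flag
        variant = reader.read_bit()                   # intra_layer_pred_mode
        return ("inter_layer_intra", variant)
    if reader.read_bit():                             # mpm_flag
        return mpm_candidates[reader.read_bits(2)]    # mpm_idx selects an MPM
    rem_idx = reader.read_bits(5)                     # mode other than the MPMs
    # Skip over the MPM values so rem_idx indexes the remaining modes.
    for mpm in sorted(mpm_candidates):
        if rem_idx >= mpm:
            rem_idx += 1
    return rem_idx
```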
  • In addition, the PU division information may include information designating the shape, size, and position of the target PU. Details of the PU division information are described later.
  • The TT information TTI is information relating to the transform tree (hereinafter abbreviated as TT) included in the CU.
  • In other words, the TT information TTI is a set of information relating to each of the one or more TUs included in the TT, and is referred to when the hierarchical video decoding device 1 decodes the residual data.
  • Note that, hereinafter, a TU may also be referred to as a block.
  • the TT information TTI includes TT division information SP_TT that designates a division pattern for each transform block of the target CU, and quantized prediction residuals QD 1 to QD NT (NT is the target The total number of blocks included in the CU).
  • TT division information SP_TT is information for determining the shape and size of each TU included in the target CU and the position in the target CU.
  • the TT division information SP_TT can be realized from information (split_transform_unit_flag) indicating whether or not the target node is divided and information (trafoDepth) indicating the division depth.
• each TU obtained by the division can have a size from 32×32 pixels to 4×4 pixels.
  • Each quantized prediction residual QD is encoded data generated by the hierarchical video encoding device 2 performing the following processes 1 to 3 on a target block that is a processing target block.
• Process 1: apply a frequency transform (for example, the DCT (Discrete Cosine Transform) or the DST (Discrete Sine Transform)) to the prediction residual obtained by subtracting the predicted image from the encoding target image;
• Process 2: quantize the transform coefficients obtained in Process 1;
• Process 3: variable-length encode the transform coefficients quantized in Process 2.
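• As a rough illustration of Processes 1 and 2, the following Python sketch transforms and quantizes a residual block; the floating-point DCT and the scalar q_step are simplifications (real codecs use integer transforms and QP-driven quantization), and encode_residual is an illustrative name:

    import numpy as np

    def dct_matrix(n):
        # Orthonormal DCT-II basis matrix; a floating-point stand-in for
        # the integer transforms a real codec specifies.
        k = np.arange(n).reshape(-1, 1)
        i = np.arange(n).reshape(1, -1)
        m = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
        m[0] /= np.sqrt(2.0)
        return m

    def encode_residual(target_block, predicted_block, q_step):
        # Process 1: frequency transform of the prediction residual
        d = target_block.astype(np.float64) - predicted_block
        m = dct_matrix(d.shape[0])
        coeff = m @ d @ m.T
        # Process 2: quantize the transform coefficients
        return np.round(coeff / q_step).astype(np.int64)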
  • the prediction information PInfo includes an inter prediction parameter or an intra prediction parameter.
• the inter prediction parameters include, for example, a merge flag (merge_flag), a merge index (merge_idx), an estimated motion vector index (mvp_idx), a reference image index (ref_idx), an inter prediction flag (inter_pred_flag), and a motion vector residual (mvd).
  • examples of the intra prediction parameters include an estimated prediction mode flag, an estimated prediction mode index, and a residual prediction mode index.
• the PU partition types specified by the PU partition information include the following eight patterns in total, assuming that the size of the target CU is 2N×2N pixels: four symmetric partitions of 2N×2N pixels, 2N×N pixels, N×2N pixels, and N×N pixels, and four asymmetric partitions of 2N×nU pixels, 2N×nD pixels, nL×2N pixels, and nR×2N pixels.
• here, N = 2^m (m is an arbitrary integer of 1 or more).
  • an area obtained by dividing the target CU is also referred to as a partition.
• FIGS. 4(a) to 4(h) specifically show the positions of the PU partition boundaries in the CU for each partition type.
• FIG. 4(a) shows the 2N×2N PU partition type, in which the CU is not partitioned.
• FIGS. 4(b), 4(c), and 4(d) show the partition shapes when the PU partition types are 2N×N, 2N×nU, and 2N×nD, respectively.
• FIGS. 4(e), 4(f), and 4(g) show the partition shapes when the PU partition types are N×2N, nL×2N, and nR×2N, respectively.
• FIG. 4(h) shows the partition shape when the PU partition type is N×N.
• the PU partition types shown in FIGS. 4(a) and 4(h) are also referred to as square partitions, based on the shape of the partition.
• the PU partition types shown in FIGS. 4(b) to 4(g) are also referred to as non-square partitions.
  • the numbers assigned to the respective regions indicate the region identification numbers, and the regions are processed in the order of the identification numbers. That is, the identification number represents the scan order of the area.
• Partition type for inter prediction: in the inter PU, seven of the above eight partition types, all except N×N (FIG. 4(h)), are defined. The four asymmetric partitions are sometimes called AMP (Asymmetric Motion Partition).
  • a specific value of N is defined by the size of the CU to which the PU belongs, and specific values of nU, nD, nL, and nR are determined according to the value of N.
• for example, a 128×128 pixel inter CU can be divided into inter PUs of 128×128 pixels, 128×64 pixels, 64×128 pixels, 64×64 pixels, 128×32 pixels, 128×96 pixels, 32×128 pixels, and 96×128 pixels.
• Partition type for intra prediction: in the intra PU, the following two types of division patterns are defined: 2N×2N, in which the CU is not divided, and N×N.
• that is, in the example shown in FIG. 4, the division patterns (a) and (h) can be taken.
• for example, a 128×128 pixel intra CU can be divided into intra PUs of 128×128 pixels and of 64×64 pixels.
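• The geometry of the eight partition types above can be sketched as follows; the AMP offset of a quarter of the CU side follows the usual HEVC convention and is an assumption here, and pu_partitions is an illustrative name. The rectangles are returned in the scan order of the identification numbers of FIG. 4:

    def pu_partitions(part_type, cu_size):
        # Returns (x, y, width, height) for each PU of a 2Nx2N CU.
        n, q = cu_size // 2, cu_size // 4
        table = {
            "2Nx2N": [(0, 0, cu_size, cu_size)],
            "2NxN":  [(0, 0, cu_size, n), (0, n, cu_size, n)],
            "Nx2N":  [(0, 0, n, cu_size), (n, 0, n, cu_size)],
            "NxN":   [(0, 0, n, n), (n, 0, n, n),
                      (0, n, n, n), (n, n, n, n)],
            "2NxnU": [(0, 0, cu_size, q), (0, q, cu_size, cu_size - q)],
            "2NxnD": [(0, 0, cu_size, cu_size - q),
                      (0, cu_size - q, cu_size, q)],
            "nLx2N": [(0, 0, q, cu_size), (q, 0, cu_size - q, cu_size)],
            "nRx2N": [(0, 0, cu_size - q, cu_size),
                      (cu_size - q, 0, q, cu_size)],
        }
        return table[part_type]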
• Enhancement layer: for the enhancement layer encoded data, for example, a data structure substantially similar to the data structure shown in FIG. 3 can be adopted. However, in the encoded data of the enhancement layer, additional information can be added or parameters can be omitted, as follows.
  • Information indicating hierarchical coding may be encoded in the SPS.
• for example, hierarchy identification information for spatial scalability, temporal scalability, and SNR scalability may be encoded.
  • Filter information and filter on / off information can be encoded by a PPS, a slice header, a macroblock header, or the like.
  • a skip flag (skip_flag), a base mode flag (base_mode_flag), and a prediction mode flag (pred_mode_flag) may be encoded.
  • the CU type of the target CU is an intra CU, an inter CU, a skip CU, or a base skip CU.
• the intra CU and the skip CU can be defined in the same manner as in the HEVC method described above. For example, in the skip CU, “1” is set in the skip flag; if the CU is not a skip CU, “0” is set in the skip flag. In the intra CU, “0” is set in the prediction mode flag.
  • the inter CU may be defined as a CU that applies non-skip and motion compensation (MC).
  • the base skip CU is a CU type that estimates CU or PU information from a reference layer.
• in the base skip CU, “1” is set in the skip flag and “1” is set in the base mode flag.
  • Intra PU, inter PU, and merge PU can be defined similarly to the case of the above-mentioned HEVC system.
  • the base merge PU is a PU type for estimating PU information from a reference layer. Further, for example, in the PT information PTI, a merge flag and a base mode flag may be encoded, and using these flags, it may be determined whether or not the target PU is a PU that performs base merge. That is, in the base merge PU, “1” is set to the merge flag and “1” is set to the base mode flag.
• among the motion vector information included in the enhancement layer, motion vector information that can be derived from the motion vector information included in the lower layer can be omitted from the enhancement layer.
  • the code amount of the enhancement layer can be reduced, so that the coding efficiency is improved.
  • the encoded data of the enhancement layer may be generated by an encoding method different from the encoding method of the lower layer. That is, the encoding / decoding process of the enhancement layer does not depend on the type of the lower layer codec.
• the lower layer may be encoded by, for example, the MPEG-2 or H.264/AVC format.
  • the parameters described above may be encoded independently, or a plurality of parameters may be encoded in combination.
  • an index is assigned to the combination of parameter values, and the assigned index is encoded.
  • the encoding of the parameter can be omitted.
  • FIG. 5 is a functional block diagram illustrating a schematic configuration of the hierarchical video decoding device 1.
  • the hierarchical video decoding device 1 decodes the hierarchical encoded data DATA supplied from the hierarchical video encoding device 2 by the HEVC method, and generates a decoded image POUT # T of the target layer.
  • the hierarchical video decoding device 1 includes a NAL demultiplexing unit 11, a variable length decoding unit 12, a prediction parameter restoration unit 14, a texture restoration unit 15, and a base decoding unit 16.
  • the NAL demultiplexing unit 11 demultiplexes hierarchically encoded data DATA transmitted in units of NAL units in NAL (Network Abstraction Layer).
  • NAL is a layer provided to abstract communication between a VCL (Video Coding Layer) and a lower system that transmits and stores encoded data.
  • VCL is a layer that performs video encoding processing, and encoding is performed in the VCL.
• the lower system here corresponds to, for example, the H.264/AVC and HEVC file formats and the MPEG-2 systems. In the example shown below, the lower system corresponds to the decoding processes in the target layer and the reference layer.
• in the NAL, the bit stream generated by the VCL is divided into units called NAL units and transmitted to the destination lower system.
  • the NAL unit includes encoded data encoded by the VCL and a header for appropriately delivering the encoded data to the destination lower system.
• the encoded data of each layer is stored in NAL units, NAL-multiplexed, and transmitted to the hierarchical video decoding device 1.
  • the NAL demultiplexing unit 11 demultiplexes the hierarchical encoded data DATA, and extracts the target layer encoded data DATA # T and the reference layer encoded data DATA # R. Further, the NAL demultiplexing unit 11 supplies the target layer encoded data DATA # T to the variable length decoding unit 12, and also supplies the reference layer encoded data DATA # R to the base decoding unit 16.
• the variable length decoding unit 12 decodes various syntax values from the binary sequence included in the target layer encoded data DATA#T.
  • variable length decoding unit 12 decodes the prediction information, the encoded information, and the transform coefficient information from the encoded data DATA # T as follows.
  • variable length decoding unit 12 decodes prediction information regarding each CU or PU from the encoded data DATA # T.
  • the prediction information includes, for example, designation of a CU type or a PU type.
• the variable length decoding unit 12 decodes the PU partition information from the encoded data DATA#T. In addition, for each PU, the variable length decoding unit 12 further decodes, as prediction information, motion information such as a reference image index RI, an estimated motion vector index PMVI, and a motion vector residual MVD, as well as mode information, from the encoded data DATA#T.
• when the CU is an intra CU, the variable length decoding unit 12 further decodes, as the prediction information, intra prediction information including (1) size designation information for designating the size of the prediction unit and (2) prediction index designation information for designating the prediction index, from the encoded data DATA#T.
  • variable length decoding unit 12 decodes the encoded information from the encoded data DATA # T.
• the encoded information includes information for specifying the shape, size, and position of each CU. More specifically, the encoded information includes tree block division information that specifies a division pattern of the target tree block into CUs, that is, information that specifies the shape and size of each CU included in the target tree block and its position within the target tree block.
  • variable length decoding unit 12 supplies the decoded prediction information and encoded information to the prediction parameter restoration unit 14.
• the variable length decoding unit 12 decodes the quantized prediction residual QD for each block and the quantization parameter difference Δqp for the tree block including the block from the encoded data DATA#T.
• the variable length decoding unit 12 supplies the decoded quantized prediction residual QD and quantization parameter difference Δqp to the texture restoration unit 15 as transform coefficient information.
  • the base decoding unit 16 decodes base decoding information, which is information about a reference layer that is referred to when decoding a decoded image corresponding to the target layer, from the reference layer encoded data DATA # R.
  • the base decoding information includes a base prediction parameter, a base transform coefficient, and a base decoded image.
  • the base decoding unit 16 supplies the decoded base decoding information to the prediction parameter restoration unit 14 and the texture restoration unit 15.
  • the prediction parameter restoration unit 14 restores the prediction parameter using the prediction information and the base decoding information.
  • the prediction parameter restoration unit 14 supplies the restored prediction parameter to the texture restoration unit 15.
  • the prediction parameter restoration unit 14 can refer to motion information stored in a frame memory 155 (described later) included in the texture restoration unit 15 when restoring the prediction parameter.
  • the texture restoration unit 15 generates a decoded image POUT # T using the transform coefficient information, the base decoding information, and the prediction parameter, and outputs the decoded image POUT # T to the outside.
  • the texture restoration unit 15 stores information on the restored decoded image in a frame memory 155 (described later) provided therein.
  • FIG. 1 is a functional block diagram illustrating the configuration of the prediction parameter restoration unit 14.
  • the prediction parameter restoration unit 14 includes a prediction type selection unit 141, a switch 142, an intra prediction mode restoration unit 143, a motion vector candidate derivation unit 144, a motion information restoration unit 145, a merge candidate derivation unit 146, and A merge information restoration unit 147 is provided.
  • the prediction type selection unit 141 sends a switching instruction to the switch 142 according to the CU type or the PU type, and controls the prediction parameter derivation process. Specifically, it is as follows.
• for an intra CU, the prediction type selection unit 141 controls the switch 142 so that the prediction parameter is derived using the intra prediction mode restoration unit 143.
• for an inter PU that is not merged, the prediction type selection unit 141 controls the switch 142 so that the prediction parameter is derived using the motion information restoration unit 145.
• for a merged PU, the prediction type selection unit 141 controls the switch 142 so that the prediction parameter is derived using the merge information restoration unit 147.
  • the switch 142 supplies the prediction information to any of the intra prediction mode restoration unit 143, the motion information restoration unit 145, and the merge information restoration unit 147 in accordance with an instruction from the prediction type selection unit 141.
  • a prediction parameter is derived at a supply destination of the prediction information.
  • the intra prediction mode restoration unit 143 derives a prediction mode IntraPredMode [xB] [yB] from the prediction information. That is, the intra prediction mode restoration unit 143 restores the prediction parameter in the prediction mode. Furthermore, the intra prediction mode restoration unit 143 also includes a configuration for deriving the color difference prediction mode IntraPredModeC [xB] [yB].
  • FIG. 6 is a block diagram illustrating a configuration example of the intra prediction mode restoration unit 143. In FIG. 6, only the configuration for decoding the prediction mode among the configurations of the intra prediction mode restoration unit 143 is shown in detail.
  • the intra prediction mode restoration unit 143 includes an MPM derivation unit 122, an MPM determination unit 123, a prediction mode restoration unit 124, a color difference prediction mode restoration unit 126, and a context storage unit 127.
  • the MPM deriving unit 122 derives the MPM based on the prediction mode assigned to the partitions around the target partition.
  • the MPM deriving unit 122 derives, for example, three MPMs.
  • the MPM deriving unit 122 derives the first MPM candidate candModeList [0], the second MPM candidate candModeList [1], and the third MPM candidate candModeList [2] as follows.
• the prediction mode of the left adjacent PU (denoted as NA in FIG. 7) adjacent to the left of the target PU (denoted as RT in FIG. 7) is set to candIntraPredModeA, and the prediction mode of the upper adjacent PU (denoted as NB in FIG. 7) adjacent above the target PU is set to candIntraPredModeB.
  • “PmA” and “pmB” shown in FIG. 7 indicate the candIntraPredModeA and the candIntraPredModeB, respectively.
• when an adjacent PU is unavailable, the MPM deriving unit 122 sets a predetermined prediction mode, for example, “Intra_Planar”, as the corresponding candidate.
  • the case where the adjacent PU is unavailable includes a case where the prediction mode of the adjacent PU is not decoded, and the case where the adjacent PU is an upper adjacent PU and belongs to a different LCU (tree block).
• when candIntraPredModeA != candIntraPredModeB is satisfied, candModeList[0] = candIntraPredModeA and candModeList[1] = candIntraPredModeB are set.
  • candModeList [2] is determined as follows.
  • candModeList [0] to candModeList [2] may be expressed as MPM0 to MPM2, respectively.
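• The derivation of candModeList[0..2] described above can be sketched as follows; the handling of the candIntraPredModeA == candIntraPredModeB case and the fallback order Planar → DC → Vertical follow the usual HEVC rule and are assumptions here, since the text above only shows the unequal case:

    PLANAR, DC, VERTICAL = 0, 1, 26  # mode numbers per FIG. 9

    def derive_mpm_list(cand_a, cand_b):
        # cand_a / cand_b: modes of the left / upper adjacent PUs, already
        # replaced by Intra_Planar when the neighbour is unavailable
        if cand_a != cand_b:
            mpm = [cand_a, cand_b]
            for fb in (PLANAR, DC, VERTICAL):   # candModeList[2]
                if fb not in mpm:
                    mpm.append(fb)
                    break
        elif cand_a < 2:                        # Planar or DC
            mpm = [PLANAR, DC, VERTICAL]
        else:                                   # angular: two neighbours
            mpm = [cand_a, 2 + (cand_a + 29) % 32, 2 + (cand_a - 1) % 32]
        return mpm                              # MPM0..MPM2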
  • the MPM determination unit 123 determines whether or not the prediction mode of the target PU matches the estimated prediction mode MPM based on mpm_flag (prev_intra_luma_pred_flag) included in the encoded data.
  • FIG. 8 shows an example of syntax referred to for decoding the intra prediction mode.
• mpm_flag is “1” when the prediction mode of the target PU matches the estimated prediction mode MPM, and “0” when it does not match.
  • the MPM determination unit 123 notifies the prediction mode restoration unit 124 of the determination result.
  • the MPM determination unit 123 decodes mpm_flag from the encoded data according to the context stored in the context storage unit 127.
  • the prediction mode restoration unit 124 restores the prediction mode for the target PU.
  • the prediction mode restoration unit 124 restores the prediction mode according to the determination result notified from the MPM determination unit 123.
  • the prediction mode restoration unit 124 decodes mpm_idx from the encoded data, and restores the prediction mode based on the value.
• mpm_idx is “0” when the prediction mode of the target PU matches candModeList[0], “1” when it matches candModeList[1], and “2” when it matches candModeList[2].
  • prediction mode restoration unit 124 may or may not use the context stored in the context storage unit 127 when decoding mpm_idx.
• when the prediction mode does not match any MPM, the prediction mode restoration unit 124 restores the prediction mode based on rem_idx included in the encoded data. Specifically, first, candModeList[0] to candModeList[2] are sorted in ascending order, that is, so that (mode number of candModeList[0]) < (mode number of candModeList[1]) < (mode number of candModeList[2]).
• the variable mode is then initialized as mode = rem_intra_luma_pred_mode, and mode is incremented by one for each sorted candModeList entry whose mode number is less than or equal to mode, in ascending order.
  • rem_intra_luma_pred_mode is an index of a prediction mode excluding MPM.
  • the prediction mode restoration unit 124 restores the prediction mode corresponding to the mode obtained in this way.
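• The restoration of the prediction mode from rem_idx can be sketched as follows (restore_rem_mode is an illustrative name):

    def restore_rem_mode(rem_idx, cand_mode_list):
        # sort the MPM candidates ascending, then shift the decoded index
        # past every candidate at or below it, as described above
        mode = rem_idx                # rem_intra_luma_pred_mode
        for cand in sorted(cand_mode_list):
            if cand <= mode:
                mode += 1
        return mode

    # e.g. with MPMs {0, 1, 26}: rem_idx 0 -> mode 2, rem_idx 24 -> mode 27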
  • the color difference prediction mode restoration unit 126 restores the color difference prediction mode for the target PU. More specifically, the color difference prediction mode restoration unit 126 restores the color difference prediction mode as follows.
  • the color difference prediction mode restoration unit 126 decodes intra color difference prediction mode designation information chroma_mode (intra_chroma_pred_mode) included in the encoded data # 1.
  • the color difference prediction mode restoration unit 126 restores the color difference prediction mode based on the restored intra color difference prediction mode designation information chroma_mode and the luminance prediction mode (IntraPredMode [xB] [yB]).
  • FIG. 9 shows an example of the definition of the prediction mode.
• 36 types of prediction modes are defined, and each prediction mode is specified by a number from “0” to “35” (the value of IntraPredMode or IntraPredModeC, also referred to as an intra prediction mode index).
• the following names are assigned to the respective prediction modes: “0” is “Intra Planar (planar prediction mode, plane prediction mode)”, “1” is “Intra DC (intra DC prediction mode)”, “2” to “34” are “Intra Angular (directional prediction)”, and “35” is “Intra From Luma”.
• the prediction mode “35” is unique to the color difference prediction modes, and is a mode for performing color difference prediction based on luminance prediction.
  • the color difference prediction mode “35” is a prediction mode using the correlation between the luminance pixel value and the color difference pixel value.
  • the color difference prediction mode “35” is also referred to as an LM mode.
  • the number of prediction modes (intraPredModeNum) is “36” regardless of the size of the target block.
  • the set including the prediction mode 0 to the prediction mode 35 shown in FIGS. 9 and 10 may be referred to as a basic set.
• FIG. 11 is a diagram illustrating an example of a table referred to by the color difference prediction mode restoration unit 126 in order to derive the color difference prediction mode. More specifically, FIG. 11 is a diagram showing a table that defines the association between the intra color difference prediction mode designation information chroma_mode, the luminance prediction mode (IntraPredMode[xB][yB]), and the color difference prediction mode (IntraPredModeC).
  • FIG. 11A is a table when the LM mode is included in the color difference prediction mode
  • FIG. 11B is a table when the LM mode is not included in the color difference prediction mode.
  • LM means that the LM mode is used.
  • X indicates that the value of the luminance prediction mode (IntraPredMode [xB] [yB]) is used as it is.
  • Whether to use a table including the LM mode or a table not including the LM mode is specified by the value of chroma_pred_from_luma_enabled_flag, for example.
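• A sketch of this FIG. 11-style lookup is given below; the concrete candidate list (Planar, Vertical, Horizontal, DC) and the Intra_Angular(34) substitution rule follow the HM-era table and are assumptions, since the figure itself is not reproduced here:

    PLANAR, VER, HOR, DC, ANG34, LM = 0, 26, 10, 1, 34, 35

    def derive_chroma_mode(chroma_mode, luma_mode, lm_enabled):
        cands = [PLANAR, VER, HOR, DC]
        if lm_enabled:             # FIG. 11(a): the LM row is present
            cands.append(LM)
        cands.append(luma_mode)    # last row "X": reuse the luma mode
        mode = cands[chroma_mode]
        # a fixed candidate colliding with the luma mode is replaced by
        # Intra_Angular(34) so that every row stays distinct
        if chroma_mode < len(cands) - 1 and mode == luma_mode and mode != LM:
            mode = ANG34
        return mode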
  • the motion vector candidate derivation unit 144 uses the base decoding information to derive an estimated motion vector candidate by intra-layer motion estimation processing or inter-layer motion estimation processing.
  • the motion vector candidate derivation unit 144 supplies the derived motion vector candidates to the motion information restoration unit 145.
  • the motion information restoration unit 145 restores motion information related to each inter PU that is not merged. That is, the motion information restoring unit 145 restores motion information as a prediction parameter.
  • the motion information restoration unit 145 restores motion information from the prediction information when the target PU is an inter CU and an inter PU. More specifically, the motion information restoration unit 145 acquires a motion vector residual (mvd), an estimated motion vector index (mvp_idx), an inter prediction flag (inter_pred_flag), and a reference image index (refIdx). Then, based on the value of the inter prediction flag, a reference image list use flag is determined for each of the reference image list L0 and the reference image list L1.
• the motion information restoration unit 145 then derives an estimated motion vector based on the value of the estimated motion vector index, and derives a motion vector based on the motion vector residual and the estimated motion vector.
• the motion information restoration unit 145 outputs the motion compensation parameters, that is, the derived motion vector, the reference image list use flag, and the reference image index.
  • the merge candidate derivation unit 146 derives various merge candidates using the decoded motion information supplied from the frame memory 155 described later and / or the base decoding information supplied from the base decoding unit 16.
  • the merge candidate derivation unit 146 supplies the derived merge candidates to the merge information restoration unit 147.
• the merge information restoration unit 147 restores motion information regarding each PU that is merged within a layer or between layers. That is, the merge information restoration unit 147 restores motion information as a prediction parameter.
• in the case of intra-layer merging, the merge information restoration unit 147 uses the merge candidate list derived by the merge candidate derivation unit 146, and restores the motion information by deriving the motion compensation parameter corresponding to the merge index (merge_idx) included in the prediction information.
• in the case of inter-layer merging, the merge information restoration unit 147 likewise restores the motion information by deriving, from the merge candidate list derived by inter-layer merging, the motion compensation parameter corresponding to the merge index (merge_idx) included in the prediction information.
  • FIG. 12 is a functional block diagram illustrating the configuration of the texture restoration unit 15.
  • the texture restoration unit 15 includes an inverse orthogonal transform / inverse quantization unit 151, a texture prediction unit 152, an adder 153, a loop filter unit 154, and a frame memory 155.
• the inverse orthogonal transform / inverse quantization unit 151 (1) inversely quantizes the quantized prediction residual QD included in the transform coefficient information supplied from the variable length decoding unit 12, (2) subjects the DCT coefficients obtained by the inverse quantization to an inverse orthogonal transform (for example, an inverse DCT (Discrete Cosine Transform)), and (3) supplies the prediction residual D obtained by the inverse orthogonal transform to the adder 153.
• the inverse orthogonal transform / inverse quantization unit 151 derives the quantization step QP from the quantization parameter difference Δqp included in the transform coefficient information.
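• The relation between Δqp and the quantization step is not spelled out in the text; a common sketch, assuming the AVC/HEVC-style doubling of the step every 6 QP units and a hypothetical predicted value qp_pred, is:

    def derive_qp_and_step(delta_qp, qp_pred):
        qp = qp_pred + delta_qp          # QP of the current tree block
        step = 2.0 ** ((qp - 4) / 6.0)   # illustrative scaling only
        return qp, step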
  • the texture prediction unit 152 refers to the base decoded image included in the base decoding information or the decoded decoded image stored in the frame memory according to the prediction parameter, and generates a predicted image.
  • the texture prediction unit 152 includes an inter prediction unit 152A, an intra-layer intra prediction unit 152B, and an inter-layer intra prediction unit 152C.
  • the inter prediction unit 152A generates a prediction image related to each inter prediction partition by inter prediction. Specifically, the inter prediction unit 152A generates a prediction image from the reference image using the motion information supplied as a prediction parameter from the motion information restoration unit 145 or the merge information restoration unit 147.
  • the intra-layer intra prediction unit 152B generates a prediction image related to each intra-prediction partition by intra-layer intra prediction. Specifically, the intra-layer intra prediction unit 152B generates a prediction image from the decoded image that has been decoded in the target partition, using the prediction mode supplied from the intra prediction mode restoration unit 143 as a prediction parameter.
• the inter-layer intra prediction unit 152C generates a prediction image related to each intra prediction partition by inter-layer intra prediction. Specifically, the inter-layer intra prediction unit 152C generates a prediction image based on the base decoded image included in the base decoding information, using the prediction mode supplied from the intra prediction mode restoration unit 143 as a prediction parameter. The base decoded image may be appropriately upsampled according to the resolution of the target layer. Details of inter-layer prediction by the inter-layer intra prediction unit 152C will be described later.
  • the texture prediction unit 152 supplies the predicted image generated by the inter prediction unit 152A, the intra-layer intra prediction unit 152B, or the inter-layer intra prediction unit 152C to the adder 153.
• the adder 153 generates a decoded image by adding the predicted image supplied from the texture prediction unit 152 and the prediction residual D supplied from the inverse orthogonal transform / inverse quantization unit 151.
  • the loop filter unit 154 subjects the decoded image supplied from the adder 153 to deblocking processing and filtering processing using adaptive filter parameters.
  • the frame memory 155 stores the decoded image that has been filtered by the loop filter unit 154.
  • FIG. 13 is a functional block diagram illustrating the configuration of the base decoding unit 16.
  • the base decoding unit 16 includes a variable length decoding unit 161, a base prediction parameter restoration unit 162, a base transform coefficient restoration unit 163, and a base texture restoration unit 164.
• the variable length decoding unit 161 decodes various syntax values from the binary sequence included in the reference layer encoded data DATA#R.
  • variable length decoding unit 161 decodes prediction information and transform coefficient information from the encoded data DATA # R.
  • the syntax of the prediction information and transform coefficients decoded by the variable length decoding unit 161 is the same as that of the variable length decoding unit 12, and therefore detailed description thereof is omitted here.
  • variable length decoding unit 161 supplies the decoded prediction information to the base prediction parameter restoring unit 162 and also supplies the decoded transform coefficient information to the base transform coefficient restoring unit 163.
  • the base prediction parameter restoration unit 162 restores the base prediction parameter based on the prediction information supplied from the variable length decoding unit 161.
  • the method by which the base prediction parameter restoration unit 162 restores the base prediction parameter is the same as that of the prediction parameter restoration unit 14, and thus detailed description thereof is omitted here.
  • the base prediction parameter restoration unit 162 supplies the restored base prediction parameter to the base texture restoration unit 164 and outputs it to the outside.
  • the base transform coefficient restoration unit 163 restores transform coefficients based on the transform coefficient information supplied from the variable length decoding unit 161.
  • the method by which the base transform coefficient restoration unit 163 restores the transform coefficients is the same as that of the inverse orthogonal transform / inverse quantization unit 151, and thus detailed description thereof is omitted here.
• the base transform coefficient restoration unit 163 supplies the restored base transform coefficients to the base texture restoration unit 164 and outputs them to the outside.
• the base texture restoration unit 164 generates a decoded image using the base prediction parameter supplied from the base prediction parameter restoration unit 162 and the base transform coefficients supplied from the base transform coefficient restoration unit 163. Specifically, the base texture restoration unit 164 performs the same texture prediction as the texture prediction unit 152 based on the base prediction parameter, and generates a predicted image. The base texture restoration unit 164 also generates a prediction residual based on the base transform coefficients, and generates a base decoded image by adding the generated prediction residual and the predicted image generated by texture prediction.
• the base texture restoration unit 164 may perform the same filter processing as the loop filter unit 154 on the base decoded image. Further, the base texture restoration unit 164 may include a frame memory for storing the decoded base decoded image, and may refer to the decoded base decoded image stored in the frame memory in texture prediction.
  • FIG. 14 is a schematic diagram schematically illustrating inter-layer prediction using a decoded image of a base layer.
• an intra predicted image of a target block of the enhancement layer (referred to as a target prediction block; the same applies hereinafter) is generated from the decoded image of a reference block (referred to as a reference prediction block; the same applies hereinafter), which is a block of the base layer temporally located at the same time as the target block and spatially arranged at a position corresponding to the target block.
• when the resolution of the base layer differs from that of the target layer, the decoded image of the reference block may be upsampled accordingly; that is, a configuration may be adopted in which the predicted image is generated from the upsampled decoded image of the base layer.
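• A minimal sketch of this inter-layer prediction, assuming 2x spatial scalability and nearest-neighbour upsampling in place of the unspecified filter:

    import numpy as np

    def inter_layer_intra_prediction(base_decoded, x, y, w, h, scale=2):
        # fetch the base-layer reference block co-located with the target
        # block, then upsample it to the enhancement-layer resolution
        bx, by, bw, bh = x // scale, y // scale, w // scale, h // scale
        ref = base_decoded[by:by + bh, bx:bx + bw]
        return np.repeat(np.repeat(ref, scale, axis=0), scale, axis=1)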
  • FIG. 15 is a diagram illustrating syntax included in the intra prediction parameters according to the present example.
  • the intra prediction parameters according to the present example include intra_layer_pred_flag [x0] [y0] in addition to the syntaxes illustrated in FIG. 8.
  • intra_layer_pred_flag [x0] [y0] is a flag indicating whether or not to use inter-layer prediction, and is decoded by the above-described intra prediction mode restoration unit 143.
• when intra_layer_pred_flag[x0][y0] indicates false, the inter-layer prediction by the inter-layer intra prediction unit 152C is not performed. In this case, only the prediction by the intra-layer intra prediction unit 152B can be selected as the intra prediction.
• when intra_layer_pred_flag[x0][y0] indicates true, the above inter-layer prediction is performed by the inter-layer intra prediction unit 152C.
• in that case, the syntaxes prev_intra_luma_pred_flag[x0][y0], mpm_idx[x0][y0], rem_intra_luma_pred_mode[x0][y0], and intra_chroma_pred_mode[x0][y0] need not be included in the encoded data.
• since it is then sufficient to encode only intra_layer_pred_flag, the higher the rate at which the intra-layer prediction mode is selected, the higher the encoding efficiency.
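• The decoding order of FIG. 15 can be sketched as follows; bs.read_flag and bs.read_value are hypothetical bitstream-reader helpers:

    def decode_intra_pred_info(bs):
        info = {"intra_layer_pred_flag": bs.read_flag()}
        if not info["intra_layer_pred_flag"]:
            # only the non-inter-layer case carries the usual mode syntaxes
            info["prev_intra_luma_pred_flag"] = bs.read_flag()
            if info["prev_intra_luma_pred_flag"]:
                info["mpm_idx"] = bs.read_value()        # selects one MPM
            else:
                info["rem_intra_luma_pred_mode"] = bs.read_value()
            info["intra_chroma_pred_mode"] = bs.read_value()
        return info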
• in this example, an intra-layer prediction mode is included in a prediction mode group including at least a part of a plurality of predetermined intra prediction modes (prediction modes 0 to 35 shown in FIG. 10), in place of any one of the predetermined intra prediction modes.
• specifically, as shown in FIG. 16, the prediction mode group according to this example includes an intra-layer prediction mode (indicated as Intra_Base in FIG. 16) instead of the intra DC prediction mode (Intra_DC).
  • an intra-layer prediction mode is included instead of any one of a plurality of predetermined intra prediction modes (prediction modes 0 to 35 shown in FIG. 10). Accordingly, intra-layer prediction can be suitably performed without increasing the code amount for designating the intra prediction mode.
  • the prediction mode group according to this example includes an intra-layer prediction mode instead of the intra DC prediction mode.
• compared to the replaced intra DC prediction mode, the intra-layer prediction mode has higher prediction accuracy, so that the coding efficiency is improved.
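• A sketch of dispatching on this mode group, where the index that normally denotes Intra_DC selects inter-layer prediction instead, so no extra index is spent on the new mode (the ctx methods are hypothetical):

    def predict(mode, ctx):
        if mode == 1:          # Intra_Base occupies the former Intra_DC slot
            return ctx.inter_layer_intra_prediction()
        if mode == 0:          # Intra_Planar
            return ctx.planar_prediction()
        return ctx.angular_prediction(mode)   # modes 2..34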
• in another example, an intra-layer prediction mode is included in a prediction mode group including at least a part of a plurality of predetermined intra prediction modes, in addition to the plurality of predetermined intra prediction modes (prediction modes 0 to 35 shown in FIG. 10).
• in that case, the index of the other prediction modes is raised by one.
  • the total number of prediction modes other than 3 MPMs is 33, and a maximum of 6 bits is required for rem_idx. For this reason, in this example, it is preferable to appropriately use variable length coding in encoding and decoding of rem_idx.
• the MPM deriving unit 122 included in the intra prediction mode restoration unit 143 sets the prediction mode of the left adjacent block of the target block to candIntraPredModeA and the prediction mode of the upper adjacent block of the target block to candIntraPredModeB.
• when candIntraPredModeA < 3 (that is, Intra_Planar, Intra_DC, or Intra_Base) and candIntraPredModeA != candIntraPredModeB is satisfied, candModeList[2] is set to candIntraPredModeB.
• since the intra-layer prediction mode (Intra_Base) is thus preferentially used, it is possible to improve the prediction accuracy while suppressing an increase in the code amount.
• the following handles the case where there are multiple intra-layer prediction modes.
• as intra-layer prediction modes, for example, a case where plural types of filters having different characteristics are applied to the decoded image of the base layer is applicable.
• Intra-layer prediction mode 1: a mode in which an upsampling filter having a high noise removal effect is applied to the decoded image of the base layer, and the result is used as a predicted image of the enhancement layer.
• Intra-layer prediction mode 2: a mode in which a filter having different characteristics is applied to the decoded image of the base layer, and the result is used as a predicted image of the enhancement layer.
• alternatively, Intra-layer prediction mode 1 may be a mode in which an upsampling filter of a certain phase is applied to the decoded image of the base layer, and the result is used as a predicted image of the enhancement layer.
• Intra-layer prediction mode 2 may then be a mode in which an upsampling filter having a phase different from that of intra-layer prediction mode 1 is applied to the decoded image of the base layer, and the result is used as a predicted image of the enhancement layer.
• in this case, for example, the intra prediction mode restoration unit 143 decodes intra_layer_pred_flag[x0][y0] and then intra_layer_pred_mode[x0][y0] (whose range is from 0 to the number of intra-layer prediction modes − 1), as illustrated in the corresponding syntax figure, so that one of the plurality of intra-layer prediction modes can be selected.
  • intra_layer_pred_mode [x0] [y0] is a syntax for designating any of a plurality of intra-layer prediction modes.
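• A sketch of this two-step signalling (bs.read_flag and bs.read_index are hypothetical reader helpers):

    def decode_intra_layer_mode(bs, num_intra_layer_modes):
        if not bs.read_flag():                 # intra_layer_pred_flag
            return None                        # ordinary intra prediction
        # intra_layer_pred_mode in [0, num_intra_layer_modes - 1], e.g.
        # selecting between the differently filtered upsamplings above
        return bs.read_index(num_intra_layer_modes)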
• the prediction mode related to luminance is derived by any one of the processes of <first example of intra-layer prediction> to <fourth example of intra-layer prediction>.
• when the prediction mode related to luminance is the intra-layer prediction mode, the color difference prediction mode restoration unit 126 sets the prediction mode IntraPredModeC related to color difference to the intra-layer prediction mode.
• otherwise, the color difference prediction mode restoration unit 126 derives the prediction mode IntraPredModeC related to color difference by referring to, for example, the table shown in FIG. 11(a) or FIG. 11(b).
• FIG. 20 is a diagram illustrating the syntax according to this example in a case where a configuration for switching between intra-layer prediction modes (corresponding to <first example of intra-layer prediction>) is employed for the prediction mode related to luminance.
  • intra_chroma_pred_mode [x0] [y0] is encoded and decoded only when intra_layer_pred_flag [x0] [y0] is not true, that is, only when the intra-layer prediction mode is not used.
• FIG. 21 is a diagram illustrating the syntax according to this example in a case where a configuration in which a prediction mode index is assigned to the intra-layer prediction mode (corresponding to <second example of intra-layer prediction> to <fourth example of intra-layer prediction>) is employed.
• in this case, intra_chroma_pred_mode[x0][y0] is encoded and decoded only when the prediction mode related to luminance (indicated as IntraLumaPredMode in FIG. 21) is not the intra-layer prediction mode (indicated as Intra_Base in FIG. 21).
• according to the above configuration, the intra-layer prediction mode can be appropriately applied even to the prediction mode related to color difference, so that the coding efficiency is improved.
• when the DM mode is temporarily selected, the color difference prediction mode restoration unit 126 decodes a flag indicating whether the DM mode is actually used or the intra-layer prediction mode is used. Then, depending on the value of the flag, whether the DM mode or the intra-layer prediction mode is actually used is selected.
• FIG. 22 is a diagram illustrating the syntax according to this example in a case where a configuration for switching between intra-layer prediction modes (corresponding to <first example of intra-layer prediction>) is adopted for the prediction mode related to luminance.
  • the color difference prediction mode restoration unit 126 decodes a flag chroma_intra_layer_pred_flag indicating whether the DM mode is actually used or the intra-layer prediction mode is used.
  • the color difference prediction mode restoration unit 126 selects whether the DM mode is actually used or the intra-layer prediction mode is used according to the value of the flag.
• FIG. 23 is a diagram illustrating the syntax according to this example in a case where a configuration in which a prediction mode index is assigned to the intra-layer prediction mode (corresponding to <second example of intra-layer prediction> to <fourth example of intra-layer prediction>) is employed.
  • the color difference prediction mode restoration unit 126 decodes a flag chroma_intra_layer_pred_flag indicating whether the DM mode is actually used or the intra-layer prediction mode is used.
  • the color difference prediction mode restoration unit 126 selects whether the DM mode is actually used or the intra-layer prediction mode is used according to the value of the flag.
• according to the above configuration, the intra-layer prediction mode can be appropriately applied even to the prediction mode related to color difference, so that the coding efficiency is improved.
• when the prediction mode related to luminance is the intra-layer prediction mode and the DM mode is selected, the color difference prediction mode restoration unit 126 interprets the DM mode as the intra-layer prediction mode related to the color difference.
  • the color difference prediction mode restoration unit 126 sets the prediction mode related to color difference to the intra-layer prediction mode.
  • FIG. 24A shows that in the table including the LM mode, the DM mode is interpreted as an intra-layer prediction mode (indicated as Base in FIG. 24A).
  • FIG. 24 (b) shows that the DM mode is interpreted as the intra-layer prediction mode in the table not including the LM mode.
• when the prediction mode related to luminance is not the intra-layer prediction mode and the DM mode is selected, the DM mode is actually applied.
  • the intra-layer prediction mode can be appropriately applied to the prediction mode related to the color difference. Moreover, since the prediction accuracy can be improved without increasing the amount of codes, the coding efficiency is improved.
• <<Configuration including base layer prediction mode in estimated prediction mode>>
  • a configuration in which the prediction mode of the base layer is included in the estimated prediction mode may be employed instead of the configuration in which the intra-layer prediction is performed.
• that is, when deriving the prediction mode of the target block of the enhancement layer, the intra prediction mode selected for the reference block, which is a block of the base layer temporally located at the same time as the target block and spatially arranged at a position corresponding to the target block, may be included in the estimated prediction modes.
  • the MPM deriving unit 122 included in the intra prediction mode restoration unit 143 sets the prediction mode of the reference block of the base layer to any one of the three MPMs.
  • the MPM deriving unit 122 sets the prediction mode of the left adjacent block of the target block as candIntraPredModeA, the prediction mode of the upper adjacent block of the target block as candIntraPredModeB, and the prediction mode of the reference block of the base layer as candIntraPredModeBL.
• one of the three MPMs is thus set to the prediction mode selected in the reference block. For this reason, the possibility that the prediction mode selected in the target block matches an MPM increases, so that the encoding efficiency improves.
• in another configuration, the number of MPMs is four, and one of the MPMs is determined to be the prediction mode of the reference block.
• in this case, the value of mpm_idx ranges from 0 to 3.
  • the MPM deriving unit 122 sets the prediction mode of the left adjacent block of the target block as candIntraPredModeA, the prediction mode of the upper adjacent block of the target block as candIntraPredModeB, and the prediction mode of the reference block of the base layer as candIntraPredModeBL.
• when candIntraPredModeA != candIntraPredModeB is satisfied, candModeList[1] = candIntraPredModeA and candModeList[2] = candIntraPredModeB are set.
  • candModeList [3] is determined as follows.
• in that case, the candModeList entry matching candIntraPredModeBL is replaced with Intra_Angular(10).
  • 10 (horizontal) and 26 (vertical) are used as the values (directions) of the Intra_Angular prediction mode set in the estimated prediction mode, but other values may be used.
• the prediction mode restoration unit 124 restores the prediction mode based on rem_idx included in the encoded data. Specifically, first, candModeList[0] to candModeList[3] are sorted in ascending order, that is, so that (mode number of candModeList[0]) < (mode number of candModeList[1]) < (mode number of candModeList[2]) < (mode number of candModeList[3]).
• the variable mode is then initialized as mode = rem_intra_luma_pred_mode, and mode is incremented by one for each sorted candModeList entry whose mode number is less than or equal to mode, in ascending order.
  • rem_intra_luma_pred_mode is an index of a prediction mode excluding MPM.
  • the prediction mode restoration unit 124 restores the prediction mode corresponding to the mode obtained in this way.
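• A simplified sketch of the four-MPM variant; placing candIntraPredModeBL at fixed position 0 and the Intra_Planar filler are assumptions beyond what the text states, and duplicate handling is reduced to the single rule quoted above:

    def derive_four_mpm(cand_a, cand_b, cand_bl):
        mpm = [cand_bl, cand_a, cand_b, 0]   # 0: Intra_Planar filler
        # a candModeList entry matching candIntraPredModeBL is replaced
        # with Intra_Angular(10), or 26 when 10 itself is the duplicate
        for i in range(1, 4):
            if mpm[i] == cand_bl:
                mpm[i] = 10 if cand_bl != 10 else 26
        return mpm                           # candModeList[0..3]

    # restore_rem_mode from the earlier sketch applies unchanged to this
    # four-entry list when mpm_flag is 0.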
• when the prediction mode of the reference block is stored at a fixed position in the candModeList, the index indicating that position is frequently used, so that the encoding efficiency can be improved. Note that “store at a fixed position” can also be expressed as “store with a fixed index”.
  • the above-described processing may be applied only to the first block of the tree block.
  • FIG. 25 is a functional block diagram showing a schematic configuration of the hierarchical video encoding device 2.
  • the hierarchical video encoding device 2 encodes the input image PIN # T of the target layer with reference to the reference layer encoded data DATA # R to generate hierarchical encoded data DATA of the target layer. It is assumed that the reference layer encoded data DATA # R has been encoded in the hierarchical video encoding apparatus corresponding to the reference layer.
• the hierarchical video encoding device 2 includes a prediction parameter determination unit 21, a prediction information generation unit 22, a base decoding unit 23, a texture information generation unit 24, a variable length encoding unit 25, and a NAL multiplexing unit 26.
  • the prediction parameter determination unit 21 determines a prediction parameter used for prediction of a prediction image and other encoding settings based on the input image PIN # T.
  • the prediction parameter determination unit 21 performs encoding settings including prediction parameters as follows.
  • the prediction parameter determination unit 21 generates a CU image for the target CU by sequentially dividing the input image PIN # T into slice units, tree block units, and CU units.
  • the prediction parameter determination unit 21 generates encoded information (sometimes referred to as header information) based on the result of the division process.
• the encoding information includes (1) tree block information, which is information about the size and shape of the tree blocks belonging to the target slice and their positions within the target slice, and (2) CU information, which is information about the size and shape of the CUs belonging to each tree block and their positions within the target tree block.
• the prediction parameter determination unit 21 refers to the CU image, the tree block information, and the CU information, and derives the prediction type of the target CU, the division information of the target CU into PUs, and the prediction parameter (the intra prediction mode if the target CU is an intra CU, or the motion compensation parameter of each PU if it is an inter CU).
• specifically, the prediction parameter determination unit 21 calculates the cost for all combinations of (1) the prediction type of the target CU, (2) the possible division patterns of the target CU into PUs, and (3) the prediction modes that can be assigned to each PU (the intra prediction mode for an intra CU, or the motion compensation parameter for an inter CU), and determines the prediction type, division pattern, and prediction mode with the lowest cost.
  • the prediction parameter determination unit 21 supplies the encoded information and the prediction parameter to the prediction information generation unit 22 and the texture information generation unit 24. Although not shown for simplicity of explanation, the above-described encoding setting determined by the prediction parameter determination unit 21 can be referred to by each unit of the hierarchical video encoding device 2.
  • the prediction information generation unit 22 generates prediction information including a syntax value related to the prediction parameter based on the prediction parameter supplied from the prediction parameter determination unit 21 and the reference layer encoded data DATA # R.
  • the prediction information generation unit 22 supplies the generated prediction information to the variable length encoding unit 25.
• the prediction information generation unit 22 can refer to motion information stored in a frame memory 244 (described later) included in the texture information generation unit 24 when restoring the prediction parameter.
• since the base decoding unit 23 is the same as the base decoding unit 16 of the hierarchical video decoding device 1, the description thereof is omitted here.
  • the texture information generation unit 24 generates transform coefficient information including transform coefficients obtained by orthogonal transform / quantization of the prediction residual obtained by subtracting the predicted image from the input image PIN # T.
  • the texture information generation unit 24 supplies the generated transform coefficient information to the variable length encoding unit 25.
  • information on the restored decoded image is stored in an internal frame memory 244 (described later).
  • variable length coding unit 25 performs variable length coding on the prediction information supplied from the prediction information generation unit 22 and the transform coefficient information supplied from the texture information generation unit 24 to generate target layer encoded data DATA # T.
  • the variable length encoding unit 25 supplies the generated target layer encoded data DATA # T to the NAL multiplexing unit 26.
  • the NAL multiplexing unit 26 stores the target layer encoded data DATA # T and the reference layer encoded data DATA # R supplied from the variable length encoding unit 25 in the NAL unit, and thereby performs hierarchical video that has been NAL multiplexed. Image encoded data DATA is generated and output to the outside.
  • FIG. 26 is a functional block diagram illustrating the configuration of the prediction information generation unit 22.
• the prediction information generation unit 22 includes a prediction type selection unit 221, a switch 222, an intra prediction mode derivation unit 223, a motion vector candidate derivation unit 224, a motion information generation unit 225, a merge candidate derivation unit 226, and a merge information generation unit 227.
  • the prediction type selection unit 221 sends a switching instruction to the switch 222 according to the CU type or PU type, and controls the prediction parameter derivation process. Specifically, it is as follows.
• for an intra CU, the prediction type selection unit 221 controls the switch 222 so that the prediction information is derived using the intra prediction mode deriving unit 223.
• for an inter PU that is not merged, the prediction type selection unit 221 controls the switch 222 so that the prediction information is derived using the motion information generation unit 225.
• for a merged PU, the prediction type selection unit 221 controls the switch 222 so that the prediction information is derived using the merge information generation unit 227.
  • the switch 222 supplies the prediction parameter to any of the intra prediction mode deriving unit 223, the motion information generating unit 225, and the merge information generating unit 227 in accordance with an instruction from the prediction type selecting unit 221.
• prediction information is derived at the supply destination of the prediction parameter.
• the intra prediction mode deriving unit 223 derives a syntax value related to the prediction mode. That is, the intra prediction mode deriving unit 223 generates a syntax value related to the prediction mode as the prediction information.
• specific processing by the intra prediction mode deriving unit 223 includes processing corresponding to the processing described for the intra prediction mode restoration unit 143, in particular <first example of intra-layer prediction> to <seventh example of intra-layer prediction> and <first example of a configuration adding the base layer prediction mode to the prediction mode group> to <third example of a configuration adding the base layer prediction mode to the prediction mode group>.
• however, “intra prediction mode restoration unit 143” in those descriptions is to be read as “intra prediction mode deriving unit 223”, and “MPM deriving unit 122” is to be read as “the MPM deriving unit included in the intra prediction mode deriving unit 223”.
  • the motion vector candidate derivation unit 224 uses the base decoding information to derive an estimated motion vector candidate by intra-layer motion estimation processing or inter-layer motion estimation processing.
  • the motion vector candidate derivation unit 224 supplies the derived motion vector candidates to the motion information generation unit 225.
• the motion information generation unit 225 generates a syntax value related to motion information for each inter prediction partition that is not merged. That is, the motion information generation unit 225 generates a syntax value related to motion information as prediction information. Specifically, the motion information generation unit 225 derives the corresponding syntax element values inter_pred_flag, mvd, mvp_idx, and refIdx from the motion compensation parameter of each PU.
  • the motion information generation unit 225 derives the syntax value based on the motion vector candidates supplied from the motion vector candidate derivation unit 224.
• the motion information generation unit 225 also derives the syntax value based on the motion information included in the prediction parameter.
• the merge candidate derivation unit 226 derives merge candidates having motion compensation parameters similar to the motion compensation parameter of each PU, using decoded motion information supplied from the frame memory 247 (described later) and/or base decoding information supplied from the base decoding unit 23. The merge candidate derivation unit 226 supplies the derived merge candidates to the merge information generation unit 227.
  • the configuration of the merge candidate derivation unit 226 is the same as the configuration of the merge candidate derivation unit 146 included in the hierarchical video decoding device 1, and thus the description thereof is omitted.
  • the merge information generation unit 227 generates a syntax value related to motion information regarding each inter prediction partition to be merged. That is, the merge information generation unit 227 generates a syntax value related to motion information as prediction information. Specifically, the merge information generation unit 227 outputs a syntax element value merge_idx that specifies a merge candidate having a motion compensation parameter similar to the motion compensation parameter in each PU.
  • FIG. 27 is a functional block diagram illustrating the configuration of the texture information generation unit 24.
  • the texture information generation unit 24 includes a texture prediction unit 241, a subtractor 242, an orthogonal transform / quantization unit 243, an inverse orthogonal transform / inverse quantization unit 244, an adder 245, a loop filter unit 246, And a frame memory 247.
  • the subtractor 242 generates a prediction residual D by subtracting the prediction image supplied from the texture prediction unit 241 from the input image PIN # T.
• the subtractor 242 supplies the generated prediction residual D to the orthogonal transform / quantization unit 243.
  • the orthogonal transform / quantization unit 243 generates a quantized prediction residual by performing orthogonal transform and quantization on the prediction residual D.
• the orthogonal transform refers to an orthogonal transform from the pixel domain to the frequency domain. Examples of the orthogonal transform include the DCT (Discrete Cosine Transform) and the DST (Discrete Sine Transform).
  • the specific quantization process is as described above, and the description thereof is omitted here.
• the orthogonal transform / quantization unit 243 supplies the generated transform coefficient information including the quantized prediction residual to the inverse orthogonal transform / inverse quantization unit 244 and the variable length encoding unit 25.
  • The texture prediction unit 241, the inverse orthogonal transform / inverse quantization unit 244, the adder 245, the loop filter unit 246, and the frame memory 247 operate in the same manner as, respectively, the texture prediction unit 152, the inverse orthogonal transform / inverse quantization unit 151, the adder 153, the loop filter unit 154, and the frame memory 155 of the hierarchical video decoding device 1, so their description is omitted here (a matching sketch of this local decoding loop follows below). However, the texture prediction unit 241 supplies the predicted image not only to the adder 245 but also to the subtractor 242.
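  • For completeness, a sketch of the encoder's local decoding loop (the inverse orthogonal transform / inverse quantization unit 244 and the adder 245), under the same assumptions as the previous sketch and reusing its dct_matrix helper; clipping to an 8-bit sample range is an added assumption:

```python
def dequantize_and_reconstruct(quantized, prediction, qstep):
    # Inverse orthogonal transform / inverse quantization unit 244:
    # rescale the quantized coefficients, then apply the inverse 2-D DCT.
    c = dct_matrix(quantized.shape[0])
    residual = c.T @ (quantized * qstep) @ c
    # Adder 245: reconstructed block = predicted image + decoded residual,
    # clipped to the 8-bit sample range before loop filtering.
    return np.clip(np.rint(prediction + residual), 0, 255).astype(np.uint8)
```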
  • The hierarchical video encoding device 2 and hierarchical video decoding device 1 described above can be mounted on and used in various devices that transmit, receive, record, and reproduce moving images.
  • the moving image may be a natural moving image captured by a camera or the like, or may be an artificial moving image (including CG and GUI) generated by a computer or the like.
  • FIG. 28A is a block diagram illustrating the configuration of a transmission device PROD_A equipped with the hierarchical video encoding device 2.
  • The transmission device PROD_A includes an encoding unit PROD_A1 that obtains encoded data by encoding a moving image, a modulation unit PROD_A2 that obtains a modulated signal by modulating a carrier wave with the encoded data obtained by the encoding unit PROD_A1, and a transmission unit PROD_A3 that transmits the modulated signal obtained by the modulation unit PROD_A2.
  • the hierarchical moving image encoding apparatus 2 described above is used as the encoding unit PROD_A1.
  • As supply sources of the moving image input to the encoding unit PROD_A1, the transmission device PROD_A may further include a camera PROD_A4 that captures a moving image, a recording medium PROD_A5 on which a moving image is recorded, an input terminal PROD_A6 for inputting a moving image from the outside, and an image processing unit A7 that generates or processes images. In FIG. 28A, a configuration in which the transmission device PROD_A includes all of these is illustrated, but some may be omitted.
  • The recording medium PROD_A5 may record an unencoded moving image, or a moving image encoded with a recording encoding scheme different from the transmission encoding scheme. In the latter case, a decoding unit (not shown) that decodes the encoded data read from the recording medium PROD_A5 in accordance with the recording encoding scheme may be interposed between the recording medium PROD_A5 and the encoding unit PROD_A1.
  • FIG. 28B is a block diagram illustrating the configuration of a receiving device PROD_B equipped with the hierarchical video decoding device 1.
  • The receiving device PROD_B includes a receiving unit PROD_B1 that receives a modulated signal, a demodulation unit PROD_B2 that obtains encoded data by demodulating the modulated signal received by the receiving unit PROD_B1, and a decoding unit PROD_B3 that obtains a moving image by decoding the encoded data obtained by the demodulation unit PROD_B2.
  • the above-described hierarchical video decoding device 1 is used as the decoding unit PROD_B3.
  • As supply destinations of the moving image output by the decoding unit PROD_B3, the receiving device PROD_B may further include a display PROD_B4 that displays the moving image, a recording medium PROD_B5 for recording the moving image, and an output terminal PROD_B6 for outputting the moving image to the outside.
  • In FIG. 28B, a configuration in which the receiving device PROD_B includes all of these is illustrated, but some may be omitted.
  • The recording medium PROD_B5 may record an unencoded moving image, or a moving image encoded with a recording encoding scheme different from the transmission encoding scheme. In the latter case, an encoding unit (not shown) that encodes the moving image acquired from the decoding unit PROD_B3 in accordance with the recording encoding scheme may be interposed between the decoding unit PROD_B3 and the recording medium PROD_B5.
  • the transmission medium for transmitting the modulation signal may be wireless or wired.
  • The transmission mode for transmitting the modulated signal may be broadcasting (here, a transmission mode in which the transmission destination is not specified in advance) or communication (here, a transmission mode in which the transmission destination is specified in advance). That is, transmission of the modulated signal may be realized by any of wireless broadcasting, wired broadcasting, wireless communication, and wired communication.
  • A broadcasting station (broadcasting equipment or the like) / receiving station (television receiver or the like) of terrestrial digital broadcasting is an example of a transmission device PROD_A / receiving device PROD_B that transmits and receives a modulated signal by wireless broadcasting.
  • A broadcasting station (broadcasting equipment or the like) / receiving station (television receiver or the like) of cable television broadcasting is an example of a transmission device PROD_A / receiving device PROD_B that transmits and receives a modulated signal by wired broadcasting.
  • A server (workstation or the like) / client (television receiver, personal computer, smartphone, or the like) of a VOD (Video On Demand) service or a video sharing service using the Internet is an example of a transmission device PROD_A / receiving device PROD_B that transmits and receives a modulated signal by communication (usually, either a wireless or wired transmission medium is used in a LAN, and a wired transmission medium is used in a WAN).
  • Here, the personal computer includes a desktop PC, a laptop PC, and a tablet PC, and the smartphone includes a multi-function mobile phone terminal.
  • A client of the video sharing service has a function of encoding a moving image captured by a camera and uploading it to the server, as well as a function of decoding encoded data downloaded from the server and displaying it. That is, the client of the video sharing service functions as both the transmission device PROD_A and the receiving device PROD_B.
  • FIG. 29A is a block diagram illustrating a configuration of a recording apparatus PROD_C in which the above-described hierarchical video encoding apparatus 2 is mounted.
  • The recording apparatus PROD_C includes an encoding unit PROD_C1 that obtains encoded data by encoding a moving image, and a writing unit that writes the encoded data obtained by the encoding unit PROD_C1 to the recording medium PROD_M.
  • the hierarchical moving image encoding device 2 described above is used as the encoding unit PROD_C1.
  • The recording medium PROD_M may be (1) of a type built into the recording device PROD_C, such as an HDD (Hard Disk Drive) or an SSD (Solid State Drive), (2) of a type connected to the recording device PROD_C, such as an SD memory card or a USB (Universal Serial Bus) flash memory, or (3) of a type loaded into a drive device (not shown) built into the recording device PROD_C, such as a DVD (Digital Versatile Disc) or a BD (Blu-ray Disc: registered trademark).
  • As supply sources of the moving image input to the encoding unit PROD_C1, the recording device PROD_C may further include a camera PROD_C3 that captures moving images, an input terminal PROD_C4 for inputting a moving image from the outside, a receiving unit PROD_C5 for receiving a moving image, and an image processing unit C6 that generates or processes images.
  • FIG. 29A illustrates a configuration in which the recording apparatus PROD_C includes all of these, but a part of the configuration may be omitted.
  • The receiving unit PROD_C5 may receive an unencoded moving image, or encoded data encoded with a transmission encoding scheme different from the recording encoding scheme. In the latter case, a transmission decoding unit (not shown) that decodes encoded data encoded with the transmission encoding scheme may be interposed between the receiving unit PROD_C5 and the encoding unit PROD_C1.
  • Examples of such a recording device PROD_C include a DVD recorder, a BD recorder, and an HDD (Hard Disk Drive) recorder (in this case, the input terminal PROD_C4 or the receiving unit PROD_C5 is a main supply source of moving images).
  • A camcorder (in this case, the camera PROD_C3 is the main supply source of moving images), a personal computer (in this case, the receiving unit PROD_C5 or the image processing unit C6 is the main supply source of moving images), and a smartphone (in this case, the camera PROD_C3 or the receiving unit PROD_C5 is the main supply source of moving images) are also examples of such a recording device PROD_C.
  • FIG. 29B is a block diagram showing the configuration of a playback device PROD_D equipped with the hierarchical video decoding device 1 described above.
  • The playback device PROD_D includes a reading unit PROD_D1 that reads encoded data written to the recording medium PROD_M, and a decoding unit PROD_D2 that obtains a moving image by decoding the encoded data read by the reading unit PROD_D1.
  • the hierarchical moving image decoding apparatus 1 described above is used as the decoding unit PROD_D2.
  • The recording medium PROD_M may be (1) of a type built into the playback device PROD_D, such as an HDD or SSD, (2) of a type connected to the playback device PROD_D, such as an SD memory card or USB flash memory, or (3) of a type loaded into a drive device (not shown) built into the playback device PROD_D, such as a DVD or BD.
  • As supply destinations of the moving image output by the decoding unit PROD_D2, the playback device PROD_D may further include a display PROD_D3 that displays the moving image, an output terminal PROD_D4 for outputting the moving image to the outside, and a transmission unit PROD_D5 that transmits the moving image.
  • FIG. 29B illustrates a configuration in which the playback apparatus PROD_D includes all of these, but some of the configurations may be omitted.
  • The transmission unit PROD_D5 may transmit an unencoded moving image, or encoded data encoded with a transmission encoding scheme different from the recording encoding scheme. In the latter case, it is preferable to interpose an encoding unit (not shown) that encodes the moving image with the transmission encoding scheme between the decoding unit PROD_D2 and the transmission unit PROD_D5.
  • Examples of such a playback device PROD_D include a DVD player, a BD player, and an HDD player (in this case, an output terminal PROD_D4 to which a television receiver or the like is connected is a main supply destination of moving images).
  • A television receiver (in this case, the display PROD_D3 is the main supply destination of moving images), a digital signage (also referred to as an electronic signboard or an electronic bulletin board; in this case, the display PROD_D3 or the transmission unit PROD_D5 is the main supply destination of moving images), a desktop PC (in this case, the output terminal PROD_D4 or the transmission unit PROD_D5 is the main supply destination of moving images), a laptop or tablet PC (in this case, the display PROD_D3 or the transmission unit PROD_D5 is the main supply destination of moving images), and a smartphone (in this case, the display PROD_D3 or the transmission unit PROD_D5 is the main supply destination of moving images) are also examples of such a playback device PROD_D.
  • Each block of the hierarchical video decoding device 1 and the hierarchical video encoding device 2 may be realized in hardware by a logic circuit formed on an integrated circuit (IC chip), or may be realized in software using a CPU (Central Processing Unit).
  • In the latter case, each of the devices includes a CPU that executes the instructions of a control program realizing each function, a ROM (Read Only Memory) that stores the program, a RAM (Random Access Memory) into which the program is loaded, and a storage device (recording medium), such as a memory, that stores the program and various data.
  • An object of the present invention can also be achieved by supplying to each of the above devices a recording medium on which the program code (executable program, intermediate code program, source program) of the control program for each device, which is software realizing the above-described functions, is recorded in a computer-readable manner, and by having the computer (or CPU or MPU (Micro Processing Unit)) read and execute the program code recorded on the recording medium.
  • Examples of the recording medium include tapes such as magnetic tapes and cassette tapes; disks including magnetic disks such as floppy (registered trademark) disks / hard disks and optical discs such as CD-ROM (Compact Disc Read-Only Memory) / MO (Magneto-Optical disc) / MD (Mini Disc) / DVD (Digital Versatile Disc) / CD-R (CD Recordable); cards such as IC cards (including memory cards) / optical cards; semiconductor memories such as mask ROM / EPROM (Erasable Programmable Read-Only Memory) / EEPROM (registered trademark) (Electrically Erasable and Programmable Read-Only Memory) / flash ROM; and logic circuits such as PLDs (Programmable Logic Devices) and FPGAs (Field Programmable Gate Arrays).
  • each of the above devices may be configured to be connectable to a communication network, and the program code may be supplied via the communication network.
  • the communication network is not particularly limited as long as it can transmit the program code.
  • For example, the Internet, an intranet, an extranet, a LAN (Local Area Network), an ISDN (Integrated Services Digital Network), a VAN (Value-Added Network), a CATV (Community Antenna Television) communication network, a virtual private network (Virtual Private Network), a telephone line network, a mobile communication network, a satellite communication network, and the like can be used.
  • the transmission medium constituting the communication network may be any medium that can transmit the program code, and is not limited to a specific configuration or type.
  • The present invention can also be realized in the form of a computer data signal embedded in a carrier wave, in which the program code is embodied by electronic transmission.
  • The present invention can be suitably applied to a hierarchical image decoding device that decodes encoded data in which image data is hierarchically encoded, and to a hierarchical image encoding device that generates encoded data in which image data is hierarchically encoded. It can also be suitably applied to the data structure of hierarchically encoded data generated by a hierarchical image encoding device and referenced by a hierarchical image decoding device.
  • 1 Hierarchical video decoding device (image decoding device)
  • 11 NAL demultiplexing unit
  • 12 Variable length decoding unit
  • 13 Base decoding unit
  • 14 Prediction parameter restoration unit
  • 15 Texture restoration unit
  • 152 Texture prediction unit
  • 152C Inter-layer intra prediction unit (predicted image generation unit)
  • 143 Intra prediction mode restoration unit (selection means)
  • 122 MPM derivation unit
  • 123 MPM determination unit
  • 124 Prediction mode restoration unit
  • 126 Color difference prediction mode restoration unit
  • 127 Context storage unit
  • 2 Hierarchical video encoding device (image encoding device)
  • 21 Prediction parameter determination unit
  • 22 Prediction information generation unit
  • 223 Intra prediction mode derivation unit (selection means)
  • 23 Base decoding unit
  • 24 Texture information generation unit
  • 241 Texture prediction unit
  • 241C Inter-layer intra prediction unit (predicted image generation unit)
  • 25 Variable length encoding unit
  • 26 NAL multiplexing unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

To improve coding efficiency, a hierarchical video decoding device (1) includes an intra prediction mode restoration unit (143) for selecting a prediction mode from a prediction mode group that contains at least some of a plurality of predetermined intra prediction modes and an inter-layer intra prediction mode, by referring to a common syntax associated with the inter-layer intra prediction mode and the plurality of intra prediction modes.
PCT/JP2013/067618 2012-07-03 2013-06-27 Image decoding device and image encoding device WO2014007131A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012149982A JP2015167267A (ja) 2012-07-03 2012-07-03 Image decoding device and image encoding device
JP2012-149982 2012-07-03

Publications (1)

Publication Number Publication Date
WO2014007131A1 true WO2014007131A1 (fr) 2014-01-09

Family

ID=49881888

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/067618 WO2014007131A1 (fr) 2012-07-03 2013-06-27 Image decoding device and image encoding device

Country Status (2)

Country Link
JP (1) JP2015167267A (fr)
WO (1) WO2014007131A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2015177343A (ja) * 2014-03-14 2015-10-05 Mitsubishi Electric Corporation Image encoding device, image decoding device, image encoding method, and image decoding method

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101835358B1 (ko) 2012-10-01 2018-03-08 GE Video Compression, LLC Scalable video coding using inter-layer prediction contribution to enhancement layer prediction
CN107980184B (zh) 2015-08-26 2021-07-23 Sony Corporation Light emitting device, display device, and lighting device
JP7293189B2 (ja) * 2017-07-24 2023-06-19 Arris Enterprises LLC Intra mode JVET coding

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008543160A (ja) * 2005-05-26 2008-11-27 LG Electronics Inc. Method for decoding a video signal encoded through inter-layer prediction
JP2009500981A (ja) * 2005-07-11 2009-01-08 Thomson Licensing Method and apparatus for macroblock-adaptive inter-layer intra texture prediction
JP2009538086A (ja) * 2006-11-17 2009-10-29 LG Electronics Inc. Method and apparatus for decoding/encoding a video signal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TOMOYUKI YAMAMOTO ET AL.: "Description of scalable video coding technology proposal by SHARP (proposal 2)", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG16 WP3 AND ISO/IEC JTC1/SC29/WG11, 11TH MEETING, 10 October 2012 (2012-10-10), SHANGHAI, CN *

Also Published As

Publication number Publication date
JP2015167267A (ja) 2015-09-24

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 13813138; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 13813138; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)