WO2015060295A1 - Image decoding device and image decoding method - Google Patents
- Publication number: WO2015060295A1
- Application: PCT/JP2014/077931
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- layer
- unit
- target
- identifier
- decoding
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/61—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- the present invention relates to an image decoding apparatus that decodes hierarchically encoded data in which an image is hierarchically encoded, and an image encoding apparatus that generates hierarchically encoded data by hierarchically encoding an image.
- Images and moving images are among the kinds of information transmitted in communication systems or recorded in storage devices. Conventionally, techniques for encoding images (hereinafter, including moving images) in order to transmit and store them are known.
- As video encoding schemes, AVC (H.264/MPEG-4 Advanced Video Coding) and its successor codec HEVC (High-Efficiency Video Coding) are known (Non-Patent Document 1).
- In these video encoding schemes, a predicted image is usually generated based on a locally decoded image obtained by encoding/decoding an input image, and the prediction residual obtained by subtracting the predicted image from the input image (original image) (sometimes referred to as a "difference image" or "residual image") is encoded.
- examples of the method for generating a predicted image include inter-screen prediction (inter prediction) and intra-screen prediction (intra prediction).
- HEVC employs a technology that realizes temporal scalability, assuming, for example, the case of playing back 60 fps content at a temporally thinned frame rate of 30 fps.
- Specifically, a numerical value called a temporal identifier (TemporalID; also called a sublayer identifier) is assigned to each picture, and the restriction is imposed that a picture never references a picture whose temporal identifier is larger than its own. The pictures of the higher sublayers can therefore be discarded without breaking the references of the remaining pictures.
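As an illustration only (picture records with made-up `poc`/`tid` fields, not actual HEVC structures), the reference restriction above and the frame-rate thinning it enables can be sketched as:

```python
# Sketch of the temporal-scalability rule described above: a picture may
# only reference pictures whose TemporalId does not exceed its own.
# The picture representation here is illustrative, not from the standard.

def reference_allowed(pic_tid: int, ref_tid: int) -> bool:
    """A picture with TemporalId pic_tid may reference a picture with
    TemporalId ref_tid only if ref_tid <= pic_tid."""
    return ref_tid <= pic_tid

def drop_sublayers(pictures, highest_tid):
    """Thin a sequence to a lower frame rate by keeping only pictures
    whose TemporalId does not exceed highest_tid. This is safe because,
    under the rule above, no kept picture references a dropped one."""
    return [p for p in pictures if p["tid"] <= highest_tid]

pics = [{"poc": 0, "tid": 0}, {"poc": 1, "tid": 2},
        {"poc": 2, "tid": 1}, {"poc": 3, "tid": 2}]
half_rate = drop_sublayers(pics, highest_tid=1)  # pictures with poc 0 and 2
```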
- As scalable coding extensions of HEVC, SHVC (Scalable HEVC) and MV-HEVC (MultiView HEVC) are known.
- SHVC supports spatial scalability, temporal scalability, and SNR scalability.
- In spatial scalability, an image downsampled from the original image to a desired resolution is encoded as a lower layer, and in the upper layer, inter-layer prediction is performed in order to remove redundancy between layers (Non-Patent Document 2).
- MV-HEVC supports viewpoint scalability (view scalability). For example, when encoding three viewpoint images, namely viewpoint image 0 (layer 0), viewpoint image 1 (layer 1), and viewpoint image 2 (layer 2), redundancy between layers can be removed by predicting the upper-layer viewpoint images 1 and 2 from the lower layer (layer 0) by inter-layer prediction (Non-Patent Document 3).
- Inter-layer prediction used in scalable coding schemes such as SHVC and MV-HEVC includes inter-layer image prediction and inter-layer motion prediction.
- In inter-layer image prediction, a predicted image of the target layer is generated using the texture information (image) of a decoded picture of a lower layer (or of another layer different from the target layer).
- In inter-layer motion prediction, the motion information of a decoded picture of a lower layer (or of another layer different from the target layer) is used to derive a predicted value of the motion information of the target layer. That is, inter-layer prediction is performed by using a decoded picture of a lower layer (or of another layer different from the target layer) as a reference picture of the target layer.
- In these coding schemes, parameter sets are defined, each of which specifies a set of coding parameters necessary for decoding/encoding the encoded data (for example, the sequence parameter set SPS and the picture parameter set PPS).
- Some of the coding parameters in a parameter set used for decoding/encoding an upper layer are predicted (also referred to as referenced or inherited) from the corresponding coding parameters in a parameter set used for decoding/encoding a lower layer, and decoding/encoding of those coding parameters is omitted.
- Such prediction between parameter sets exists; for example, there is a technique (also referred to as inter-parameter-set syntax prediction) for predicting the scaling list information (quantization matrix) of the target layer, signaled in the SPS or PPS, from the scaling list information of a lower layer.
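As an editorial sketch only (dict-based records stand in for parameter sets; the field names are illustrative, not the actual SPS syntax), the omit-and-inherit behavior described above can be modeled as:

```python
# Illustrative model of inter-parameter-set prediction: the upper layer's
# SPS omits its scaling list and inherits it from a lower layer's SPS.

lower_sps = {"layer_id": 0, "scaling_list": [16, 16, 16, 16]}
upper_sps = {"layer_id": 1, "scaling_list": None,  # omitted: predicted
             "pred_from_layer": 0}

def resolve_scaling_list(sps, sps_by_layer):
    """Return the scaling list of a parameter set, following inter-layer
    prediction when the list was omitted from this parameter set."""
    if sps["scaling_list"] is not None:
        return sps["scaling_list"]
    return sps_by_layer[sps["pred_from_layer"]]["scaling_list"]

sps_by_layer = {0: lower_sps, 1: upper_sps}
```

Note that this resolution only works if the lower layer's parameter set is still present in the bitstream, which is exactly what the extraction problem discussed below can violate.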
- In Non-Patent Documents 2 and 3, an SPS or PPS used for decoding/encoding a lower layer whose layer identifier value is nuhLayerIdA (the layer identifier value of the parameter set itself is also nuhLayerIdA) is allowed to be used (shared) also for decoding/encoding an upper layer whose layer identifier value nuhLayerIdB is larger than nuhLayerIdA.
- the layer identifier (nuh_layer_id; also referred to as layerId or lId) for identifying the layer;
- the temporal identifier (nuh_temporal_id_plus1; also referred to as temporalId or tId) for identifying the sublayer associated with the layer; and
- the NAL unit type (nal_unit_type) indicating the type of encoded data stored in the NAL unit.
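The three fields above occupy the fixed two-byte HEVC NAL unit header (1-bit forbidden_zero_bit, 6-bit nal_unit_type, 6-bit nuh_layer_id, 3-bit nuh_temporal_id_plus1). A minimal parsing sketch:

```python
# Parse the two-byte HEVC NAL unit header into its fields.

def parse_nal_unit_header(data: bytes):
    b0, b1 = data[0], data[1]
    return {
        "forbidden_zero_bit": b0 >> 7,
        "nal_unit_type": (b0 >> 1) & 0x3F,
        "nuh_layer_id": ((b0 & 0x01) << 5) | (b1 >> 3),
        "temporal_id": (b1 & 0x07) - 1,  # nuh_temporal_id_plus1 - 1
    }

# 0x40 0x01: nal_unit_type 32 (a VPS), nuh_layer_id 0, TemporalId 0
hdr = parse_nal_unit_header(bytes([0x40, 0x01]))
```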
- Non-Patent Document 4 specifies a sequence parameter set SPS in which a set of coding parameters referenced for decoding the target sequence is defined, and a picture parameter set PPS in which the coding parameters referenced for decoding each picture in the target sequence are defined.
- JCTVC-O0092_v1 "MV-HEVC/SHVC HLS: On nuh_layer_id of SPS and PPS", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 15th Meeting: Geneva, CH, 23 Oct.-1 Nov. 2013 (released on October 14, 2013)
- In Non-Patent Documents 2 to 4, at the time of bitstream extraction from a bitstream containing a layer A whose layer identifier value is nuhLayerIdA and a layer B whose layer identifier value is nuhLayerIdB, when the encoded data of layer A is discarded and a bitstream composed only of the encoded data of layer B is extracted, and layer B is then decoded, a parameter set of layer A that is necessary for decoding layer B (layer identifier value nuhLayerIdA) may have been discarded. In this case, there is the problem that the extracted encoded data of layer B cannot be decoded.
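The failure mode can be illustrated with a toy model (NAL units as dicts; the field names are illustrative): naive extraction that keeps only NAL units whose layer identifier is in the target layer ID list also discards the layer A parameter set that layer B shares.

```python
# Toy model of the problem: extracting layer B alone drops the SPS it
# shares with layer A, leaving an undecodable bitstream.

nals = [
    {"kind": "SPS", "nuh_layer_id": 1},                      # layer A's SPS
    {"kind": "VCL", "nuh_layer_id": 1},                      # layer A slice
    {"kind": "VCL", "nuh_layer_id": 2, "sps_layer_id": 1},   # layer B slice,
                                                             # refers to layer A's SPS
]

def naive_extract(nals, layer_id_list):
    """Keep only NAL units whose layer identifier is in the list."""
    return [n for n in nals if n["nuh_layer_id"] in layer_id_list]

layer_b_only = naive_extract(nals, layer_id_list=[2])

# The layer B slice survives, but the SPS it references (layer id 1) is gone.
needed = {n.get("sps_layer_id") for n in layer_b_only if n["kind"] == "VCL"}
available = {n["nuh_layer_id"] for n in layer_b_only if n["kind"] == "SPS"}
undecodable = not needed <= available  # True: a referenced SPS is missing
```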
- For example, as shown in FIG. 1(a), the dependency between layers in layer set A is such that the VCL (video coding layer) of layer 2 (VCL L#2 in FIG. 1(a)) depends on the VCL of layer 1 (VCL L#1 in FIG. 1(a)) as a reference layer for inter-layer prediction (inter-layer image prediction, inter-layer motion prediction) (solid arrow in FIG. 1(a)).
- As for the reference relationships of the parameter sets, as indicated by the thick arrows in FIG. 1(a), the VPS of layer 0 (VPS L#0 in FIG. 1(a)) is referenced from the parameter sets and VCLs of each of layers 0 to 2 (SPS L#0, PPS L#0, VCL L#0, SPS L#1, PPS L#1, VCL L#1, and VCL L#2 in FIG. 1(a)).
- the layer 0 SPS is referenced from the layer 0 PPS and VCL, and the layer 0 PPS is referenced from the layer 0 VCL.
- the Layer 1 SPS is referenced from the Layer 1 PPS, VCL, and Layer 2 VCL, and the Layer 1 PPS is referenced from the Layer 1 VCL and Layer 2 VCL.
- The present invention has been made in view of the above problems, and an object thereof is to define bitstream constraints on parameter sets and a bitstream extraction process, and thereby to realize an image decoding apparatus and an image encoding apparatus that prevent the occurrence of an undecodable layer in a bitstream that is generated from a bitstream containing a layer set by the bitstream extraction process and that contains only a layer set which is a subset of the original layer set.
- In order to solve the above problems, an image decoding apparatus according to the present invention is an image decoding apparatus that decodes input image encoded data, comprising: an image encoded data extraction unit that extracts, based on a layer ID list indicating a decoding target layer set composed of one or more layers, the image encoded data related to the decoding target layer set from the input image encoded data; and a picture decoding unit that decodes pictures of the decoding target layer set from the extracted image encoded data, wherein the input image encoded data supplied to the image encoded data extraction unit does not include a non-VCL NAL unit having a layer identifier that is not equal to 0 and is not included in the layer ID list.
- An image decoding method according to the present invention is an image decoding method for decoding input image encoded data, comprising: an image encoded data extraction step of extracting, based on a layer ID list indicating a decoding target layer set composed of one or more layers, the image encoded data related to the decoding target layer set from the input image encoded data; and a picture decoding step of decoding pictures of the decoding target layer set from the extracted image encoded data, wherein the input image encoded data supplied to the image encoded data extraction step does not include a non-VCL NAL unit having a layer identifier that is not equal to 0 and is not included in the layer ID list.
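As a sketch only (dict-based NAL units; not the claimed implementation itself), the bitstream constraint stated above can be checked as: no non-VCL NAL unit may have a layer identifier that is neither 0 nor included in the layer ID list.

```python
# Check the claimed bitstream constraint on non-VCL NAL units.

def satisfies_constraint(nals, layer_id_list):
    """True if no non-VCL NAL unit has a layer identifier that is
    neither 0 nor included in layer_id_list."""
    for n in nals:
        if n["kind"] != "VCL":  # non-VCL (VPS/SPS/PPS, etc.)
            lid = n["nuh_layer_id"]
            if lid != 0 and lid not in layer_id_list:
                return False
    return True

ok_stream = [{"kind": "SPS", "nuh_layer_id": 0},   # layer 0: always allowed
             {"kind": "VCL", "nuh_layer_id": 2}]
bad_stream = [{"kind": "SPS", "nuh_layer_id": 1},  # non-VCL, 1 not in list
              {"kind": "VCL", "nuh_layer_id": 2}]
```

A bitstream satisfying this constraint cannot lose a needed parameter set during extraction, since every non-VCL NAL unit either has layer identifier 0 or belongs to a layer that is kept.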
- According to the above configuration, by defining the bitstream constraint on the parameter sets and the bitstream extraction process, it is possible to prevent the occurrence of an undecodable layer in a bitstream that is generated from a bitstream containing a layer set by the bitstream extraction process and that contains only a layer set which is a subset of the original layer set.
- FIG. 1(a) shows an example of layer set A, and FIG. 1(b) shows an example of layer set B after bitstream extraction.
- It is a diagram for explaining the layers and sublayers (temporal layers) constituting the subset of the layer set extracted by the sub-bitstream extraction process from the layer set shown in FIG.
- It is a diagram showing an example of the data structure constituting the NAL unit layer.
- It is a diagram showing an example of the syntax included in the NAL unit layer. (a) shows an example of the syntax constituting the NAL unit layer, and (b) shows an example of the syntax of the NAL unit header.
- It is a diagram showing the relationship between the values of the NAL unit types according to the embodiment of the present invention and the classifications of the NAL units.
- It is a diagram showing an example of the configuration of the NAL units included in an access unit.
- (a) is a diagram illustrating a sequence layer that defines a sequence, (b) a picture layer that defines a picture, (c) a slice layer that defines a slice S, (d) a slice data layer that defines slice data, (e) a coding tree layer that defines a coding tree unit included in the slice data, and (f) a coding unit layer that defines a coding unit (Coding Unit; CU) included in the coding tree.
- It is a diagram for explaining the shared parameter set according to the present embodiment.
- (A) shows an example of a reference picture list
- (b) is a conceptual diagram showing an example of a reference picture.
- It is an example of the syntax table of the VPS according to the embodiment of the present invention.
- (a) shows an example in which the presence/absence of non-VCL dependency is included as a dependency type, and (b) shows an example in which the presence/absence of a shared parameter set and the presence/absence of prediction between parameter sets are included as dependency types.
- It is an example of the syntax table of the SPS according to the embodiment of the present invention.
- (A) shows a recording device equipped with a hierarchical video encoding device
- (b) shows a playback device equipped with a hierarchical video decoding device.
- Hereinafter, the hierarchical moving picture decoding apparatus 1 and the hierarchical moving picture encoding apparatus 2 according to an embodiment of the present invention will be described with reference to FIGS.
- a hierarchical video decoding device (image decoding device) 1 decodes encoded data that has been hierarchically encoded by a hierarchical video encoding device (image encoding device) 2.
- Hierarchical coding is a coding scheme that hierarchically encodes moving images from low quality to high quality.
- Hierarchical coding is standardized in SVC and SHVC, for example.
- the quality of a moving image here widely means an element that affects the appearance of a subjective and objective moving image.
- the quality of the moving image includes, for example, “resolution”, “frame rate”, “image quality”, and “pixel representation accuracy”.
- When the qualities of moving images are said to be different, this means, for example, that their "resolutions" are different, but the difference is not limited thereto; any of the quality elements listed above may differ between the moving images.
- Hierarchical coding techniques may be classified, from the viewpoint of the type of information to be layered, into (1) spatial scalability, (2) temporal scalability, (3) SNR (Signal to Noise Ratio) scalability, and (4) view scalability. Spatial scalability is a technique for layering by resolution and image size. Temporal scalability is a technique for layering by frame rate (the number of frames per unit time). SNR scalability is a technique for layering by coding noise. View scalability is a technique for layering by the viewpoint position associated with each image.
- Prior to the detailed description of the hierarchical video encoding device 2 and the hierarchical video decoding device 1 according to the present embodiment, (1) the layer structure of the hierarchically encoded data that the hierarchical video encoding device 2 generates and the hierarchical video decoding device 1 decodes will be described first, and then (2) specific examples of the data structures that can be adopted in each layer will be described.
- FIG. 2 is a diagram schematically illustrating a case where a moving image is hierarchically encoded / decoded by three layers of a lower layer L3, a middle layer L2, and an upper layer L1. That is, in the example shown in FIGS. 2A and 2B, of the three layers, the upper layer L1 is the highest layer and the lower layer L3 is the lowest layer.
- In the following, a decoded image of a specific quality that can be decoded from hierarchically encoded data is referred to as a decoded image of a specific hierarchy (or a decoded image corresponding to a specific hierarchy) (for example, the decoded image POUT#A of the upper hierarchy L1).
- FIG. 2A shows hierarchical moving image encoding apparatuses 2#A to 2#C that generate encoded data DATA#A to DATA#C by hierarchically encoding input images PIN#A to PIN#C, respectively.
- FIG. 2B shows hierarchical moving picture decoding apparatuses 1#A to 1#C that generate decoded images POUT#A to POUT#C by decoding the hierarchically encoded data DATA#A to DATA#C.
- the input images PIN # A, PIN # B, and PIN # C that are input on the encoding device side have the same original image but different image quality (resolution, frame rate, image quality, and the like).
- the image quality decreases in the order of the input images PIN # A, PIN # B, and PIN # C.
- For example, the hierarchical video encoding device 2#C of the lower hierarchy L3 encodes the input image PIN#C of the lower hierarchy L3 to generate the encoded data DATA#C of the lower hierarchy L3, which includes the basic information necessary for decoding the decoded image POUT#C of the lower hierarchy L3 (indicated by "C" in FIG. 2). Since the lower hierarchy L3 is the lowest hierarchy, the encoded data DATA#C of the lower hierarchy L3 is also referred to as basic encoded data.
- The hierarchical video encoding apparatus 2#B of the middle hierarchy L2 encodes the input image PIN#B of the middle hierarchy L2 with reference to the encoded data DATA#C of the lower hierarchy, and generates the encoded data DATA#B of the middle hierarchy L2.
- In addition to the basic information "C", the encoded data DATA#B of the middle hierarchy L2 includes the additional information (indicated by "B" in FIG. 2) necessary for decoding the decoded image POUT#B of the middle hierarchy.
- The hierarchical video encoding apparatus 2#A of the upper hierarchy L1 encodes the input image PIN#A of the upper hierarchy L1 with reference to the encoded data DATA#B of the middle hierarchy L2, and generates the encoded data DATA#A of the upper hierarchy L1.
- The encoded data DATA#A of the upper hierarchy L1 includes the basic information "C" necessary for decoding the decoded image POUT#C of the lower hierarchy L3, the additional information "B" necessary for decoding the decoded image POUT#B of the middle hierarchy L2, and the additional information (indicated by "A" in FIG. 2) necessary for decoding the decoded image POUT#A of the upper hierarchy.
- the encoded data DATA # A of the upper layer L1 includes information related to decoded images of different qualities.
- the decoding device side will be described with reference to FIG.
- The decoding devices 1#A, 1#B, and 1#C corresponding to the upper hierarchy L1, the middle hierarchy L2, and the lower hierarchy L3 decode the encoded data DATA#A, DATA#B, and DATA#C, respectively, and output the decoded images POUT#A, POUT#B, and POUT#C.
- Note that a part of the information in the higher-layer encoded data may be extracted (this is also referred to as bitstream extraction), and a moving image of a specific quality can be reproduced by decoding the extracted information in a specific lower-level decoding device.
- For example, the hierarchy decoding apparatus 1#B of the middle hierarchy L2 may extract, from the hierarchically encoded data DATA#A of the upper hierarchy L1, the information necessary for decoding the decoded image POUT#B (that is, "B" and "C" included in the hierarchically encoded data DATA#A) and decode the decoded image POUT#B.
- In other words, the decoded images POUT#A, POUT#B, and POUT#C can all be decoded based on information included in the hierarchically encoded data DATA#A of the upper hierarchy L1.
- The hierarchically encoded data is not limited to the above three-layer data; it may be hierarchically encoded with two layers, or with more than three layers.
- Hierarchically encoded data may also be configured in other ways. For example, in the example described above with reference to FIGS. 2A and 2B, "C" and "B" are referred to for decoding the decoded image POUT#B, but the configuration is not limited thereto; the hierarchically encoded data can also be configured so that the decoded image POUT#B can be decoded using only "B". For example, it is possible to configure a hierarchical video decoding apparatus that receives hierarchically encoded data composed only of "B", together with the decoded image POUT#C, for decoding the decoded image POUT#B.
- Hierarchically encoded data can also be generated so that the same original image is encoded with different image qualities in the lower and upper layers. In that case, the lower-layer hierarchical video encoding device generates the hierarchically encoded data by quantizing the prediction residual using a larger quantization width than the upper-layer hierarchical video encoding device does.
- VCL NAL unit: A VCL (Video Coding Layer) NAL unit is a NAL unit containing encoded data of a moving image (video signal). For example, a VCL NAL unit includes slice data (encoded data of CTUs) and the header information (slice header) used commonly throughout the decoding of the slice. The encoded data stored in VCL NAL units is called VCL.
- Non-VCL NAL unit: A non-VCL (non-Video Coding Layer) NAL unit is a NAL unit containing encoded data such as header information, that is, sets of coding parameters used for decoding sequences and pictures, such as the video parameter set VPS, the sequence parameter set SPS, and the picture parameter set PPS. The encoded data stored in non-VCL NAL units is referred to as non-VCL.
- A layer identifier (also referred to as a layer ID) identifies a layer, and corresponds to the layer one-to-one.
- the hierarchically encoded data includes an identifier used for selecting partial encoded data necessary for decoding a decoded image of a specific hierarchy.
- a subset of hierarchically encoded data associated with a layer identifier corresponding to a specific layer is also referred to as a layer representation.
- In decoding the decoded image of a specific hierarchy, the layer representation of that hierarchy and/or the layer representations corresponding to its lower hierarchies are used. That is, in decoding the decoded image of the target layer, the layer representation of the target layer and/or the layer representations of one or more layers included in the lower layers of the target layer are used.
- Layer: A set of VCL NAL units having the layer identifier value (nuh_layer_id, nuhLayerId) of a specific layer and the non-VCL NAL units associated with those VCL NAL units, or a set of syntax structures having a hierarchical relationship.
- Upper layer A layer located above a certain layer is referred to as an upper layer.
- the upper layers of the lower layer L3 are the middle layer L2 and the upper layer L1.
- the decoded image of the upper layer means a decoded image with higher quality (for example, high resolution, high frame rate, high image quality, etc.).
- Lower layer A layer located below a certain layer is referred to as a lower layer.
- the lower layers of the upper layer L1 are the middle layer L2 and the lower layer L3.
- the decoded image of the lower layer refers to a decoded image with lower quality.
- Target layer A layer that is the target of decoding or encoding.
- a decoded image corresponding to the target layer is referred to as a target layer picture.
- pixels constituting the target layer picture are referred to as target layer pixels.
- Reference layer A specific lower layer referred to for decoding a decoded image corresponding to the target layer is referred to as a reference layer.
- a decoded image corresponding to the reference layer is referred to as a reference layer picture.
- pixels constituting the reference layer are referred to as reference layer pixels.
- the reference layers of the upper hierarchy L1 are the middle hierarchy L2 and the lower hierarchy L3.
- the hierarchically encoded data can be configured so that it is not necessary to refer to all of the lower layers in decoding of the specific layer.
- the hierarchical encoded data can be configured such that the reference layer of the upper hierarchy L1 is either the middle hierarchy L2 or the lower hierarchy L3.
- the reference layer can also be expressed as a layer different from the target layer that is used (referenced) when predicting an encoding parameter or the like used for decoding the target layer.
- a reference layer that is directly referenced in inter-layer prediction of the target layer is also referred to as a direct reference layer.
- the direct reference layer B referred to in the inter-layer prediction of the direct reference layer A of the target layer is also referred to as an indirect reference layer of the target layer.
- Basic layer A layer located at the lowest layer is called a basic layer.
- the decoded image of the base layer is the lowest quality decoded image that can be decoded from the encoded data, and is referred to as a basic decoded image.
- the basic decoded image is a decoded image corresponding to the lowest layer.
- the partially encoded data of the hierarchically encoded data necessary for decoding the basic decoded image is referred to as basic encoded data.
- the basic information “C” included in the hierarchically encoded data DATA # A of the upper hierarchy L1 is the basic encoded data.
- Enhancement layer: An upper layer of the base layer is called an enhancement layer (also extension layer).
- Inter-layer prediction: Inter-layer prediction is the prediction of syntax element values of the target layer, coding parameters used for decoding the target layer, and the like, based on syntax element values included in the layer representation of a layer (reference layer) different from the layer representation of the target layer, values derived from those syntax element values, and decoded images. Inter-layer prediction in which information related to motion prediction is predicted from reference layer information is sometimes referred to as inter-layer motion information prediction. Inter-layer prediction in which prediction is made from a decoded image of a lower layer is sometimes referred to as inter-layer image prediction (or inter-layer texture prediction). Note that the layer used for inter-layer prediction is, for example, a lower layer of the target layer. Performing prediction within the target layer without using a reference layer is sometimes referred to as intra-layer prediction.
- Temporal identifier (also referred to as temporal ID, temporal identifier, sublayer ID, or sublayer identifier) is an identifier for identifying a layer related to temporal scalability (hereinafter referred to as sublayer).
- the temporal identifier is for identifying the sublayer, and corresponds to the sublayer on a one-to-one basis.
- the encoded data includes a temporal identifier used for selecting partial encoded data necessary for decoding a decoded image of a specific sublayer.
- The temporal identifier of the highest sublayer is referred to as the highest temporal identifier (highest TemporalId, highestTid).
- a sublayer is a layer related to temporal scalability specified by a temporal identifier. In order to distinguish from other scalability such as spatial scalability, SNR scalability, and the like, they are hereinafter referred to as sub-layers (also referred to as temporal layers). In the following description, temporal scalability is assumed to be realized by sublayers included in encoded data of the base layer or hierarchically encoded data necessary for decoding a certain layer.
- Layer set is a set of layers composed of one or more layers.
- Bitstream extraction processing: Bitstream extraction processing is processing that removes (discards), from a given bitstream (hierarchically encoded data, encoded data), the NAL units not included in a set (called the target set TargetSet) determined by the target highest temporal identifier (highest TemporalId, highestTid) and a layer ID list (also called LayerSetLayerIdList[] or LayerIdList[]) representing the layers included in the target layer set, and extracts a bitstream (also referred to as a sub-bitstream) composed of the NAL units included in the target set. Bitstream extraction is also called sub-bitstream extraction.
- the target highest temporal identifier is referred to as “HighestTidTarget”
- the target layer set is referred to as “LayerSetTarget”
- the layer ID list (target layer ID list) of the target layer set is also referred to as “LayerIdListTarget”.
- a bit stream (image encoded data) generated by bit stream extraction and including NAL units included in the target set is also referred to as decoding target image encoded data (BitstreamToDecode).
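The extraction rule above can be sketched as follows (NAL units are modeled as dicts with illustrative fields; this is a sketch of the selection criterion, not the normative process):

```python
# Sketch of sub-bitstream extraction: keep only NAL units whose temporal
# identifier does not exceed HighestTidTarget and whose layer identifier
# is in LayerIdListTarget; the result is BitstreamToDecode.

def sub_bitstream_extraction(nals, highest_tid_target, layer_id_list_target):
    return [n for n in nals
            if n["temporal_id"] <= highest_tid_target
            and n["nuh_layer_id"] in layer_id_list_target]

# Layer set A: layers 0..2, each with three sublayers (temporal_id 0..2)
layer_set_a = [{"nuh_layer_id": l, "temporal_id": t}
               for l in (0, 1, 2) for t in (0, 1, 2)]

# Extract a subset: layers {0, 1} with the highest sublayer discarded
bitstream_to_decode = sub_bitstream_extraction(
    layer_set_a, highest_tid_target=1, layer_id_list_target=[0, 1])
```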
- Hereinafter, an example of extracting, by bitstream extraction processing, hierarchically encoded data including layer set B, which is a subset of layer set A, from hierarchically encoded data including layer set A will be described.
- FIG. 3 shows the configuration of layer set A composed of three layers (L # 0, L # 1, L # 2) and each layer consisting of three sublayers (TID1, TID2, TID3).
- symbol L # N indicates a certain layer N
- each box in FIG. 3 represents a picture
- the number in the box represents an example of decoding order.
- the number N in the picture is expressed as P # N (the same applies to FIG. 4).
- the arrows between the pictures indicate the dependency direction (reference relationship) between the pictures.
- An arrow in the same layer indicates a reference picture used for inter prediction.
- An arrow between layers indicates a reference picture (also referred to as a reference layer picture) used for inter-layer prediction.
- AU in FIG. 3 represents an access unit
- symbol #N represents an access unit number
- AU#N represents the (N+1)-th access unit when the AU at a certain starting point (for example, a random access start point) is AU#0, and represents the order of the AUs included in the bitstream. That is, in the example of FIG. 3, the access units are stored in the bitstream in the order AU#0, AU#1, AU#2, AU#3, AU#4, and so on.
- the access unit represents a set of NAL units aggregated according to a specific classification rule.
- For example, AU#0 in FIG. 3 can be regarded as the set of VCL NALs including the encoded data of pictures P#1, P#2, and P#3. Details of the access unit will be described later.
- The target set (layer set) obtained by bitstream extraction from the bitstream including layer set A is shown in FIG. 4.
- the dotted box represents a discarded picture
- The dotted arrows indicate the dependency direction between the discarded pictures and their reference pictures. Note that these dependency relationships have already been severed, because the NAL units constituting the discarded pictures of layer L#2 and of the sublayer TID3 have been discarded.
- SHVC and MV-HEVC introduce the concept of layers and sub-layers in order to realize SNR scalability, spatial scalability, temporal scalability, and the like.
- As shown in FIGS. 3 and 4, when temporal scalability is realized, encoded data with a frame rate of 1/2 is generated by discarding the encoded data of the pictures having the highest temporal ID (TID3) (pictures 10, 13, 11, 14, 12, and 15).
- The granularity of each type of scalability can be changed by discarding, through bitstream extraction, the encoded data of the layers that are not included in the target set.
- Similarly, encoded data with a changed scalability granularity is generated by discarding the encoded data of the pictures (3, 6, 9, 12, 15).
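The bitstream extraction described above can be sketched as a filter over NAL units. This is an illustrative sketch, not the normative SHVC/MV-HEVC process; the NalUnit structure and variable names are hypothetical stand-ins for the layer identifier and temporal identifier carried in each NAL unit header.

```python
from dataclasses import dataclass

@dataclass
class NalUnit:
    nuh_layer_id: int
    temporal_id: int   # nuh_temporal_id_plus1 - 1
    payload: bytes

def extract_sub_bitstream(nal_units, layer_id_list_target, highest_tid_target):
    """Keep only NAL units that belong to the target layer set and whose
    sub-layer does not exceed the target highest temporal ID."""
    return [nal for nal in nal_units
            if nal.nuh_layer_id in layer_id_list_target
            and nal.temporal_id <= highest_tid_target]

# Example: from layer set A = {0, 1, 2}, extract layers {0, 1} up to TID 1.
stream = [NalUnit(0, 0, b""), NalUnit(1, 2, b""),
          NalUnit(2, 0, b""), NalUnit(1, 1, b"")]
sub = extract_sub_bitstream(stream, [0, 1], 1)
```

Discarding the pictures with the highest temporal ID, as in FIGS. 3 and 4, corresponds to calling the filter with a reduced `highest_tid_target`.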
- the lower layer and the upper layer may be encoded by different encoding methods.
- The encoded data of each layer may be supplied to the hierarchical video decoding device 1 via different transmission paths, or may be supplied via the same transmission path.
- For example, when ultra-high-definition video (moving image, 4K video data) is transmitted by scalable coding with a base layer and one enhancement layer, the base layer may downscale the 4K video data to interlaced video data, encode it with MPEG-2 or H.264/AVC, and transmit it over a television broadcast network, while the enhancement layer may encode the 4K video (progressive) with HEVC and transmit it over the Internet.
- FIG. 5 is a diagram showing a hierarchical structure of data in the hierarchically encoded data DATA.
- the hierarchically encoded data DATA is encoded in units called NAL (Network Abstraction Layer) units.
- The NAL (Network Abstraction Layer) is a layer provided to abstract the communication between the VCL and the lower system that transmits and stores the encoded data.
- The VCL (Video Coding Layer) is the layer that performs the image encoding processing; encoding is performed in the VCL.
- The lower system referred to here corresponds to the H.264/AVC and HEVC file formats and the MPEG-2 system. In the example shown below, the lower system corresponds to the decoding processes in the target layer and the reference layer.
- In the NAL, the bitstream generated by the VCL is divided into units called NAL units and transmitted to the destination lower system.
- FIG. 6A shows a syntax table of a NAL (Network Abstraction Layer) unit.
- The NAL unit includes the encoded data encoded by the VCL and a header (NAL unit header: nal_unit_header() (SYNNAL01 in FIG. 6)) for appropriately delivering the encoded data to the destination lower system.
- the NAL unit header is represented by the syntax shown in FIG. 6B, for example.
- In the NAL unit header, "nal_unit_type", which indicates the type of encoded data stored in the NAL unit, "nuh_temporal_id_plus1", which indicates the identifier (temporal identifier) of the sub-layer to which the stored encoded data belongs, and "nuh_layer_id" (or nuh_reserved_zero_6bits), which represents the identifier (layer identifier) of the layer to which the stored encoded data belongs, are described.
- the NAL unit data (SYNNAL02 in FIG. 6) includes a parameter set, SEI, slice, and the like which will be described later.
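The three header fields above can be read with a few bit operations. This is a minimal sketch assuming the HEVC-style two-byte header layout (1-bit forbidden_zero_bit, 6-bit nal_unit_type, 6-bit nuh_layer_id, 3-bit nuh_temporal_id_plus1); the function name is illustrative.

```python
def parse_nal_unit_header(data: bytes):
    b0, b1 = data[0], data[1]
    nal_unit_type = (b0 >> 1) & 0x3F               # 6 bits after forbidden_zero_bit
    nuh_layer_id = ((b0 & 0x01) << 5) | (b1 >> 3)  # 6 bits spanning both bytes
    nuh_temporal_id_plus1 = b1 & 0x07              # low 3 bits; temporal_id = value - 1
    return nal_unit_type, nuh_layer_id, nuh_temporal_id_plus1

# Example: a VPS NAL unit (type 32) in layer 0 with temporal_id 0.
header = bytes([0x40, 0x01])
print(parse_nal_unit_header(header))  # (32, 0, 1)
```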
- FIG. 7 is a diagram showing the relationship between the value of the NAL unit type and the type of the NAL unit.
- A NAL unit whose NAL unit type has a value of 0 to 15 (SYNA101) is a slice of a non-RAP (non-random access point) picture.
- A NAL unit whose NAL unit type has a value of 16 to 21 (SYNA102) is a slice of a RAP picture (random access point picture, IRAP picture).
- RAP pictures are roughly classified into BLA pictures, IDR pictures, and CRA pictures.
- BLA pictures are further classified into BLA_W_LP, BLA_W_DLP, and BLA_N_LP.
- IDR pictures are further classified into IDR_W_DLP and IDR_N_LP.
- Pictures other than the RAP picture include a leading picture (LP picture), a temporal access picture (TSA picture, STSA picture), and a trailing picture (TRAIL picture).
- LP picture: leading picture
- TSA picture, STSA picture: temporal access picture
- TRAIL picture: trailing picture
- the encoded data in each layer is stored in the NAL unit, is NAL-multiplexed, and is transmitted to the hierarchical moving image decoding apparatus 1.
- each NAL unit is classified into data (VCL data) constituting a picture and other data (non-VCL) according to the NAL unit type.
- Pictures are all classified as VCL NAL units regardless of the picture type, such as random access picture, leading picture, or trailing picture. Parameter sets, which are data necessary for decoding a picture, SEI, which is auxiliary information of a picture, and the access unit delimiter (AUD), end of sequence (EOS), and end of bitstream (EOB) (SYNA103 in FIG. 7), which represent delimiters of sequences, are classified as non-VCL NAL units.
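The classification described for FIG. 7 can be sketched directly from the value ranges given above (0 to 15: non-RAP slice; 16 to 21: RAP slice; both VCL); the treatment of all remaining values as non-VCL is a simplification for illustration.

```python
def classify_nal_unit_type(nal_unit_type: int) -> str:
    """Classify a NAL unit type per the value ranges of FIG. 7."""
    if 0 <= nal_unit_type <= 15:
        return "VCL (non-RAP slice)"    # SYNA101
    if 16 <= nal_unit_type <= 21:
        return "VCL (RAP slice)"        # SYNA102: BLA/IDR/CRA pictures
    return "non-VCL"                    # parameter sets, SEI, AUD, EOS, EOB, ...
```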
- a set of NAL units aggregated according to a specific classification rule is called an access unit.
- In the case of a single layer, the access unit is a set of NAL units constituting one picture.
- In the case of scalable coding, the access unit is a set of NAL units constituting the pictures of a plurality of layers at the same time.
- the encoded data may include a NAL unit called an access unit delimiter.
- the access unit delimiter is included between a set of NAL units constituting an access unit in the encoded data and a set of NAL units constituting another access unit.
- FIG. 8 is a diagram illustrating an example of the configuration of the NAL unit included in the access unit.
- As shown in FIG. 8, the AU is composed of NAL units such as an access unit delimiter (AUD) indicating the head of the AU, various parameter sets (VPS, SPS, PPS), various SEIs (Prefix SEI, Suffix SEI), a VCL (slice) constituting one picture when the number of layers is 1, or VCLs constituting the pictures corresponding to the number of layers when the number of layers is larger than 1, an EOS (End of Sequence) indicating the end of the sequence, and an EOB (End of Bitstream) indicating the end of the bitstream.
- The code L#K (K = Nmin .. Nmax) after VPS, SPS, SEI, and VCL represents a layer ID.
- The SPS, PPS, SEI, and VCL of each layer, from layer L#Nmin to layer L#Nmax, appear in ascending order of layer ID, except for the VPS.
- the VPS is sent only with the lowest layer ID.
- An arrow indicates whether a specific NAL unit exists in the AU, or exists repeatedly. For example, if a specific NAL unit exists in the AU, this is indicated by an arrow passing through the NAL unit, and if it does not exist in the AU, this is indicated by an arrow skipping the NAL unit.
- an arrow heading to the VPS without passing through the AUD indicates a case where the AUD does not exist in the AU.
- a VPS having a higher layer ID other than the lowest order may be included in the AU, but the image decoding apparatus ignores a VPS having a layer ID other than the lowest order.
- The various parameter sets (VPS, SPS, PPS) and the SEI as auxiliary information may be included as part of the access unit as shown in FIG. 8, or may be transmitted to the decoder by means different from the bitstream.
- an IRAP access unit that performs initialization of decoding processing of all layers included in the decoding target layer set is referred to as an initialization IRAP access unit.
- An initialization IRAP access unit, followed by zero or more non-initialization IRAP access units (access units other than initialization IRAP access units) up to, but not including, the next initialization IRAP access unit, forms a set of access units that is referred to as a CVS (Coded Video Sequence; hereinafter also the sequence SEQ).
- FIG. 9 is a diagram showing a hierarchical structure of data in the hierarchically encoded data DATA.
- Hierarchically encoded data DATA illustratively includes a sequence and a plurality of pictures constituting the sequence.
- (a) to (f) of FIG. 9 respectively show a sequence layer that defines the sequence SEQ, a picture layer that defines a picture PICT, a slice layer that defines a slice S, a slice data layer that defines slice data, a coding tree layer that defines a coding tree unit included in the slice data, and a coding unit layer that defines a coding unit (CU) included in the coding tree.
- Sequence layer: In the sequence layer, a set of data referred to by the image decoding device 1 for decoding the sequence SEQ to be processed (hereinafter also referred to as the target sequence) is defined.
- As shown in FIG. 9, the sequence SEQ includes a video parameter set VPS (Video Parameter Set), a sequence parameter set SPS (Sequence Parameter Set), a picture parameter set PPS (Picture Parameter Set), pictures PICT, and supplemental enhancement information SEI (Supplemental Enhancement Information).
- # indicates the layer ID.
- Although FIG. 9 shows an example in which encoded data with #0 and #1, that is, layer ID 0 and layer ID 1, exists, the types and number of layers are not limited to this.
- Video parameter set: In the video parameter set VPS, a set of encoding parameters referred to by the image decoding device 1 in order to decode encoded data composed of one or more layers is defined.
- For example, a VPS identifier (video_parameter_set_id) used to identify the VPS referred to by the sequence parameter set and other syntax elements described later, the number of layers included in the encoded data (vps_max_layers_minus1), the number of sublayers included in a layer (vps_sub_layers_minus1), the number of layer sets (vps_num_layer_sets_minus1) specifying the sets of layers expressed in the encoded data, layer set configuration information (layer_id_included_flag[i][j]) specifying the set of layers constituting each layer set, and inter-layer dependency relationships (direct dependency flag direct_dependency_flag[i][j], layer dependency type direct_dependency_type[i][j]) are defined.
- A plurality of VPSs may exist in the encoded data. In that case, a VPS used for decoding is selected from the plurality of candidates for each target sequence.
- A VPS used for decoding a specific sequence belonging to a certain layer is called an active VPS.
- Hereinafter, unless otherwise specified, VPS means the active VPS for the target sequence belonging to a certain layer.
- Bitstream constraint (also referred to as bitstream conformance): Bitstream conformance is a condition that a bitstream to be decoded by the hierarchical video decoding device (here, the hierarchical video decoding device according to the embodiment of the present invention) needs to satisfy.
- In order to guarantee that a bitstream generated by the hierarchical video encoding device (here, the hierarchical video encoding device according to the embodiment of the present invention) can be decoded by the hierarchical video decoding device, the bitstream must satisfy the bitstream conformance. As the bitstream conformance, the bitstream must satisfy at least the following condition CX1.
- (CX1) "The VPS referred to by the target layer belongs to the same layer as the VCL having the lowest layer identifier among the VCLs included in an access unit, which is a set of NAL units of the target layer set."
- In other words, when a layer in layer set B, which is a subset of layer set A extracted from layer set A by bitstream extraction, refers to the VPS of a layer that is "included in layer set A but not included in layer set B", it means that "a VPS having the same encoding parameters as that VPS is included in layer set B".
- Here, a VPS having the same encoding parameters as the VPS means a VPS in which the VPS identifier and the other syntax are the same as those of the VPS, except for the layer identifier and the temporal identifier.
- With the above bitstream constraint, the problem that the VPS is not included in the layer set on the bitstream after bitstream extraction can be solved. That is, it is possible to prevent the generation of a layer that cannot be decoded on a bitstream including only a subset layer set, which is generated from the bitstream of a certain layer set by the bitstream extraction processing.
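The notion of "a VPS having the same encoding parameters" used in the constraint above can be sketched as a comparison that ignores only the layer identifier and the temporal identifier. The dict representation of a decoded parameter set and the field names are hypothetical.

```python
def same_encoding_parameters(vps_a: dict, vps_b: dict) -> bool:
    """Two VPSs are treated as equivalent when all syntax matches,
    excluding the layer identifier and the temporal identifier."""
    ignored = {"nuh_layer_id", "nuh_temporal_id_plus1"}
    keys = (set(vps_a) | set(vps_b)) - ignored
    return all(vps_a.get(k) == vps_b.get(k) for k in keys)

# A VPS carried in layer 0 and an equivalent copy carried in layer 1.
vps_in_a = {"video_parameter_set_id": 0, "vps_max_layers_minus1": 2, "nuh_layer_id": 0}
vps_in_b = {"video_parameter_set_id": 0, "vps_max_layers_minus1": 2, "nuh_layer_id": 1}
```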
- As the bitstream conformance, the bitstream must satisfy at least the following condition CX2.
- The condition CX2 can be rephrased as the following condition CX2'.
- (CX2') "The VPS referred to by the target layer is the VPS having the lowest layer identifier in the target layer set" means that, when a layer in layer set B, which is a subset of layer set A, refers to the VPS of a layer included in layer set A but not in layer set B, a VPS having the same encoding parameters as that VPS is "included in layer set B".
- Thereby, the problem that may occur in the prior art shown in FIG. 1, namely the generation of layers that cannot be decoded on a bitstream including only a subset layer set generated by the bitstream extraction processing from the bitstream of a certain layer set, can be prevented.
- Sequence parameter set: In the sequence parameter set SPS, a set of encoding parameters referred to by the image decoding device 1 in order to decode the target sequence is defined.
- For example, an active VPS identifier (sps_video_parameter_set_id) representing the active VPS referred to by the target SPS, and an SPS identifier (sps_seq_parameter_set_id) used to identify the SPS referred to by the PPS and other syntax elements described later, are included.
- a plurality of SPSs may exist in the encoded data. In that case, an SPS used for decoding is selected from a plurality of candidates for each target sequence.
- An SPS used for decoding a specific sequence belonging to a certain layer is also called an active SPS.
- the SPS applied to the base layer and the enhancement layer may be distinguished, the SPS for the base layer may be referred to as an active SPS, and the SPS for the enhancement layer may be referred to as an active layer SPS.
- the SPS means an active SPS used for decoding a target sequence belonging to a certain layer.
- Picture parameter set: In the picture parameter set PPS, a set of encoding parameters referred to by the image decoding device 1 for decoding each picture in the target sequence is defined.
- For example, an active SPS identifier (pps_seq_parameter_set_id) representing the active SPS referred to by the target PPS, a PPS identifier (pps_pic_parameter_set_id) used to identify the PPS referred to by a slice header and other syntax elements described later, a reference value of the quantization width used for decoding a picture (pic_init_qp_minus26), a flag indicating application of weighted prediction (weighted_pred_flag), and a scaling list (quantization matrix) are included.
- A plurality of PPSs may exist. In that case, one of the plurality of PPSs is selected for each picture in the target sequence.
- a PPS used for decoding a specific picture belonging to a certain layer is called an active PPS.
- PPS applied to the base layer and the enhancement layer may be distinguished, and PPS for the base layer may be referred to as active PPS, and PPS for the enhancement layer may be referred to as active layer PPS.
- PPS means active PPS for a target picture belonging to a certain layer.
- the active SPS and the active PPS may be set to different SPS or PPS for each layer. That is, the decoding process can be executed with reference to different SPSs and PPSs for each layer.
- Picture layer: In the picture layer, a set of data referred to by the hierarchical video decoding device 1 in order to decode a picture PICT to be processed (hereinafter also referred to as the target picture) is defined. As shown in FIG. 9(b), the picture PICT includes slices S0 to SNS-1 (NS is the total number of slices included in the picture PICT).
- Slice layer: In the slice layer, a set of data referred to by the hierarchical video decoding device 1 in order to decode a slice S to be processed (also referred to as the target slice) is defined. As shown in FIG. 9(c), the slice S includes a slice header SH and slice data SDATA.
- the slice header SH includes a coding parameter group that the hierarchical video decoding device 1 refers to in order to determine a decoding method of the target slice.
- an active PPS identifier (slice_pic_parameter_set_id) that specifies a PPS (active PPS) that is referred to for decoding the target slice is included.
- the SPS referred to by the active PPS is specified by an active SPS identifier (pps_seq_parameter_set_id) included in the active PPS.
- the VPS (active VPS) referred to by the active SPS is specified by an active VPS identifier (sps_video_parameter_set_id) included in the active SPS.
- FIG. 10 shows the reference relationship between the header information and the encoded data constituting the access unit (AU).
- the PPS (active PPS) used for decoding is designated (also called activation) by the identifier at the start of decoding of each slice.
- the encoding parameters of each PPS, SPS, and VPS referenced from each slice must be the same.
- The activated PPS includes an active SPS identifier that designates the SPS (active SPS) to be referred to in the decoding process, and the SPS (active SPS) used for decoding is designated (activated) by that identifier.
- The activated SPS includes an active VPS identifier that designates the VPS (active VPS) to be referred to in the decoding process of the sequence belonging to each layer, and the VPS (active VPS) used for decoding is designated (activated) by that identifier.
- the parameter set necessary for executing the decoding process of the encoded data of each layer is determined by the above procedure.
- A slice with layer ID L#Nmin refers to a parameter set having the same layer ID.
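The activation chain described above (slice header to PPS to SPS to VPS) can be sketched as a chain of identifier lookups. The dict structures are hypothetical stand-ins for already-decoded parameter sets indexed by their identifiers.

```python
def activate_parameter_sets(slice_header, pps_table, sps_table, vps_table):
    """Follow the activation chain: slice_pic_parameter_set_id -> PPS,
    pps_seq_parameter_set_id -> SPS, sps_video_parameter_set_id -> VPS."""
    active_pps = pps_table[slice_header["slice_pic_parameter_set_id"]]
    active_sps = sps_table[active_pps["pps_seq_parameter_set_id"]]
    active_vps = vps_table[active_sps["sps_video_parameter_set_id"]]
    return active_vps, active_sps, active_pps

vps_table = {0: {"video_parameter_set_id": 0}}
sps_table = {0: {"sps_seq_parameter_set_id": 0, "sps_video_parameter_set_id": 0}}
pps_table = {1: {"pps_pic_parameter_set_id": 1, "pps_seq_parameter_set_id": 0}}
vps, sps, pps = activate_parameter_sets({"slice_pic_parameter_set_id": 1},
                                        pps_table, sps_table, vps_table)
```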
- slice type designation information for designating a slice type is an example of an encoding parameter included in the slice header SH.
- Slice types that can be designated by the slice type designation information include (1) an I slice that uses only intra prediction at the time of encoding, (2) a P slice that uses unidirectional prediction or intra prediction at the time of encoding, and (3) a B slice that uses unidirectional prediction, bidirectional prediction, or intra prediction at the time of encoding.
- Slice data layer: In the slice data layer, a set of data referred to by the hierarchical video decoding device 1 for decoding the slice data SDATA to be processed is defined.
- the slice data SDATA includes a coded tree block (CTB) as shown in (d) of FIG.
- CTB is a fixed-size block (for example, 64 ⁇ 64) constituting a slice, and may be referred to as a maximum coding unit (LCU).
- the coding tree layer defines a set of data that the hierarchical video decoding device 1 refers to in order to decode a coding tree block to be processed.
- the coding tree unit is divided by recursive quadtree division.
- a tree-structured node obtained by recursive quadtree partitioning is called a coding tree.
- An intermediate node of the quadtree is a coded tree unit (CTU), and the coded tree block itself is defined as the highest CTU.
- the CTU includes a split flag (split_flag). When the split_flag is 1, the CTU is split into four coding tree units CTU.
- When split_flag is 0, the coding tree unit CTU is not divided further and becomes a coding unit (CU: Coded Unit).
- the coding unit CU is a terminal node of the coding tree layer and is not further divided in this layer.
- the encoding unit CU is a basic unit of the encoding process.
- The size of the coding tree unit CTU and the sizes that each coding unit can take are determined according to size designation information of the minimum coding node and the difference in hierarchical depth between the maximum coding node and the minimum coding node, which are included in the sequence parameter set SPS.
- For example, when the size of the coding tree unit CTU is 64×64 pixels, the size of a coding node can take any of four sizes: 64×64 pixels, 32×32 pixels, 16×16 pixels, and 8×8 pixels.
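The recursive quadtree division described above can be sketched as follows; here the split decisions, which in the actual bitstream are the decoded split_flag values, are supplied by a hypothetical callback.

```python
def split_ctu(x, y, size, min_size, should_split):
    """Return the leaf coding units of a CTU as (x, y, size) tuples.
    A node with should_split(...) true (split_flag == 1) is divided
    into four quadrants; otherwise it becomes a coding unit."""
    if size > min_size and should_split(x, y, size):
        half = size // 2
        cus = []
        for dx, dy in [(0, 0), (half, 0), (0, half), (half, half)]:
            cus += split_ctu(x + dx, y + dy, half, min_size, should_split)
        return cus
    return [(x, y, size)]

# Example: a 64x64 CTU split once, then only its top-left 32x32 node split again.
leaves = split_ctu(0, 0, 64, 8,
                   lambda x, y, s: s == 64 or (s == 32 and (x, y) == (0, 0)))
```

This yields four 16×16 CUs in the top-left quadrant and three 32×32 CUs elsewhere, all sizes within the 64×64 to 8×8 range stated above.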
- The partial area on the target picture decoded through a coding tree unit is called a coding tree block (CTB: Coding Tree Block).
- The CTB corresponding to the luminance picture, which is the luminance component of the target picture, is called a luminance CTB. In other words, the luminance CTB is the partial area on the luminance picture decoded from a CTU, and the color difference CTB is the partial area on the color difference picture decoded from a CTU.
- the luminance CTB size and the color difference CTB size can be converted into each other. For example, when the color format is 4: 2: 2, the color difference CTB size is half of the luminance CTB size.
- the CTB size means the luminance CTB size.
- the CTU size is a luminance CTB size corresponding to the CTU.
- the encoding unit layer defines a set of data that the hierarchical video decoding device 1 refers to in order to decode the processing target encoding unit.
- the coding unit CU (coding unit) includes a CU header CUH, a prediction tree, and a conversion tree.
- the CU header CUH it is defined whether the coding unit is a unit using intra prediction or a unit using inter prediction.
- The coding unit is the root of a prediction tree (PT) and a transform tree (TT).
- The partial area on the picture corresponding to a coding unit is called a coding block (CB). A CB on the luminance picture is called a luminance CB, and a CB on the color difference picture is called a color difference CB.
- the CU size (encoding node size) means the luminance CB size.
- In the transform tree, the coding unit CU is divided into one or a plurality of transform blocks, and the position and size of each transform block are defined.
- the transform block is one or a plurality of non-overlapping areas constituting the encoding unit CU.
- the conversion tree includes one or a plurality of conversion blocks obtained by the above division. Note that information regarding the conversion tree included in the CU and information included in the conversion tree are referred to as TT information.
- the division in the transformation tree includes the one in which an area having the same size as that of the encoding unit is assigned as the transformation block, and the one in the recursive quadtree division like the above-described division in the tree block.
- the conversion process is performed for each conversion block.
- the transform block which is a unit of transform is also referred to as a transform unit (TU).
- The TT information includes TT division information SP_TT for designating a division pattern for the transform blocks of the target CU, and quantized prediction residuals QD1 to QDNT (NT is the total number of transform blocks included in the target CU).
- TT division information SP_TT is information for determining the shape of each transformation block included in the target CU and the position in the target CU.
- Specifically, the TT division information SP_TT can be realized from information indicating whether or not the target node is divided (split_transform_unit_flag) and information indicating the division depth (transfoDepth).
- each transform block obtained by the division can take a size from 32 ⁇ 32 pixels to 4 ⁇ 4 pixels.
- Each quantized prediction residual QD is encoded data generated by the hierarchical video encoding device 2 performing the following Processes 1 to 3 on the target block, which is the transform block to be processed.
- Process 1: Apply a frequency transform (for example, a DCT (Discrete Cosine Transform) or a DST (Discrete Sine Transform)) to the prediction residual obtained by subtracting the prediction image from the encoding target image.
- Process 2: Quantize the transform coefficients obtained in Process 1.
- Process 3: Variable-length code the transform coefficients quantized in Process 2.
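Processes 1 and 2 above can be sketched as follows (the entropy-coding step of Process 3 is omitted). An orthonormal 2-D DCT-II is computed directly from its definition, and quantization divides each coefficient by a uniform step and rounds; the step size and block content are illustrative values, not values from the standard.

```python
import math

def dct2(block):
    """Orthonormal 2-D DCT-II of an n x n block (Process 1)."""
    n = len(block)
    def a(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    return [[a(u) * a(v) * sum(block[i][j]
                               * math.cos((2 * i + 1) * u * math.pi / (2 * n))
                               * math.cos((2 * j + 1) * v * math.pi / (2 * n))
                               for i in range(n) for j in range(n))
             for v in range(n)] for u in range(n)]

def quantize(coeffs, step):
    """Uniform scalar quantization of the transform coefficients (Process 2)."""
    return [[round(c / step) for c in row] for row in coeffs]

# A flat 4x4 prediction residual transforms into a single DC coefficient.
residual = [[10] * 4 for _ in range(4)]
levels = quantize(dct2(residual), step=8)
```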
- In the prediction tree, the coding unit CU is divided into one or a plurality of prediction blocks, and the position and size of each prediction block are defined.
- the prediction block is one or a plurality of non-overlapping areas constituting the coding unit CU.
- the prediction tree includes one or a plurality of prediction blocks obtained by the above division. Note that the information regarding the prediction tree included in the CU and the information included in the prediction tree are referred to as PT information.
- Prediction processing is performed for each prediction block.
- a prediction block that is a unit of prediction is also referred to as a prediction unit (PU).
- Intra prediction is prediction within the same picture
- Inter prediction refers to prediction processing performed between mutually different pictures (for example, between different display times or between layer images).
- In inter prediction, a predicted image is generated using a decoded picture as a reference picture, where either a reference picture in the same layer as the target layer (in-layer reference picture) or a reference picture on a reference layer of the target layer (inter-layer reference picture) is used.
- The division method is encoded by part_mode of the encoded data, and includes 2N×2N (the same size as the coding unit), 2N×N, 2N×nU, 2N×nD, N×2N, nL×2N, nR×2N, and N×N.
- Note that N = 2^m (m is an arbitrary integer of 1 or more).
- The number of PUs included in the CU is 1 to 4. These PUs are expressed as PU0, PU1, PU2, and PU3 in this order.
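The division patterns listed above can be sketched as a table mapping each part_mode to the (x, y, width, height) of its PUs, numbered PU0, PU1, ... in order. The mode labels are the symbolic names from the text; the concrete mapping below is illustrative, assuming the asymmetric modes split at one quarter of the CU size.

```python
def pu_partitions(part_mode, s):
    """Return the PU rectangles for a CU of size s x s pixels (s = 2N)."""
    q = s // 4  # quarter size used by the asymmetric modes
    table = {
        "2Nx2N": [(0, 0, s, s)],
        "2NxN":  [(0, 0, s, s // 2), (0, s // 2, s, s // 2)],
        "Nx2N":  [(0, 0, s // 2, s), (s // 2, 0, s // 2, s)],
        "NxN":   [(0, 0, s // 2, s // 2), (s // 2, 0, s // 2, s // 2),
                  (0, s // 2, s // 2, s // 2), (s // 2, s // 2, s // 2, s // 2)],
        "2NxnU": [(0, 0, s, q), (0, q, s, s - q)],
        "2NxnD": [(0, 0, s, s - q), (0, s - q, s, q)],
        "nLx2N": [(0, 0, q, s), (q, 0, s - q, s)],
        "nRx2N": [(0, 0, s - q, s), (s - q, 0, q, s)],
    }
    return table[part_mode]
```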
- the prediction image of the prediction unit is derived by a prediction parameter associated with the prediction unit.
- the prediction parameters include a prediction parameter for intra prediction or a prediction parameter for inter prediction.
- the intra prediction parameter is a parameter for restoring intra prediction (prediction mode) for each intra PU.
- The parameters for restoring the prediction mode include mpm_flag, which is a flag related to the MPM (Most Probable Mode; the same applies hereinafter), mpm_idx, which is an index for selecting an MPM, and rem_idx, which is an index for designating a prediction mode other than the MPM.
- MPM is an estimated prediction mode that is highly likely to be selected in the target partition.
- the MPM may include an estimated prediction mode estimated based on prediction modes assigned to partitions around the target partition, and a DC mode or Planar mode that generally has a high probability of occurrence.
- prediction mode when simply described as “prediction mode”, it means the luminance prediction mode unless otherwise specified.
- the color difference prediction mode is described as “color difference prediction mode” and is distinguished from the luminance prediction mode.
- the parameter for restoring the prediction mode includes chroma_mode that is a parameter for designating the color difference prediction mode.
- the inter prediction parameter includes prediction list use flags predFlagL0 and predFlagL1, reference picture indexes refIdxL0 and refIdxL1, and vectors mvL0 and mvL1.
- the prediction list use flags predFlagL0 and predFlagL1 are flags indicating whether or not reference picture lists called L0 reference list and L1 reference list are used, respectively, and a reference picture list corresponding to a value of 1 is used.
- For example, (predFlagL0, predFlagL1) = (0, 1) corresponds to single prediction (uni-prediction).
- Syntax elements for deriving the inter prediction parameters included in the encoded data include, for example, a partition mode part_mode, a merge flag merge_flag, a merge index merge_idx, an inter prediction identifier inter_pred_idc, a reference picture index refIdxLX, a prediction vector index mvp_LX_idx, and a difference vector mvdLX.
- Each value of the prediction list use flag is derived from the inter prediction identifier as follows: predFlagL0 = inter_pred_idc & 1 and predFlagL1 = inter_pred_idc >> 1, where & is a logical product (bitwise AND) and >> is a right shift.
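The bit-level derivation above can be sketched as follows, assuming the encoding Pred_L0 = 1, Pred_L1 = 2, Pred_Bi = 3 (an assumption for illustration; the concrete values are not given in this passage).

```python
PRED_L0, PRED_L1, PRED_BI = 1, 2, 3  # assumed values of inter_pred_idc

def pred_flags(inter_pred_idc):
    pred_flag_l0 = inter_pred_idc & 1         # lowest bit: L0 reference list used
    pred_flag_l1 = (inter_pred_idc >> 1) & 1  # next bit: L1 reference list used
    return pred_flag_l0, pred_flag_l1
```

Under this encoding, (predFlagL0, predFlagL1) = (0, 1) for Pred_L1, matching the uni-prediction example above, and (1, 1) for Pred_Bi.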
- FIG. 11A is a conceptual diagram illustrating an example of a reference picture list.
- In each of the reference picture lists RPL0 and RPL1, the five rectangles arranged in a horizontal line each indicate a reference picture.
- In RPL0, the codes P1, P2, Q0, P3, and P4, shown in order from the left end to the right, indicate the respective reference pictures.
- In RPL1, the codes P4, P3, R0, P2, and P1, shown in order from the left end to the right, indicate the respective reference pictures.
- a downward arrow directly below refIdxL0 indicates that the reference picture index refIdxL0 is an index that refers to the reference picture Q0 from the reference picture list RPL0 in the decoded picture buffer.
- a downward arrow directly below refIdxL1 indicates that the reference picture index refIdxL1 is an index that refers to the reference picture P3 from the reference picture list RPL1 in the decoded picture buffer.
- FIG. 11B is a conceptual diagram illustrating an example of a reference picture.
- the horizontal axis indicates the display time
- the vertical axis indicates the number of layers.
- the illustrated rectangles of three rows and three columns (total of nine) each indicate a picture.
- the rectangle in the second column from the left in the lower row indicates a picture to be decoded (target picture), and the remaining eight rectangles indicate reference pictures.
- The reference pictures Q2 and R2, indicated by downward arrows from the target picture, are pictures that have the same display time as the target picture but belong to different layers.
- In displacement prediction based on the target picture, the reference picture Q2 or R2 is used.
- a reference picture P1 indicated by a left-pointing arrow from the target picture is the same layer as the target picture and is a past picture.
- a reference picture P3 indicated by a rightward arrow from the target picture is the same layer as the target picture and is a future picture.
- In motion prediction based on the target picture, the reference picture P1 or P3 is used.
- the inter prediction parameter decoding (encoding) method includes a merge prediction mode and an AMVP (Adaptive Motion Vector Prediction) mode.
- The merge flag merge_flag is a flag for identifying these modes.
- In the merge prediction mode, the prediction parameters of the target PU are derived using the prediction parameters of already processed blocks, and the already derived prediction parameters are used as they are, without including the prediction list use flag predFlagLX (inter prediction identifier inter_pred_idc), the reference picture index refIdxLX, or the vector mvLX in the encoded data.
- In the AMVP mode, the inter prediction identifier inter_pred_idc, the reference picture index refIdxLX, and the vector mvLX are included in the encoded data.
- the vector mvLX is encoded as a prediction vector index mvp_LX_idx indicating a prediction vector and a difference vector (mvdLX).
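The AMVP-mode encoding of mvLX described above implies the decoder-side reconstruction mvLX = mvpLX + mvdLX: the prediction vector mvpLX is selected from a candidate list with mvp_LX_idx, and the decoded difference vector mvdLX is added. The candidate list here is hypothetical.

```python
def reconstruct_mv(mvp_candidates, mvp_lx_idx, mvd_lx):
    """Select the prediction vector mvpLX by index and add the
    difference vector mvdLX to recover the vector mvLX."""
    mvp_lx = mvp_candidates[mvp_lx_idx]
    return (mvp_lx[0] + mvd_lx[0], mvp_lx[1] + mvd_lx[1])

mv = reconstruct_mv([(4, -2), (0, 0)], mvp_lx_idx=0, mvd_lx=(1, 3))
```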
- the inter prediction identifier inter_pred_idc is data indicating the type and number of reference pictures, and takes one of the values Pred_L0, Pred_L1, and Pred_Bi.
- Pred_L0 and Pred_L1 indicate that reference pictures stored in a reference picture list called an L0 reference list and an L1 reference list are used, respectively, and that both use one reference picture (single prediction).
- Prediction using the L0 reference list and the L1 reference list are referred to as L0 prediction and L1 prediction, respectively.
- Pred_Bi indicates that two reference pictures are used (bi-prediction), and indicates that two reference pictures stored in the L0 reference list and the L1 reference list are used.
- the prediction vector index mvp_LX_idx is an index indicating a prediction vector
- the reference picture index refIdxLX is an index indicating a reference picture stored in the reference picture list.
- LX is a notation used when L0 prediction and L1 prediction are not distinguished.
- For example, refIdxL0 is the reference picture index used for L0 prediction, refIdxL1 is the reference picture index used for L1 prediction, and refIdx (refIdxLX) is the notation used when refIdxL0 and refIdxL1 are not distinguished.
- the merge index merge_idx is an index indicating which one of the prediction parameter candidates (merge candidates) derived from the processed block is used as the prediction parameter of the decoding target block.
- the vector mvLX includes a motion vector and a displacement vector (disparity vector).
- A motion vector is a vector indicating the positional shift between the position of a block in a picture of a certain layer at a certain display time and the position of the corresponding block in a picture of the same layer at a different display time (for example, the adjacent display time).
- the displacement vector is a vector indicating a positional shift between the position of a block in a picture at a certain display time of a certain layer and the position of a corresponding block in a picture of a different layer at the same display time.
- the pictures of different layers may be pictures with the same resolution and different quality, pictures with different viewpoints, or pictures with different resolutions.
- a displacement vector corresponding to pictures of different viewpoints is called a disparity vector.
- A prediction vector and a difference vector related to the vector mvLX are referred to as the prediction vector mvpLX and the difference vector mvdLX, respectively.
- Whether the vector mvLX and the difference vector mvdLX are motion vectors or displacement vectors is determined using a reference picture index refIdxLX associated with the vectors.
- The parameters described above may be encoded independently, or a plurality of parameters may be encoded in combination.
- When parameters are encoded in combination, an index is assigned to each combination of parameter values, and the assigned index is encoded.
- When a parameter can be derived from other parameters or from previously decoded information, the encoding of that parameter can be omitted.
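The combined coding described above can be sketched as a mapping between a single index and a tuple of parameter values. The combination table and function names below are hypothetical, chosen for illustration only; they do not reproduce any standardized table.

```python
# Hypothetical combination table: (prediction mode, reference picture index).
# In combined coding, only the position in this table is encoded.
COMBINATIONS = [
    ("Pred_L0", 0),  # L0 uni-prediction, refIdxL0 = 0
    ("Pred_L0", 1),  # L0 uni-prediction, refIdxL0 = 1
    ("Pred_L1", 0),  # L1 uni-prediction, refIdxL1 = 0
    ("Pred_Bi", 0),  # bi-prediction, reference index 0 in both lists
]

def encode_combination(pred_mode, ref_idx):
    """Return the single index assigned to (prediction mode, reference index)."""
    return COMBINATIONS.index((pred_mode, ref_idx))

def decode_combination(idx):
    """Recover the individual parameter values from the combined index."""
    return COMBINATIONS[idx]
```

Encoding one index instead of two syntax elements is what allows the encoding of the individual parameters to be omitted.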
- [Hierarchical video decoding device]
- The configuration of the hierarchical video decoding device 1 according to the present embodiment will be described below with reference to FIGS. 19 to 21.
- FIG. 19 is a schematic diagram illustrating a configuration of the hierarchical video decoding device 1 according to the present embodiment.
- The hierarchical video decoding device 1 decodes the hierarchically encoded data DATA supplied from the hierarchical video encoding device 2, based on the layer ID list LayerIdListTarget of the target layer set LayerSetTarget to be decoded, which is included in the hierarchically encoded data DATA supplied from the outside, and on the target highest temporal identifier HighestTidTarget designating the highest sublayer associated with the layers to be decoded, and generates a decoded image POUT#T of each layer included in the target layer set.
- The hierarchical video decoding device 1 decodes the encoded data of the pictures of each layer in ascending order of layer ID, from the lowest layer ID to the highest layer ID included in the target layer set, and outputs the decoded images (decoded pictures).
- That is, the encoded data of the pictures of each layer is decoded in the order LayerIdListTarget[0], ..., LayerIdListTarget[N-1] of the layer ID list of the target layer set (where N is the number of layers included in the target layer set).
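The decoding order above can be sketched as follows; `decode_picture` stands in for the per-layer picture decoding described later, and the names are illustrative:

```python
def decode_access_unit(layer_id_list_target, decode_picture):
    """Sketch: decode the pictures of one access unit in the order given by
    the target layer set's layer ID list, i.e. LayerIdListTarget[0] ...
    LayerIdListTarget[N-1] (lowest to highest layer ID)."""
    decoded_pictures = []
    for layer_id in layer_id_list_target:
        decoded_pictures.append(decode_picture(layer_id))
    return decoded_pictures
```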
- the hierarchical video decoding device 1 includes a NAL demultiplexing unit 11 and a target layer set picture decoding unit 10. Further, the target layer set picture decoding unit 10 includes a parameter set decoding unit 12, a parameter set management unit 13, a picture decoding unit 14, and a decoded picture management unit 15.
- the NAL demultiplexing unit 11 further includes a bit stream extraction unit 17.
- The hierarchically encoded data DATA includes, in addition to the NAL units generated by the VCL, NAL units containing the parameter sets (VPS, SPS, PPS), SEI, and the like. These NAL units are called non-VCL NAL units, as opposed to VCL NAL units.
- Broadly, the bit stream extraction unit 17 included in the NAL demultiplexing unit 11 performs a bitstream extraction process based on the layer ID list LayerIdListTarget of the target layer set LayerSetTarget supplied from the outside and the target highest temporal identifier HighestTidTarget, and extracts from the hierarchically encoded data DATA the target layer set encoded data DATA#T (BitstreamToDecode), which consists of the NAL units included in the set (referred to as the target set TargetSet) determined by the target highest temporal identifier HighestTidTarget and the layer ID list LayerIdListTarget of the target layer set LayerSetTarget. Details of the processing in the bit stream extraction unit 17 that is highly relevant to the present invention will be described later.
- The NAL demultiplexing unit 11 demultiplexes the target layer set encoded data DATA#T (BitstreamToDecode) extracted by the bit stream extraction unit 17 with reference to the NAL unit type, the layer identifier (layer ID), and the temporal identifier (temporal ID) included in each NAL unit, and supplies the NAL units included in the target layer set to the target layer set picture decoding unit 10.
- Among the NAL units included in the supplied target layer set encoded data DATA#T, the target layer set picture decoding unit 10 supplies the non-VCL NAL units to the parameter set decoding unit 12 and the VCL NAL units to the picture decoding unit 14. That is, the target layer set picture decoding unit 10 decodes the NAL unit header of each supplied NAL unit and, based on the NAL unit type, the layer identifier, and the temporal identifier included in the decoded NAL unit header, supplies the non-VCL encoded data to the parameter set decoding unit 12 and the VCL encoded data to the picture decoding unit 14, each together with the decoded NAL unit type, layer identifier, and temporal identifier.
- the parameter set decoding unit 12 decodes the parameter set, that is, VPS, SPS, and PPS, from the input non-VCL NAL, and supplies them to the parameter set management unit 13. Details of processing highly relevant to the present invention in the parameter set decoding unit 12 will be described later.
- the parameter set management unit 13 holds the encoded parameter of the parameter set for each identifier of the parameter set. Specifically, in the case of VPS, a VPS encoding parameter is held for each VPS identifier (video_parameter_set_id). In the case of SPS, the SPS encoding parameter is held for each SPS identifier (sps_seq_parameter_set_id). In the case of PPS, the PPS encoding parameter is held for each PPS identifier (pps_pic_parameter_set_id).
- The parameter set management unit 13 supplies the picture decoding unit 14 with the encoding parameters of the parameter sets (active parameter sets) that are referred to by the picture decoding unit 14, described later, for decoding a picture.
- the active PPS is specified by the active PPS identifier (slice_pic_parameter_set_id) included in the slice header SH decoded by the picture decoding unit 14.
- an active SPS is specified by an active SPS identifier (pps_seq_parameter_set_id) included in the specified active PPS.
- the active VPS is specified by the active VPS identifier (sps_video_parameter_set_id) included in the active SPS.
- designating a parameter set referred to for decoding a picture is also referred to as “activating a parameter set”.
- designating active PPS, active SPS, and active VPS is referred to as “activate PPS”, “activate SPS”, and “activate VPS”, respectively.
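The activation chain described above (slice header identifies the active PPS, which identifies the active SPS, which identifies the active VPS) can be sketched as follows. The dictionary-based tables stand in for the per-identifier storage of the parameter set management unit 13; the function name and data shapes are illustrative assumptions:

```python
def activate_parameter_sets(slice_header, pps_table, sps_table, vps_table):
    """Sketch of parameter set activation: slice_pic_parameter_set_id in the
    slice header selects the active PPS, whose pps_seq_parameter_set_id
    selects the active SPS, whose sps_video_parameter_set_id selects the
    active VPS."""
    active_pps = pps_table[slice_header["slice_pic_parameter_set_id"]]
    active_sps = sps_table[active_pps["pps_seq_parameter_set_id"]]
    active_vps = vps_table[active_sps["sps_video_parameter_set_id"]]
    return active_vps, active_sps, active_pps
```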
- the picture decoding unit 14 generates a decoded picture based on the input VCL NAL, the active parameter set (active PPS, active SPS, active VPS), and the reference picture, and supplies the decoded picture to the decoded picture management unit 15.
- The supplied decoded picture is recorded in a buffer in the decoded picture management unit 15. The picture decoding unit 14 will be described in detail later.
- The decoded picture management unit 15 records the input decoded picture in an internal decoded picture buffer (DPB: Decoded Picture Buffer), generates reference picture lists, and determines output pictures. The decoded picture management unit 15 also outputs the decoded pictures recorded in the DPB to the outside as output pictures POUT#T at predetermined timings.
- The bit stream extraction unit 17 performs a bitstream extraction process based on the layer ID list LayerIdListTarget of the layers constituting the target layer set LayerSetTarget supplied from the outside and the target highest temporal identifier HighestTidTarget. NAL units not included in the set (referred to as the target set TargetSet) determined by the target highest temporal identifier HighestTidTarget and the layer ID list LayerIdListTarget of the target layer set LayerSetTarget are removed (discarded) from the input hierarchically encoded data DATA, and the target layer set encoded data DATA#T (BitstreamToDecode), composed of the NAL units included in the target set TargetSet, is extracted and output.
- FIG. 27 is a flowchart showing bit stream extraction processing in units of access units in the bit stream extraction unit 17.
- (SG101) The bit stream extraction unit 17 decodes the NAL unit header of the supplied target NAL unit according to the syntax table shown in FIG. That is, the NAL unit type (nal_unit_type), the layer identifier (nuh_layer_id), and the temporal identifier (nuh_temporal_id_plus1) are decoded. Note that the layer identifier nuhLayerId of the target NAL unit is set to “nuh_layer_id”, and the temporal identifier temporalId of the target NAL unit is set to “nuh_temporal_id_plus1 - 1”.
- Whether or not the NAL unit type (nal_unit_type) of the target NAL unit is a parameter set is determined based on “nal_unit_type” and “Name of nal_unit_type” shown in FIG.
- (SG103) It is determined whether the layer identifier of the target NAL unit is included in the target set. More specifically, it is determined whether the layer ID list LayerIdListTarget of the layers constituting the target layer set LayerSetTarget contains the same value as the layer identifier of the target NAL unit.
- If it is included (Yes in SG103), the process proceeds to step SG105.
- Otherwise (No in SG103), the process proceeds to step SG104.
- (SG105) Whether or not the layer identifier and the temporal identifier of the target NAL unit are included in the target set TargetSet is determined based on the layer ID list LayerIdListTarget of the layers constituting the target layer set LayerSetTarget and the target highest temporal identifier. More specifically, it is determined whether the following conditions (1) and (2) are both satisfied. If at least one of the conditions is not satisfied (No in SG105), the process proceeds to step SG106. If both conditions are satisfied (Yes in SG105), the process proceeds to step SG107.
- Condition (1): if “the layer ID list LayerIdListTarget of the layers constituting the target layer set LayerSetTarget contains the same value as the layer identifier of the target NAL unit”, the condition is determined to be true; otherwise (the layer ID list LayerIdListTarget of the layers constituting the target layer set LayerSetTarget does not contain the same value as the layer identifier of the target NAL unit), it is determined to be false.
- Condition (2): if “the temporal identifier of the target NAL unit is less than or equal to the target highest temporal identifier HighestTidTarget”, the condition is determined to be true; otherwise (the temporal identifier of the target NAL unit is greater than the target highest temporal identifier HighestTidTarget), it is determined to be false.
- (SG106) The target NAL unit is discarded. That is, since the target NAL unit is not included in the target set TargetSet, the bit stream extraction unit 17 removes the target NAL unit from the input hierarchically encoded data DATA.
- (SG107) It is determined whether there is an unprocessed NAL unit in the same access unit. If there is an unprocessed NAL unit (Yes in SG107), the process returns to step SG101 in order to continue the bitstream extraction for the NAL units constituting the target access unit. Otherwise (No in SG107), the process proceeds to step SG10A.
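A minimal sketch of the extraction loop (steps SG101 to SG107) follows. Each NAL unit is modeled as a dict holding its already-decoded layer identifier and temporal identifier; the parameter set handling of steps SG103/SG104 (layer identifier rewriting) is omitted for simplicity, so this is an illustrative sketch rather than the full procedure:

```python
def extract_bitstream(nal_units, layer_id_list_target, highest_tid_target):
    """Sketch of the per-access-unit bitstream extraction loop.
    NAL units whose layer identifier is not in LayerIdListTarget, or whose
    temporal identifier exceeds HighestTidTarget, are discarded (SG106);
    the remaining units form the target set TargetSet."""
    target_set = []
    for nal in nal_units:  # SG101: header fields are assumed already decoded
        in_layer_set = nal["layer_id"] in layer_id_list_target    # condition (1)
        in_sub_layers = nal["temporal_id"] <= highest_tid_target  # condition (2)
        if in_layer_set and in_sub_layers:
            target_set.append(nal)  # kept: part of TargetSet
        # else: SG106, the NAL unit is removed from the bitstream
    return target_set
```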
- The operation of the bit stream extraction unit 17 has been described above. However, the operation is not limited to the above steps, and the steps may be modified within a practicable range.
- As described above, in the bit stream extraction unit 17, the bitstream extraction process is performed based on the layer ID list LayerIdListTarget of the layers constituting the target layer set LayerSetTarget supplied from the outside and the target highest temporal identifier HighestTidTarget.
- NAL units not included in the set (called the target set TargetSet) determined by the target highest temporal identifier HighestTidTarget and the layer ID list LayerIdListTarget of the target layer set LayerSetTarget are removed (discarded) from the hierarchically encoded data DATA.
- The target layer set encoded data DATA#T (BitstreamToDecode), composed of the NAL units included in the target set TargetSet, is extracted and output.
- Further, the bit stream extraction unit 17 is characterized in that it updates (rewrites) the layer identifier of the video parameter set to the lowest layer identifier in the target set TargetSet.
- The above description of the bit stream extraction unit 17 is based on the premise that “at most one VPS, having the lowest layer identifier in the AU, is included in each AU constituting the input hierarchically encoded data DATA”. However, the operation is not limited to this. For example, a VPS having a layer identifier other than the lowest layer identifier in the AU may be included in the AU. In this case, in step SG104, the bit stream extraction unit 17 may treat a VPS whose layer identifier is not included in the target set TargetSet as the target of the layer identifier update.
- According to the bit stream extraction unit 17 described above, it is possible to prevent the problem that no VPS is included in the layer set on the bitstream after bitstream extraction. That is, it is possible to prevent a bitstream containing only a subset layer set of a certain layer set, generated from the bitstream of that layer set by the bitstream extraction process, from containing a layer that cannot be decoded.
- the parameter set decoding unit 12 decodes a parameter set (VPS, SPS, PPS) used for decoding the target layer set from the input target layer set encoded data.
- the encoded parameters of the decoded parameter set are supplied to the parameter set management unit 13 and recorded for each identifier included in each parameter set.
- The parameter set is decoded based on a predetermined syntax table. That is, a bit string is read from the encoded data according to the procedure defined by the syntax table, and the syntax values of the syntax elements included in the syntax table are decoded. Further, if necessary, variables derived from the decoded syntax values may be derived and included in the output parameter set. Therefore, the parameter set output from the parameter set decoding unit 12 can also be expressed as a set of the syntax values of the syntax elements related to the parameter sets (VPS, SPS, PPS) included in the encoded data and the variables derived from those syntax values.
- the video parameter set VPS is a parameter set for defining parameters common to a plurality of layers.
- The VPS contains a VPS identifier for identifying each VPS, layer information, maximum layer number information, layer set information, and inter-layer dependency information.
- the VPS identifier is an identifier for identifying each VPS, and is included in the VPS as the syntax “video_parameter_set_id” (SYNVPS01 in FIG. 12).
- a VPS specified by an active VPS identifier (sps_video_parameter_set_id) included in an SPS, which will be described later, is referred to during the decoding process of the encoded data of the target layer in the target layer set.
- the maximum layer number information is information representing the maximum number of layers in the hierarchically encoded data, and is included in the VPS as the syntax “vps_max_layers_minus1” (SYNVPS02 in FIG. 12).
- the maximum number of layers in the hierarchically encoded data (hereinafter, the maximum number of layers MaxNumLayers) is set to a value of (vps_max_layers_minus1 + 1).
- the maximum number of layers defined here is the maximum number of layers related to other scalability (SNR scalability, spatial scalability, view scalability, etc.) excluding temporal scalability.
- the maximum sublayer number information is information indicating the maximum number of sublayers in the hierarchical encoded data, and is included in the VPS as the syntax “vps_max_sub_layers_minus1” (SYNVPS03 in FIG. 12).
- the maximum number of sublayers in the hierarchical encoded data (hereinafter, the maximum number of sublayers MaxNumSubLayers) is set to a value of (vps_max_sub_layers_minus1 + 1).
- the maximum number of sublayers defined here is the maximum number of layers related to temporal scalability.
- the maximum layer identifier information is information indicating the layer identifier (layer ID) of the highest layer included in the hierarchically encoded data, and is included in the VPS as the syntax “vps_max_layer_id” (SYNVPS04 in FIG. 12). . In other words, it is the maximum value of the layer ID (nuh_layer_id) of the NAL unit included in the hierarchically encoded data.
- the layer set number information is information representing the total number of layer sets included in the hierarchically encoded data, and is included in the VPS as the syntax “vps_num_layer_sets_minus1” (SYNVPS05 in FIG. 12).
- the number of layer sets in the hierarchically encoded data (hereinafter, the number of layer sets NumLayerSets) is set to a value of (vps_num_layer_sets_minus1 + 1).
- the layer set information is a list (hereinafter referred to as a layer ID list LayerSetLayerIdList) representing a set of layers constituting the layer set included in the hierarchically encoded data, and is decoded from the VPS.
- The VPS includes the syntax “layer_id_included_flag[i][j]” (SYNVPS06 in FIG. 12), which indicates whether or not the j-th layer (layer identifier nuhLayerIdJ) is included in the i-th layer set.
- A layer set is composed of the layers whose layer identifiers have a syntax value of 1. That is, each layer j constituting layer set i is included in the layer ID list LayerSetLayerIdList[i].
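The derivation of the layer ID lists LayerSetLayerIdList from “layer_id_included_flag[i][j]” can be sketched as follows; the array bounds used here (layer sets 0 to vps_num_layer_sets_minus1, layer IDs 0 to vps_max_layer_id) are assumptions of this sketch:

```python
def derive_layer_set_id_lists(layer_id_included_flag,
                              vps_num_layer_sets_minus1,
                              vps_max_layer_id):
    """Sketch: LayerSetLayerIdList[i] collects, in ascending order, every
    layer ID j for which layer_id_included_flag[i][j] == 1."""
    num_layer_sets = vps_num_layer_sets_minus1 + 1  # NumLayerSets
    layer_set_layer_id_list = []
    for i in range(num_layer_sets):
        layer_set_layer_id_list.append(
            [j for j in range(vps_max_layer_id + 1)
             if layer_id_included_flag[i][j] == 1])
    return layer_set_layer_id_list
```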
- The VPS extension data presence/absence flag “vps_extension_flag” is a flag indicating whether or not the VPS further includes the VPS extension data vps_extension() (SYNVPS08 in FIG. 12).
- Where the notation “flag indicating whether or not XX” or “XX presence/absence flag” is used, 1 means XX and 0 means not XX; in logical negation, logical product, and the like, 1 is treated as true and 0 as false (the same applies hereinafter).
- However, other values can be used as true and false values in an actual apparatus or method.
- Inter-layer dependency information is decoded from the VPS extension data (vps_extension ()) included in the VPS.
- the inter-layer dependency information included in the VPS extension data will be described with reference to FIG.
- FIG. 13 shows a part of a syntax table that is referred to at the time of VPS extended decoding and related to inter-layer dependency information.
- the VPS extension data includes a direct dependency flag “direct_dependency_flag [i] [j]” (SYNVPS0A in FIG. 13) as inter-layer dependency information.
- The direct dependency flag direct_dependency_flag[i][j] indicates whether or not the i-th layer directly depends on the j-th layer; it takes a value of 1 when the i-th layer directly depends on the j-th layer, and a value of 0 when it does not.
- That the i-th layer directly depends on the j-th layer means that the parameter sets, decoded pictures, and related decoded syntax of the j-th layer may be directly referenced by the target layer (the i-th layer).
- That the i-th layer does not directly depend on the j-th layer means that the parameter sets, decoded pictures, and related decoded syntax of the j-th layer are not directly referenced by the target layer.
- the direct dependency flag for the j-th layer of the i-th layer is 1, the j-th layer can be a direct reference layer for the i-th layer.
- a set of layers that can be a direct reference layer for a specific layer, that is, a set of layers having a corresponding direct dependency flag value of 1 is called a direct dependency layer set.
- When i = 0, that is, for the 0th layer (base layer), there is no direct dependency on any j-th layer (enhancement layer), so the value of the direct dependency flag “direct_dependency_flag[i][j]” is 0, and, as shown by the loop over i including SYNVPS0A in FIG. 13, the decoding and encoding of the direct dependency flags of the 0th layer (base layer) with respect to the j-th layers (enhancement layers) can be omitted.
- The direct reference layer IDX list DirectRefLayerIdx[iNuhLId][], which indicates the element number of each reference layer, in ascending order, within the direct reference layer set, is derived by an expression described later.
- The reference layer ID list RefLayerId[][] is a two-dimensional array; the first dimension is indexed by the layer identifier of the target layer (layer i), and the elements of the second dimension store, in ascending order, the layer identifiers of the k-th reference layers in the direct reference layer set.
- The direct reference layer IDX list DirectRefLayerIdx[][] is likewise a two-dimensional array; the first dimension is indexed by the layer identifier of the target layer (layer i), and the elements of the second dimension store the index (direct reference layer IDX) indicating the element number, in ascending order, of each layer identifier in the direct reference layer set.
- the above reference layer ID list and direct reference layer IDX list are derived by the following pseudo code.
- the layer identifier nuhLayerId of the i-th layer is represented by the syntax of “layer_id_in_nuh [i]” (not shown in FIG. 13) on the VPS.
- For brevity, layer_id_in_nuh[i] is denoted “nuhLId#i”; likewise, layer_id_in_nuh[j] is denoted “nuhLId#j”.
- the array NumDirectRefLayers [] represents the number of direct reference layers to which the layer with the layer identifier iNuhLId refers.
- variable i is initialized to zero.
- the processing in the loop is executed when the variable i is less than the number of layers “vps_max_layers_minus1 + 1”. Each time the processing in the loop is executed once, the variable i is incremented by “1”.
- This is the starting point of the loop, over the j-th layer, related to element addition to the reference layer ID list and the direct reference layer IDX list for the i-th layer. Prior to the start of the loop, the variable j is initialized to zero. The processing in the loop is executed while the variable j (the j-th layer) is less than i (j < i), and each time the processing in the loop is executed once, the variable j is incremented by “1”.
- the direct dependency flag (direct_dependency_flag [i] [j]) of the jth layer with respect to the ith layer is determined. If the direct dependency flag is 1, the process proceeds to step SL05 in order to execute the processes of steps SL05 to SL07. If the direct dependency flag is 0, the processing of steps SL05 to SL07 is omitted and the process proceeds to SL0A.
- the j-th layer is the end of a loop related to element addition to the reference layer ID list for the i-th layer and the direct reference layer IDX list.
- The direct reference layer IDX can be regarded as the element number of the layer ID of the k-th reference layer within the direct reference layer set. Note that the derivation procedure is not limited to the above steps, and may be changed within a practicable range.
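The derivation above can be sketched as follows; dictionaries stand in for the two-dimensional arrays, and the variable names follow those in the text (layer_id_in_nuh maps a layer index to its layer identifier):

```python
def derive_direct_reference_lists(direct_dependency_flag, layer_id_in_nuh,
                                  vps_max_layers_minus1):
    """Sketch of the derivation of NumDirectRefLayers, RefLayerId, and
    DirectRefLayerIdx.  RefLayerId[iNuhLId][k] holds, in ascending order,
    the layer identifier of the k-th direct reference layer of the layer
    with identifier iNuhLId; DirectRefLayerIdx[iNuhLId][jNuhLId] holds the
    element number (direct reference layer IDX) of reference layer jNuhLId
    within that direct reference layer set."""
    num_layers = vps_max_layers_minus1 + 1
    NumDirectRefLayers, RefLayerId, DirectRefLayerIdx = {}, {}, {}
    for i in range(num_layers):
        iNuhLId = layer_id_in_nuh[i]
        NumDirectRefLayers[iNuhLId] = 0
        RefLayerId[iNuhLId] = {}
        DirectRefLayerIdx[iNuhLId] = {}
        for j in range(i):  # only lower layers can be direct reference layers
            if direct_dependency_flag[i][j]:
                jNuhLId = layer_id_in_nuh[j]
                k = NumDirectRefLayers[iNuhLId]
                RefLayerId[iNuhLId][k] = jNuhLId
                DirectRefLayerIdx[iNuhLId][jNuhLId] = k
                NumDirectRefLayers[iNuhLId] += 1
    return NumDirectRefLayers, RefLayerId, DirectRefLayerIdx
```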
- The inter-layer dependency information includes the syntax “direct_dependency_len_minusN” (layer dependency type bit length) (SYNVPS0C in FIG. 13), which indicates the bit length M of the layer dependency type (direct_dependency_type[i][j]) described later.
- the inter-layer dependency information includes a syntax “direct_dependency_type [i] [j]” (SYNVPS0D in FIG. 13) indicating a layer dependency type indicating a reference relationship between the i-th layer and the j-th layer. .
- The types of layer dependency presence/absence flags include the inter-layer image prediction presence/absence flag (SamplePredEnabledFlag), the inter-layer motion prediction presence/absence flag (MotionPredEnabledFlag), and the non-VCL dependency presence/absence flag (NonVCLDepEnabledFlag).
- the non-VCL dependency presence / absence flag indicates whether or not there is a dependency relationship between layers regarding header information (parameter set such as SPS and PPS) included in a non-VCL NAL unit.
- Specifically, the non-VCL dependency presence/absence flag indicates the presence or absence of sharing of a parameter set between layers (shared parameter set), described later, and of prediction of partial syntax within parameter sets between layers (for example, of scaling list information (quantization matrices)) (inter-parameter-set syntax prediction, or parameter set prediction).
- The value encoded with the syntax “direct_dependency_type[i][j]” is the layer dependency type value minus 1, that is, the value of “DirectDepType[i][j] - 1”.
- The value of the least significant bit (bit 0) of the layer dependency type indicates the presence or absence of inter-layer image prediction, the value of the first bit from the least significant bit indicates the presence or absence of inter-layer motion prediction, and the value of the (N-1)-th bit from the least significant bit indicates the presence or absence of non-VCL dependency.
- Each bit from the N-th bit up to the most significant bit (the (M-1)-th bit), counted from the least significant bit, is a dependency type extension bit.
- the presence / absence flag for each layer-dependent type of the reference layer j for the target layer i is derived by the following expression.
- NonVCLDepEnabledFlag[iNuhLId][j] = ((direct_dependency_type[i][j] + 1) & (1 << (N-1))) >> (N-1);
- the variable DirectDepType [i] [j] can be used to express the following expression.
- the (N-1) th bit is a non-VCL dependency type (non-VCL dependency flag), but is not limited to this.
- For example, when N = 3, the second bit from the least significant bit may be the bit representing the presence or absence of the non-VCL dependency type.
- the bit position indicating the presence / absence flag for each dependency type may be changed within a feasible range.
- the above-described presence / absence flags may be derived by executing as step SL08 in the above-described (derivation of reference layer ID list and direct reference layer IDX list). Note that the derivation procedure is not limited to the above steps, and may be changed within a practicable range.
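The unpacking of the per-type presence/absence flags from the layer dependency type can be sketched as follows; N = 3 is an assumption of this sketch, matching the example above, and the function name is illustrative:

```python
def derive_dependency_type_flags(direct_dependency_type, n_bits=3):
    """Sketch: recover the per-type flags from the coded dependency type.
    DirectDepType = direct_dependency_type + 1; bit 0 is inter-layer image
    (sample) prediction, bit 1 is inter-layer motion prediction, and bit
    N-1 is the non-VCL dependency flag (N = n_bits, assumed here)."""
    dep_type = direct_dependency_type + 1          # DirectDepType[i][j]
    sample_pred_enabled = dep_type & 1             # bit 0
    motion_pred_enabled = (dep_type & 2) >> 1      # bit 1
    non_vcl_dep_enabled = (dep_type & (1 << (n_bits - 1))) >> (n_bits - 1)
    return sample_pred_enabled, motion_pred_enabled, non_vcl_dep_enabled
```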
- The indirect dependency flag IndirectDependencyFlag[i][j], which indicates whether or not the i-th layer indirectly depends on the j-th layer (whether or not the j-th layer is an indirect reference layer of the i-th layer), can be derived with reference to the direct dependency flag (direct_dependency_flag[i][j]) by the pseudo code described later.
- The dependency flag DependencyFlag[i][j], which indicates whether the i-th layer depends on the j-th layer either directly (if the direct dependency flag is 1, the j-th layer is also called a direct reference layer of the i-th layer) or indirectly (if the indirect dependency flag is 1, the j-th layer is also called an indirect reference layer of the i-th layer), can be derived from the direct dependency flag direct_dependency_flag[i][j] and the indirect dependency flag IndirectDependencyFlag[i][j].
- For example, assume that the number of layers is N + 1 and that the j-th layer (denoted L#j or layer j in FIG. 31) is lower than the i-th layer (L#i, layer i in FIG. 31) (j < i). Further, assume that there is a layer k (L#k in FIG. 31) that is higher than layer j and lower than layer i (j < k < i). In FIG. 31, layer k directly depends on layer j (solid arrows in FIG. 31). When layer i directly depends on such a layer k, layer i depends on layer j indirectly via layer k, and layer j is referred to as an indirect reference layer of layer i. In other words, if layer i depends on layer j indirectly via one or more layers k (j < k < i), layer j is an indirect reference layer of layer i.
- The indirect dependency flag IndirectDependencyFlag[i][j] indicates whether or not the i-th layer indirectly depends on the j-th layer; it takes a value of 1 when the i-th layer indirectly depends on the j-th layer, and a value of 0 when it does not.
- That the i-th layer indirectly depends on the j-th layer means that the parameter sets, decoded pictures, and related decoded syntax of the j-th layer may be indirectly referenced by the target layer.
- That the i-th layer does not indirectly depend on the j-th layer means that the parameter sets, decoded pictures, and related decoded syntax of the j-th layer are not indirectly referenced by the target layer.
- the indirect dependency flag for the j-th layer of the i-th layer is 1, the j-th layer can be an indirect reference layer for the i-th layer.
- a set of layers that can be an indirect reference layer for a specific layer, that is, a set of layers having a corresponding indirect dependency flag value of 1 is called an indirect dependency layer set.
- When i = 0, that is, for the 0th layer (base layer), there is no indirect dependency on any j-th layer (enhancement layer), so the value of the indirect dependency flag “IndirectDependencyFlag[i][j]” is 0, and the derivation of the indirect dependency flags of the 0th layer (base layer) with respect to the j-th layers (enhancement layers) can be omitted.
- The dependency flag DependencyFlag[i][j] indicates whether or not the i-th layer depends on the j-th layer; it takes a value of 1 when the i-th layer depends on the j-th layer, and a value of 0 when it does not. Note that, unless otherwise specified, the references and dependencies related to the dependency flag DependencyFlag[i][j] include both direct and indirect ones (direct reference, indirect reference, direct dependency, indirect dependency).
- That the i-th layer depends on the j-th layer means that the parameter sets, decoded pictures, and related decoded syntax of the j-th layer may be referenced by the target layer.
- That the i-th layer does not depend on the j-th layer means that the parameter sets, decoded pictures, and related decoded syntax of the j-th layer are not referenced by the target layer.
- the dependency flag of the i-th layer with respect to the j-th layer is 1, the j-th layer can be a direct reference layer or an indirect reference layer of the i-th layer.
- a set of layers that can be a direct reference layer or an indirect reference layer for a specific layer, that is, a set of layers having a corresponding dependency flag value of 1 is referred to as a dependent layer set.
- (SN01) This is the starting point of the loop, over the i-th layer, related to the derivation of the indirect dependency flag and the dependency flag.
- the variable i is initialized to zero.
- the process in the loop is executed when the variable i is less than the number of layers “vps_max_layers_minus1 + 1”. Each time the process in the loop is executed once, the variable i is incremented by “1”.
- the j-th layer is not the i-th direct reference layer. Specifically, if the direct dependency flag (direct_dependency_flag [i] [j]) of the jth layer with respect to the ith layer is 0 (not a direct reference layer), it is determined to be true, and the direct dependency flag is 1 ( If it is a direct reference layer), it is determined to be false.
- Otherwise, the processing of step SN06 is omitted, and the process proceeds to step SN07.
- The value of the dependency flag (DependencyFlag[i][j]) is set based on the direct dependency flag (direct_dependency_flag[i][j]) and the indirect dependency flag (IndirectDependencyFlag[i][j]). Specifically, the logical OR of the direct dependency flag (direct_dependency_flag[i][j]) and the indirect dependency flag (IndirectDependencyFlag[i][j]) is set as the dependency flag (DependencyFlag[i][j]). That is, it is derived by the following formula: if the value of the direct dependency flag is 1 or the value of the indirect dependency flag is 1, the value of the dependency flag is 1; otherwise, the value of the dependency flag is 0.
- the following derivation formula is an example, and can be changed within a range in which the values set in the dependency flag are the same.
- DependencyFlag[i][j] = (direct_dependency_flag[i][j] | IndirectDependencyFlag[i][j]);
- a dependency flag (DependencyFlag [i] [j]) indicating a dependency relationship when the i-th layer depends on the j-th layer (when the direct dependency flag is 1 or the indirect dependency flag is 1) is derived.
- the derivation procedure is not limited to the above steps, and may be changed within a practicable range.
- the indirect dependency flag and the dependency flag may be derived by the following pseudo code.
- the variable j is initialized to 0 before the loop starts.
- the process in the loop is executed when the variable j (layer j) is less than the layer k (j ⁇ k), and the variable j is incremented by “1” every time the process in the loop is executed once.
- It is determined whether the layer j is a direct reference layer or an indirect reference layer of the layer k. Specifically, if the direct dependency flag of layer j with respect to layer k (direct_dependency_flag[k][j]) is 1, or the indirect dependency flag of layer j with respect to layer k (IndirectDependencyFlag[k][j]) is 1, it is determined to be true (a direct reference layer or an indirect reference layer). If the direct dependency flag is 0 (not a direct reference layer) and the indirect dependency flag is 0 (not an indirect reference layer), it is determined to be false.
- It is determined whether layer k is a direct reference layer of layer i. Specifically, if the direct dependency flag (direct_dependency_flag[i][k]) of layer k with respect to layer i is 1, it is determined to be true (a direct reference layer); if the direct dependency flag is 0 (not a direct reference layer), it is determined to be false.
- It is determined whether layer j is not a direct reference layer of layer i. Specifically, if the direct dependency flag (direct_dependency_flag[i][j]) of layer j with respect to layer i is 0 (not a direct reference layer), it is determined to be true; if the direct dependency flag is 1 (a direct reference layer), it is determined to be false.
- If false, the processing of step SO05 is omitted, and the process proceeds to step SO06.
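The derivation described in the steps above can be sketched as a small routine (a minimal, illustrative sketch in Python; the function name and the list-of-lists representation of the flags are assumptions for illustration, not part of the coded syntax):

```python
def derive_dependency_flags(num_layers, direct):
    """Derive IndirectDependencyFlag and DependencyFlag from the direct
    dependency flags: direct[i][j] == 1 means layer j is a direct
    reference layer of layer i (a layer may only reference
    lower-indexed layers, so i is processed in ascending order)."""
    indirect = [[0] * num_layers for _ in range(num_layers)]
    dep = [[0] * num_layers for _ in range(num_layers)]
    for i in range(num_layers):
        for k in range(i):
            # Only consider layers k that are direct reference layers of i.
            if not direct[i][k]:
                continue
            for j in range(k):
                # Layer j is a direct or indirect reference layer of layer k,
                # but not itself a direct reference layer of layer i.
                if (direct[k][j] or indirect[k][j]) and not direct[i][j]:
                    indirect[i][j] = 1
        for j in range(num_layers):
            # DependencyFlag is the logical OR of direct and indirect flags.
            dep[i][j] = 1 if (direct[i][j] or indirect[i][j]) else 0
    return indirect, dep
```

For example, with three layers where layer 1 directly references layer 0 and layer 2 directly references layer 1, layer 0 is derived as an indirect reference layer of layer 2.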
- (S00A) This is the starting point of the loop related to the derivation of the dependency flag for layer i.
- the variable i is initialized to zero.
- the process in the loop is executed when the variable i is less than the number of layers “vps_max_layers_minus1 + 1”. Each time the process in the loop is executed once, the variable i is incremented by “1”.
- The value of the dependency flag (DependencyFlag[i][j]) is set based on the direct dependency flag (direct_dependency_flag[i][j]) and the indirect dependency flag (IndirectDependencyFlag[i][j]). Specifically, the logical OR of the direct dependency flag (direct_dependency_flag[i][j]) and the indirect dependency flag (IndirectDependencyFlag[i][j]) is set as the dependency flag (DependencyFlag[i][j]). That is, it is derived by the following formula: if the value of the direct dependency flag is 1 or the value of the indirect dependency flag is 1, the value of the dependency flag is 1.
- By deriving the indirect dependency flag, it can be grasped whether the layer j is an indirect reference layer of the layer i. Also, by deriving the dependency flag (DependencyFlag[i][j]) indicating a dependency relationship in which layer i depends on layer j (when the direct dependency flag is 1 or the indirect dependency flag is 1), it can be grasped whether the layer j is a dependent layer (direct reference layer or indirect reference layer) of the layer i. Note that the derivation procedure is not limited to the above steps, and may be changed within a practicable range.
- As described above, the dependency flag DependencyFlag[i][j], which indicates whether or not the j-th layer is a direct reference layer or an indirect reference layer of the i-th layer, is derived for all layers with indices i and j.
- Alternatively, using the layer identifier nuhLId#i of the i-th layer and the layer identifier nuhLId#j of the j-th layer, a dependency flag between layer identifiers (inter-layer identifier dependency flag) LIdDependencyFlag[][] may be derived.
- Specifically, the value of the inter-layer identifier dependency flag (LIdDependencyFlag[nuhLId#i][nuhLId#j]) is derived with the layer identifier nuhLId#i of the i-th layer as the first index and the layer identifier nuhLId#j of the j-th layer as the second index. That is, as shown in the following equation, if the value of the direct dependency flag is 1 or the value of the indirect dependency flag is 1, the value of the inter-layer identifier dependency flag is 1.
- Otherwise, the value of the inter-layer identifier dependency flag is 0.
- LIdDependencyFlag[nuhLId#i][nuhLId#j] = (direct_dependency_flag[i][j] || IndirectDependencyFlag[i][j]);
- As described above, by deriving the inter-layer identifier dependency flag (LIdDependencyFlag[nuhLId#i][nuhLId#j]) indicating whether the i-th layer having the layer identifier nuhLId#i directly or indirectly depends on the j-th layer having the layer identifier nuhLId#j, it can be determined whether the j-th layer with the layer identifier nuhLId#j is a direct reference layer or an indirect reference layer of the i-th layer with the layer identifier nuhLId#i.
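The re-indexing by layer identifier can likewise be sketched (an illustrative Python sketch; `layer_id_in_nuh[i]` is used here as a stand-in for the mapping from layer index i to the layer identifier nuhLId#i, and the dict keyed by identifier pairs stands in for the two-dimensional flag array):

```python
def derive_lid_dependency_flags(layer_id_in_nuh, direct, indirect):
    """Derive LIdDependencyFlag keyed by (nuhLId#i, nuhLId#j) from the
    index-based direct and indirect dependency flags."""
    lid_dep = {}
    n = len(layer_id_in_nuh)
    for i in range(n):
        for j in range(n):
            key = (layer_id_in_nuh[i], layer_id_in_nuh[j])
            # 1 when either the direct or the indirect dependency flag is 1.
            lid_dep[key] = 1 if (direct[i][j] or indirect[i][j]) else 0
    return lid_dep
```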
- Note that the above procedure is an example, and may be changed within a practicable range.
- Sequence parameter set SPS: a set of encoding parameters referred to by the image decoding device 1 in order to decode the target sequence is defined in the SPS.
- the active VPS identifier is an identifier that is designated as an active VPS that is referred to by the target SPS, and is included in the SPS as the syntax “sps_video_parameter_set_id” (SYNSPS01 in FIG. 15).
- the parameter set decoding unit 12 decodes the active VPS identifier included in the sequence parameter set SPS to be decoded, and reads the encoding parameter of the active VPS specified by the active VPS identifier from the parameter set management unit 13.
- the encoding parameters of the active VPS may be referred to when each subsequent syntax of the decoding target SPS is decoded. Note that if the syntax of the decoding target SPS does not depend on the encoding parameter of the active VPS, the VPS activation process at the time of decoding the active VPS identifier of the decoding target SPS is not necessary.
- the SPS identifier is an identifier for identifying each SPS, and is included in the SPS as the syntax “sps_seq_parameter_set_id” (SYNSPS02 in FIG. 15).
- the SPS includes information for determining the size of the decoded picture of the target layer as the picture information.
- the picture information includes information indicating the width and height of the decoded picture of the target layer.
- the picture information decoded from the SPS includes the width of the decoded picture (pic_width_in_luma_samples) and the height of the decoded picture (pic_height_in_luma_samples) (not shown in FIG. 15).
- the value of the syntax “pic_width_in_luma_samples” corresponds to the width of the decoded picture in luminance pixel units.
- the value of the syntax “pic_height_in_luma_samples” corresponds to the height of the decoded picture in luminance pixel units.
- The SPS includes information (scaling list information) on a scaling list (quantization matrix) used throughout the entire target sequence.
- “sps_infer_scaling_list_flag” (SPS scaling list estimation flag) is a flag indicating whether the information on the scaling list of the target SPS is estimated from the scaling list information of the active SPS of the reference layer specified by “sps_scaling_list_ref_layer_id”.
- When the SPS scaling list estimation flag is 1, the scaling list information of the SPS is estimated (copied) from the scaling list information of the active SPS of the reference layer specified by “sps_scaling_list_ref_layer_id”.
- When the SPS scaling list estimation flag is 0, scaling list information is signalled based on “sps_scaling_list_data_present_flag” in the SPS.
- the SPS extension data presence / absence flag “sps_extension_flag” (SYNSPS05 in FIG. 15) is a flag indicating whether the SPS further includes the SPS extension data sps_extension () (SYNSPS06 in FIG. 15).
- the SPS extension data includes, for example, inter-layer position correspondence information (SYNSPS0A in FIG. 16) and the like.
- Picture parameter set PPS: a set of encoding parameters referred to by the image decoding device 1 in order to decode each picture in the target sequence is defined in the PPS.
- The PPS identifier is an identifier for identifying each PPS, and is included in the PPS as the syntax “pps_pic_parameter_set_id” (FIG. 17).
- a PPS specified by an active PPS identifier (slice_pic_parameter_set_id) included in a later-described slice header is referred to during decoding processing of encoded data of the target layer in the target layer set.
- the active SPS identifier is an identifier that is designated as an active SPS that is referenced by the target PPS, and is included in the PPS as the syntax “pps_seq_parameter_set_id” (SYNSPS02 in FIG. 17).
- The parameter set decoding unit 12 decodes the active SPS identifier included in the picture parameter set PPS to be decoded, and reads out the encoding parameters of the active SPS specified by the active SPS identifier from the parameter set management unit 13. Further, the encoding parameters of the active VPS referred to by the active SPS may be read, and the encoding parameters of the active SPS and the active VPS may be referred to when each syntax of the subsequent decoding target PPS is decoded.
- the activation process of the SPS and VPS at the time of decoding the active SPS identifier of the decoding target PPS is not necessary.
- the syntax group indicated by SYNPPS03 in FIG. 17 is information (scaling list information) on a scaling list (quantization matrix) used when decoding a picture that refers to the target PPS.
- In the scaling list information, “pps_infer_scaling_list_flag” (PPS scaling list estimation flag) is a flag indicating whether or not to estimate the information on the scaling list of the target PPS from the scaling list information of the active PPS of the reference layer specified by “pps_scaling_list_ref_layer_id”.
- When the PPS scaling list estimation flag is 1, the scaling list information of the PPS is estimated (copied) from the scaling list information of the active PPS of the reference layer specified by “pps_scaling_list_ref_layer_id”.
- When the PPS scaling list estimation flag is 0, scaling list information is signalled based on “pps_scaling_list_data_present_flag” in the PPS.
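Both the SPS and the PPS thus resolve their scaling list by the same three-way selection, which can be sketched as follows (a minimal sketch; the function and parameter names are illustrative placeholders, not the normative syntax elements):

```python
def resolve_scaling_list(infer_flag, ref_layer_scaling_list,
                         data_present_flag, signalled_scaling_list,
                         default_scaling_list):
    """Select the scaling list in the order described above:
    1. infer (copy) from the active parameter set of the reference layer,
    2. otherwise use the explicitly signalled scaling list data,
    3. otherwise fall back to the default scaling list."""
    if infer_flag:
        return ref_layer_scaling_list
    if data_present_flag:
        return signalled_scaling_list
    return default_scaling_list
```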
- the picture decoding unit 14 generates and outputs a decoded picture based on the input VCL NAL unit and the active parameter set.
- FIG. 20 is a functional block diagram illustrating a schematic configuration of the picture decoding unit 14.
- the picture decoding unit 14 includes a slice header decoding unit 141 and a CTU decoding unit 142.
- the CTU decoding unit 142 further includes a prediction residual restoration unit 1421, a predicted image generation unit 1422, and a CTU decoded image generation unit 1423.
- the slice header decoding unit 141 decodes the slice header based on the input VCL NAL unit and the active parameter set.
- the decoded slice header is output to the CTU decoding unit 142 together with the input VCL NAL unit.
- The CTU decoding unit 142 decodes the region corresponding to each CTU included in the slices constituting the picture, based on the input slice header, the slice data included in the VCL NAL unit, and the active parameter set, thereby generating a decoded image of the slice.
- As the CTU size, the CTB size for the target layer included in the active parameter set (the syntax corresponding to log2_min_luma_coding_block_size_minus3 and log2_diff_max_min_luma_coding_block_size in SYNSPS03 of FIG. 15) is used.
- the decoded image of the slice is output as a part of the decoded picture to the slice position indicated by the input slice header.
- the decoded image of the CTU is generated by the prediction residual restoration unit 1421, the prediction image generation unit 1422, and the CTU decoded image generation unit 1423 inside the CTU decoding unit 142.
- the prediction residual restoration unit 1421 decodes prediction residual information (TT information) included in the input slice data, generates a prediction residual of the target CTU, and outputs it.
- the predicted image generation unit 1422 generates and outputs a predicted image based on the prediction method and the prediction parameter indicated by the prediction information (PT information) included in the input slice data. At that time, a decoded image of the reference picture and an encoding parameter are used as necessary. For example, when using inter prediction or inter-layer image prediction, a corresponding reference picture is read from the decoded picture management unit 15.
- the CTU decoded image generation unit 1423 adds the input predicted image and the prediction residual to generate and output a decoded image of the target CTU.
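The reconstruction performed by the CTU decoded image generation unit 1423 (sample-wise addition of prediction and residual, clipped to the valid sample range) can be sketched as follows (a minimal sketch; real decoders additionally apply in-loop filters, which are omitted here):

```python
def reconstruct_ctu(pred, resid, bit_depth=8):
    """Add the predicted image and the prediction residual sample by
    sample and clip each result to [0, 2^bit_depth - 1]."""
    max_val = (1 << bit_depth) - 1
    return [[min(max(p + r, 0), max_val) for p, r in zip(prow, rrow)]
            for prow, rrow in zip(pred, resid)]
```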
- FIG. 21 is a flowchart showing a decoding process in units of slices constituting a picture of the target layer i in the picture decoding unit 14.
- the first slice flag (first_slice_segment_in_pic_flag) of the decoding target slice is decoded.
- When the first slice flag is 1, the decoding target slice is the first slice in decoding order (hereinafter, processing order) in the picture, and the position of the first CTU of the decoding target slice in raster scan order within the picture (hereinafter, first CTU address) is set to 0.
- Further, the counter of the number of processed CTUs in the picture (hereinafter, the number of processed CTUs numCtu) is set to 0.
- When the head slice flag is 0, the head CTU address of the decoding target slice is set based on the slice address decoded in SD106 described later.
- The active PPS identifier (slice_pic_parameter_set_id) that specifies the active PPS to be referred to when decoding the decoding target slice is decoded.
- the active parameter set is fetched from the parameter set management unit 13. That is, the PPS having the same PPS identifier (pps_pic_parameter_set_id) as the active PPS identifier (slice_pic_parameter_set_id) referred to by the decoding target slice is set as the active PPS, and the encoding parameter of the active PPS is fetched (read) from the parameter set management unit 13.
- the SPS having the same SPS identifier (sps_seq_parameter_set_id) as the active SPS identifier (pps_seq_parameter_set_id) in the active PPS is set as the active SPS, and the encoding parameter of the active SPS is fetched from the parameter set management unit 13.
- the VPS having the same VPS identifier (vps_video_parameter_set_id) as the active VPS identifier (sps_video_parameter_set_id) in the active SPS is set as the active VPS, and the encoding parameter of the active VPS is fetched from the parameter set management unit 13.
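The activation chain of the steps above (slice header → active PPS → active SPS → active VPS) can be sketched as follows (a minimal sketch; the `*_store` dicts are hypothetical stand-ins for the parameter set management unit 13, keyed by identifier, and the dict field names mirror the syntax elements):

```python
def activate_parameter_sets(slice_pps_id, pps_store, sps_store, vps_store):
    """Follow the activation chain: the slice header's active PPS
    identifier selects the PPS, whose active SPS identifier selects the
    SPS, whose active VPS identifier selects the VPS."""
    pps = pps_store[slice_pps_id]                        # slice_pic_parameter_set_id
    sps = sps_store[pps["pps_seq_parameter_set_id"]]     # active SPS identifier
    vps = vps_store[sps["sps_video_parameter_set_id"]]   # active VPS identifier
    return pps, sps, vps
```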
- step SD105 Whether the decoding target slice is the first slice in the processing order in the picture is determined based on the first slice flag. If the first slice flag is 0 (Yes in SD105), the process proceeds to step SD106. In other cases (No in SD105), the process of step SD106 is skipped. When the head slice flag is 1, the slice address of the decoding target slice is 0.
- the slice address (slice_segment_address) of the decoding target slice is decoded, and the head CTU address of the decoding target slice is set.
- the head slice CTU address slice_segment_address. ... Omitted ...
- The CTU decoding unit 142 generates a CTU decoded image of the area corresponding to each CTU included in the slices constituting the picture, based on the input slice header, the active parameter set, and each CTU information (SYNSD01 in FIG. 18) in the slice data included in the VCL NAL unit.
- Further, following each CTU information, a slice end flag (end_of_slice_segment_flag) (SYNSD02 in FIG. 18) indicating whether the CTU is the end of the decoding target slice is decoded. After decoding each CTU, the value of the number of processed CTUs numCtu is incremented by 1 (numCtu++).
- SD10B It is determined based on the slice end flag whether or not the CTU is the end of the decoding target slice.
- If the slice end flag is 1 (Yes in SD10B), the process proceeds to step SD10C. Otherwise (No in SD10B), the process proceeds to step SD10A in order to decode the subsequent CTU information.
- If numCtu is equal to PicSizeInCtbsY (Yes in SD10C), the decoding process in units of slices constituting the decoding target picture ends. Otherwise (numCtu < PicSizeInCtbsY, No in SD10C), the process proceeds to step SD101 in order to continue the decoding process in units of slices constituting the decoding target picture.
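The control flow of the per-slice loop above can be sketched as follows (a minimal sketch; each entry of `slices` is a hypothetical list of per-CTU records ending with end_of_slice_segment_flag == 1, and the actual CTU decoding, header parsing, and address handling are elided):

```python
def decode_picture_slices(slices, pic_size_in_ctbs):
    """Walk the slices of one picture: decode CTUs until the slice end
    flag fires, count processed CTUs, and stop once numCtu reaches
    PicSizeInCtbsY (i.e. the whole picture has been decoded)."""
    num_ctu = 0
    for slice_data in slices:                      # one iteration per slice
        for ctu in slice_data:                     # decode CTUs in the slice
            num_ctu += 1                           # numCtu++
            if ctu["end_of_slice_segment_flag"]:   # end of this slice?
                break
        if num_ctu == pic_size_in_ctbs:            # picture complete
            break
    return num_ctu
```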
- The hierarchical video decoding device 1 according to the present embodiment described above includes a bitstream extraction unit 17 that performs bitstream extraction processing based on the layer ID list LayerIdListTarget of the layers constituting the target layer set LayerSetTarget supplied from the outside and the target highest temporal identifier HighestTidTarget: from the input hierarchically encoded data DATA, NAL units not included in the set (called the target set TargetSet) determined by the target highest temporal identifier HighestTidTarget and the layer ID list LayerIdListTarget of the target layer set LayerSetTarget are removed (discarded), and target layer set encoded data DATA#T (BitstreamToDecode) composed of the NAL units included in the target set TargetSet is extracted. Further, when the layer identifier of the video parameter set is not included in the target set TargetSet, the bitstream extraction unit 17 is characterized by updating (rewriting) the layer identifier of the video parameter set to the lowest layer identifier in the target set TargetSet. Note that the operation of the bitstream extraction unit 17 is based on the premise that “the AU constituting the input hierarchically encoded data DATA includes at most one VPS having the lowest layer identifier in the AU”, but is not limited to this.
- a VPS having a layer identifier other than the lowest layer identifier in the AU may be included in the AU.
- In that case, the bitstream extraction unit 17 may take, as the target of the layer identifier update, the VPS having the lowest layer identifier among the layer identifiers not included in the target set TargetSet.
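The extraction behavior of the bitstream extraction unit 17 can be sketched as follows (a minimal sketch; NAL units are represented as hypothetical dicts with 'type', 'layer_id', and 'tid' fields rather than parsed NAL unit headers):

```python
def extract_bitstream(nal_units, layer_id_list, highest_tid):
    """Drop NAL units outside the target set (layer not in the layer ID
    list, or temporal id above the target highest temporal identifier),
    and rewrite the layer identifier of a VPS that is not in the target
    set to the lowest layer identifier of the target set so that the VPS
    is always kept."""
    lowest = min(layer_id_list)
    out = []
    for nal in nal_units:
        if nal["type"] == "VPS" and nal["layer_id"] not in layer_id_list:
            nal = dict(nal, layer_id=lowest)  # update (rewrite) the VPS layer id
        if nal["layer_id"] in layer_id_list and nal["tid"] <= highest_tid:
            out.append(nal)
    return out
```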
- According to the hierarchical video decoding device 1, it is possible to prevent the problem that the VPS is not included in the layer set on the bit stream after bitstream extraction. That is, it is possible to prevent generation of a layer that cannot be decoded on a bitstream that includes only a layer set of a subset of the layer set, which is generated from a bitstream of a certain layer set by bitstream extraction processing.
- In addition, part of the decoding process related to the parameter sets of the target layer can be omitted. That is, the parameter sets can be decoded with a smaller code amount.
- (Modification 1 of the bitstream extraction unit 17) In the bitstream extraction unit 17 according to the first embodiment, as illustrated in FIG. 27, if the VPS layer identifier is not included in the target set TargetSet, the VPS layer identifier is updated (rewritten) to the lowest layer identifier in the target set TargetSet, so that the VPS is always included in the target set TargetSet.
- However, the bitstream extraction unit 17 may omit discarding the VPS NAL units whose layer identifiers are not included in the layer ID list LayerIdListTarget constituting the target set TargetSet, without updating the VPS layer identifier, so that these VPSs are included in the bit stream of the target set TargetSet.
- Hereinafter, the operation of the bitstream extraction unit 17′ according to Modification 1 will be described.
- Whether or not the NAL unit type (nal_unit_type) of the target NAL unit is a parameter set is determined based on “nal_unit_type” and “Name of nal_unit_type” shown in FIG.
- (SG105a) It is determined whether or not the layer identifier and temporal identifier of the target NAL unit are included in the target set TargetSet based on the layer ID list LayerIdListTarget of the layers constituting the target layer set LayerSetTarget and the target highest temporal identifier.
- the detailed operation is the same as step SG105 in FIG.
- (SG106a) Discard target NAL unit. That is, since the target NAL unit is not included in the target set TargetSet, the bit stream extraction unit 17 ′ removes the target NAL unit from the input hierarchical encoded data DATA.
- The operation of the bitstream extraction unit 17′ according to Modification 1 has been described above. However, the operation is not limited to the above steps, and the steps may be changed within a feasible range.
- The bitstream extraction unit 17′ according to Modification 1 performs bitstream extraction processing based on the layer ID list LayerIdListTarget of the layers constituting the target layer set LayerSetTarget supplied from the outside and the target highest temporal identifier HighestTidTarget: except for NAL units whose NAL unit type is VPS, NAL units not included in the target set TargetSet determined by the target highest temporal identifier HighestTidTarget and the layer ID list LayerIdListTarget of the target layer set LayerSetTarget are removed (discarded) from the input hierarchically encoded data DATA, and target layer set encoded data DATA#T (BitstreamToDecode) composed of the NAL units included in the target set TargetSet is extracted and output.
- That is, the bitstream extraction unit 17′ does not discard the NAL units of the video parameter set, and includes the VPS in the bit stream of the target set TargetSet.
- Note that the operation of the bitstream extraction unit 17′ is based on the premise that “the AU constituting the input hierarchically encoded data DATA includes at most one VPS having the lowest layer identifier in the AU”, but is not limited to this. For example, a VPS having a layer identifier other than the lowest layer identifier in the AU may be included in the AU. In this case, the bitstream extraction unit 17′ may add, in step SG102a, the condition “whether the VPS layer identifier is the lowest layer identifier among the layer identifiers not included in the target set TargetSet”.
- According to the bitstream extraction unit 17′ according to Modification 1 described above, it is possible to prevent the problem that the VPS is not included in the layer set on the bit stream after bitstream extraction. That is, it is possible to prevent generation of a layer that cannot be decoded on a bitstream that includes only a layer set of a subset of the layer set, which is generated from a bitstream of a certain layer set by bitstream extraction processing.
- (Bitstream constraint) In order to perform the bitstream extraction described for the bitstream extraction unit 17′ according to Modification 1, the bitstream must satisfy at least the following condition CY1 as bitstream conformance.
- CY1: “The target set TargetSet (layer set) includes a VPS having a layer identifier equal to the lowest layer identifier among all layers”
- In other words, the bitstream constraint CY1 means that “the VPS included in an access unit belongs to the same layer as the VCL having the lowest layer identifier among all layers (including layers not included in the access unit)”.
- Here, “a VPS included in an access unit belongs to the same layer as the VCL having the lowest layer identifier among all layers (including layers not included in the access unit)” means that “when a layer in layer set B, which is a subset of layer set A, refers in layer set A to the VPS of a layer that is included in layer set A but not included in layer set B, a VPS having the same encoding parameters as that VPS is included in layer set B extracted by bitstream extraction”.
- Note that “a VPS having the same encoding parameters as the VPS” indicates that the VPS identifier and the other syntax in the VPS are the same as those of the original VPS, except for the layer identifier and the temporal identifier.
- According to the above bitstream constraint, it is possible to solve the problem that the VPS is not included in the layer set on the bit stream after bitstream extraction. That is, it is possible to prevent generation of a layer that cannot be decoded on a bitstream that includes only a layer set of a subset of the layer set, which is generated from a bitstream of a certain layer set by bitstream extraction processing.
- Alternatively, the conformance condition CY2 that “the target set TargetSet includes a VPS having a layer identifier equal to the lowest layer identifier among all layers” may be used.
- According to the conformance condition CY2, the same effect as the conformance condition CY1 is achieved.
- the prior art Non-Patent Documents 2 to 3
- Whether or not the NAL unit type (nal_unit_type) of the target NAL unit is a parameter set is determined based on “nal_unit_type” and “Name of nal_unit_type” shown in FIG.
- When the NAL unit type is any one of the parameter sets VPS, SPS, and PPS (Yes in SG102a′), the process proceeds to step SG107. In other cases (No in SG102a′), the process proceeds to step SG105a.
- The bitstream extraction unit 17′ to which the above changes are added is referred to as the bitstream extraction unit 17′a according to Modification 1a.
- (Modification 2 of the bitstream extraction unit 17) In the bitstream extraction unit 17 according to the first embodiment, as illustrated in FIG. 27, if the VPS layer identifier is not included in the target set TargetSet, the VPS layer identifier is updated (rewritten) to the lowest layer identifier in the target set TargetSet, so that the VPS is always included in the target set TargetSet.
- However, the bitstream extraction unit 17 may omit discarding the VCL and non-VCL NAL units of the dependent layers (direct reference layers and indirect reference layers) on which each layer in the target set TargetSet depends but which are not included in the layer ID list LayerIdListTarget constituting the target set TargetSet, and may include those VCL and non-VCL NAL units in the bit stream of the target set after extraction.
- Hereinafter, the operation of the bitstream extraction unit 17″ according to Modification 2 will be described.
- Operations common to the bitstream extraction unit 17 according to Embodiment 1 are denoted by the same reference signs, and description thereof is omitted.
- It is assumed that the bitstream extraction unit 17″ has the same function as the VPS decoding means in the parameter set decoding unit 12 in order to derive the dependent layers from the VPS encoding parameters.
- FIG. 29 is a flowchart showing bit stream extraction processing in units of access units in the bit stream extraction unit 17 ′′.
- Whether or not the NAL unit type (nal_unit_type) of the target NAL unit is a parameter set is determined based on “nal_unit_type” and “Name of nal_unit_type” shown in FIG.
- The bitstream extraction unit 17″ decodes the target NAL unit, which is a VPS, and derives the dependent layers (dependent layer set) of each layer included in the target set TargetSet. Specifically, in accordance with the procedures described in (Derivation of reference layer ID list and direct reference layer IDX list) and (Derivation of indirect dependency flag and dependency flag), the inter-layer identifier dependency flag LIdDependencyFlag[][] indicating whether or not the j-th layer with layer identifier nuhLId#j is a direct reference layer or an indirect reference layer of the i-th layer with layer identifier nuhLId#i is derived.
- Note that, instead of the inter-layer identifier dependency flag, the dependency flag DependencyFlag[i][j] indicating whether or not the j-th layer (layer identifier nuhLId#j) is a direct reference layer or an indirect reference layer of the i-th layer (layer identifier nuhLId#i) may be derived.
- Whether the layer identifier and temporal identifier of the target NAL unit are included in the target set TargetSet, or whether the target NAL unit belongs to a dependent layer of a layer included in the target set TargetSet, is determined based on the layer ID list LayerIdListTarget of the layers constituting the target layer set LayerSetTarget, the target highest temporal identifier, and the dependency flag (inter-layer identifier dependency flag LIdDependencyFlag[][]). More specifically, the following conditions (1) to (3) are evaluated. When the target NAL unit is not to be kept, that is, when condition (2) is false, or when both condition (1) and condition (3) are false (Yes in SG105b), the process proceeds to step SG106b. In other cases (No in SG105b), the process proceeds to step SG107.
- the layer ID list LayerIdListTarget of the layers constituting the target layer set LayerSetTarget has the same value as the layer identifier of the target NAL unit”, it is determined to be true, and otherwise (the target layer set LayerSetTarget is set to The layer ID list LayerIdListTarget of the layer to be configured does not have the same value as the layer identifier of the target NAL unit), and is determined to be false.
- If “the temporal identifier of the target NAL unit is less than or equal to the target highest temporal identifier HighestTidTarget”, it is determined to be true; otherwise (the temporal identifier of the target NAL unit is greater than the target highest temporal identifier HighestTidTarget), it is determined to be false.
- If, for some layer k in the target set, the inter-layer identifier dependency flag LIdDependencyFlag[LayerIdListTarget[k]][nuhLayerId] is 1, it is determined to be true; if the inter-layer identifier dependency flag LIdDependencyFlag[LayerIdListTarget[k]][nuhLayerId] has a value of 0 for all k, it is determined to be false.
- Note that the above determination may be based on a variable DepFlag obtained by accumulating, over all layers k in the target set, the logical OR of the inter-layer identifier dependency flags, i.e. DepFlag |= LIdDependencyFlag[LayerIdListTarget[k]][nuhLayerId].
- (SG106b) Discard the target NAL unit. That is, since the target NAL unit is included neither in the target set TargetSet nor in a dependent layer of the target set TargetSet, the bitstream extraction unit 17″ removes the target NAL unit from the input hierarchically encoded data DATA.
- The operation of the bitstream extraction unit 17″ according to Modification 2 has been described above. However, the operation is not limited to the above steps, and the steps may be changed within a feasible range.
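The keep/discard decision of Modification 2 can be sketched as follows (a minimal sketch; the NAL unit is a hypothetical dict with 'layer_id' and 'tid' fields, and `lid_dep` is the derived inter-layer identifier dependency flag represented as a dict keyed by (nuhLId#i, nuhLId#j) pairs):

```python
def keep_nal_unit(nal, layer_id_list, highest_tid, lid_dep):
    """A NAL unit is kept when its temporal id does not exceed the target
    highest temporal identifier, and its layer is either in the target
    set or a dependent (direct or indirect reference) layer of some
    layer in the target set."""
    if nal["tid"] > highest_tid:
        return False
    if nal["layer_id"] in layer_id_list:
        return True
    # Condition (3): dependency flag is 1 for some layer in the target set.
    return any(lid_dep.get((lid, nal["layer_id"]), 0) == 1
               for lid in layer_id_list)
```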
- The bitstream extraction unit 17″ according to Modification 2 described above performs bitstream extraction processing based on the layer ID list LayerIdListTarget of the layers constituting the target layer set LayerSetTarget supplied from the outside, the target highest temporal identifier HighestTidTarget, and the dependent layer information derived from the VPS (the dependency flag LIdDependencyFlag[][] or DependencyFlag[][]): from the input hierarchically encoded data DATA, NAL units included neither in the target set TargetSet determined by the target highest temporal identifier HighestTidTarget and the layer ID list LayerIdListTarget of the target layer set LayerSetTarget nor in the dependent layers of the target set TargetSet are removed (discarded), and target layer set encoded data DATA#T (BitstreamToDecode) composed of the NAL units included in the target set TargetSet and the dependent layers of the target set TargetSet is extracted and output.
- That is, the bitstream extraction unit 17″ does not discard the NAL units included in the dependent layers of the target set TargetSet, and includes them in the bit stream of the target set TargetSet.
- Note that the operation of the bitstream extraction unit 17″ is based on the premise that “the AU constituting the input hierarchically encoded data DATA includes at most one VPS having the lowest layer identifier in the AU”, but is not limited to this.
- a VPS having a layer identifier other than the lowest layer identifier in the AU may be included in the AU.
- In this case, the bitstream extraction unit 17″ may take, as the VPS from which the layer dependency information is derived in steps SG102b to SG10B, the VPS having the lowest layer identifier among the layer identifiers not included in the target set TargetSet; the layer dependency information is derived from that VPS, and the other VPSs not included in the target set TargetSet are discarded.
- According to the bitstream extraction unit 17″ described above, it is possible to prevent the problem that the VCL and non-VCL NAL units of the dependent layers (direct reference layers or indirect reference layers) referred to from the layers in the layer set are not included in the layer set. That is, it is possible to prevent generation of a layer that cannot be decoded on a bitstream that includes only a layer set of a subset of the layer set, which is generated from a bitstream of a certain layer set by bitstream extraction processing.
- (Bitstream restriction according to Modification 2 of the bitstream extraction unit 17) In order to perform the bitstream extraction described for the bitstream extraction unit 17′′ of Modification 2, the bitstream must satisfy at least the following condition CZ1 as bitstream conformance.
- CZ1: "The target set TargetSet (layer set) includes the dependent layers on which each layer in the target set TargetSet depends (which each layer refers to)."
- The bitstream restriction CZ1 means that "a dependent layer referred to by any layer in a layer set is included in the same layer set". It therefore solves the problem that, for a layer in a layer set B that is a subset of a layer set A, the VCL and non-VCL NAL units related to its dependent layers are included in the layer set A but not in the layer set B. That is, it is possible to prevent the occurrence of an undecodable layer in a bitstream containing only a layer set that is a subset of a given layer set, generated from the bitstream of that layer set by the bitstream extraction process.
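- The conformance condition CZ1 amounts to requiring that a layer set be closed under the dependency relation. A minimal sketch of such a check, assuming `dependent_layers` maps each layer id to the set of layer ids it depends on (directly or indirectly), as derived from the VPS inter-layer dependency information:

```python
def satisfies_cz1(layer_set, dependent_layers):
    """Check conformance condition CZ1: the layer set must include every
    dependent (reference) layer of each layer it contains."""
    layer_set = set(layer_set)
    return all(dependent_layers.get(lid, set()) <= layer_set for lid in layer_set)
```

For example, if layer 1 depends on layer 0, the layer set {0, 1} satisfies CZ1 but the layer set {1} does not, since extracting {1} alone would leave layer 1 undecodable.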
- Note that the discarding of NAL units may be partly omitted so that the VCL and non-VCL NAL units are included in the bit stream of the target set TargetSet after extraction. In this case, as bitstream conformance, the conformance conditions CA1 and CA2 regarding the parameter sets (SPS, PPS) must be satisfied in addition to the conformance condition CZ1.
- FIG. 30 is a flowchart showing the bit stream extraction process in units of access units in the bit stream extraction unit 17′′′.
- (SG10C) Whether or not the NAL unit type (nal_unit_type) of the target NAL unit is a parameter set is determined based on the "nal_unit_type" and "Name of nal_unit_type" shown in FIG. If the NAL unit type is a parameter set (Yes in SG10C), the process proceeds to step SG0107; otherwise (No in SG10C), the process proceeds to step SG105b.
- (SG105b) Whether or not the layer identifier and the temporal identifier of the target NAL unit are included in the target set TargetSet, or belong to a dependent layer of a layer included in the target set TargetSet, is determined based on the layer ID list LayerIdListTarget of the layers constituting the target layer set LayerSetTarget, the target highest temporal identifier, and the dependency flag (the inter-layer-identifier dependency flag LIdDependencyFlag[][]). Since this process is the same as step SG105b in FIG. 29, its description is omitted.
- (SG106b) The target NAL unit is discarded. That is, since the target NAL unit is included neither in the target set TargetSet nor in a dependent layer of any layer of the target set TargetSet, the bitstream extraction unit 17′′′ removes the target NAL unit from the input hierarchically encoded data DATA.
- The operation of the bitstream extraction unit 17′′′ according to Modification 3 has been described above. However, the operation is not limited to the above steps, and the steps may be changed within a practicable range.
- According to the bit stream extraction unit 17′′′ of Modification 3 described above, bitstream extraction is performed based on the layer ID list LayerIdListTarget and the target highest temporal identifier HighestTidTarget supplied from the outside, and on the dependent layer information derived from the VPS of the layers constituting the target layer set LayerSetTarget. From the input hierarchically encoded data DATA, NAL units whose layer identifier is neither included in the target set TargetSet nor that of a dependent layer of a layer in the target set TargetSet are removed (discarded), and the target layer set encoded data DATA#T (BitstreamToDecode), composed of the NAL units having a layer identifier included in the target set TargetSet or of a dependent layer of a layer in the target set TargetSet, is extracted and output.
- Note that the operation of the bitstream extraction unit 17′′′ has been described on the premise that "each AU constituting the input hierarchically encoded data DATA includes at most one VPS, having the lowest layer identifier in the AU", but the present invention is not limited to this. For example, a VPS having a layer identifier other than the lowest layer identifier in the AU may be included in the AU.
- In that case, the bitstream extraction unit 17′′′ may, in steps SG102b to SG10B, take as the VPS from which the layer dependency information is derived the VPS having the lowest layer identifier among the layer identifiers not included in the target set TargetSet, derive the layer dependency information from that VPS, and discard (or ignore) the other VPSs not included in the target set TargetSet.
- This makes it possible to prevent the problem that the VCL and non-VCL NAL units related to the dependent layers (direct reference layers or indirect reference layers) are not included in the layer set.
- FIG. 22 is a functional block diagram showing a schematic configuration of the hierarchical video encoding device 2.
- The hierarchical video encoding device 2 encodes the input image PIN#T (picture) of each layer included in the layer set to be encoded (the target layer set), and generates the hierarchically encoded data DATA of the target layer set. That is, the video encoding device 2 encodes the pictures of each layer in ascending order of layer ID, from the lowest layer ID to the highest layer ID included in the target layer set, and generates their encoded data.
- Further, the hierarchical video encoding device 2 generates the hierarchically encoded data DATA of the target layer set so as to satisfy, as bitstream conformance, the aforementioned condition CX1 (CX1′), or CX2 (CX2′), or CY1, or CY2, or (CY2 and CY3 and CY4), or CZ1, or (CZ1, CA1, and CA2 and "the layer identifier nuh_layer_id of the VPS is 0").
- With hierarchically encoded data DATA that satisfies the bitstream conformance, the hierarchical decoding device 1 can prevent the occurrence of an undecodable layer in a bitstream containing only a layer set that is a subset of a given layer set, generated from the bitstream of that layer set by the bitstream extraction process.
- the hierarchical video encoding device 2 includes a target layer set picture encoding unit 20 and a NAL multiplexing unit 21. Further, the target layer set picture coding unit 20 includes a parameter set coding unit 22, a picture coding unit 24, a decoded picture management unit 15, and a coding parameter determination unit 26.
- The decoded picture management unit 15 is the same component as the decoded picture management unit 15 included in the hierarchical video decoding device 1 already described. However, since the decoded picture management unit 15 included in the hierarchical video encoding device 2 does not need to output the pictures recorded in its internal DPB as output pictures, that output can be omitted. The description given for the decoded picture management unit 15 of the hierarchical video decoding device 1 also applies to the decoded picture management unit 15 of the hierarchical video encoding device 2, with "decoding" read as "encoding".
- The NAL multiplexing unit 21 generates the NAL-multiplexed hierarchical video encoded data DATA#T by storing the VCL and non-VCL data of each layer of the input target layer set in NAL units, and outputs it to the outside. More specifically, the NAL multiplexing unit 21 stores (encodes) in NAL units the non-VCL encoded data and the VCL encoded data supplied from the target layer set picture encoding unit 20, together with the NAL unit type, the layer identifier, and the temporal identifier corresponding to each non-VCL and VCL, and generates the NAL-multiplexed hierarchically encoded data DATA#T.
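- As a minimal sketch of how the NAL unit type, layer identifier, and temporal identifier are carried per NAL unit, the following packs the two-byte HEVC NAL unit header (the function name is ours; the field layout follows the HEVC NAL unit header syntax):

```python
def pack_nal_unit_header(nal_unit_type, layer_id, temporal_id):
    """Pack the two-byte HEVC NAL unit header.

    Layout (16 bits, per the HEVC NAL unit header syntax):
      forbidden_zero_bit(1) | nal_unit_type(6) |
      nuh_layer_id(6) | nuh_temporal_id_plus1(3)
    where nuh_temporal_id_plus1 = TemporalId + 1.
    """
    assert 0 <= nal_unit_type < 64 and 0 <= layer_id < 64 and 0 <= temporal_id < 7
    header = (nal_unit_type << 9) | (layer_id << 3) | (temporal_id + 1)
    return header.to_bytes(2, "big")
```

For example, a VPS NAL unit (nal_unit_type 32) with layer identifier 0 and temporal identifier 0 yields the header bytes 0x40 0x01.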
- the encoding parameter determination unit 26 selects one set from among a plurality of sets of encoding parameters.
- The encoding parameters are the various parameters related to each parameter set (VPS, SPS, PPS), the prediction parameters for encoding a picture, and the parameters to be encoded that are generated in relation to the prediction parameters.
- the encoding parameter determination unit 26 calculates a cost value indicating the amount of information and the encoding error for each of the plurality of sets of the encoding parameters.
- The cost value is, for example, the sum of the code amount and the value obtained by multiplying the square error by a coefficient λ.
- the code amount is an information amount of encoded data of each layer of the target layer set obtained by variable-length encoding the quantization error and the encoding parameter.
- the square error is the sum between pixels regarding the square value of the difference value between the input image PIN # T and the predicted image.
- The coefficient λ is a preset real number greater than zero.
- The encoding parameter determination unit 26 selects the set of encoding parameters that minimizes the calculated cost value, and supplies the selected set of encoding parameters to the parameter set encoding unit 22 and the picture encoding unit 24.
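- Following the document's definition of the cost value (the sum of the code amount and the square error multiplied by the coefficient λ), the selection performed by the encoding parameter determination unit 26 can be sketched as follows; the function name and the tuple layout of the candidates are our own illustrative assumptions:

```python
def select_encoding_parameters(candidates, lam):
    """Pick the candidate parameter set minimizing
    cost = code_amount + lam * squared_error.

    code_amount: information amount of the variable-length-encoded data;
    squared_error: pixel-wise sum of squared differences between the
    input image and the predicted image;
    lam: the coefficient lambda, a preset real number greater than zero.

    candidates: iterable of (params, code_amount, squared_error) tuples.
    """
    return min(candidates, key=lambda c: c[1] + lam * c[2])[0]
```

A larger λ weights the reconstruction error more heavily, so the selected set shifts toward candidates with a smaller square error at the cost of a larger code amount.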
- The parameter set output from the encoding parameter determination unit 26 can also be expressed as the set of the syntax values of the syntax related to the parameter sets (VPS, SPS, PPS) included in the encoded data and of the variables derived from those syntax values.
- The parameter set encoding unit 22 encodes the parameter sets (VPS, SPS, and PPS) used for encoding the input image, based on the encoding parameters of each parameter set input from the encoding parameter determination unit 26 and on the input image, and supplies each parameter set to the NAL multiplexing unit 21 as data to be stored in non-VCL NAL units.
- Encoding of a parameter set is performed based on a predetermined syntax table. That is, in accordance with the procedure defined by the syntax table, the syntax values of the syntax elements included in the syntax table are encoded to generate a bit string, which is output as encoded data.
- The parameter sets encoded by the parameter set encoding unit 22 include the inter-layer dependency information (the direct dependency flag, the layer dependency type bit length, and the layer dependency type) described for the parameter set decoding unit 12 included in the hierarchical video decoding device 1. The parameter set encoding unit 22 encodes the non-VCL dependency presence/absence flag as part of the layer dependency type.
- Note that when supplying non-VCL encoded data to the NAL multiplexing unit 21, the parameter set encoding unit 22 also assigns and outputs the NAL unit type, the layer identifier, and the temporal identifier corresponding to the non-VCL data.
- The parameter sets generated by the parameter set encoding unit 22 each include an identifier for identifying the parameter set, and an active parameter set identifier for identifying the parameter set (active parameter set) that is referred to for decoding a picture of each layer. Specifically, the video parameter set VPS includes a VPS identifier (vps_video_parameter_set_id) for identifying the VPS. The sequence parameter set SPS includes an SPS identifier (sps_seq_parameter_set_id) for identifying the SPS and an active VPS identifier (sps_video_parameter_set_id) for identifying the VPS that the SPS refers to. The picture parameter set PPS includes a PPS identifier (pps_pic_parameter_set_id) for identifying the PPS and an active SPS identifier (pps_seq_parameter_set_id) for identifying the SPS that the PPS or other syntax refers to.
- The picture encoding unit 24 encodes the part of the input image of each layer corresponding to the slices constituting a picture, based on the input image PIN#T of each layer, the parameter sets supplied from the encoding parameter determination unit 26, and the reference pictures recorded in the decoded picture management unit 15, generates the encoded data of that part, and supplies it to the NAL multiplexing unit 21 as data to be stored in VCL NAL units. A detailed description of the picture encoding unit 24 is given later. Note that when supplying VCL encoded data to the NAL multiplexing unit 21, the picture encoding unit 24 also assigns and outputs the NAL unit type, the layer identifier, and the temporal identifier corresponding to the VCL data.
- FIG. 23 is a functional block diagram showing a schematic configuration of the picture encoding unit 24.
- the picture encoding unit 24 includes a slice header encoding unit 241 and a CTU encoding unit 242.
- the slice header encoding unit 241 generates a slice header used for encoding the input image of each layer input in units of slices based on the input active parameter set.
- the generated slice header is output as part of the slice encoded data and is supplied to the CTU encoding unit 242 together with the input image.
- the slice header generated by the slice header encoding unit 241 includes an active PPS identifier that designates a picture parameter set PPS (active PPS) to be referred to in order to decode a picture of each layer.
- The CTU encoding unit 242 encodes the input image (the target slice portion) in units of CTUs based on the input active parameter sets and slice header, and generates and outputs the slice data and the decoded image (decoded picture) related to the target slice. More specifically, the CTU encoding unit 242 divides the input image of the target slice into CTBs of the CTB size included in the parameter set, and encodes the image corresponding to each CTB as one CTU. CTU encoding is performed by the prediction residual encoding unit 2421, the prediction image encoding unit 2422, and the CTU decoded image generation unit 2423.
- The prediction residual encoding unit 2421 outputs, as part of the slice data included in the slice encoded data, the quantized residual information (TT information) obtained by transforming and quantizing the difference image between the input image and the predicted image. Further, it restores the prediction residual by applying an inverse transform and inverse quantization to the quantized residual information, and outputs the restored prediction residual to the CTU decoded image generation unit 2423.
- The prediction image encoding unit 2422 generates a predicted image based on the prediction scheme and the prediction parameters of the target CTU included in the target slice, determined by the encoding parameter determination unit 26, and outputs it to the prediction residual encoding unit 2421 and the CTU decoded image generation unit 2423. The prediction scheme and prediction parameter information are variable-length encoded as prediction information (PT information) and output as part of the slice data included in the slice encoded data.
- The prediction schemes that can be selected by the prediction image encoding unit 2422 include at least inter-layer image prediction. When inter prediction or inter-layer image prediction is used, the corresponding reference picture is read from the decoded picture management unit 15.
- Since the CTU decoded image generation unit 2423 is the same component as the CTU decoded image generation unit 1423 included in the hierarchical video decoding device 1, its description is omitted. Note that the decoded image of the target CTU is supplied to the decoded picture management unit 15 and recorded in the internal DPB.
- FIG. 24 is a flowchart showing an encoding process in units of slices constituting a picture of the target layer i in the picture encoding unit 24.
- (SE101) The first slice flag (first_slice_segment_in_pic_flag) of the encoding target slice is encoded. That is, if the input image divided into slice units (hereinafter, the encoding target slice) is the first slice in encoding order (decoding order) (hereinafter, processing order) within the picture, the first slice flag is set to 1; if it is not the first slice, the first slice flag is set to 0. When the first slice flag is 1, the leading CTU address of the encoding target slice is set to 0, and the counter numCtu of the number of processed CTUs in the picture is set to 0. When the first slice flag is 0, the leading CTU address of the encoding target slice is set based on the slice address encoded in SE106 described later.
- (SE102) The active PPS identifier (slice_pic_parameter_set_id) that specifies the active PPS referred to when encoding the encoding target slice is encoded.
- (SE103) The active parameter sets determined by the encoding parameter determination unit 26 are fetched. That is, the PPS having the same PPS identifier (pps_pic_parameter_set_id) as the active PPS identifier (slice_pic_parameter_set_id) referred to by the encoding target slice is set as the active PPS, and the encoding parameters of the active PPS are fetched (read) from the encoding parameter determination unit 26. Likewise, the SPS having the same SPS identifier (sps_seq_parameter_set_id) as the active SPS identifier (pps_seq_parameter_set_id) in the active PPS is set as the active SPS, and the encoding parameters of the active SPS are fetched from the encoding parameter determination unit 26. Further, the VPS having the same VPS identifier (vps_video_parameter_set_id) as the active VPS identifier (sps_video_parameter_set_id) in the active SPS is set as the active VPS, and the encoding parameters of the active VPS are fetched from the encoding parameter determination unit 26.
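- The activation chain of step SE103 (slice → PPS → SPS → VPS, matched by identifier) can be sketched as follows; the dictionary lookups stand in for the fetch from the encoding parameter determination unit 26, and the dict-based parameter-set representation is an illustrative assumption:

```python
def fetch_active_parameter_sets(slice_pic_parameter_set_id, pps_table, sps_table, vps_table):
    """Resolve the active PPS/SPS/VPS for a slice.

    pps_table, sps_table, vps_table map identifiers to parameter-set
    objects; each PPS carries pps_seq_parameter_set_id (the active SPS
    identifier) and each SPS carries sps_video_parameter_set_id (the
    active VPS identifier), as in the syntax described above.
    """
    active_pps = pps_table[slice_pic_parameter_set_id]                 # slice -> PPS
    active_sps = sps_table[active_pps["pps_seq_parameter_set_id"]]     # PPS -> SPS
    active_vps = vps_table[active_sps["sps_video_parameter_set_id"]]   # SPS -> VPS
    return active_pps, active_sps, active_vps
```

The same identifier chain is followed on the decoding side, which is why each parameter set must carry both its own identifier and the identifier of the parameter set it refers to.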
- (SE105) Whether or not the encoding target slice is the first slice in processing order within the picture is determined based on the first slice flag. If the first slice flag is 0 (Yes in SE105), the process proceeds to step SE106; otherwise (No in SE105), the process of step SE106 is skipped. Note that when the first slice flag is 1, the slice address of the encoding target slice is 0.
- (SE106) The slice address (slice_segment_address) of the encoding target slice is encoded. The slice address of the encoding target slice (the leading CTU address of the encoding target slice) can be set, for example, based on the counter numCtu of the number of processed CTUs in the picture. In that case, the slice address slice_segment_address is numCtu; that is, the leading CTU address of the encoding target slice is also numCtu. Note that the method for determining the slice address is not limited to this and can be changed within a practicable range. ... (omitted) ...
- (SE10A) The CTU encoding unit 242 encodes the input image (the encoding target slice) in units of CTUs based on the input active parameter sets and slice header, and outputs the encoded data of the CTU information (SYNSD01 in FIG. 18) as part of the slice data of the encoding target slice. The CTU encoding unit 242 also generates and outputs a CTU decoded image of the region corresponding to each CTU. Further, after the encoded data of each CTU, a slice end flag (end_of_slice_segment_flag) (SYNSD2 in FIG. 18) indicating whether the CTU is the end of the encoding target slice is encoded: the slice end flag is set to 1 if the CTU is the end of the encoding target slice, and to 0 otherwise. After encoding each CTU, 1 is added to the counter numCtu of the number of processed CTUs (numCtu++).
- (SE10B) Whether or not the CTU is the end of the encoding target slice is determined based on the slice end flag. If the slice end flag is 1 (Yes in SE10B), the process proceeds to step SE10C; otherwise, the process proceeds to step SE10A to encode the subsequent CTU.
- (SE10C) It is determined whether numCtu is equal to the total number of CTUs in the picture (PicSizeInCtbsY). If numCtu is equal to PicSizeInCtbsY (Yes in SE10C), the slice-unit encoding process for the encoding target picture is terminated; otherwise, the process proceeds to step SE101 to continue the slice-unit encoding process for the encoding target picture.
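- The control flow of steps SE101 through SE10C can be sketched as follows; `encode_picture_slices` and the plain-dict slice records are hypothetical stand-ins that mirror only the flag and counter handling described above, not the actual entropy coding:

```python
def encode_picture_slices(ctu_slices, pic_size_in_ctbs_y):
    """Mirror of the SE101..SE10C flow: for each slice, set the first-slice
    flag, the slice address (numCtu, per SE106), and per-CTU records followed
    by end_of_slice_segment_flag; numCtu counts processed CTUs in the picture.

    ctu_slices: list of lists, each inner list holding the CTUs of one slice.
    """
    num_ctu = 0
    encoded = []
    for ctus in ctu_slices:
        first_slice_flag = 1 if num_ctu == 0 else 0          # SE101
        slice_address = 0 if first_slice_flag else num_ctu   # SE105/SE106
        ctu_records = []
        for i, ctu in enumerate(ctus):                       # SE10A
            end_of_slice = 1 if i == len(ctus) - 1 else 0    # end_of_slice_segment_flag
            ctu_records.append((ctu, end_of_slice))
            num_ctu += 1                                     # numCtu++
        encoded.append({"first_slice_segment_in_pic_flag": first_slice_flag,
                        "slice_segment_address": slice_address,
                        "ctus": ctu_records})
    assert num_ctu == pic_size_in_ctbs_y                     # SE10C
    return encoded
```

For a picture of 3 CTUs split into slices [0, 1] and [2], the first slice gets first_slice_segment_in_pic_flag = 1 and address 0, while the second gets flag 0 and slice_segment_address = 2.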
- As described above, the hierarchical video encoding device 2 generates the hierarchically encoded data DATA of the target layer set so as to satisfy, as bitstream conformance, the aforementioned CX1 (CX1′), or CX2 (CX2′), or CY1, or CY2, or (CY2 and CY3 and CY4), or CZ1, or (CZ1, CA1, and CA2 and "the layer identifier nuh_layer_id of the VPS is 0"), in order to guarantee that the generated bitstream can be decoded by the hierarchical video decoding device 1 (and its modifications).
- Further, the hierarchical video encoding device 2 can share (reuse) the parameters used for encoding a reference layer as the parameter sets (VPS, SPS, PPS) used for encoding a target layer, so the parameter sets can be encoded with a smaller code amount.
- the above-described hierarchical video encoding device 2 and hierarchical video decoding device 1 can be used by being mounted on various devices that perform transmission, reception, recording, and reproduction of moving images.
- the moving image may be a natural moving image captured by a camera or the like, or may be an artificial moving image (including CG and GUI) generated by a computer or the like.
- FIG. 25A is a block diagram illustrating a configuration of a transmission device PROD_A in which the hierarchical video encoding device 2 is mounted.
- The transmission device PROD_A includes an encoding unit PROD_A1 that obtains encoded data by encoding a moving image, a modulation unit PROD_A2 that obtains a modulated signal by modulating a carrier wave with the encoded data obtained by the encoding unit PROD_A1, and a transmission unit PROD_A3 that transmits the modulated signal obtained by the modulation unit PROD_A2.
- the hierarchical moving image encoding apparatus 2 described above is used as the encoding unit PROD_A1.
- The transmission device PROD_A may further include, as supply sources of the moving image input to the encoding unit PROD_A1, a camera PROD_A4 that captures a moving image, a recording medium PROD_A5 on which the moving image is recorded, an input terminal PROD_A6 for inputting the moving image from the outside, and an image processing unit A7 that generates or processes images.
- FIG. 25A illustrates a configuration in which the transmission device PROD_A includes all of these, but some of them may be omitted.
- The recording medium PROD_A5 may record a non-encoded moving image, or may record a moving image encoded by a recording encoding scheme different from the transmission encoding scheme. In the latter case, a decoding unit (not shown) that decodes the encoded data read from the recording medium PROD_A5 in accordance with the recording encoding scheme may be interposed between the recording medium PROD_A5 and the encoding unit PROD_A1.
- FIG. 25B is a block diagram illustrating a configuration of a receiving device PROD_B in which the hierarchical video decoding device 1 is mounted.
- The receiving device PROD_B includes a receiving unit PROD_B1 that receives a modulated signal, a demodulation unit PROD_B2 that obtains encoded data by demodulating the modulated signal received by the receiving unit PROD_B1, and a decoding unit PROD_B3 that obtains a moving image by decoding the encoded data obtained by the demodulation unit PROD_B2.
- the above-described hierarchical video decoding device 1 is used as the decoding unit PROD_B3.
- The receiving device PROD_B may further include, as supply destinations of the moving image output by the decoding unit PROD_B3, a display PROD_B4 that displays the moving image, a recording medium PROD_B5 for recording the moving image, and an output terminal PROD_B6 for outputting the moving image to the outside.
- FIG. 25B illustrates a configuration in which the receiving device PROD_B includes all of these, but some of them may be omitted.
- The recording medium PROD_B5 may record a non-encoded moving image, or may record a moving image encoded by a recording encoding scheme different from the transmission encoding scheme. In the latter case, an encoding unit (not shown) that encodes the moving image acquired from the decoding unit PROD_B3 in accordance with the recording encoding scheme may be interposed between the decoding unit PROD_B3 and the recording medium PROD_B5.
- The transmission medium for transmitting the modulated signal may be wireless or wired. The transmission mode for transmitting the modulated signal may be broadcasting (here, a transmission mode in which the transmission destination is not specified in advance) or communication (here, a transmission mode in which the transmission destination is specified in advance). That is, the transmission of the modulated signal may be realized by any of wireless broadcasting, wired broadcasting, wireless communication, and wired communication.
- For example, a broadcasting station (broadcasting equipment or the like) / receiving station (a television receiver or the like) of terrestrial digital broadcasting is an example of a transmitting device PROD_A / receiving device PROD_B that transmits and receives a modulated signal by wireless broadcasting.
- A broadcasting station (broadcasting equipment or the like) / receiving station (a television receiver or the like) of cable television broadcasting is an example of a transmitting device PROD_A / receiving device PROD_B that transmits and receives a modulated signal by wired broadcasting.
- A server (a workstation or the like) / client (a television receiver, a personal computer, a smartphone, or the like) of a VOD (Video On Demand) service or a video sharing service using the Internet is an example of a transmitting device PROD_A / receiving device PROD_B that transmits and receives a modulated signal by communication (usually, either a wireless or a wired transmission medium is used in a LAN, and a wired transmission medium is used in a WAN).
- the personal computer includes a desktop PC, a laptop PC, and a tablet PC.
- the smartphone also includes a multi-function mobile phone terminal.
- In addition to a function of decoding the encoded data acquired from the server and displaying it on a display, the client of a video sharing service has a function of encoding a moving image captured by a camera and uploading it to the server. That is, the client of the video sharing service functions as both the transmission device PROD_A and the reception device PROD_B.
- FIG. 26A is a block diagram illustrating a configuration of a recording apparatus PROD_C in which the above-described hierarchical video encoding apparatus 2 is mounted.
- The recording device PROD_C includes an encoding unit PROD_C1 that obtains encoded data by encoding a moving image, and a writing unit that writes the encoded data obtained by the encoding unit PROD_C1 on the recording medium PROD_M.
- the hierarchical moving image encoding device 2 described above is used as the encoding unit PROD_C1.
- The recording medium PROD_M may be (1) of a type built into the recording device PROD_C, such as an HDD (Hard Disk Drive) or SSD (Solid State Drive), (2) of a type connected to the recording device PROD_C, such as an SD memory card or USB (Universal Serial Bus) flash memory, or (3) loaded into a drive device (not shown) built into the recording device PROD_C, such as a DVD (Digital Versatile Disc) or BD (Blu-ray Disc: registered trademark).
- The recording device PROD_C may further include, as supply sources of the moving image input to the encoding unit PROD_C1, a camera PROD_C3 that captures a moving image, an input terminal PROD_C4 for inputting a moving image from the outside, a receiving unit PROD_C5 for receiving a moving image, and an image processing unit C6 that generates or processes images. FIG. 26A illustrates a configuration in which the recording device PROD_C includes all of these, but some of them may be omitted.
- The receiving unit PROD_C5 may receive a non-encoded moving image, or may receive encoded data encoded by a transmission encoding scheme different from the recording encoding scheme. In the latter case, a transmission decoding unit (not shown) that decodes the encoded data encoded by the transmission encoding scheme may be interposed between the receiving unit PROD_C5 and the encoding unit PROD_C1.
- Examples of such a recording device PROD_C include a DVD recorder, a BD recorder, and an HDD (Hard Disk Drive) recorder (in these cases, the input terminal PROD_C4 or the receiving unit PROD_C5 is the main supply source of moving images). A camcorder (in this case, the camera PROD_C3 is the main supply source of moving images), a personal computer (in this case, the receiving unit PROD_C5 or the image processing unit C6 is the main supply source of moving images), and a smartphone (in this case, the camera PROD_C3 or the receiving unit PROD_C5 is the main supply source of moving images) are also examples of such a recording device PROD_C.
- FIG. 26B is a block diagram showing a configuration of a playback device PROD_D in which the above-described hierarchical video decoding device 1 is mounted.
- The playback device PROD_D includes a reading unit PROD_D1 that reads the encoded data written on the recording medium PROD_M, and a decoding unit PROD_D2 that obtains a moving image by decoding the encoded data read by the reading unit PROD_D1.
- the hierarchical moving image decoding apparatus 1 described above is used as the decoding unit PROD_D2.
- The recording medium PROD_M may be (1) of a type built into the playback device PROD_D, such as an HDD or SSD, (2) of a type connected to the playback device PROD_D, such as an SD memory card or USB flash memory, or (3) loaded into a drive device (not shown) built into the playback device PROD_D, such as a DVD or BD.
- The playback device PROD_D may further include, as supply destinations of the moving image output by the decoding unit PROD_D2, a display PROD_D3 that displays the moving image, an output terminal PROD_D4 for outputting the moving image to the outside, and a transmission unit PROD_D5 that transmits the moving image. FIG. 26B illustrates a configuration in which the playback device PROD_D includes all of these, but some of them may be omitted.
- The transmission unit PROD_D5 may transmit a non-encoded moving image, or may transmit encoded data encoded by a transmission encoding scheme different from the recording encoding scheme. In the latter case, an encoding unit (not shown) that encodes the moving image by the transmission encoding scheme may be interposed between the decoding unit PROD_D2 and the transmission unit PROD_D5.
- Examples of such a playback device PROD_D include a DVD player, a BD player, and an HDD player (in these cases, the output terminal PROD_D4 to which a television receiver or the like is connected is the main supply destination of moving images). A television receiver (in this case, the display PROD_D3 is the main supply destination of moving images), digital signage (also referred to as an electronic signboard or electronic bulletin board; in this case, the display PROD_D3 or the transmission unit PROD_D5 is the main supply destination of moving images), a desktop PC (in this case, the output terminal PROD_D4 or the transmission unit PROD_D5 is the main supply destination of moving images), a laptop or tablet PC (in this case, the display PROD_D3 or the transmission unit PROD_D5 is the main supply destination of moving images), and a smartphone (in this case, the display PROD_D3 or the transmission unit PROD_D5 is the main supply destination of moving images) are also examples of such a playback device PROD_D.
- Each block of the hierarchical video decoding device 1 and the hierarchical video encoding device 2 may be realized in hardware by a logic circuit formed on an integrated circuit (IC chip), or may be realized in software using a CPU (Central Processing Unit).
- In the latter case, each of the devices includes a CPU that executes the instructions of a control program realizing each function, a ROM (Read Only Memory) that stores the program, a RAM (Random Access Memory) into which the program is expanded, and a storage device (recording medium) such as a memory that stores the program and various data.
- An object of the present invention can also be achieved by supplying to each of the above devices a recording medium on which the program code (an executable program, an intermediate code program, or a source program) of the control program for each of the above devices, which is software realizing the above-described functions, is recorded in a computer-readable manner, and by having the computer (or a CPU or an MPU (Micro Processing Unit)) read and execute the program code recorded on the recording medium.
- Examples of the recording medium include tapes such as magnetic tapes and cassette tapes; disks including magnetic disks such as floppy (registered trademark) disks and hard disks, and optical discs such as CD-ROM (Compact Disc Read-Only Memory), MO (Magneto-Optical disc), MD (Mini Disc), DVD (Digital Versatile Disc), and CD-R (CD Recordable); cards such as IC cards (including memory cards) and optical cards; semiconductor memories such as mask ROM, EPROM (Erasable Programmable Read-Only Memory), EEPROM (registered trademark) (Electrically Erasable Programmable Read-Only Memory), and flash ROM; and logic circuits such as PLDs (Programmable Logic Devices) and FPGAs (Field Programmable Gate Arrays).
- each of the above devices may be configured to be connectable to a communication network, and the program code may be supplied via the communication network.
- the communication network is not particularly limited as long as it can transmit the program code.
- For example, the Internet, an intranet, an extranet, a LAN (Local Area Network), ISDN (Integrated Services Digital Network), a VAN (Value-Added Network), a CATV (Community Antenna Television) communication network, a virtual private network (Virtual Private Network), a telephone line network, a mobile communication network, a satellite communication network, and the like can be used.
- the transmission medium constituting the communication network may be any medium that can transmit the program code, and is not limited to a specific configuration or type.
- For example, wired media such as IEEE (Institute of Electrical and Electronics Engineers) 1394, USB, power line carrier, cable TV lines, telephone lines, and ADSL (Asymmetric Digital Subscriber Line) lines, as well as wireless media such as infrared communication such as IrDA (Infrared Data Association) or remote control, Bluetooth (registered trademark), IEEE 802.11 wireless, HDR (High Data Rate), NFC (Near Field Communication), DLNA (registered trademark) (Digital Living Network Alliance), mobile phone networks, satellite links, and terrestrial digital networks, can be used.
- The present invention can also be realized in the form of a computer data signal embedded in a carrier wave, in which the program code is embodied by electronic transmission.
- An image decoding apparatus according to the present invention is an image decoding apparatus that decodes input encoded image data, comprising: an encoded image data extraction unit that extracts, based on a layer ID list indicating a decoding target layer set composed of one or more layers, the encoded image data related to the decoding target layer set from the input encoded image data; and a picture decoding unit that decodes pictures of the decoding target layer set from the extracted encoded image data, wherein the encoded image data extracted by the encoded image data extraction unit does not include a non-VCL NAL unit having a layer identifier that is not equal to 0 and is not included in the layer ID list.
- The image decoding apparatus according to the present invention is the image decoding apparatus according to aspect 1 above, wherein the temporal ID of each NAL unit included in the encoded image data is equal to or less than the value of the highest temporal ID of the decoding target layer set.
- The image decoding apparatus according to the present invention is characterized in that, in aspect 1 above, the non-VCL NAL unit is a NAL unit containing a parameter set.
- The image decoding apparatus according to the present invention is characterized in that, in aspect 2 above, the parameter set includes a video parameter set.
- The image decoding apparatus according to the present invention is characterized in that, in aspect 3 above, the parameter set includes a sequence parameter set.
- The image decoding apparatus according to the present invention is characterized in that, in aspect 3 above, the parameter set includes a picture parameter set.
- An image decoding method according to the present invention is an image decoding method for decoding input encoded image data, comprising: an encoded image data extraction step of extracting, based on a layer ID list indicating a decoding target layer set composed of one or more layers, the encoded image data related to the decoding target layer set from the input encoded image data; and a picture decoding step of decoding pictures of the decoding target layer set from the extracted encoded image data, wherein the encoded image data extracted in the encoded image data extraction step does not include a non-VCL NAL unit having a layer identifier that is not equal to 0 and is not included in the layer ID list.
- An image decoding apparatus according to the present invention is an image decoding apparatus including an encoded image data extraction unit that extracts encoded image data to be decoded from input encoded image data based on the layer ID list of a target layer set. The encoded image data extraction unit further includes a layer identifier update unit that updates the layer identifier of a non-video-coding-layer NAL unit in the input encoded image data having a layer identifier smaller than the lowest layer identifier in the layer ID list of the target layer set to that lowest layer identifier. The encoded image data extraction unit then discards, from the encoded image data including the non-video-coding-layer NAL units whose layer identifiers have been updated by the layer identifier update unit, the NAL units having layer identifiers not included in the layer ID list of the target layer set, and generates the encoded image data to be decoded.
- This makes it possible to prevent the problem that the NAL units of the non-video coding layer are not included in the layer set in the bitstream after bitstream extraction. That is, it is possible to prevent a layer that cannot be decoded from arising in a bitstream that is generated from the bitstream of a certain layer set by the bitstream extraction process and contains only a subset of that layer set.
- An image decoding apparatus according to the present invention is an image decoding apparatus including an encoded image data extraction unit that extracts encoded image data to be decoded from input encoded image data based on the layer ID list of a target layer set. The encoded image data extraction unit discards, from the input encoded image data, the NAL units having layer identifiers not included in the layer ID list of the target layer set, except for parameter set NAL units whose layer identifier is 0, and generates the encoded image data to be decoded.
- This makes it possible to prevent the problem that a parameter set NAL unit whose layer identifier is 0 is not included in the layer set in the bitstream after bitstream extraction. That is, it is possible to prevent a layer that cannot be decoded from arising in a bitstream that is generated from the bitstream of a certain layer set by the bitstream extraction process and contains only a subset of that layer set.
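The extraction rule of the aspect above can be sketched as follows; this is a minimal illustration in Python, not part of the specification text, where each NAL unit is modeled as a (type, layer_id) pair and the symbolic type names in `PARAMETER_SET_TYPES` are hypothetical stand-ins for the numeric HEVC NAL unit type codes:

```python
# Sketch: discard NAL units whose layer identifier is not in the target
# layer ID list, EXCEPT parameter-set NAL units with layer identifier 0,
# which are always kept so extracted sub-bitstreams remain decodable.
PARAMETER_SET_TYPES = {"VPS", "SPS", "PPS"}  # hypothetical symbolic names

def extract(nal_units, layer_id_list):
    kept = []
    for nal_type, layer_id in nal_units:
        keep_param_set = nal_type in PARAMETER_SET_TYPES and layer_id == 0
        if layer_id in layer_id_list or keep_param_set:
            kept.append((nal_type, layer_id))
    return kept

bitstream = [("VPS", 0), ("SPS", 0), ("VCL", 0), ("VCL", 1), ("VCL", 2)]
# Target layer set {1, 2}: layer-0 parameter sets survive, layer-0 VCL does not.
print(extract(bitstream, [1, 2]))
```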
- An image decoding apparatus according to the present invention is an image decoding apparatus including an encoded image data extraction unit that extracts encoded image data to be decoded from input encoded image data based on the layer ID list of a target layer set. The encoded image data extraction unit further includes a dependent layer information deriving unit that derives, in the input encoded image data, the dependent layer information on which each layer included in the target layer set depends. The encoded image data extraction unit discards, from the input encoded image data, the NAL units having layer identifiers not included in the layer ID list of the target layer set, except for the NAL units of the dependent layers derived by the dependent layer information deriving unit, and generates the encoded image data to be decoded.
- This makes it possible to keep in the layer set, in the bitstream after bitstream extraction, the NAL units of the dependent layers on which each layer included in the target layer set depends. That is, it is possible to prevent a layer that cannot be decoded from arising in a bitstream that is generated from the bitstream of a certain layer set by the bitstream extraction process and contains only a subset of that layer set.
- The image decoding apparatus according to the present invention is characterized in that, in aspect 8 above, the non-video-coding-layer NAL unit is a NAL unit containing a parameter set.
- The image decoding apparatus according to the present invention is characterized in that, in aspect 9 or 11 above, the parameter set is a video parameter set.
- The image decoding apparatus according to the present invention is characterized in that, in aspect 9 or 11 above, the parameter set is a sequence parameter set.
- The image decoding apparatus according to the present invention is characterized in that, in aspect 9 or 11 above, the parameter set is a picture parameter set.
- The image decoding apparatus according to the present invention is characterized in that, in aspect 10 above, the dependent layer is a direct reference layer or an indirect reference layer.
- This makes it possible to prevent the problem that the NAL units of the direct reference layers or indirect reference layers on which each layer included in the target layer set depends are not included in the layer set in the bitstream after bitstream extraction.
- Encoded image data according to the present invention is encoded image data that satisfies the conformance condition that the video parameter set referred to by a certain target layer has the same layer identifier as the VCL having the lowest layer identifier in the access unit including the target layer.
- This makes it possible to prevent the problem that the video parameter set is not included in the layer set in a sub-bitstream generated from the encoded image data by bitstream extraction. That is, it is possible to prevent a layer that cannot be decoded from arising in a bitstream that is generated from the bitstream of a certain layer set by the bitstream extraction process and contains only a subset of that layer set.
- Encoded image data according to aspect 17 of the present invention is encoded image data that satisfies the conformance condition that the value of the layer identifier of the video parameter set referred to by a certain target layer is 0.
- This makes it possible to prevent the problem that the video parameter set whose layer identifier is 0 is not included in the layer set in a sub-bitstream generated from the encoded image data by bitstream extraction. That is, it is possible to prevent a layer that cannot be decoded from arising in a bitstream that is generated from the bitstream of a certain layer set by the bitstream extraction process and contains only a subset of that layer set.
- Encoded image data according to aspect 18 of the present invention is encoded image data that, in aspect 17 above, further satisfies the conformance condition that the value of the layer identifier of the sequence parameter set referred to by a certain target layer is 0.
- This makes it possible to prevent the problem that the sequence parameter set whose layer identifier is 0 is not included in the layer set in a sub-bitstream generated from the encoded image data by bitstream extraction.
- Encoded image data according to aspect 19 of the present invention is encoded image data that satisfies the conformance condition that the value of the layer identifier of the picture parameter set referred to by a certain target layer is 0.
- This makes it possible to prevent the problem that the picture parameter set whose layer identifier is 0 is not included in the layer set in a sub-bitstream generated from the encoded image data by bitstream extraction.
- Encoded image data according to aspect 20 of the present invention is encoded image data that satisfies the conformance condition that a layer set includes the dependent layers referred to by each target layer in the layer set.
- This makes it possible to prevent the problem that the layer set does not include a dependent layer referred to by a certain target layer in the layer set. That is, it is possible to prevent a layer that cannot be decoded from arising in a bitstream that is generated from the bitstream of a certain layer set by the bitstream extraction process and contains only a subset of that layer set.
- An image encoding device according to the present invention is an image encoding device that generates encoded image data from input layer images corresponding to a target layer set based on the layer ID list of the target layer set, and generates encoded image data that satisfies the conformance condition that the layer identifier of a non-video coding layer referred to by a certain target layer in the target layer set is the same as the layer identifier of the VCL having the lowest layer identifier in an access unit of the target layer set.
- This makes it possible to prevent the problem that a non-video coding layer referred to by a certain target layer is not included in a sub-bitstream generated by bitstream extraction from the encoded image data generated by the image encoding device. That is, it is possible to prevent a layer that cannot be decoded from arising in a bitstream that is generated from the bitstream of a certain layer set by the bitstream extraction process and contains only a subset of that layer set.
- An image encoding device according to the present invention is an image encoding device that generates encoded image data from input layer images corresponding to a target layer set based on the layer ID list of the target layer set, and generates encoded image data that satisfies the conformance condition that the layer identifier of a non-video coding layer referred to by a certain target layer in the target layer set is the lowest layer identifier in the layer ID list of the target layer set.
- This makes it possible to prevent the problem that a non-video coding layer referred to by a certain target layer is not included in a sub-bitstream generated by bitstream extraction from the encoded image data generated by the image encoding device. That is, it is possible to prevent a layer that cannot be decoded from arising in a bitstream that is generated from the bitstream of a certain layer set by the bitstream extraction process and contains only a subset of that layer set.
- An image encoding device according to the present invention is an image encoding device that generates encoded image data from input layer images corresponding to a target layer set based on the layer ID list of the target layer set, and generates encoded image data that satisfies the conformance condition that the dependent layers on which each layer in the target layer set depends are included in the target layer set.
- This makes it possible to prevent the problem that the NAL units of a dependent layer referred to by a certain target layer are not included in a sub-bitstream generated by bitstream extraction from the encoded image data generated by the image encoding device. That is, it is possible to prevent a layer that cannot be decoded from arising in a bitstream that is generated from the bitstream of a certain layer set by the bitstream extraction process and contains only a subset of that layer set.
- The present invention can be suitably applied to a hierarchical video decoding device that decodes encoded data in which image data is hierarchically encoded, and to a hierarchical video encoding device that generates encoded data in which image data is hierarchically encoded. The present invention can also be suitably applied to the data structure of hierarchically encoded data that is generated by a hierarchical video encoding device and referred to by the hierarchical video decoding device.
Abstract
Description
The hierarchical video decoding device (image decoding device) 1 according to the present embodiment decodes encoded data hierarchically encoded by the hierarchical video encoding device (image encoding device) 2. Hierarchical coding is a coding scheme in which a moving image is encoded hierarchically, from low quality to high quality. Hierarchical coding is standardized in, for example, SVC and SHVC. The quality of a moving image here broadly means any factor that affects the subjective or objective appearance of the moving image; it includes, for example, "resolution", "frame rate", "image quality", and "pixel representation precision". Thus, in the following, a difference in moving image quality illustratively refers to a difference in, for example, "resolution", but is not limited thereto. For example, moving images quantized with different quantization steps (that is, moving images encoded with different coding noise) also differ in quality from one another.
Encoding and decoding of hierarchically encoded data will now be described with reference to FIG. 2. FIG. 2 schematically illustrates a case in which a moving image is hierarchically encoded/decoded in three layers: a lower layer L3, a middle layer L2, and an upper layer L1. That is, in the examples shown in FIGS. 2(a) and 2(b), the upper layer L1 is the highest layer and the lower layer L3 is the lowest layer among the three.
In the following, the use of HEVC and its extensions as the coding scheme for generating the encoded data of each layer is illustrated. However, the present invention is not limited thereto; the encoded data of each layer may be generated by a coding scheme such as MPEG-2 or H.264/AVC.
Prior to a detailed description of the image encoding device 2 and the image decoding device 1 according to the present embodiment, the data structure of the hierarchically encoded data DATA generated by the image encoding device 2 and decoded by the image decoding device 1 will be described.
FIG. 5 shows the hierarchical structure of data in the hierarchically encoded data DATA. The hierarchically encoded data DATA is encoded in units called NAL (Network Abstraction Layer) units.
A set of NAL units aggregated according to a specific classification rule is called an access unit. When the number of layers is 1, an access unit is the set of NAL units constituting one picture. When the number of layers is greater than 1, an access unit is the set of NAL units constituting the pictures of multiple layers at the same time instant. To indicate the boundary between access units, the encoded data may include a NAL unit called an access unit delimiter. The access unit delimiter is placed in the encoded data between the set of NAL units constituting one access unit and the set of NAL units constituting another access unit.
The sequence layer defines a set of data referred to by the image decoding device 1 in order to decode a sequence SEQ to be processed (hereinafter also referred to as a target sequence). As shown in FIG. 9(a), the sequence SEQ includes a video parameter set (Video Parameter Set), a sequence parameter set SPS (Sequence Parameter Set), a picture parameter set PPS (Picture Parameter Set), pictures PICT, and supplemental enhancement information SEI (Supplemental Enhancement Information). The value shown after # indicates the layer ID. FIG. 9 shows an example in which encoded data of #0 and #1, that is, of layer ID 0 and layer ID 1, exist, but the types and number of layers are not limited thereto.
The video parameter set VPS defines a set of coding parameters referred to by the image decoding device 1 in order to decode encoded data composed of one or more layers. For example, the VPS defines a VPS identifier (video_parameter_set_id) used to identify the VPS referred to by sequence parameter sets and other syntax elements described later, the number of layers included in the encoded data (vps_max_layers_minus1), the number of sub-layers included in a layer (vps_sub_layers_minus1), the number of layer sets (vps_num_layer_sets_minus1) each specifying a set of one or more layers represented in the encoded data, layer set configuration information (layer_id_included_flag[i][j]) specifying the set of layers constituting a layer set, and inter-layer dependency relationships (the direct dependency flag direct_dependency_flag[i][j] and the layer dependency type direct_dependency_type[i][j]). Multiple VPSs may exist in the encoded data. In that case, the VPS used for decoding is selected from the multiple candidates for each target sequence. The VPS used for decoding a specific sequence belonging to a certain layer is called the active VPS. To distinguish the VPSs applied to the base layer and enhancement layers, the VPS for the base layer (layer ID = 0) may be called the active VPS and the VPS for an enhancement layer (layer ID > 0) may be called the active layer VPS. In the following, unless otherwise noted, VPS means the active VPS for the target sequence belonging to a certain layer. Note that a VPS with layer ID = nuhLayerIdA, used for decoding the layer with layer ID = nuhLayerIdA, may also be used for decoding a layer with a larger layer ID (nuhLayerIdB, nuhLayerIdB > nuhLayerIdA).
Regarding the VPS, it is assumed that the bitstream constraint (also called bitstream conformance) "the layer ID of the VPS is equal to the lowest layer ID among the VCL NAL units included in the access unit, and its temporal ID is 0 (tId = 0)" holds between the decoder and the encoder. Here, bitstream conformance is a condition that must be satisfied by a bitstream decoded by a hierarchical video decoding device (here, the hierarchical video decoding device according to the embodiment of the present invention). Likewise, a bitstream generated by a hierarchical video encoding device (here, the hierarchical video encoding device according to the embodiment of the present invention) must also satisfy the above bitstream conformance, in order to guarantee that the bitstream can be decoded by the above hierarchical video decoding device. That is, as bitstream conformance, the bitstream must satisfy at least the following condition CX1.
The condition CX1 can also be restated as the following condition CX1'.
In other words, the above bitstream constraint CX1 (CX1') means that the VPS referred to by a target layer belongs to the same layer as the VCL that has the lowest layer identifier among the VCLs included in an access unit, which is a set of NAL units of the target layer set.
The above VPS constraint may instead be "the layer ID of the VPS is the lowest layer ID in the layer set, and its temporal ID is 0 (tId = 0)".
The condition CX2 can also be restated as the following condition CX2'.
In other words, the above bitstream constraint CX2 (CX2') means that the VPS referred to by a target layer is the VPS having the lowest layer identifier in the target layer set.
The sequence parameter set SPS defines a set of coding parameters referred to by the image decoding device 1 in order to decode the target sequence. For example, it specifies the active VPS identifier (sps_video_parameter_set_id) indicating the active VPS referred to by the target SPS, the SPS identifier (sps_seq_parameter_set_id) used to identify the SPS referred to by picture parameter sets and other syntax elements described later, and the width and height of the picture. Multiple SPSs may exist in the encoded data. In that case, the SPS used for decoding is selected from the multiple candidates for each target sequence. The SPS used for decoding a specific sequence belonging to a certain layer is also called the active SPS. To distinguish the SPSs applied to the base layer and enhancement layers, the SPS for the base layer may be called the active SPS and the SPS for an enhancement layer may be called the active layer SPS. In the following, unless otherwise noted, SPS means the active SPS used for decoding the target sequence belonging to a certain layer. Note that an SPS with layer ID = nuhLayerIdA, used for decoding a sequence belonging to the layer with layer ID = nuhLayerIdA, may also be used for decoding a sequence belonging to a layer with a larger layer ID (nuhLayerIdB, nuhLayerIdB > nuhLayerIdA). Hereinafter, unless otherwise noted, it is assumed that the constraint (also called a bitstream constraint) that the temporal ID of the SPS is 0 (tId = 0) holds between the decoder and the encoder.
The picture parameter set PPS defines a set of coding parameters referred to by the image decoding device 1 in order to decode each picture in the target sequence. For example, it includes the active SPS identifier (pps_seq_parameter_set_id) indicating the active SPS referred to by the target PPS, the PPS identifier (pps_pic_parameter_set_id) used to identify the PPS referred to by slice headers and other syntax elements described later, the reference value of the quantization width used for decoding a picture (pic_init_qp_minus26), a flag indicating the application of weighted prediction (weighted_pred_flag), and scaling lists (quantization matrices). Multiple PPSs may exist. In that case, one of the multiple PPSs is selected for each picture in the target sequence. The PPS used for decoding a specific picture belonging to a certain layer is called the active PPS. To distinguish the PPSs applied to the base layer and enhancement layers, the PPS for the base layer may be called the active PPS and the PPS for an enhancement layer may be called the active layer PPS. In the following, unless otherwise noted, PPS means the active PPS for the target picture belonging to a certain layer. Note that a PPS with layer ID = nuhLayerIdA, used for decoding pictures belonging to the layer with layer ID = nuhLayerIdA, may also be used for decoding pictures belonging to a layer with a larger layer ID (nuhLayerIdB, nuhLayerIdB > nuhLayerIdA).
The picture layer defines a set of data referred to by the hierarchical video decoding device 1 in order to decode a picture PICT to be processed (hereinafter also referred to as a target picture). As shown in FIG. 9(b), the picture PICT includes slices S0 to SNS-1 (NS is the total number of slices included in the picture PICT).
The slice layer defines a set of data referred to by the hierarchical video decoding device 1 in order to decode a slice S to be processed (also referred to as a target slice). As shown in FIG. 9(c), the slice S includes a slice header SH and slice data SDATA.
The slice data layer defines a set of data referred to by the hierarchical video decoding device 1 in order to decode the slice data SDATA to be processed. As shown in FIG. 9(d), the slice data SDATA includes coded tree blocks (CTB: Coded Tree Block). A CTB is a fixed-size block (for example, 64×64) constituting a slice, and is also called a largest coding unit (LCU: Largest Coding Unit).
As shown in FIG. 9(e), the coding tree layer defines a set of data referred to by the hierarchical video decoding device 1 in order to decode a coded tree block to be processed. The coding tree unit is partitioned by recursive quadtree partitioning. The tree-structured nodes obtained by the recursive quadtree partitioning are called a coding tree. An intermediate node of the quadtree is a coding tree unit (CTU: Coded Tree Unit), and the coded tree block itself is also defined as the top-level CTU. The CTU includes a split flag (split_flag); when split_flag is 1, the CTU is partitioned into four coding tree units CTU. When split_flag is 0, the CTU is not partitioned further and becomes a coding unit (CU: Coded Unit). The coding unit CU is a terminal node of the coding tree layer and is not partitioned any further in this layer. The coding unit CU is the basic unit of the encoding process.
As shown in FIG. 9(f), the coding unit layer defines a set of data referred to by the hierarchical video decoding device 1 in order to decode a coding unit to be processed. Specifically, a coding unit CU (coding unit) consists of a CU header CUH, a prediction tree, and a transform tree. The CU header CUH specifies, among other things, whether the coding unit is a unit using intra prediction or a unit using inter prediction. The coding unit is the root of a prediction tree (PT) and a transform tree (TT). The region on the picture corresponding to a CU is called a coding block (CB: Coding Block). A CB on the luma picture is called a luma CB, and a CB on a chroma picture is called a chroma CB. The CU size (the size of the coding node) means the luma CB size.
In the transform tree (hereinafter abbreviated as TT), the coding unit CU is partitioned into one or more transform blocks, and the position and size of each transform block are specified. In other words, a transform block is one of the one or more non-overlapping regions constituting the coding unit CU. The transform tree includes the one or more transform blocks obtained by the above partitioning. Information relating to the transform tree included in a CU, and information contained within the transform tree, are called TT information.
Process 2: quantize the transform coefficients obtained in Process 1;
Process 3: variable-length-encode the transform coefficients quantized in Process 2;
The quantization parameter qp described above represents the magnitude of the quantization step QP used when the hierarchical video encoding device 2 quantizes the transform coefficients (QP = 2^(qp/6)).
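As a small numerical illustration of the relation QP = 2^(qp/6) above (a sketch, not part of the specification text): each increase of qp by 6 doubles the quantization step.

```python
# QP = 2^(qp/6): the quantization step doubles every time qp increases by 6.
def quant_step(qp):
    return 2.0 ** (qp / 6.0)

assert abs(quant_step(6) / quant_step(0) - 2.0) < 1e-9   # +6 doubles the step
assert abs(quant_step(12) - 4.0) < 1e-9                  # qp = 12 -> QP = 4
print(quant_step(0), quant_step(6), quant_step(12))      # 1.0 2.0 4.0
```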
予測ツリー(以下、PTと略称する)は、符号化ユニットCUが1または複数の予測ブロックに分割され、各予測ブロックの位置とサイズとが規定される。別の表現でいえば、予測ブロックは、符号化ユニットCUを構成する1または複数の重複しない領域である。また、予測ツリーは、上述の分割により得られた1または複数の予測ブロックを含む。なお、CUに含まれる予測ツリーに関する情報、及び予測ツリーに包含される情報を、PT情報と呼ぶ。
予測ユニットの予測画像は、予測ユニットに付随する予測パラメータによって導出される。予測パラメータには、イントラ予測の予測パラメータ、もしくはインター予測の予測パラメータがある。
predFlagL1 = inter_pred_idc >> 1 (inter_pred_idc is the inter prediction identifier)
Here, "&" denotes bitwise AND and ">>" denotes right shift.
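The derivation above can be written out as a small sketch (assuming the usual HEVC convention that inter_pred_idc = 1, 2, and 3 encode L0-only, L1-only, and bi-prediction, respectively):

```python
# predFlagL0 = inter_pred_idc & 1, predFlagL1 = inter_pred_idc >> 1,
# with inter_pred_idc: 1 = L0 only, 2 = L1 only, 3 = bi-prediction.
def pred_flags(inter_pred_idc):
    pred_flag_l0 = inter_pred_idc & 1
    pred_flag_l1 = inter_pred_idc >> 1
    return pred_flag_l0, pred_flag_l1

assert pred_flags(1) == (1, 0)  # L0 prediction
assert pred_flags(2) == (0, 1)  # L1 prediction
assert pred_flags(3) == (1, 1)  # bi-prediction
```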
Next, an example of a reference picture list will be described. A reference picture list is a list of the reference pictures stored in the decoded picture buffer. FIG. 11(a) is a conceptual diagram showing an example of reference picture lists. In the reference picture list RPL0, the five rectangles arranged in a horizontal row each represent a reference picture. The codes P1, P2, Q0, P3, and P4, shown in order from left to right, identify the respective reference pictures. Similarly, in the reference picture list RPL1, the codes P4, P3, R0, P2, and P1, shown in order from left to right, identify the respective reference pictures. The P in P1 and the like denotes the target layer P, the Q in Q0 denotes a layer Q different from the target layer P, and the R in R0 denotes a layer R different from both the target layer P and the layer Q. The subscripts of P, Q, and R indicate the picture order count POC. The downward arrow directly below refIdxL0 indicates that the reference picture index refIdxL0 is the index that refers, via the reference picture list RPL0, to the reference picture Q0 in the decoded picture buffer. Similarly, the downward arrow directly below refIdxL1 indicates that the reference picture index refIdxL1 is the index that refers, via the reference picture list RPL1, to the reference picture P3 in the decoded picture buffer.
Next, examples of the reference pictures used when deriving a vector will be described. FIG. 11(b) is a conceptual diagram showing examples of reference pictures. In FIG. 11(b), the horizontal axis represents display time and the vertical axis represents the layer. The rectangles shown, in three rows and three columns (nine in total), each represent a picture. Of the nine rectangles, the rectangle in the second column from the left in the bottom row represents the picture to be decoded (target picture), and the remaining eight rectangles represent reference pictures. The reference pictures Q2 and R2, indicated by the downward arrows from the target picture, are pictures at the same display time as the target picture but in different layers. In inter-layer prediction based on the target picture curPic (P2), the reference picture Q2 or R2 is used. The reference picture P1, indicated by the leftward arrow from the target picture, is a past picture in the same layer as the target picture. The reference picture P3, indicated by the rightward arrow from the target picture, is a future picture in the same layer as the target picture. In motion prediction based on the target picture, the reference picture P1 or P3 is used.
Inter-prediction parameter decoding (encoding) methods include the merge prediction (merge) mode and the AMVP (Adaptive Motion Vector Prediction) mode; the merge flag merge_flag is a flag for distinguishing between them. In both the merge prediction mode and the AMVP mode, the prediction parameters of the target PU are derived using the prediction parameters of already-processed blocks. The merge prediction mode is a mode in which the prediction list utilization flag predFlagLX (inter prediction identifier inter_pred_idc), the reference picture index refIdxLX, and the vector mvLX are not included in the encoded data, and the already-derived prediction parameters are used as they are; the AMVP mode is a mode in which the inter prediction identifier inter_pred_idc, the reference picture index refIdxLX, and the vector mvLX are included in the encoded data. The vector mvLX is encoded as a prediction vector index mvp_LX_idx indicating the prediction vector and a difference vector (mvdLX).
The vector mvLX includes motion vectors and displacement vectors (disparity vectors). A motion vector is a vector indicating the positional shift between the position of a block in a picture of a certain layer at a certain display time and the position of the corresponding block in a picture of the same layer at a different display time (for example, an adjacent discrete time). A displacement vector is a vector indicating the positional shift between the position of a block in a picture of a certain layer at a certain display time and the position of the corresponding block in a picture of a different layer at the same display time. Pictures of different layers may be pictures of the same resolution but different quality, pictures of different viewpoints, or pictures of different resolutions. In particular, a displacement vector corresponding to pictures of different viewpoints is called a disparity vector. In the following description, when motion vectors and displacement vectors are not distinguished, they are simply called vectors mvLX. The prediction vector and difference vector for a vector mvLX are called the prediction vector mvpLX and the difference vector mvdLX, respectively. Whether a vector mvLX or a difference vector mvdLX is a motion vector or a displacement vector is determined using the reference picture index refIdxLX associated with the vector.
The configuration of the hierarchical video decoding device 1 according to the present embodiment will be described below with reference to FIGS. 19 to 21.
The configuration of the hierarchical video decoding device 1 according to the present embodiment will be described. FIG. 19 is a schematic diagram showing the configuration of the hierarchical video decoding device 1 according to the present embodiment. Based on the layer ID list LayerIdListTarget of the target layer set LayerSetTarget to be decoded, which is included in the externally supplied hierarchically encoded data DATA, and the target highest temporal identifier HighestTidTarget specifying the highest sub-layer associated with the layers to be decoded, the hierarchical video decoding device 1 decodes the hierarchically encoded data DATA supplied from the hierarchical video encoding device 2 and generates the decoded picture POUT#T of each layer included in the target layer set. That is, the hierarchical video decoding device 1 decodes the encoded data of the pictures of each layer in ascending order from the lowest layer ID to the highest layer ID included in the target layer set, and generates their decoded images (decoded pictures). In other words, the encoded data of the pictures of each layer is decoded in the order of the layer ID list LayerIdListTarget[0] … LayerIdListTarget[N-1] (N is the number of layers included in the target layer set) of the target layer set.
The bitstream extraction unit 17 performs the bitstream extraction process based on the externally supplied layer ID list LayerIdListTarget of the layers constituting the target layer set LayerSetTarget and the target highest temporal identifier HighestTidTarget. From the input hierarchically encoded data DATA, it removes (discards) the NAL units not included in the set determined by the target highest temporal identifier HighestTidTarget and the layer ID list LayerIdListTarget of the target layer set LayerSetTarget (referred to as the target set TargetSet), and extracts and outputs the target layer set encoded data DATA#T (BitstreamToDecode) composed of the NAL units included in the target set TargetSet.
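The extraction performed by the bitstream extraction unit 17 can be sketched as follows (a simplified model in Python, not the actual implementation; each NAL unit is reduced to its layer identifier and temporal identifier, and the access-unit structure is ignored):

```python
def bitstream_extract(nal_units, layer_id_list_target, highest_tid_target):
    """Keep only NAL units in the TargetSet: layer id in the target layer ID
    list AND temporal id not above the target highest temporal identifier."""
    return [nal for nal in nal_units
            if nal["layer_id"] in layer_id_list_target
            and nal["temporal_id"] <= highest_tid_target]

data = [
    {"layer_id": 0, "temporal_id": 0},
    {"layer_id": 0, "temporal_id": 2},  # above HighestTidTarget: discarded
    {"layer_id": 1, "temporal_id": 0},
    {"layer_id": 2, "temporal_id": 0},  # outside the layer ID list: discarded
]
# TargetSet: layers {0, 1}, HighestTidTarget = 1
print(bitstream_extract(data, [0, 1], 1))
```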
The parameter set decoding unit 12 decodes, from the input target layer set encoded data, the parameter sets (VPS, SPS, PPS) used for decoding the target layer set. The coding parameters of the decoded parameter sets are supplied to the parameter set management unit 13 and recorded for each identifier of each parameter set.
The video parameter set VPS is a parameter set for specifying parameters common to multiple layers, and includes a VPS identifier for identifying each VPS and, as layer information, maximum layer count information, layer set information, and inter-layer dependency information.
The reference layer ID list and the direct reference layer IDX list are derived by the following pseudocode.
for(i=0; i< vps_max_layers_minus1+1; i++){
iNuhLId = nuhLId#i;
NumDirectRefLayers[iNuhLId] = 0;
for(j=0; j<i; j++){
if( direct_dependency_flag[i][j]){
RefLayerId[iNuhLId][NumDirectRefLayers[iNuhLId]] = nuhLId#j;
NumDirectRefLayers[iNuhLId]++;
DirectRefLayerIdx[iNuhLId][ nuhLId#j]= NumDirectRefLayers[iNuhLId] - 1;
}
} // end of loop on for(j=0; j<i; j++)
} // end of loop on for(i=0; i< vps_max_layers_minus1+1; i++)
The above pseudocode can be expressed in steps as follows.
That is, RefLayerId[iNuhLId][NumDirectRefLayers[iNuhLId]] = nuhLId#j;
(SL06) Add 1 to the value of the number of direct reference layers NumDirectRefLayers[iNuhLId]. That is, NumDirectRefLayers[iNuhLId]++;
(SL07) Set the value of "number of direct reference layers - 1" as the direct reference layer index (direct reference layer IDX) in the nuhLId#j-th element of the direct reference layer IDX list DirectRefLayerIdx[iNuhLId][]. That is,
DirectRefLayerIdx[iNuhLId][ nuhLId#j]= NumDirectRefLayers[iNuhLId] - 1;
(SL0A) End of the loop that adds the j-th layer as an element to the reference layer ID list and the direct reference layer IDX list for the i-th layer.
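The pseudocode above can be transcribed directly into runnable form as a sketch (Python; nuh_layer_ids[i] stands for nuhLId#i, and dictionaries keyed by nuh_layer_id model the arrays indexed by iNuhLId):

```python
def derive_direct_ref_layers(direct_dependency_flag, nuh_layer_ids):
    """Derive RefLayerId, NumDirectRefLayers and DirectRefLayerIdx from
    direct_dependency_flag[i][j], following the pseudocode above."""
    num_layers = len(nuh_layer_ids)
    NumDirectRefLayers = {lid: 0 for lid in nuh_layer_ids}
    RefLayerId = {lid: [] for lid in nuh_layer_ids}
    DirectRefLayerIdx = {lid: {} for lid in nuh_layer_ids}
    for i in range(num_layers):
        i_nuh_lid = nuh_layer_ids[i]
        for j in range(i):
            if direct_dependency_flag[i][j]:
                RefLayerId[i_nuh_lid].append(nuh_layer_ids[j])
                NumDirectRefLayers[i_nuh_lid] += 1
                DirectRefLayerIdx[i_nuh_lid][nuh_layer_ids[j]] = \
                    NumDirectRefLayers[i_nuh_lid] - 1
    return RefLayerId, NumDirectRefLayers, DirectRefLayerIdx

# Three layers with nuh_layer_id 0, 1, 2; layer 2 directly references 0 and 1.
flags = [[0, 0, 0], [1, 0, 0], [1, 1, 0]]
ref, num, idx = derive_direct_ref_layers(flags, [0, 1, 2])
print(ref[2], num[2], idx[2])  # [0, 1] 2 {0: 0, 1: 1}
```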
SamplePredEnabledFlag[iNuhLId][j]=((direct_dependency_type[i][j]+1) & 1 );
MotionPredEnabledFlag[iNuhLId][j]=((direct_dependency_type[i][j]+1) & 2 )>>1;
NonVCLDepEnabledFlag[iNuhLId][j]= ((direct_dependency_type[i][j]+1) & (1<<(N-1)) ) >> (N-1);
Alternatively, instead of (direct_dependency_type[i][j]+1), these can also be expressed by the following expressions using the variable DirectDepType[i][j]. SamplePredEnabledFlag[iNuhLId][j]=((DirectDepType[i][j]) & 1 );
MotionPredEnabledFlag[iNuhLId][j]=((DirectDepType[i][j]) & 2 )>>1;
NonVCLDepEnabledFlag[iNuhLId][j]= ((DirectDepType[i][j]) & (1<<(N-1)) ) >> (N-1);
In the example of FIG. 14(a), the (N-1)-th bit is the non-VCL dependency type (non-VCL dependency presence flag), but this is not limiting. For example, N may be set to 3 and the second bit from the least significant bit may be the bit indicating the presence or absence of the non-VCL dependency type. The bit positions indicating the presence flags of the respective dependency types may also be changed within a practicable range. The derivation of each of the above presence flags may be performed as step SL08 in the above-described derivation of the reference layer ID list and the direct reference layer IDX list. The derivation procedure is not limited to the above steps and may be changed within a practicable range.
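The bit extraction above can be sketched as follows (Python; N is assumed to be 3 here, so the non-VCL dependency flag sits in bit N-1 = 2, matching the expressions above):

```python
N = 3  # number of dependency-type bits assumed in this sketch

def dep_type_flags(direct_dependency_type):
    dep = direct_dependency_type + 1  # DirectDepType[i][j]
    sample_pred = dep & 1                          # bit 0
    motion_pred = (dep & 2) >> 1                   # bit 1
    non_vcl_dep = (dep & (1 << (N - 1))) >> (N - 1)  # bit N-1
    return sample_pred, motion_pred, non_vcl_dep

# direct_dependency_type = 0 -> DirectDepType = 1: sample prediction only.
assert dep_type_flags(0) == (1, 0, 0)
# direct_dependency_type = 2 -> DirectDepType = 3: sample + motion prediction.
assert dep_type_flags(2) == (1, 1, 0)
# direct_dependency_type = 6 -> DirectDepType = 7: all three dependency types.
assert dep_type_flags(6) == (1, 1, 1)
```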
Here, the indirect dependency flag (IndirectDependencyFlag[i][j]), which indicates whether the i-th layer indirectly depends on the j-th layer (whether the j-th layer is an indirect reference layer of the i-th layer), can be derived by the pseudocode described later, referring to the direct dependency flag (direct_dependency_flag[i][j]). Similarly, the dependency flag (DependencyFlag[i][j]), which indicates whether the i-th layer depends on the j-th layer directly (when the direct dependency flag is 1, the j-th layer is also called a direct reference layer of the i-th layer) or indirectly (when the indirect dependency flag is 1, the j-th layer is also called an indirect reference layer of the i-th layer), can be derived by the pseudocode described later, referring to the direct dependency flag (direct_dependency_flag[i][j]) and the above indirect dependency flag (IndirectDependencyFlag[i][j]). Here, indirect reference layers will be described with reference to FIG. 31. In FIG. 31, the number of layers is N+1, and the j-th layer (L#j in FIG. 31, called layer j) is a layer lower than the i-th layer (L#i in FIG. 31, called layer i) (j < i). Suppose there is also a layer k (L#k in FIG. 31) that is higher than layer j and lower than layer i (j < k < i). In FIG. 31, layer k directly depends on layer j (solid arrow in FIG. 31; layer j is a direct reference layer of layer k, direct_dependency_flag[k][j] == 1), and layer i directly depends on layer k (layer k is a direct reference layer of layer i, direct_dependency_flag[i][k] == 1). In this case, layer i indirectly depends on layer j via layer k (dotted arrow in FIG. 31), so layer j is called an indirect reference layer of layer i. In other words, when layer i indirectly depends on layer j via one or more layers k (j < k < i), layer j is an indirect reference layer of layer i.
for(i=0; i< vps_max_layers_minus1+1; i++){
for(j=0; j<i; j++){
IndirectDependencyFlag[i][j] = 0;
DependencyFlag[i][j] = 0;
for(k=j+1; k<i; k++){
if( direct_dependency_flag[k][j] && direct_dependency_flag[i][k] && !direct_dependency_flag[i][j]){
IndirectDependencyFlag[i][j] = 1;
}
}
DependencyFlag[i][j] = (direct_dependency_flag[i][j] | IndirectDependencyFlag[i][j]);
} // end of loop on for(j=0; j<i; j++)
} // end of loop on for(i=0; i< vps_max_layers_minus1+1; i++)
The above pseudocode can be expressed in steps as follows.
DependencyFlag[i][j] = (direct_dependency_flag[i][j] | IndirectDependencyFlag[i][j]);
(SN0A) End of the loop for deriving the indirect dependency flag and the dependency flag for the i-th and j-th layers.
// derive indirect reference layers of layer i
for(i=2; i< vps_max_layers_minus1+1; i++){
for(k=1; k<i; k++){
for(j=0; j<k; j++){
if( (direct_dependency_flag[k][j] || IndirectDependencyFlag[k][j] ) && direct_dependency_flag[i][k] && !direct_dependency_flag[i][j]){
IndirectDependencyFlag[i][j] = 1;
}
} // end of loop on for(j=0; j<k; j++)
} // end of loop on for(k=1; k<i; k++)
} // end of loop on for(i=2; i< vps_max_layers_minus1+1; i++)
// derive dependent layers (direct or indirect reference layers) of layer i
for(i=0; i< vps_max_layers_minus1+1; i++){
for(j=0; j<i; j++){
DependencyFlag[i][j] = (direct_dependency_flag[i][j] | IndirectDependencyFlag[i][j]);
} // end of loop on for(j=0; j<i; j++)
} // end of loop on for(i=0; i< vps_max_layers_minus1+1; i++)
The above pseudocode can be expressed in steps as follows. Before the start of step SO01, the values of all elements of the indirect dependency flag IndirectDependencyFlag[][] and the dependency flag DependencyFlag[][] are assumed to have been initialized to 0.
DependencyFlag[i][j] = (direct_dependency_flag[i][j] | IndirectDependencyFlag[i][j]);
(S00D) End of the loop for searching whether layer j is a dependent layer (direct reference layer or indirect reference layer) of layer i.
LIdDependencyFlag[nuhLId#i][nuhLId#j] = (direct_dependency_flag[i][j] | IndirectDependencyFlag[i][j]);
As described above, by deriving the inter-layer-identifier dependency flag (LIdDependencyFlag[nuhLId#i][nuhLId#j]) indicating whether the i-th layer with layer identifier nuhLId#i directly or indirectly depends on the j-th layer with layer identifier nuhLId#j, it is possible to determine whether the j-th layer with layer identifier nuhLId#j is a direct reference layer or an indirect reference layer of the i-th layer with layer identifier nuhLId#i. The above procedure is not limiting and may be changed within a practicable range.
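The derivations of IndirectDependencyFlag, DependencyFlag, and LIdDependencyFlag above can be transcribed into runnable form as a sketch (Python; the propagation loop follows the second pseudocode above, so dependency chains of any length are covered):

```python
def derive_dependency_flags(direct, nuh_layer_ids):
    """direct[i][j]: direct_dependency_flag. Returns (indirect, dep, lid_dep),
    setting the indirect flag only when there is no direct dependency,
    as in the condition of the pseudocode above."""
    n = len(direct)
    indirect = [[0] * n for _ in range(n)]
    dep = [[0] * n for _ in range(n)]
    for i in range(2, n):
        for k in range(1, i):
            for j in range(k):
                if ((direct[k][j] or indirect[k][j])
                        and direct[i][k] and not direct[i][j]):
                    indirect[i][j] = 1
    lid_dep = {}
    for i in range(n):
        for j in range(i):
            dep[i][j] = direct[i][j] | indirect[i][j]
            lid_dep[(nuh_layer_ids[i], nuh_layer_ids[j])] = dep[i][j]
    return indirect, dep, lid_dep

# Chain: layer 1 references layer 0, layer 2 references layer 1
# -> layer 0 is an indirect reference layer of layer 2.
direct = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
ind, dep, lid = derive_dependency_flags(direct, [0, 1, 2])
print(ind[2][0], dep[2][0], lid[(2, 0)])  # 1 1 1
```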
The sequence parameter set SPS defines a set of coding parameters referred to by the image decoding device 1 in order to decode the target sequence.
The SPS includes, as picture information, information determining the size of the decoded pictures of the target layer. For example, the picture information includes information representing the width and height of the decoded pictures of the target layer. The picture information decoded from the SPS includes the width of the decoded picture (pic_width_in_luma_samples) and the height of the decoded picture (pic_height_in_luma_samples) (not shown in FIG. 15). The value of the syntax element pic_width_in_luma_samples corresponds to the width of the decoded picture in luma samples, and the value of the syntax element pic_height_in_luma_samples corresponds to the height of the decoded picture in luma samples.
The picture parameter set PPS defines a set of coding parameters referred to by the image decoding device 1 in order to decode each picture in the target sequence.
The picture decoding unit 14 generates and outputs a decoded picture based on the input VCL NAL units and the active parameter sets.
The slice header decoding unit 141 decodes the slice header based on the input VCL NAL units and the active parameter sets. The decoded slice header is output to the CTU decoding unit 142 together with the input VCL NAL units.
Roughly speaking, the CTU decoding unit 142 generates the decoded image of a slice by decoding the decoded image of the region corresponding to each CTU included in the slices constituting the picture, based on the input slice header, the slice data included in the VCL NAL units, and the active parameter sets. Here, the CTU size used is the CTB size for the target layer included in the active parameter sets (the corresponding syntax elements are log2_min_luma_coding_block_size_minus3 and log2_diff_max_min_luma_coding_block_size in SYNSPS03 of FIG. 15). The decoded image of the slice is output, as part of the decoded picture, at the slice position indicated by the input slice header. The decoded image of a CTU is generated by the prediction residual reconstruction unit 1421, the predicted image generation unit 1422, and the CTU decoded image generation unit 1423 inside the CTU decoding unit 142.
The schematic operation of decoding a picture of the target layer i in the picture decoding unit 14 will be described below with reference to FIG. 21. FIG. 21 is a flowchart showing the slice-by-slice decoding process of a picture of the target layer i in the picture decoding unit 14.
... (omitted) ...
(SD10A) The CTU decoding unit 142 generates the CTU decoded image of the region corresponding to each CTU included in the slices constituting the picture, based on the input slice header, the active parameter sets, and each piece of CTU information (SYNSD01 in FIG. 18) in the slice data included in the VCL NAL units. Further, after each piece of CTU information there is a slice end flag (end_of_slice_segment_flag) indicating whether that CTU is the end of the slice to be decoded (SYNSD02 in FIG. 18). After decoding each CTU, the value of the processed CTU count numCtu is incremented by 1 (numCtu++).
The hierarchical video decoding device 1 (hierarchical image decoding device) according to the present embodiment described above includes a bitstream extraction unit 17 that performs the bitstream extraction process based on the externally supplied layer ID list LayerIdListTarget of the layers constituting the target layer set LayerSetTarget and the target highest temporal identifier HighestTidTarget, removes (discards) from the input hierarchically encoded data DATA the NAL units not included in the set determined by the target highest temporal identifier HighestTidTarget and the layer ID list LayerIdListTarget of the target layer set LayerSetTarget (referred to as the target set TargetSet), and extracts the target layer set encoded data DATA#T (BitstreamToDecode) composed of the NAL units included in the target set TargetSet. Furthermore, when the layer identifier of a video parameter set is not included in the target set TargetSet, the bitstream extraction unit 17 updates (rewrites) the layer identifier of the video parameter set to the lowest layer identifier in the target set TargetSet. The operation of the bitstream extraction unit 17 assumes that an AU constituting the input hierarchically encoded data DATA contains at most one VPS having the lowest layer identifier in the AU, but this is not limiting. For example, a VPS having a layer identifier other than the lowest layer identifier in the AU may be included in the AU. In this case, in step SG104 of FIG. 27, the bitstream extraction unit 17 may take, as the VPS whose layer identifier is to be updated, the VPS having the lowest layer identifier among the layer identifiers not included in the target set TargetSet. Usually, the VPS with layer identifier nuhLayerId = 0 is the VPS having the lowest layer identifier, so that VPS is the update target, and other VPSs not included in the target set TargetSet are discarded.
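The rewriting of the VPS layer identifier by the bitstream extraction unit 17 can be sketched as follows (a simplified model in Python; NAL units are dictionaries, and the access-unit handling of FIG. 27 is reduced to a single pass over the NAL units):

```python
def extract_with_vps_rewrite(nal_units, layer_id_list_target):
    """If a VPS has a layer identifier outside the TargetSet, rewrite it to
    the lowest layer identifier in the TargetSet instead of discarding it;
    all other out-of-set NAL units are discarded."""
    lowest = min(layer_id_list_target)
    kept = []
    for nal in nal_units:
        nal = dict(nal)  # do not mutate the input
        if nal["type"] == "VPS" and nal["layer_id"] not in layer_id_list_target:
            nal["layer_id"] = lowest  # update (rewrite) the VPS layer id
        if nal["layer_id"] in layer_id_list_target:
            kept.append(nal)
    return kept

data = [{"type": "VPS", "layer_id": 0},
        {"type": "VCL", "layer_id": 0},
        {"type": "VCL", "layer_id": 1}]
# TargetSet = {1}: the layer-0 VPS is rewritten to layer id 1 and kept,
# while the layer-0 VCL NAL unit is discarded.
print(extract_with_vps_rewrite(data, [1]))
```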
In the bitstream extraction unit 17 according to embodiment 1, as shown in FIG. 27, when the layer identifier of a VPS is not included in the target set TargetSet, the layer identifier of that VPS is updated (rewritten) to the lowest layer identifier in the target set TargetSet, so that the target set TargetSet always contains a VPS; however, the configuration is not limited to this. For example, for a VPS whose layer identifier is not included in the layer ID list LayerIdListTarget constituting the target set TargetSet, the bitstream extraction unit 17 may omit discarding the NAL unit of that VPS without updating its layer identifier, so that the VPS is included in the bitstream of the extracted target set TargetSet. The operation of the bitstream extraction unit 17' according to modification 1 will be described below with reference to FIG. 28. Operations common to the bitstream extraction unit 17 according to embodiment 1 are given the same reference signs, and their description is omitted.
Note that, in order to perform the bitstream extraction described for the bitstream extraction unit 17' according to modification 1, the bitstream must, as bitstream conformance, satisfy at least the following condition CY1.
In other words, the above bitstream constraint CY1 states that "a VPS contained in an access unit belongs to the same layer as the VCL having the lowest layer identifier among all layers (including layers not contained in the access unit)".
The conformance condition CY2 above also provides the same effect as the conformance condition CY1. Furthermore, at the time of bitstream extraction under the prior-art constraint (Non-Patent Literatures 2 and 3) that the layer identifier of the VPS is 0, when a layer in the TargetSet refers to the VPS with nuh_layer_id = 0, the VPS with nuh_layer_id = 0, even though not included in the TargetSet, is no longer discarded, so it is possible to prevent a layer in the TargetSet from becoming undecodable.
Furthermore, under the prior-art constraint (Non-Patent Literature 4) that the layer identifiers of the VPS/SPS/PPS are 0, the bitstream must, as bitstream conformance, satisfy at least CY3 and CY4 in addition to CY2.
CY4: "The target set TargetSet (layer set) shall contain a PPS having a layer identifier equal to nuh_layer_id = 0."
When the above bitstream constraints CY2 to CY4 are applied, the operation of the bitstream extraction unit 17' according to modification 1 (step SG102 in FIG. 28) may be changed to the following process (SG102a').
In the bitstream extraction unit 17 according to embodiment 1, as shown in FIG. 27, when the layer identifier of a VPS is not included in the target set TargetSet, the layer identifier of that VPS is updated (rewritten) to the lowest layer identifier in the target set TargetSet, so that the target set TargetSet always contains a VPS; however, the configuration is not limited to this. For example, the bitstream extraction unit 17 may omit discarding the VCL and non-VCL NAL units relating to reference layers (direct reference layers and indirect reference layers) on which each layer in the target set TargetSet depends and whose layer identifiers are not included in the layer ID list LayerIdListTarget constituting the target set TargetSet, so that those VCL and non-VCL NAL units are included in the bitstream of the extracted target set TargetSet. The operation of the bitstream extraction unit 17'' according to modification 2 will be described below with reference to FIG. 29. Operations common to the bitstream extraction unit 17 according to embodiment 1 are given the same reference signs, and their description is omitted. Note that the bitstream extraction unit 17'' has the same function as the VPS decoding means in the parameter set decoding unit 12 in order to derive dependent layers from the coding parameters of the VPS. FIG. 29 is a flowchart showing the access-unit-by-access-unit bitstream extraction process in the bitstream extraction unit 17''.
DepFlag = 0;
for( k = 0; k < N; k++ ) {
    DepFlag |= LIdDependencyFlag[ LayerIdListTarget[ k ] ][ nuhLayerId ];
}
(where N denotes the number of layers in the layer ID list LayerIdListTarget)
(SG106b) The target NAL unit is discarded. That is, since the target NAL unit is included neither in the target set TargetSet nor in a dependent layer of the target set TargetSet, the bitstream extraction unit 17 removes the target NAL unit from the input hierarchically coded data DATA.
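Under the modification-2 rule, the keep/discard decision for a NAL unit can be sketched as the following predicate. LIdDependencyFlag is modeled as a nested mapping dep_flag, where dep_flag[i][j] == 1 means layer i depends (directly or indirectly) on layer j; the field names are illustrative assumptions as before.

```python
def keep_nal(nal, layer_id_list_target, highest_tid_target, dep_flag):
    # Discard NAL units above the target highest temporal identifier.
    if nal["temporal_id"] > highest_tid_target:
        return False
    # Keep NAL units of layers belonging to TargetSet.
    if nal["nuh_layer_id"] in layer_id_list_target:
        return True
    # Keep NAL units of reference layers that some target layer depends on
    # (the DepFlag derivation above).
    return any(dep_flag[t][nal["nuh_layer_id"]]
               for t in layer_id_list_target)
```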
Note that, in order to perform the bitstream extraction described for the bitstream extraction unit 17'' according to modification 2, the bitstream must, as bitstream conformance, satisfy at least the following condition CZ1.
In other words, the above bitstream constraint CZ1 states that "the dependent layers referred to by a target layer in a layer set shall be included in the same layer set".
Furthermore, the bitstream extraction unit 17 may be configured by combining modification 1a and modification 2 of the bitstream extraction unit 17. That is, it may omit discarding "the VCL and non-VCL NAL units of a reference layer (direct reference layer or indirect reference layer) on which each layer in the target set TargetSet depends, when the layer identifier of that reference layer is not included in the target set TargetSet", and also omit discarding "the non-VCL NAL units containing a parameter set (VPS, SPS, PPS) with layer identifier nuh_layer_id = 0, when the target set TargetSet does not include layer identifier nuh_layer_id = 0", so that those VCL and non-VCL NAL units are included in the bitstream of the extracted target set TargetSet. In this case, as bitstream conformance, the conformance conditions CA1 and CA2 relating to parameter sets (SPS, PPS) must at least be satisfied in addition to the conformance condition CZ1.
CA2: "The layer identifier of the active PPS for a layer layerA with layer identifier nuh_layer_id = layerIdA shall be equal to 0, layerIdA, or the value of the layer identifier nuh_layer_id of a direct reference layer or an indirect reference layer of layer layerA."
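The combined omission rule can be sketched as the following keep test (temporal-identifier filtering is left out for brevity; field and parameter names are illustrative assumptions as before):

```python
def keep_nal_combined(nal, layer_id_list_target, dep_flag):
    # Keep NAL units of layers belonging to TargetSet.
    lid = nal["nuh_layer_id"]
    if lid in layer_id_list_target:
        return True
    # Keep parameter sets (VPS, SPS, PPS) with nuh_layer_id == 0 even when
    # layer identifier 0 is not part of TargetSet.
    if lid == 0 and nal["type"] in ("VPS", "SPS", "PPS"):
        return True
    # Keep NAL units of direct/indirect reference layers of target layers.
    return any(dep_flag[t][lid] for t in layer_id_list_target)
```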
The operation of the bitstream extraction unit 17''' according to modification 3, which applies the above conformance conditions CZ1 and CA1 to CA2 together with the conventional constraint "the layer identifier nuh_layer_id of the VPS is 0", will be described with reference to FIG. 30. Operations common to the bitstream extraction unit 17'' according to modification 2 are given the same reference signs, and their description is omitted. Note that the bitstream extraction unit 17''' has the same function as the VPS decoding means in the parameter set decoding unit 12 in order to derive dependent layers from the coding parameters of the VPS. FIG. 30 is a flowchart showing the access-unit-by-access-unit bitstream extraction process in the bitstream extraction unit 17'''.
The configuration of the hierarchical moving-image encoding apparatus 2 according to the present embodiment will be described below with reference to FIG. 22.
The schematic configuration of the hierarchical moving-image encoding apparatus 2 will be described using FIG. 22. FIG. 22 is a functional block diagram showing the schematic configuration of the hierarchical moving-image encoding apparatus 2. The hierarchical moving-image encoding apparatus 2 encodes the input image PIN#T (pictures) of each layer included in the layer set to be encoded (the target layer set), and generates the hierarchically coded data DATA of the target layer set. That is, the moving-image encoding apparatus 2 encodes the pictures of each layer in ascending order, from the lowest layer ID to the highest layer ID included in the target layer set, and generates their coded data. In other words, the pictures of each layer are encoded in the order of the layer ID list of the target layer set, LayerSetLayerIdList[0] ... LayerSetLayerIdList[N-1] (N is the number of layers included in the target layer set). Note that, in order to guarantee a bitstream decodable by the hierarchical moving-image decoding apparatus 1 (including its modifications), the hierarchical moving-image encoding apparatus 2 generates the hierarchically coded data DATA of the target layer set so as to satisfy the bitstream conformance CX1 (CX1'), or CX2 (CX2'), or CY1, or CY2, or (CY2 and CY3 and CY4), or CZ1, or (CZ1 and CA1 and CA2 and "the layer identifier nuh_layer_id of the VPS is 0") described above. By generating hierarchically coded data DATA satisfying the above bitstream conformance, it is possible to prevent, in the hierarchical decoding apparatus 1, the occurrence of a layer that becomes undecodable on a bitstream, generated from the bitstream of a given layer set by the bitstream extraction process, that contains only a layer set that is a subset of that layer set.
The details of the configuration of the picture encoding unit 24 will be described with reference to FIG. 23. FIG. 23 is a functional block diagram showing the schematic configuration of the picture encoding unit 24.
The schematic operation of encoding a picture of target layer i in the picture encoding unit 24 will be described below with reference to FIG. 24. FIG. 24 is a flowchart showing the slice-by-slice encoding process for a picture of target layer i in the picture encoding unit 24.
... (omitted) ...
(SE10A) The CTU encoding unit 242 encodes the input image (the slice to be encoded) in CTU units based on the input active parameter sets and slice header, and outputs the coded data of the CTU information (SYNSD01 in FIG. 18) as part of the slice data of the slice being encoded. The CTU encoding unit 242 also generates and outputs the CTU decoded image of the region corresponding to each CTU. Furthermore, after the coded data of each piece of CTU information, it encodes an end-of-slice flag (end_of_slice_segment_flag) (SYNSD02 in FIG. 18) indicating whether that CTU is the last CTU of the slice being encoded. If the CTU is the last CTU of the slice being encoded, the end-of-slice flag is set to 1; otherwise it is set to 0, and it is then encoded. After each CTU is encoded, the processed-CTU count numCtu is incremented by one (numCtu++).
As described above, in order to guarantee a bitstream decodable by the hierarchical moving-image decoding apparatus 1 (including its modifications), the hierarchical moving-image encoding apparatus 2 according to the present embodiment generates the hierarchically coded data DATA of the target layer set so as to satisfy the bitstream conformance CX1 (CX1'), or CX2 (CX2'), or CY1, or CY2, or (CY2 and CY3 and CY4), or CZ1, or (CZ1 and CA1 and CA2 and "the layer identifier nuh_layer_id of the VPS is 0") described above. By generating hierarchically coded data DATA satisfying the above bitstream conformance, it is possible to prevent, in the hierarchical decoding apparatus 1, the occurrence of a layer that becomes undecodable on a bitstream, generated from the bitstream of a given layer set by the bitstream extraction process, that contains only a layer set that is a subset of that layer set.
The hierarchical moving-image encoding apparatus 2 and the hierarchical moving-image decoding apparatus 1 described above can be mounted on and used in various apparatuses that transmit, receive, record, or play back moving images. The moving images may be natural moving images captured by a camera or the like, or artificial moving images (including CG and GUI) generated by a computer or the like.
Finally, each block of the hierarchical moving-image decoding apparatus 1 and the hierarchical moving-image encoding apparatus 2 may be realized in hardware by logic circuits formed on an integrated circuit (IC chip), or in software using a CPU (Central Processing Unit).
An image decoding apparatus according to aspect 1 of the present invention is an image decoding apparatus for decoding input coded image data, comprising: a coded image data extraction unit that extracts, from the input coded image data, coded image data relating to a decoding target layer set based on a layer ID list indicating the decoding target layer set consisting of one or more layers; and a picture decoding unit that decodes pictures of the decoding target layer set from the extracted coded image data, wherein the coded image data extracted by the coded image data extraction unit does not include a non-VCL NAL unit having a layer identifier that is not equal to 0 and is not included in the layer ID list.
2 ... hierarchical moving-image encoding apparatus
10 ... target layer set picture decoding unit
11 ... NAL demultiplexing unit
12 ... parameter set decoding unit
13 ... parameter set management unit
14 ... picture decoding unit
141 ... slice header decoding unit
142 ... CTU decoding unit
1421 ... prediction residual reconstruction unit
1422 ... predicted image generation unit
1423 ... CTU decoded image generation unit
15 ... decoded picture management unit
17 ... bitstream extraction unit (coded image data extraction unit)
20 ... target layer set picture encoding unit
21 ... NAL multiplexing unit
22 ... parameter set encoding unit
24 ... picture encoding unit
26 ... coding parameter determination unit
241 ... slice header encoding unit
242 ... CTU encoding unit
2421 ... prediction residual encoding unit
2422 ... predicted image encoding unit
2423 ... CTU decoded image generation unit
Claims (7)
- An image decoding apparatus for decoding input coded image data, comprising:
a coded image data extraction unit configured to extract, from the input coded image data, coded image data relating to a decoding target layer set based on a layer ID list indicating the decoding target layer set consisting of one or more layers; and
a picture decoding unit configured to decode pictures of the decoding target layer set from the extracted coded image data,
wherein the coded image data extracted by the coded image data extraction unit does not include a non-VCL NAL unit having a layer identifier that is not equal to 0 and is not included in the layer ID list.
- The image decoding apparatus according to claim 1, wherein a temporal ID of each NAL unit included in the coded image data is less than or equal to the value of the highest temporal ID of the decoding target layer set.
- The image decoding apparatus according to claim 1, wherein the non-VCL NAL unit is a NAL unit having a parameter set.
- The image decoding apparatus according to claim 2, wherein the parameter set includes a video parameter set.
- The image decoding apparatus according to claim 3, wherein the parameter set includes a sequence parameter set.
- The image decoding apparatus according to claim 3, wherein the parameter set includes a picture parameter set.
- An image decoding method for decoding input coded image data, comprising:
a coded image data extraction step of extracting, from the input coded image data, coded image data relating to a decoding target layer set based on a layer ID list indicating the decoding target layer set consisting of one or more layers; and
a picture decoding step of decoding pictures of the decoding target layer set from the extracted coded image data,
wherein the coded image data extracted in the coded image data extraction step does not include a non-VCL NAL unit having a layer identifier that is not equal to 0 and is not included in the layer ID list.
Non-Patent Citations (6)

- "MV-HEVC Draft Text 5", JCT3V-E1008_v5, Joint Collaborative Team on 3D Video Coding Extension Development of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 5th Meeting: Vienna, AT, 2 August 2013
- "MV-HEVC/SHVC HLS: On nuh_layer_id of SPS and PPS", JCTVC-O0092_v1, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 15th Meeting: Geneva, CH, 14 October 2013
- "SHVC Draft 3", JCTVC-N1008_v3, Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Vienna, AT, 2 August 2013
- "Recommendation H.265 (04/13)", ITU-T, 7 June 2013
- Ye-Kui Wang et al., "MV-HEVC/SHVC HLS: On changing of the highest layer ID across AUs and multi-mode bitstream extraction", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Vienna, AT, 25 July 2013, XP030114792
- Yong He et al., "MV-HEVC/SHVC HLS: On nuh_layer_id of SPS and PPS", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 15th Meeting: Geneva, CH, 14 October 2013, XP030115079