WO2015083575A1 - Image decoding device, image encoding device, and encoded data conversion device - Google Patents
Image decoding device, image encoding device, and encoded data conversion device
- Publication number
- WO2015083575A1 (PCT/JP2014/081033)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- layer
- encoded data
- picture
- unit
- information
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/59—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/167—Position within a video image, e.g. region of interest [ROI]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/182—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/187—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scalable video layer
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/33—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- the present invention relates to an image decoding apparatus that decodes hierarchically encoded data in which an image is hierarchically encoded, and an image encoding apparatus that generates hierarchically encoded data by hierarchically encoding an image.
- Images and moving images are among the information transmitted in communication systems or recorded in storage devices. Techniques for encoding images (hereinafter including moving images) for transmission and storage are conventionally known.
- As video encoding methods, AVC (H.264/MPEG-4 Advanced Video Coding) and its successor codec HEVC (High-Efficiency Video Coding) are known (Non-patent Document 1).
- In such encoding methods, a predicted image is usually generated based on a locally decoded image obtained by encoding/decoding the input image, and the prediction residual (sometimes referred to as a "difference image" or "residual image") obtained by subtracting the predicted image from the input image (original image) is encoded.
- Examples of methods for generating a predicted image include inter-picture prediction (inter prediction) and intra-picture prediction (intra prediction).
- In intra prediction, predicted images within a picture are sequentially generated based on the locally decoded image of the same picture.
- In inter prediction, a predicted image is generated by motion compensation between pictures.
- A decoded picture used for predicted image generation in inter prediction is called a reference picture.
- A technique is also known that generates encoded data from a plurality of mutually related moving images by encoding them as layers (hierarchies); this is called hierarchical coding.
- The encoded data generated by hierarchical coding is also referred to as hierarchically encoded data.
- As a representative hierarchical coding technique, SHVC (Scalable HEVC) based on HEVC is known (Non-patent Document 2).
- SHVC supports spatial scalability, temporal scalability, and SNR scalability.
- In spatial scalability, hierarchically encoded data is generated by encoding a plurality of moving images having different resolutions as layers. For example, an image downsampled from the original image to a desired resolution is encoded as the lower layer. The original image is then encoded as the upper layer after applying inter-layer prediction to remove redundancy between layers.
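The spatial-scalability flow above can be sketched as follows: the lower layer encodes a downsampled copy of the original image. The nearest-neighbor downsampler below is a stand-in assumption for illustration; real codecs use higher-quality resampling filters.

```python
# Minimal sketch of generating the lower-layer input for spatial
# scalability. Nearest-neighbor subsampling is an assumption made
# here for brevity, not the filter used by any particular codec.

def downsample(image, factor):
    """Nearest-neighbor downsample of a 2D list of pixels by `factor`."""
    return [row[::factor] for row in image[::factor]]
```

For example, downsampling a 4x4 picture by a factor of 2 keeps every other pixel of every other row, yielding a 2x2 lower-layer picture.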
- As another representative hierarchical coding technique, MV-HEVC (Multi-View HEVC) based on HEVC is known (Non-patent Document 3).
- MV-HEVC supports view scalability.
- In view scalability, moving images corresponding to a plurality of different viewpoints (views) are encoded as layers to generate hierarchically encoded data.
- a moving image corresponding to a basic viewpoint (base view) is encoded as a lower layer.
- a moving image corresponding to a different viewpoint is encoded as an upper layer after applying inter-layer prediction.
- Inter-layer prediction in SHVC and MV-HEVC includes inter-layer image prediction and inter-layer motion prediction.
- In inter-layer image prediction, a predicted image is generated using a decoded image of a lower layer.
- In inter-layer motion prediction, motion information prediction values are derived using motion information of a lower layer.
- A picture used for prediction in inter-layer prediction is called an inter-layer reference picture.
- A layer including an inter-layer reference picture is called a reference layer.
- Hereinafter, reference pictures used for inter prediction and reference pictures used for inter-layer prediction are generically referred to simply as reference pictures.
- Any of inter prediction, intra prediction, and inter-layer image prediction can be used to generate a predicted image.
- One of the applications that use SHVC and MV-HEVC is a video application that takes an attention area (region of interest) into account.
- A video playback terminal normally plays back the video of the entire area at a relatively low resolution.
- When a viewer of the video playback terminal designates a part of the displayed video as the attention area, the attention area is displayed on the playback terminal at high resolution.
- Such an attention-area-aware video application can be realized using hierarchically encoded data in which a relatively low-resolution video of the entire area is encoded as lower layer encoded data and a high-resolution video of the attention area is encoded as upper layer encoded data. That is, when the entire area is played back, only the lower layer encoded data is decoded and played back; when the high-resolution video of the attention area is played back, the upper layer encoded data is decoded in addition to the lower layer encoded data.
- In this way, the application can be realized with a smaller transmission band than when both the encoded data for the low-resolution video and the encoded data for the high-resolution video are sent. The transmission band can be further reduced by extracting and transmitting, from the upper layer and the lower layer respectively, only the encoded data corresponding to the region including the attention area.
- Non-Patent Document 4 discloses a method in which, for the purpose of correcting the positional relationship between upper layer pixels and lower layer pixels, inter-layer phase correspondence information is sent, and the pixel position on the lower layer corresponding to each pixel of the upper layer is calculated using the inter-layer phase correspondence information.
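The mapping described in Non-Patent Document 4 can be sketched as follows: each upper-layer pixel position is scaled down and then corrected by a phase offset carried in the inter-layer phase correspondence information. The fixed-point layout (16-bit scale, 1/16-pel output) and all names are assumptions made here for illustration, not the syntax of any particular disclosure.

```python
# Hedged sketch: derive the lower-layer position corresponding to an
# upper-layer pixel, corrected by inter-layer phase correspondence
# information. Q16 scale and 1/16-pel output units are assumptions.

SHIFT = 16  # fixed-point precision of the scale factor (assumption)

def lower_layer_position(x_upper, scale, phase_offset):
    """Lower-layer position (in assumed 1/16-pel units) for the
    upper-layer pixel x_upper, shifted by the phase offset."""
    return ((x_upper * scale) >> (SHIFT - 4)) + phase_offset
```

With a scale of 32768 (0.5 in Q16), upper-layer pixel 10 maps to 1/16-pel position 80 before the phase correction is added.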
- JCTVC-O0056, "MV-HEVC/SHVC HLS: On conversion to ROI-capable multi-layer bitstream", Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 15th Meeting: Geneva, CH, 23 Oct. - 1 Nov. 2013 (released October 11, 2013)
- The present invention has been made in view of the above problems, and an object thereof is to realize an image encoding device and an image decoding device capable of encoding/decoding encoded data in which the positional relationship between upper layer pixels and lower layer pixels is derived using the same scale as in the hierarchically encoded data of the entire region.
- Preferably, the value of the inter-layer scale derived by the predicted image generation unit is an approximate value of the ratio between the virtual reference layer corresponding region size and the virtual reference layer size, where the virtual reference layer size is the sum of the reference layer picture size and the virtual reference layer size difference.
- Preferably, the value of the scale correction information is set so that the virtual reference layer size and the reference layer picture size match.
- An image encoding device according to the present invention is an image encoding device that generates encoded data of an upper layer from an input image, comprising a parameter set decoding unit that decodes a parameter set, and a predicted image encoding unit that generates a predicted image by inter-layer prediction with reference to decoded pixels of a reference layer picture. The parameter set decoding unit encodes scale correction information; the predicted image encoding unit derives the corresponding reference position for a decoded pixel of the target layer using the inter-layer scale value derived from the scale correction information, and derives the virtual reference layer size difference based on the scale correction information.
- A hierarchically encoded data conversion device according to the present invention converts input hierarchically encoded data based on input attention area information and generates attention area hierarchically encoded data for output.
- The conversion device includes a parameter set correction unit, and the parameter set correction unit corrects the scale correction information so that the inter-layer scales derived from the hierarchically encoded data before and after conversion match.
- An image decoding device (image encoding device) according to the present invention includes a parameter set decoding unit (encoding unit) that decodes (encodes) a parameter set, and a predicted image generation unit that generates a predicted image by inter-layer prediction with reference to decoded pixels of a reference layer picture.
- The parameter set decoding unit (encoding unit) decodes (sets) scale correction information, and the predicted image generation unit derives the corresponding reference position for a pixel on the target layer using the inter-layer scale value derived from the scale correction information.
- With this configuration, the image decoding device (image encoding device) can derive the same inter-layer scale before and after conversion even when the hierarchically encoded data is converted for the purpose of attention area extraction. Therefore, in the converted encoded data, the prediction residual of inter-layer prediction is reduced because the accuracy of the positional relationship between upper layer pixels and lower layer pixels is maintained. The image decoding device according to the present invention can thus decode encoded data with a smaller code amount and output a decoded picture of the upper layer, and the image encoding device according to the present invention can output encoded data with a smaller code amount.
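The derivation described above can be sketched as follows. The virtual reference layer size is the reference layer picture size plus the virtual reference layer size difference, and the inter-layer scale approximates the ratio of the two sizes. The Q16 fixed-point layout, the rounding offset, and all identifiers below are assumptions made for illustration; they are not the exact derivation defined in this disclosure.

```python
# Hedged sketch of inter-layer scale derivation and corresponding
# reference position. Fixed-point precision and names are assumptions.

SHIFT = 16  # Q16 fixed point (assumption)

def inter_layer_scale(virtual_ref_region_size, ref_picture_size, size_diff):
    """Approximate the ratio of the virtual reference layer size to the
    virtual reference layer corresponding region size, in Q16.
    The virtual reference layer size is the reference layer picture size
    plus the virtual reference layer size difference."""
    virtual_ref_size = ref_picture_size + size_diff
    return (((virtual_ref_size << SHIFT) + (virtual_ref_region_size >> 1))
            // virtual_ref_region_size)

def corresponding_ref_position(x_target, scale):
    """Map a target-layer pixel position to the reference layer."""
    return (x_target * scale) >> SHIFT
```

Because the scale depends only on the virtual sizes carried in the scale correction information, a converter that crops the picture for attention area extraction can keep the scale, and hence the pixel correspondence, unchanged before and after conversion.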
- (c) shows the slice layer that defines the slice S, (d) shows the CTU layer that defines the coding tree unit CTU, and (e) shows the CU layer that defines the coding units (Coding Unit; CU) included in the coding tree unit CTU.
- (a) shows a transmission device equipped with the hierarchical video encoding device, and (b) shows a reception device equipped with the hierarchical video decoding device.
- (a) shows a recording device equipped with the hierarchical video encoding device, and (b) shows a playback device equipped with the hierarchical video decoding device.
- With reference to FIGS. 1 to 25, the hierarchical moving picture decoding device 1, the hierarchical moving picture encoding device 2, and the encoded data conversion device 3 according to an embodiment of the present invention are described as follows.
- a hierarchical video decoding device (image decoding device) 1 decodes encoded data that has been hierarchically encoded by a hierarchical video encoding device (image encoding device) 2.
- Hierarchical coding is a coding scheme that hierarchically encodes moving images from low quality to high quality.
- Hierarchical coding is standardized in SVC and SHVC, for example.
- The quality of a moving image here broadly means any element that affects the subjective and objective appearance of the moving image.
- the quality of the moving image includes, for example, “resolution”, “frame rate”, “image quality”, and “pixel representation accuracy”.
- Hierarchical coding technology is classified into (1) spatial scalability, (2) temporal scalability, (3) SNR (Signal to Noise Ratio) scalability, and (4) view scalability from the viewpoint of the type of information layered.
- Spatial scalability is a technique for layering by resolution or image size.
- Temporal scalability is a technique for layering by frame rate (the number of frames per unit time).
- SNR scalability is a technique for layering by coding noise.
- View scalability is a technique for layering by the viewpoint position associated with each image.
- The encoded data conversion device 3 converts the encoded data hierarchically encoded by the hierarchical moving image encoding device 2, and generates encoded data related to a predetermined attention area (attention area encoded data).
- the attention area encoded data can be decoded by the hierarchical moving picture decoding apparatus 1 according to the present embodiment.
- Prior to the detailed description of the hierarchical video encoding device 2, the hierarchical video decoding device 1, and the hierarchically encoded data conversion device 3 according to the present embodiment, first (1) the layer structure of the hierarchically encoded data generated by the hierarchical video encoding device 2 or the encoded data conversion device 3 and decoded by the hierarchical video decoding device 1 will be described, and then (2) specific examples of data structures that can be adopted in each layer will be described.
- FIG. 2 is a diagram schematically illustrating a case where a moving image is hierarchically encoded / decoded by three layers of a lower layer L3, a middle layer L2, and an upper layer L1. That is, in the example shown in FIGS. 2A and 2B, of the three layers, the upper layer L1 is the highest layer and the lower layer L3 is the lowest layer.
- Hereinafter, a decoded image corresponding to a specific quality that can be decoded from hierarchically encoded data is referred to as a decoded image of a specific hierarchy (or a decoded image corresponding to a specific hierarchy), for example, the decoded image POUT#A of the upper hierarchy L1.
- FIG. 2A shows hierarchical moving image encoding devices 2#A to 2#C that generate encoded data DATA#A to DATA#C by hierarchically encoding input images PIN#A to PIN#C, respectively.
- FIG. 2B shows hierarchical moving picture decoding devices 1#A to 1#C that generate decoded images POUT#A to POUT#C by decoding the hierarchically encoded data DATA#A to DATA#C.
- The input images PIN#A, PIN#B, and PIN#C input on the encoding device side are generated from the same original image but differ in image quality (resolution, frame rate, image quality, and the like).
- the image quality decreases in the order of the input images PIN # A, PIN # B, and PIN # C.
- the hierarchical video encoding apparatus 2 # B of the middle hierarchy L2 encodes the input image PIN # B of the middle hierarchy L2 with reference to the encoded data DATA # C of the lower hierarchy, and performs the middle hierarchy L2 Encoded data DATA # B is generated.
- The encoded data DATA#B of the middle hierarchy L2 includes additional information (indicated by "B" in FIG. 2) necessary for decoding the decoded image POUT#B of the middle hierarchy.
- the reference layers of the upper hierarchy L1 are the middle hierarchy L2 and the lower hierarchy L3.
- the hierarchically encoded data can be configured so that it is not necessary to refer to all of the lower layers in decoding of the specific layer.
- the hierarchical encoded data can be configured such that the reference layer of the upper hierarchy L1 is either the middle hierarchy L2 or the lower hierarchy L3.
- A set of encoding parameters referred to by the hierarchical video decoding device 1 for decoding each picture in the target sequence is defined.
- A plurality of PPSs may exist in the encoded data; in that case, one of the plurality of PPSs is selected for each picture in the target sequence.
- A PPS used for decoding a specific picture is also called an active PPS.
- Hereinafter, unless otherwise specified, PPS means the active PPS for the current picture.
- the active SPS and the active PPS may be set to different SPSs and PPSs for each layer.
- (1) I slices that use only intra prediction at the time of encoding, (2) P slices that use unidirectional prediction or intra prediction at the time of encoding, and (3) B slices that use unidirectional prediction, bidirectional prediction, or intra prediction at the time of encoding may be used.
- CTU layer In the CTU layer, a set of data referred to by the hierarchical video decoding device 1 for decoding a coding tree unit CTU to be processed (hereinafter also referred to as a target CTU) is defined.
- the coding tree unit may be referred to as a coding tree block (CTB) or a maximum coding unit (LCU).
- the coding tree unit CTU includes a CTU header CTUH and coding unit information CU 1 to CU NL (NL is the total number of coding unit information included in the CTU).
- The coding unit information (hereinafter referred to as CU information) CU 1 to CU NL is information corresponding to each coding node (coding unit) obtained by recursively dividing the coding tree unit CTU into quadtrees.
- The size of each coding node is half the size of the coding node that is its parent node (that is, the node one layer above the coding node).
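The recursive quadtree division above can be sketched as follows: each split decision turns a node into four children of half the size. The depth-first flag order, the helper name, and the 8x8 minimum CU size are assumptions made here for illustration.

```python
# Hedged sketch of recursive quadtree CU splitting from the CTU.
# Each split halves the node size; 8x8 minimum is an assumption.

def cu_sizes(ctu_size, split_flags):
    """Enumerate leaf CU sizes given per-node split decisions
    consumed in depth-first order from `split_flags`."""
    flags = iter(split_flags)
    def walk(size):
        if size > 8 and next(flags):
            # splitting yields four child nodes of half the size
            return [s for _ in range(4) for s in walk(size // 2)]
        return [size]
    return walk(ctu_size)
```

For example, splitting a 64x64 CTU once and leaving the four children unsplit yields four 32x32 coding units.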
- The coding node is the root node of a prediction tree (PT) and a transform tree (TT).
- There are roughly two types of partitioning in the prediction tree (hereinafter abbreviated as PU partitioning): partitioning for intra prediction and partitioning for inter prediction.
- The prediction type information PType specifies the predicted image generation method for the target PU; in the base layer, it specifies whether intra prediction or inter prediction is used.
- the intra prediction information PP_Intra includes an encoding parameter that is referred to when the hierarchical video decoding device 1 generates an intra predicted image by intra prediction. More specifically, the intra prediction information PP_Intra includes intra PU division information that specifies a division pattern of the target CU into each intra PU, and intra prediction parameters for each intra PU.
- the intra prediction parameter is a parameter for designating an intra prediction method (prediction mode) for each intra PU.
- Each transform block obtained by the division can take a size from 32x32 pixels down to 4x4 pixels.
- The PU partition types specified by the PU partition information include the following eight patterns in total, where the size of the target CU is 2Nx2N pixels: four symmetric partitions of 2Nx2N pixels, 2NxN pixels, Nx2N pixels, and NxN pixels, and four asymmetric partitions of 2NxnU pixels, 2NxnD pixels, nLx2N pixels, and nRx2N pixels. Here, N = 2^m (m is an arbitrary integer of 1 or more).
- a prediction unit obtained by dividing the target CU is referred to as a prediction block or a partition.
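The eight PU partition patterns above can be enumerated as follows. The (width, height) shapes follow the HEVC-style symmetric and asymmetric partitions; the function name and the dictionary return format are illustrative assumptions.

```python
# Hedged sketch of the eight PU partition patterns of a 2N x 2N CU.
# Return format (type name -> list of (width, height)) is an assumption.

def pu_partitions(cu_size):
    """PU (width, height) lists for each partition type of a
    cu_size x cu_size CU, where cu_size = 2N."""
    n, q = cu_size // 2, cu_size // 4
    return {
        "2Nx2N": [(cu_size, cu_size)],
        "2NxN":  [(cu_size, n)] * 2,
        "Nx2N":  [(n, cu_size)] * 2,
        "NxN":   [(n, n)] * 4,
        # asymmetric partitions split at a quarter of the CU size
        "2NxnU": [(cu_size, q), (cu_size, cu_size - q)],
        "2NxnD": [(cu_size, cu_size - q), (cu_size, q)],
        "nLx2N": [(q, cu_size), (cu_size - q, cu_size)],
        "nRx2N": [(cu_size - q, cu_size), (q, cu_size)],
    }
```

Every partition type covers the full CU area; for a 32x32 CU, for example, 2NxnU yields a 32x8 PU above a 32x24 PU.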
- enhancement layer encoded data For encoded data included in the layer representation of the enhancement layer (hereinafter, enhancement layer encoded data), for example, a data structure substantially similar to the data structure shown in FIG. 3 can be adopted. However, in the enhancement layer encoded data, additional information can be added or parameters can be omitted as follows.
- For example, hierarchy identification information for spatial scalability, temporal scalability, SNR scalability, and view scalability may be encoded.
- the CU type of the target CU is an intra CU, an inter-layer CU, an inter CU, or a skip CU.
- the intra CU can be defined in the same manner as the intra CU in the base layer.
- For example, in the intra CU, the inter-layer image prediction flag is set to "0" and the prediction mode flag is set to "0".
- An inter-layer CU can be defined as a CU that uses a decoded image of a picture in a reference layer for generating a predicted image.
- For example, in the inter-layer CU, the inter-layer image prediction flag is set to "1" and the prediction mode flag is set to "0".
- the skip CU can be defined in the same manner as in the HEVC method described above. For example, in the skip CU, “1” is set in the skip flag.
- The inter CU may be defined as a non-skip CU to which motion compensation (MC) is applied.
- the encoded data of the enhancement layer may be generated by an encoding method different from the encoding method of the lower layer. That is, the encoding / decoding process of the enhancement layer does not depend on the type of the lower layer codec.
- the lower layer may be encoded by, for example, MPEG-2 or H.264 / AVC format.
- the VPS may be extended to include a parameter representing a reference structure between layers.
- The SPS, PPS, and slice header may be extended to include information related to the decoded image of the reference layer used for inter-layer image prediction (for example, syntax for directly or indirectly deriving an inter-layer reference picture set, an inter-layer reference picture list, or base control information, described later).
- the parameters described above may be encoded independently, or a plurality of parameters may be encoded in combination.
- an index is assigned to the combination of parameter values, and the assigned index is encoded.
- the encoding of the parameter can be omitted.
- FIG. 4B illustrates the relationship between tiles and slices in the configuration of encoded data.
- The encoded data includes a plurality of VCL (Video Coding Layer) NAL units and non-VCL NAL units.
- The encoded data of the video coding layer corresponding to one picture is composed of a plurality of VCL NAL units.
- the encoded data corresponding to the picture includes encoded data corresponding to the tiles in the tile raster order. That is, as shown in FIG. 4A, when a picture is divided into tiles, encoded data corresponding to tiles is included in the order of tiles T00, T01, T02, T10, T11, and T12.
- encoded data corresponding to a specific tile in a picture is associated with encoded data corresponding to one or more slices. Therefore, if a decoded image of a slice associated with a tile can be generated, a decoded image of a partial region in a picture corresponding to the tile can be generated.
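The tile raster order described above can be sketched as follows: when a picture is divided into tile rows and columns, the encoded data of the tiles appears in row-major order, matching the T00, T01, T02, T10, T11, T12 example. The label format and function name are illustrative assumptions.

```python
# Hedged sketch of tile raster order: encoded data for a picture
# divided into tiles is arranged row by row, left to right.

def tile_raster_order(tile_rows, tile_cols):
    """List tile labels 'Trc' in the order their encoded data appears."""
    return [f"T{r}{c}" for r in range(tile_rows) for c in range(tile_cols)]
```

Because each tile is associated with one or more whole slices, a decoder that can locate the VCL NAL units of one tile in this order can reconstruct the partial picture region covered by that tile.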
- FIG. 5 exemplifies a system SYS_ROI1 that performs transmission and reproduction of a hierarchical video that can be realized by combining the hierarchical video decoding device 1, the hierarchical video encoding device 2, and the encoded data conversion device 3.
- Hierarchically encoded data BSALL includes encoded data corresponding to the entire high-quality input image PIN # H as hierarchically encoded data of an upper layer (enhancement layer).
- the hierarchically encoded data BSALL includes encoded data corresponding to the entire low-quality input image PIN # L as hierarchically encoded data of the lower layer (base layer).
- a decoded image DROI # H corresponding to the high quality input image PIN # H and corresponding to the region of interest ROI is output.
- a decoded image DOUT # L corresponding to the low-quality input image PIN # L is output.
- the description may be made assuming that the system SYS_ROI1 is used.
- the usage of the apparatus is not limited to the system SYS_ROI1.
- FIG. 6 is a functional block diagram showing a schematic configuration of the hierarchical video decoding device 1.
- The hierarchical video decoding device 1 decodes hierarchically encoded data DATA (the hierarchically encoded data DATAF provided from the hierarchical video encoding device 2 or the hierarchically encoded data DATAAR provided from the encoded data conversion device 3) and generates a decoded image POUT#T of the target layer.
- the target layer is an extension layer having the base layer as a reference layer. Therefore, the target layer is also an upper layer with respect to the reference layer. Conversely, the reference layer is also a lower layer with respect to the target layer.
- the hierarchical video decoding device 1 includes a NAL demultiplexing unit 11, a parameter set decoding unit 12, a tile setting unit 13, a slice decoding unit 14, a base decoding unit 15, and a decoded picture management unit 16.
- the NAL demultiplexing unit 11 demultiplexes the hierarchical encoded data DATA transmitted in units of NAL units in NAL (Network Abstraction Layer).
- NAL is a layer provided to abstract communication between VCL (Video Coding Layer) and lower systems that transmit and store encoded data.
- VCL is a layer that performs video encoding processing, and encoding is performed in VCL.
- the lower system here corresponds to the H.264 / AVC and HEVC file formats and the MPEG-2 system.
- The hierarchically encoded data DATA includes, in addition to the NAL units generated by the VCL, NAL units containing parameter sets (VPS, SPS, PPS), SEI, and the like. These NAL units are called non-VCL NAL units, as opposed to VCL NAL units.
- the NAL demultiplexing unit 11 demultiplexes the hierarchical encoded data DATA, and extracts the target layer encoded data DATA # T and the reference layer encoded data DATA # R. Further, the NAL demultiplexing unit 11 supplies non-VCL NAL to the parameter set decoding unit 12 and VCL NAL to the slice decoding unit 14 among NALs included in the target layer encoded data DATA # T.
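The routing performed by the NAL demultiplexing unit 11 can be sketched as follows. In HEVC, nal_unit_type values 0 to 31 are VCL NAL units and 32 to 63 are non-VCL; the layer a unit belongs to is given by nuh_layer_id. The NalUnit record and the routing function below are illustrative assumptions, not the device's actual implementation.

```python
# Hedged sketch of NAL demultiplexing: split target-layer NAL units into
# non-VCL units (for the parameter set decoding unit) and VCL units
# (for the slice decoding unit). HEVC reserves types 0-31 for VCL.

from collections import namedtuple

NalUnit = namedtuple("NalUnit", ["nal_unit_type", "nuh_layer_id", "payload"])

def demultiplex(nal_units, target_layer_id):
    """Return (non_vcl, vcl) lists of the target layer's NAL units."""
    non_vcl, vcl = [], []
    for nal in nal_units:
        if nal.nuh_layer_id != target_layer_id:
            continue  # reference-layer units would go to the base decoder
        (vcl if nal.nal_unit_type < 32 else non_vcl).append(nal)
    return non_vcl, vcl
```

An SPS (nal_unit_type 33) of the target layer is routed to the parameter set path, a slice segment of the target layer to the slice decoding path, and units of other layers are left for the reference-layer path.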
- the parameter set decoding unit 12 decodes the parameter set, that is, VPS, SPS, and PPS, from the input non-VCL NAL and supplies them to the tile setting unit 13 and the slice decoding unit 14. Details of processing highly relevant to the present invention in the parameter set decoding unit 12 will be described later.
- the tile setting unit 13 derives the tile information of the picture based on the input parameter set and supplies it to the slice decoding unit 14.
- the tile information includes at least tile division information of a picture. The detailed description of the tile setting unit 13 will be described later.
- the slice decoding unit 14 generates a decoded picture or a partial area of a decoded picture based on the input VCL NAL, parameter set, tile information, and reference picture, and records it in a buffer in the decoded picture management unit 16.
- a detailed description of the slice decoding unit will be described later.
- the decoded picture management unit 16 records the input decoded picture and the base decoded picture in an internal decoded picture buffer (DPB: Decoded Picture Buffer), and performs reference picture list generation and output picture determination. Also, the decoded picture management unit 16 outputs the decoded picture recorded in the DPB to the outside as an output picture POUT#T at a predetermined timing.
- the base decoding unit 15 decodes the base decoded picture from the reference layer encoded data DATA # R.
- the base decoded picture is a decoded picture of the reference layer used when decoding the decoded picture of the target layer.
- the base decoding unit 15 records the decoded base decoded picture in the DPB in the decoded picture management unit 16.
- FIG. 7 is a functional block diagram illustrating the configuration of the base decoding unit 15.
- the base NAL demultiplexing unit 151 demultiplexes the reference layer encoded data DATA#R to extract the VCL NAL and non-VCL NAL units, and supplies the non-VCL NAL to the base parameter set decoding unit 152 and the VCL NAL to the base slice decoding unit 154.
- the base parameter set decoding unit 152 decodes the parameter set, that is, VPS, SPS, and PPS, from the input non-VCL NAL and supplies them to the base tile setting unit 153 and the base slice decoding unit 154.
- the base slice decoding unit 154 generates a decoded picture or a partial area of the decoded picture based on the input VCL NAL, parameter set, tile information, and reference picture, and stores the decoded picture in the buffer in the base decoded picture management unit 156. Record.
- the base decoded picture management unit 156 records the input decoded picture in the internal DPB, and performs reference picture list generation and output picture determination. Further, the base decoded picture management unit 156 outputs the decoded picture recorded in the DPB as a base decoded picture at a predetermined timing.
- the parameter set output from the parameter set decoding unit 12 can also be expressed as the set of syntax values of the syntax elements related to the parameter sets (VPS, SPS, PPS) included in the encoded data, together with the variables derived from those syntax values.
- the parameter set decoding unit 12 decodes picture information from input target layer encoded data.
- the picture information is information that determines the size of the decoded picture of the target layer.
- the picture information includes information indicating the width and height of the decoded picture of the target layer.
- Picture information is included in SPS, for example.
- the picture information decoded from the SPS includes the width of the decoded picture (pic_width_in_luma_samples) and the height of the decoded picture (pic_height_in_luma_samples).
- the value of the syntax pic_width_in_luma_samples corresponds to the width of the decoded picture in luminance pixel units.
- the value of the syntax pic_height_in_luma_samples corresponds to the height of the decoded picture in luminance pixel units.
- the parameter set decoding unit 12 decodes the display area information from the input target layer encoded data.
- the display area information is included in the SPS, for example.
- the display area information decoded from the SPS includes a display area flag (conformance_flag).
- the display area flag indicates whether information indicating the position of the display area (display area position information) is additionally included in the SPS. That is, when the display area flag is 1, it indicates that the display area position information is additionally included, and when the display area flag is 0, it indicates that the display area position information is not additionally included.
- the display area information decoded from the SPS further includes a display area left offset (conf_win_left_offset), a display area right offset (conf_win_right_offset), a display area top offset (conf_win_top_offset), and a display area bottom offset (conf_win_bottom_offset).
- FIG. 8 is a diagram illustrating the relationship between the display area which is a partial area in the picture and the display area position information.
- the display area is included in the picture. The display area top offset represents the distance between the top edge of the picture and the top edge of the display area; the display area left offset represents the distance between the left edge of the picture and the left edge of the display area; the display area right offset represents the distance between the right edge of the picture and the right edge of the display area; and the display area bottom offset represents the distance between the bottom edge of the picture and the bottom edge of the display area. Therefore, the position and size of the display area in the picture can be uniquely specified by the display area position information.
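As a rough illustration, the relationship between the four offsets and the display area can be sketched as follows (a minimal Python sketch with illustrative names; HEVC additionally expresses these offsets in chroma sub-sampling units, which the sketch ignores):

```python
def display_area(pic_width, pic_height,
                 conf_win_left_offset=0, conf_win_right_offset=0,
                 conf_win_top_offset=0, conf_win_bottom_offset=0):
    """Return (x, y, width, height) of the display area inside the picture."""
    x = conf_win_left_offset            # distance from picture left edge
    y = conf_win_top_offset             # distance from picture top edge
    width = pic_width - conf_win_left_offset - conf_win_right_offset
    height = pic_height - conf_win_top_offset - conf_win_bottom_offset
    return x, y, width, height

# e.g. a 1920x1088 coded picture whose display area is 1920x1080:
print(display_area(1920, 1088, 0, 0, 0, 8))
```

Because each offset is a distance from a picture edge, the four values uniquely determine both the position and the size of the display area, as stated above.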
- the display area information may be other information that can uniquely identify the position and size of the display area in the picture.
- the parameter set decoding unit 12 decodes the inter-layer position correspondence information from the input target layer encoded data.
- the inter-layer position correspondence information schematically indicates the positional relationship between corresponding areas of the target layer and the reference layer. For example, when an object (object A) appears in both a picture of the target layer and a picture of the reference layer, the area corresponding to object A on the target layer picture and the area corresponding to object A on the reference layer picture are an example of corresponding areas of the target layer and the reference layer.
- the inter-layer position correspondence information does not necessarily indicate the exact positional relationship between the corresponding regions of the target layer and the reference layer, but in general it indicates the correct positional relationship between them in order to improve the accuracy of inter-layer prediction.
- the inter-layer position correspondence information includes inter-layer pixel correspondence information.
- the inter-layer pixel correspondence information is information indicating a positional relationship between a pixel on the reference layer picture and a pixel on the corresponding target layer picture.
- the inter-layer correspondence information may additionally include inter-layer phase correspondence information.
- the inter-layer phase correspondence information is information representing the phase difference of the pixels whose correspondence is indicated by the inter-layer pixel correspondence information.
- the inter-layer pixel correspondence information is included in, for example, an SPS extension (sps_extension) that is a part of the SPS of the higher layer, and is decoded according to the syntax table shown in FIG. 9.
- FIG. 9 is a part of a syntax table that the parameter set decoding unit 12 refers to when performing SPS decoding, and is a part related to inter-layer pixel correspondence information.
- the inter-layer pixel correspondence information decoded from the SPS includes the number of inter-layer pixel correspondence information (num_scaled_ref_layer_offsets) included in the SPS extension.
- the inter-layer pixel correspondence information includes an inter-layer pixel correspondence offset corresponding to the number of the inter-layer pixel correspondence information.
- the inter-layer pixel correspondence offset includes the enlarged reference layer left offset (scaled_ref_layer_left_offset[i]), the enlarged reference layer top offset (scaled_ref_layer_top_offset[i]), the enlarged reference layer right offset (scaled_ref_layer_right_offset[i]), and the enlarged reference layer bottom offset (scaled_ref_layer_bottom_offset[i]).
- the inter-layer pixel correspondence offset is also referred to as an enlarged reference layer offset.
- FIG. 10 is a diagram illustrating the relationship between the picture of the target layer, the picture of the reference layer, and the inter-layer pixel correspondence offset.
- FIG. 10A shows an example in which the entire picture of the reference layer corresponds to a part of the picture of the target layer.
- an area on the target layer corresponding to the entire reference layer picture (target layer corresponding area) is included in the target layer picture.
- FIG. 10B illustrates an example in which a part of the reference layer picture corresponds to the entire picture of the target layer.
- the target layer picture is included inside the reference layer corresponding area. Note that the entire target layer picture includes an offset.
- the enlarged reference layer right offset represents the offset of the right side of the reference layer corresponding area to the right side of the target layer picture.
- when the SRL right offset (enlarged reference layer right offset) is larger than 0, it indicates that the right side of the reference layer corresponding region is located to the left of the right side of the target layer picture.
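Using the offsets above, the size of the reference layer corresponding area on the target layer picture follows from the picture size and the four enlarged reference layer offsets. A hedged sketch, with illustrative names:

```python
def corresponding_region_size(cur_pic_w, cur_pic_h,
                              srl_left, srl_top, srl_right, srl_bottom):
    # Positive offsets place the region's edges inside the picture
    # (FIG. 10(a)); negative offsets place them outside (FIG. 10(b)).
    srl_w = cur_pic_w - srl_left - srl_right
    srl_h = cur_pic_h - srl_top - srl_bottom
    return srl_w, srl_h
```

For example, with left and right offsets of 320 on a 1920-wide picture, the corresponding area is 1280 samples wide.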
- the scale correction information decoded from the SPS includes the number of scale correction information entries (A0: num_scale_adjust_info).
- the SPS includes a set of syntax elements (A1, A2L, A2T, A2R, A2B, A3W, A3H) indicated by the number of scale correction information.
- the scale correction reference layer identifier (A1) is an identifier of a specific reference layer for the target layer.
- the specific reference layer is a reference layer to be subjected to scale correction processing.
- the virtual reference layer size (A3W, A3H) is a parameter used for calculating an inter-layer scale (correction scale).
- as the virtual reference layer width, the reference layer picture width before hierarchical encoded data conversion is set.
- the syntax (PPS tile information) related to tile information included in the PPS includes a multi-tile enabled flag (tiles_enabled_flag).
- when the value of the multi-tile enabled flag (tiles_enabled_flag) is 1, it indicates that the picture is composed of two or more tiles. When the value of the flag is 0, the picture is composed of one tile, that is, the picture and the tile coincide.
- Num_tile_columns_minus1 is a syntax corresponding to a value obtained by subtracting 1 from the number of tiles included in the horizontal direction of the picture.
- Num_tile_rows_minus1 is a syntax corresponding to a value obtained by subtracting 1 from the number of tiles included in the vertical direction of the picture. Therefore, the number of tiles NumTilesInPic included in the picture is calculated by the following equation.
- NumTilesInPic = (num_tile_columns_minus1 + 1) * (num_tile_rows_minus1 + 1)
- a uniform_spacing_flag value of 1 indicates that the tile size included in the picture is uniform, that is, the width and height of each tile are equal.
- a uniform_spacing_flag value of 0 indicates that the tile sizes included in the picture are uneven, that is, the width and height of the tiles included in the picture do not necessarily match.
- in that case, the PPS tile information additionally includes information indicating the tile width (column_width_minus1[i]) for each tile column included in the picture and information indicating the tile height (row_height_minus1[i]) for each tile row included in the picture.
- the tile setting unit 13 derives tile structure information.
- the tile structure information includes an array for deriving a tile scan CTB address from a raster scan CTB address (CtbAddrRsToTs[ctbAddrRs]), an array for deriving a raster scan CTB address from a tile scan CTB address (CtbAddrTsToRs[ctbAddrTs]), the tile identifier of each tile scan CTB address (TileId[ctbAddrTs]), the width of each tile column (ColumnWidthInLumaSamples[i]), and the height of each tile row (RowHeightInLumaSamples[j]).
- the width of each tile column is calculated based on the picture size and the number of tiles in the picture. For example, the width of the i-th tile column (ColumnWidthInLumaSamples[i]) is calculated by the following equation. Note that PicWidthInCtbsY represents the number of CTUs included in the horizontal direction of the picture.
- colBd [i] indicating the boundary position of the i-th tile row and rowBd [j] indicating the boundary position of the j-th tile row are calculated by the following equations. Note that the values of colBd [0] and rowBd [0] are 0.
- CtbAddrRsToTs is set, for the CTU at raster position (tbX, tbY) belonging to the tile (tileX, tileY), to the total number of CTUs contained in the tiles preceding that tile in tile scan order, plus the raster scan position within the tile of the CTU located at (tbX - colBd[tileX], tbY - rowBd[tileY]).
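The tile column derivation above can be sketched as follows (a sketch assuming HEVC-style rounding for uniform spacing; widths are in CTU units and all names are illustrative):

```python
def tile_columns(pic_width_in_ctbs, num_tile_columns_minus1,
                 uniform_spacing=True, column_width_minus1=None):
    """Return (widths, bounds): per-column widths and boundary positions colBd."""
    n = num_tile_columns_minus1 + 1
    if uniform_spacing:
        # Uniform spacing distributes the remainder across columns.
        widths = [(i + 1) * pic_width_in_ctbs // n - i * pic_width_in_ctbs // n
                  for i in range(n)]
    else:
        # Explicit widths are signalled for all but the last column,
        # which takes the remaining CTUs.
        widths = [w + 1 for w in column_width_minus1]
        widths.append(pic_width_in_ctbs - sum(widths))
    # Boundary positions: colBd[0] = 0, colBd[i+1] = colBd[i] + widths[i].
    bounds = [0]
    for w in widths:
        bounds.append(bounds[-1] + w)
    return widths, bounds
```

For a picture 10 CTUs wide split into 3 uniform columns, this yields widths [3, 3, 4] and boundaries [0, 3, 6, 10]; the same derivation applies to tile rows using PicHeightInCtbsY.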
- FIG. 15 is a functional block diagram illustrating a schematic configuration of the slice decoding unit 14.
- the slice header decoding unit 141 decodes the slice header based on the input VCL NAL and the parameter set, and outputs the decoded slice header to the slice position setting unit 142, the skip slice determination unit 143, and the CTU decoding unit 144.
- the slice position setting unit 142 specifies the slice position in the picture based on the input slice header and tile information, and outputs the slice position to the CTU decoding unit 144.
- the slice position in the picture derived by the slice position setting unit 142 includes the position in the picture of each CTU included in the slice.
- the generation process of the predicted pixel value of the target pixel included in the target CTU to which the inter-layer image prediction is applied is executed according to the following procedure.
- a reference picture position derivation process is executed to derive a corresponding reference position.
- the corresponding reference position is a position on the reference layer corresponding to the target pixel on the target layer picture. Since the pixels of the target layer and the reference layer do not necessarily correspond one-to-one, the corresponding reference position is expressed with an accuracy of less than the pixel unit in the reference layer.
- the prediction pixel value of the target pixel is generated by executing the interpolation filter process using the derived corresponding reference position as an input.
- currPicW and currPicH are the width and height of the target picture.
- when the target of the corresponding reference position derivation process is a luma pixel, they match the syntax values of pic_width_in_luma_samples and pic_height_in_luma_samples included in the SPS picture information of the target layer. When the target is a chroma pixel, values obtained by converting those syntax values according to the color format are used.
- the virtual reference layer corresponding region size is calculated by the sum of the current picture size, the expanded reference layer offset, and the expanded reference layer additional offset.
- rlVirtualW is the virtual reference layer width
- rlVirtualH is the virtual reference layer height
- the inter-layer scale is derived by dividing the virtual reference layer size (virtual reference layer width or virtual reference layer height) by the virtual reference layer corresponding region size (virtual reference layer corresponding region width or virtual reference layer corresponding region height).
- the inter-layer scale is calculated as a ratio between the virtual reference layer size and the virtual reference layer corresponding region size.
- the reference position is calculated based on the inter-layer pixel correspondence information and the inter-layer scale.
- the horizontal component xRef and the vertical component yRef of the reference position corresponding to the target layer pixel are calculated by the following equations. Note that xRef represents the horizontal position and yRef the vertical position, both in pixel units of the reference layer picture with reference to its upper left pixel.
- xRef = (xP - SRLLOffset) * scaleX
- yRef = (yP - SRLTOffset) * scaleY
- xP and yP represent the horizontal component and the vertical component of the target layer pixel with reference to the upper left pixel of the target layer picture, in pixel units of the target layer picture.
- xRef16 = ((xP - SRLLOffset) * scaleIntX + (1 << 11)) >> 12
- yRef16 = ((yP - SRLTOffset) * scaleIntY + (1 << 11)) >> 12
- the position on the reference layer corresponding to the target pixel on the target layer picture can be derived as the corresponding reference position.
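The fixed-point derivation above can be sketched directly (assuming, consistent with the << 16 in the scale derivation, that scaleIntX and scaleIntY carry 16 fractional bits, so a right shift of 12 leaves a position in 1/16-pel units; names are illustrative):

```python
def corresponding_ref_position(xP, yP, srl_l_offset, srl_t_offset,
                               scale_int_x, scale_int_y):
    """Map a target layer pixel (xP, yP) to a reference layer position.

    xP, yP are relative to the upper-left pixel of the target layer picture;
    the result is in 1/16-pel units of the reference layer picture.
    """
    xRef16 = ((xP - srl_l_offset) * scale_int_x + (1 << 11)) >> 12
    yRef16 = ((yP - srl_t_offset) * scale_int_y + (1 << 11)) >> 12
    return xRef16, yRef16
```

With a 1:1 scale (1 << 16) and zero offsets, pixel (4, 8) maps to (64, 128) in 1/16-pel units, i.e. the same pixel position on the reference layer.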
- for the inter-layer scale, the ratio between the virtual reference layer size and the virtual reference layer corresponding region size, both derived based on the scale correction information, is used instead of the ratio between the actual reference layer picture size and the reference layer corresponding region size.
- by deriving a virtual reference layer size equal to the reference layer size in the hierarchically encoded data before conversion, and a virtual reference layer corresponding region size equal to the reference layer corresponding region size in the hierarchically encoded data before conversion, the inter-layer scale can be derived when decoding the hierarchically encoded data after conversion. Therefore, the inter-layer scale can be maintained at the same value before and after conversion.
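To illustrate why this preserves the inter-layer scale, the following sketch derives the scale from virtual sizes that keep their pre-conversion values (the rounding mirrors the ((rlW << 16) + (srlW >> 1)) / srlW form used elsewhere in this description; all names are illustrative):

```python
def inter_layer_scale(virtual_rl_w, virtual_srl_w):
    # Ratio of virtual reference layer width to virtual reference layer
    # corresponding region width, in 16-bit fixed point with rounding.
    return ((virtual_rl_w << 16) + (virtual_srl_w >> 1)) // virtual_srl_w

# Before conversion: 960-wide reference layer, 1920-wide corresponding area.
scale_before = inter_layer_scale(960, 1920)

# After region-of-interest extraction the actual picture sizes change, but the
# virtual sizes recorded in the scale correction information keep their
# pre-conversion values, so the derived inter-layer scale is unchanged.
scale_after = inter_layer_scale(960, 1920)
assert scale_before == scale_after == (1 << 15)   # 0.5 in fixed point
```

Had the scale instead been recomputed from the cropped actual sizes, it would generally differ from the pre-conversion value, degrading inter-layer prediction accuracy.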
- the pixel value at the corresponding reference position derived in the corresponding reference position deriving process is generated by applying interpolation filter processing to the decoded pixels near the corresponding reference position on the reference layer picture.
- xRef16 = ((xP - SRLLOffset) * scaleIntX + addX + (1 << 11)) >> 12
- yRef16 = ((yP - SRLTOffset) * scaleIntY + addY + (1 << 11)) >> 12
- addX and addY are the amounts corresponding to the phase shift in the horizontal direction and the vertical direction, respectively, and are calculated by the following equations using phaseX, the phase difference in the horizontal direction expressed with quarter-pixel accuracy, and phaseY, the corresponding phase difference in the vertical direction.
- the hierarchical moving picture decoding apparatus 1 (hierarchical image decoding apparatus) according to the present embodiment described above includes the parameter set decoding unit 12, which decodes a parameter set, and the predicted image generation unit 1442, which generates a predicted image by inter-layer prediction with reference to the decoded pixels of the reference layer picture.
- the parameter set decoding unit 12 decodes the scale correction information, and the predicted image generation unit 1442 uses the inter-layer scale value derived from the scale correction information to derive the corresponding reference position for the pixel on the target layer.
- the hierarchical moving picture decoding apparatus 1 can derive the same inter-layer scale before and after conversion even when the hierarchical encoded data is converted for a purpose such as attention area extraction. Therefore, in the encoded data after conversion, the accuracy of the positional relationship between the pixels of the upper layer and the pixels of the lower layer is maintained, the prediction residual of inter-layer prediction is thereby reduced, and the encoded data can be decoded to output a decoded picture of the upper layer.
- the parameter set decoding unit 12 decodes the scale correction information from the input target layer encoded data.
- the scale correction information is included in the SPS extension, for example, and is decoded according to the syntax table shown in FIG. FIG. 16 is a part of a syntax table that the parameter set decoding unit 12 refers to when performing SPS decoding, and is a part related to scale correction information.
- the alternative inter-layer scale is a parameter used for calculating the inter-layer scale.
- the alternate inter-layer scale is used directly as the value of the inter-layer scale.
- the SPS includes an alternative inter-layer scale horizontal component (alt_scale_x [i]) and an alternative inter-layer scale vertical component (alt_scale_y [i]) corresponding to a specific reference layer as the alternative inter-layer scale.
- the array subscript ([i]) represents the number of parameters in the SPS.
- the alternative inter-layer scale of the subscript i is an alternative inter-layer scale when the layer indicated by the scale correction reference layer identifier of the subscript i is the reference layer.
- the scale between layers is calculated according to the following steps S201 to S203.
- (S201) It is determined whether an alternative inter-layer scale is transmitted for the reference layer used for inter-layer prediction. Specifically, when the layer identifier of the reference layer is included among the scale correction reference layer identifiers in the scale correction information, it is determined that an alternative inter-layer scale is transmitted; otherwise, it is determined that no alternative inter-layer scale is transmitted. If an alternative inter-layer scale is transmitted, the process proceeds to S202; if not, it proceeds to S203.
- (S203) the inter-layer scale is calculated as the ratio of the reference layer size to the reference layer corresponding region size, and the process is terminated.
- the inter-layer scale is calculated by the following equations.
- scaleX = ((rlW << 16) + (srlW >> 1)) / srlW
- scaleY = ((rlH << 16) + (srlH >> 1)) / srlH
- rlW and rlH represent the width and height of the reference layer picture, respectively, and srlW and srlH represent the width and height of the reference layer corresponding area, respectively.
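Steps S201 to S203 can be sketched as follows (the dictionary stands in for the decoded scale correction reference layer identifiers and their alternative scales; names are illustrative):

```python
def derive_inter_layer_scale(ref_layer_id, alt_scales, rlW, srlW):
    # S201: is an alternative inter-layer scale transmitted for this layer?
    if ref_layer_id in alt_scales:
        # S202: use the transmitted alternative inter-layer scale directly.
        return alt_scales[ref_layer_id]
    # S203: ratio of reference layer size to reference layer corresponding
    # region size, in 16-bit fixed point with rounding.
    return ((rlW << 16) + (srlW >> 1)) // srlW
```

When the scale correction information carries a value for the reference layer, that value overrides the ratio computed from the actual sizes; otherwise decoding falls back to the ordinary ratio.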
- the value ranges of the horizontal component and the vertical component of the alternative inter-layer scale must be the same as those of the inter-layer scale when the alternative inter-layer scale is not used.
- the range of scaleX is an integer greater than or equal to 0 and less than or equal to (rlMaxW << 16). Therefore, the alternative inter-layer scale needs to be included in the parameter set with a code capable of expressing an equivalent range.
- assuming that the reference layer picture width rlW (height rlH) is equal to or smaller than the reference layer corresponding region width srlW (height srlH), scaleX takes its maximum value when srlW is equal to rlW, namely scaleX = 1 << 16. Therefore, the value range of the alternative inter-layer scale is limited to integers from 0 to (1 << 16). In such a case, it is preferable to include the alternative inter-layer scale in the parameter set using a 16-bit fixed length code.
- an alternative inter-layer scale in a range corresponding to the limitation may be decoded and used.
- a value obtained by subtracting minScale from the final alternative inter-layer scale value may be encoded as alt_scale_x [i] and alt_scale_y [i].
- the actually used alternative inter-layer scales AltScaleX[i] and AltScaleY[i] are derived from the values alt_scale_x[i] and alt_scale_y[i] included in the encoded data.
- the predicted image generation unit 1442 uses (S202 ′) instead of (S202).
- the parameter set decoding unit 12 decodes the scale correction information from the input target layer encoded data.
- the scale correction information is included in the SPS extension, for example, and is decoded according to the syntax table shown in FIG. FIG. 17 is a part of a syntax table that the parameter set decoding unit 12 refers to when performing SPS decoding, and is a part related to scale correction information.
- the alternative inter-layer scale difference is a parameter used to calculate the inter-layer scale.
- the final inter-layer scale is calculated by adding the inter-alternative layer scale difference to the inter-layer scale derived from the ratio between the reference layer corresponding region size and the reference layer size.
- the parameter set includes an alternative inter-layer scale horizontal component difference (alt_scale_x_diff[i]) and an alternative inter-layer scale vertical component difference (alt_scale_y_diff[i]) corresponding to a specific reference layer.
- the meaning of the subscript i is the same as in the case of the alternative inter-layer scale.
- the scale between layers is calculated by the following equation.
- scaleX = (((rlW << 16) + (srlW >> 1)) / srlW) + alt_scale_x_diff[i]
- scaleY = (((rlH << 16) + (srlH >> 1)) / srlH) + alt_scale_y_diff[i]
- that is, the sum of the ratio between the reference layer size and the reference layer corresponding region size and the alternative inter-layer scale difference is set as the inter-layer scale.
- the alternative inter-layer scale difference is preferably included in the parameter set by a code that can express a positive or negative integer and that is assigned a shorter code with respect to a smaller absolute value.
- the alternate layer scale difference horizontal component or vertical component can be included in the parameter set (SPS or VPS) using an exponential Golomb code (se (v)) for signed integers defined in HEVC.
- when the minimum value of the alternative inter-layer scale difference is considered to be -1 and the maximum value 1, then instead of alt_scale_x_diff[i] and alt_scale_y_diff[i], flags alt_scale_x_diff_present_flag[i] and alt_scale_y_diff_present_flag[i] indicating whether an alternative inter-layer scale difference is used, together with flags alt_scale_x_diff_sign_flag[i] and alt_scale_y_diff_sign_flag[i] indicating the sign of the alternative inter-layer scale difference when it is used, may be encoded.
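The difference-based variant can be sketched the same way (a hedged sketch; the base ratio follows the equations above, and the difference is simply added to it):

```python
def scale_with_diff(rlW, srlW, alt_scale_x_diff=0):
    # Base ratio of reference layer width to corresponding region width,
    # in 16-bit fixed point with rounding.
    base = ((rlW << 16) + (srlW >> 1)) // srlW
    # The signalled difference corrects the ratio to the intended scale.
    return base + alt_scale_x_diff
```

Because the difference is usually small (often -1, 0, or +1), it can be coded compactly, as the text notes, with short codes for small absolute values.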
- the scale between layers is calculated by the following equation.
- the scale correction information in this case is included in the SPS extension, for example, and is decoded according to the syntax table shown in FIG. FIG. 18 is a part of a syntax table that the parameter set decoding unit 12 refers to when performing SPS decoding, and is a part related to scale correction information.
- the virtual reference layer corresponding region direct size (A5WH) is a parameter used for calculating the virtual reference layer corresponding region size used for the inter-layer scale calculation.
- the virtual reference layer corresponding region direct size is directly used as a value of the virtual reference layer corresponding region size.
- the value of the inter-layer scale is calculated as the ratio of the virtual reference layer size to the virtual reference layer corresponding region size by the same method as the processing described in the predicted image generation unit 1442.
- the parameter set includes a virtual reference layer corresponding area width (srl_virtual_width[i]) and a virtual reference layer corresponding area height (srl_virtual_height[i]) corresponding to a specific reference layer.
- the meaning of the subscript i is the same as in the case of the alternative inter-layer scale.
- the virtual reference layer corresponding region width and height are preferably encoded in the parameter set with the same code as the picture size height and width, respectively.
- the virtual reference layer corresponding region size is calculated by the following equations.
- VSRLW = srl_virtual_width[i]
- VSRLH = srl_virtual_height[i]
- the virtual reference layer corresponding area difference size (A5WHd) is a parameter used for calculating the virtual reference layer corresponding area size used for the inter-layer scale calculation.
- the virtual reference layer corresponding region size is calculated by adding the virtual reference layer corresponding region difference size to the reference layer corresponding region size.
- the value of the inter-layer scale is calculated as the ratio of the virtual reference layer size to the virtual reference layer corresponding region size by the same method as the processing described in the predicted image generation unit 1442.
- the parameter set includes a virtual reference layer corresponding area difference width (srl_virtual_width_diff[i]) and a virtual reference layer corresponding area difference height (srl_virtual_height_diff[i]) corresponding to a specific reference layer. The meaning of the subscript i is the same as in the case of the alternative inter-layer scale.
- the virtual reference layer corresponding region size is calculated by the following equation.
- VSRLW = currPicW - SRLLOffset - SRLROffset + srl_virtual_width_diff[i]
- VSRLH = currPicH - SRLTOffset - SRLBOffset + srl_virtual_height_diff[i]
- the virtual reference layer size difference (A3WHd) is a parameter used for calculating a virtual reference layer size used for calculating an inter-layer scale.
- the virtual reference layer size is calculated by adding the virtual reference layer size difference to the reference layer size.
- the value of the inter-layer scale is calculated as the ratio of the virtual reference layer size to the virtual reference layer corresponding region size by the same method as the processing described in the predicted image generation unit 1442.
- the parameter set includes a virtual reference layer difference width (rl_virtual_width_diff [i]) and a virtual reference layer difference height (rl_virtual_height_diff [i]) corresponding to a specific reference layer.
- the meaning of the subscript i is the same as in the case of the alternative inter-layer scale.
- the virtual reference layer size is calculated by the following equations.
- rlVirtualW = refPicW + rl_virtual_width_diff[i]
- rlVirtualH = refPicH + rl_virtual_height_diff[i]
- refPicW and refPicH correspond to the width and height of the reference layer picture size used for the predicted image generation processing, respectively.
- when both the width and height of the virtual reference layer size difference are set to 0, the virtual reference layer size derived by the above equations matches the reference layer size.
- A2TL: enlarged reference layer upper left additional offset
- A2BR: enlarged reference layer lower right additional offset
- A3WH: virtual reference layer size
- A3WHd: virtual reference layer size difference
- A4S: alternative inter-layer scale
- A4Sd: alternative inter-layer scale difference
- A5WH: virtual reference layer corresponding region size
- A5WHd: virtual reference layer corresponding region size difference
- the upper left additional offset of the enlarged reference layer described with reference to FIG. 11 is used.
- the enlarged reference layer upper left additional offset is a component of the scale correction information and a component of the inter-layer phase correspondence information.
- xRef16 = xRefOrg16 - (xRefOffset << 4)
- yRef16 = yRefOrg16 - (yRefOffset << 4)
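The correction above can be sketched as follows (assuming positions are in 1/16-pel units, so a whole-pixel offset is converted by a left shift of 4; names are illustrative):

```python
def apply_upper_left_additional_offset(xRefOrg16, yRefOrg16,
                                       xRefOffset, yRefOffset):
    # Subtract the enlarged reference layer upper-left additional offset,
    # converted from whole pixels to 1/16-pel units.
    return xRefOrg16 - (xRefOffset << 4), yRefOrg16 - (yRefOffset << 4)
```

For instance, an additional offset of (1, 2) pixels shifts a position of (64, 128) in 1/16-pel units to (48, 96).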
- when the corresponding reference position is derived using the scale correction information and the inter-layer phase correspondence information, combinations of the various parameters described in the first modification can be used as the scale correction information.
- the enlarged reference layer upper left additional offset can be shared between the use of the scale correction information and the use of the inter-layer phase correspondence information, and thus there is an advantage that the code amount of the parameter set is small.
- The slice encoding unit 24 encodes the part of the input image corresponding to a slice constituting the picture, generates encoded data of that part, and supplies the encoded data to the NAL multiplexing unit 21 as a part of the target layer encoded data DATA#T. The slice encoding unit 24 will be described in detail later.
- the slice header setting unit 241 generates a slice header used for encoding an input image input in units of slices based on the input parameter set and slice position information.
- the generated slice header is output as a part of the slice encoded data, and is supplied to the CTU encoding unit 244 together with the input image.
- The predicted image encoding unit 2442 determines the prediction method and prediction parameters of the target CTU included in the target slice, generates a predicted image based on the determined prediction method, and outputs it to the prediction residual encoding unit 2441 and the CTU decoded image generation unit 1443.
- Information on the prediction method and prediction parameters is variable-length encoded as prediction information (PT information) and output as a part of slice data included in the slice encoded data.
- the prediction methods that can be selected by the prediction image encoding unit 2442 include at least inter-layer image prediction.
- When inter-layer image prediction is selected as the prediction method, the predicted image encoding unit 2442 performs the corresponding reference position derivation process, determines the reference layer pixel position corresponding to the prediction target pixel, and determines the predicted pixel value by interpolation processing based on that position.
- For the corresponding reference position derivation process, each of the processes described for the predicted image generation unit 1442 of the hierarchical video decoding device 1 can be applied. For example, the process of deriving the corresponding reference position based on the scale correction information included in the parameter set, described with reference to FIG. 11, is applied.
- The hierarchical video encoding device 2 (hierarchical image encoding device) according to the present embodiment described above includes a parameter set encoding unit 22 that encodes a parameter set, and a predicted image encoding unit 2442 that generates a predicted image by inter-layer prediction with reference to decoded pixels of a reference layer picture.
- The parameter set encoding unit 22 encodes the scale correction information, and the predicted image encoding unit 2442 derives the corresponding reference position for a pixel on the target layer using the value of the inter-layer scale derived from the scale correction information.
- The hierarchical encoded data conversion device 3 includes a NAL demultiplexing unit 11, a NAL multiplexing unit 21, a parameter set decoding unit 12, a tile setting unit 13, a parameter set correction unit 32, and a NAL selection unit 34.
- The attention area information designates a partial region of a picture, specified by a user (for example, a viewer of the reproduced moving image), in the pictures constituting the moving image.
- The attention area information is specified as, for example, a rectangular area.
- For example, the offsets of the positions of the top, bottom, left, and right sides of the rectangle representing the attention area from the corresponding sides (top, bottom, left, or right) of the entire picture can be designated as the attention area information.
- Although an area with a shape other than a rectangle (for example, a circle, a polygon, or an area indicating an object extracted by object extraction) can also be used, a rectangular attention area is assumed below.
- When a non-rectangular area is used, the rectangle with the smallest area containing that area can be regarded as the attention area in the following description.
- The hierarchical encoded data conversion device 3 removes, from the input pre-conversion hierarchically encoded data, the tiles on the enhancement layer that have no region overlapping the attention area, corrects the related parameter sets, and thereby generates the converted hierarchically encoded data.
- The hierarchical video decoding device can generate a decoded image of the attention area, taking the converted hierarchically encoded data as input.
- The parameter set correction unit 32 removes, from the tiles included in the pre-conversion BL picture, the tiles unnecessary for decoding the tiles extracted in the enhancement layer.
- the base layer PPS tile information is updated so that the region on the enhancement layer corresponding to the converted BL picture (the converted reference layer corresponding region) includes the converted EL picture.
- the parameter set correction unit 32 corrects the inter-layer pixel correspondence information included in the parameter set based on the change in the picture size. Specifically, all the inter-layer pixel correspondence offsets included in the inter-layer pixel correspondence information are corrected.
- The extended reference layer left offset (scaled_ref_layer_left_offset[i]) constituting the inter-layer pixel correspondence offsets is set to a value corresponding to the distance between the upper left pixel of the converted reference layer corresponding region and the upper left pixel of the converted EL picture.
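As a rough sketch of this correction (all names and the sign convention are assumptions for illustration; the actual offset semantics follow the syntax of scaled_ref_layer_left_offset[i]):

```python
def corrected_left_offset(ref_region_left_x, el_pic_left_x):
    """Hypothetical helper: value for scaled_ref_layer_left_offset[i] after
    conversion, as the signed horizontal distance (in EL pixel units) from the
    upper left pixel of the converted reference layer corresponding region to
    the upper left pixel of the converted EL picture."""
    return el_pic_left_x - ref_region_left_x
```

The sign convention here is an assumption; the point is only that the offset is recomputed from the two upper left pixel positions after conversion.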
- In general terms, the attention area display system SYS hierarchically encodes and stores input images of different qualities, converts the stored hierarchically encoded data according to attention area information from a user and provides the result, and displays a high-quality reproduced image of the attention area (ROI) by decoding the converted hierarchically encoded data.
- ROI: region of interest (attention area)
- the hierarchical video encoding device 2 described above can be used for the hierarchical video encoding units SYS1A and SYS1B.
- the hierarchically encoded data storage unit SYS2 stores hierarchically encoded data and supplies the hierarchically encoded data as required.
- a computer having a recording medium (memory, hard disk, optical disk) can be used as the hierarchically encoded data storage unit SYS2.
- The ROI notification unit SYS8 notifies the attention area information designated by the user by a predetermined method. For example, the user can inform the ROI notification unit of the attention area by designating the corresponding area on the display area where the entire display image is shown. Note that, when there is no user designation, the ROI notification unit SYS8 notifies information indicating that there is no attention area as the attention area information.
- The hierarchical video encoding unit SYS1A encodes the input high-quality input image using the input hierarchically encoded data as the encoded data of the reference layer, generates hierarchically encoded data, and outputs it to the hierarchically encoded data storage unit SYS2.
- the hierarchically encoded data storage unit SYS2 attaches an appropriate index to the input hierarchically encoded data and records it on an internal recording medium.
- The hierarchically encoded data is read from the hierarchically encoded data storage unit SYS2, converted into hierarchically encoded data corresponding to the attention area, and the converted hierarchically encoded data is decoded, reproduced, and displayed.
- The attention area data generation/reproduction processing is executed in the following steps T201 to T205.
- T201: The hierarchically encoded data related to the moving image selected by the user is supplied from the hierarchically encoded data storage unit SYS2 to the hierarchical encoded data conversion unit SYS3.
- the hierarchical encoded data conversion unit SYS3 converts the input hierarchical encoded data based on the input attention area information, and outputs the converted hierarchical encoded data to the hierarchical video decoding unit SYS4.
- the hierarchical video decoding unit SYS4 decodes the input hierarchical video encoded data (after conversion), and outputs the obtained decoded picture of the upper layer to the display unit SYS6.
- the display unit SYS6 displays the input decoded image.
- The attention area display system SYS includes an attention area notification unit (ROI notification unit SYS8) that supplies attention area information, a hierarchical encoded data conversion unit SYS3 that converts the hierarchically encoded data based on the attention area information and generates the converted hierarchically encoded data, a hierarchical video decoding unit SYS4 that decodes the converted hierarchically encoded data and outputs decoded pictures of an upper layer and a lower layer, and a display unit SYS6.
- the decoded picture of the area specified by the attention area information can be displayed.
- The decoded picture of the area specified by the attention area information is generated from the encoded data of the upper layer of the hierarchically encoded data using inter-layer image prediction with the inter-layer scale derived based on the scale correction information, and therefore has high image quality.
- the hierarchically encoded data converted based on the attention area has a smaller code amount than the hierarchically encoded data before conversion. Therefore, by using the attention area display system SYS described above, it is possible to reproduce a decoded picture with high image quality related to the attention area while reducing the bandwidth required for transferring the hierarchically encoded data.
- the above-described hierarchical video encoding device 2 and hierarchical video decoding device 1 can be used by being mounted on various devices that perform transmission, reception, recording, and reproduction of moving images.
- the moving image may be a natural moving image captured by a camera or the like, or may be an artificial moving image (including CG and GUI) generated by a computer or the like.
- FIG. 24A is a block diagram illustrating a configuration of a transmission device PROD_A in which the hierarchical video encoding device 2 is mounted.
- The transmission device PROD_A includes an encoding unit PROD_A1 that obtains encoded data by encoding a moving image, a modulation unit PROD_A2 that obtains a modulated signal by modulating a carrier wave with the encoded data obtained by the encoding unit PROD_A1, and a transmission unit PROD_A3 that transmits the modulated signal obtained by the modulation unit PROD_A2.
- the hierarchical moving image encoding apparatus 2 described above is used as the encoding unit PROD_A1.
- As supply sources of the moving image input to the encoding unit PROD_A1, the transmission device PROD_A may further include a camera PROD_A4 that captures a moving image, a recording medium PROD_A5 that records the moving image, an input terminal PROD_A6 for inputting the moving image from the outside, and an image processing unit A7 that generates or processes images.
- FIG. 24A illustrates a configuration in which the transmission apparatus PROD_A includes all of these, but a part of the configuration may be omitted.
- FIG. 24B is a block diagram illustrating a configuration of the reception device PROD_B in which the hierarchical video decoding device 1 is mounted.
- The reception device PROD_B includes a reception unit PROD_B1 that receives a modulated signal, a demodulation unit PROD_B2 that obtains encoded data by demodulating the modulated signal received by the reception unit PROD_B1, and a decoding unit PROD_B3 that obtains a moving image by decoding the encoded data obtained by the demodulation unit PROD_B2.
- the above-described hierarchical video decoding device 1 is used as the decoding unit PROD_B3.
- As supply destinations of the moving image output by the decoding unit PROD_B3, the reception device PROD_B may further include a display PROD_B4 that displays the moving image, a recording medium PROD_B5 for recording the moving image, and an output terminal PROD_B6 for outputting the moving image to the outside.
- FIG. 24B illustrates a configuration in which the reception apparatus PROD_B includes all of these, but a part of the configuration may be omitted.
- Note that the recording medium PROD_B5 may record an unencoded moving image, or may record data encoded with a recording encoding scheme different from the transmission encoding scheme. In the latter case, an encoding unit (not shown) that encodes the moving image acquired from the decoding unit PROD_B3 according to the recording encoding scheme may be interposed between the decoding unit PROD_B3 and the recording medium PROD_B5.
- the transmission medium for transmitting the modulation signal may be wireless or wired.
- The transmission mode for transmitting the modulated signal may be broadcasting (here, a transmission mode in which the transmission destination is not specified in advance) or communication (here, a transmission mode in which the transmission destination is specified in advance). That is, transmission of the modulated signal may be realized by any of wireless broadcasting, wired broadcasting, wireless communication, and wired communication.
- a terrestrial digital broadcast broadcasting station (broadcasting equipment or the like) / receiving station (such as a television receiver) is an example of a transmitting device PROD_A / receiving device PROD_B that transmits and receives a modulated signal by wireless broadcasting.
- a broadcasting station (such as broadcasting equipment) / receiving station (such as a television receiver) of cable television broadcasting is an example of a transmitting device PROD_A / receiving device PROD_B that transmits and receives a modulated signal by cable broadcasting.
- A server (workstation, etc.) / client (television receiver, personal computer, smartphone, etc.) of a VOD (Video On Demand) service or a video sharing service using the Internet is an example of a transmission device PROD_A / reception device PROD_B that transmits and receives modulated signals by communication (usually, either a wireless or wired transmission medium is used in a LAN, and a wired transmission medium is used in a WAN).
- the personal computer includes a desktop PC, a laptop PC, and a tablet PC.
- the smartphone also includes a multi-function mobile phone terminal.
- the video sharing service client has a function of encoding a moving image captured by the camera and uploading it to the server. That is, the client of the video sharing service functions as both the transmission device PROD_A and the reception device PROD_B.
- FIG. 25A is a block diagram illustrating a configuration of a recording apparatus PROD_C in which the above-described hierarchical video encoding apparatus 2 is mounted.
- The recording device PROD_C includes an encoding unit PROD_C1 that obtains encoded data by encoding a moving image, and a writing unit that writes the encoded data obtained by the encoding unit PROD_C1 to the recording medium PROD_M.
- the hierarchical moving image encoding device 2 described above is used as the encoding unit PROD_C1.
- Note that the recording medium PROD_M may be (1) of a type built into the recording device PROD_C, such as an HDD (Hard Disk Drive) or SSD (Solid State Drive), (2) of a type connected to the recording device PROD_C, such as an SD memory card or USB (Universal Serial Bus) flash memory, or (3) loaded into a drive device (not shown) built into the recording device PROD_C, such as a DVD (Digital Versatile Disc) or BD (Blu-ray Disc: registered trademark).
- As supply sources of the moving image input to the encoding unit PROD_C1, the recording device PROD_C may further include a camera PROD_C3 that captures moving images, an input terminal PROD_C4 for inputting moving images from the outside, a reception unit PROD_C5 for receiving moving images, and an image processing unit C6 that generates or processes images.
- FIG. 25A illustrates a configuration in which the recording apparatus PROD_C includes all of these, but some of them may be omitted.
- Note that the reception unit PROD_C5 may receive an unencoded moving image, or may receive encoded data encoded with a transmission encoding scheme different from the recording encoding scheme. In the latter case, a transmission decoding unit (not shown) that decodes the encoded data encoded with the transmission encoding scheme may be interposed between the reception unit PROD_C5 and the encoding unit PROD_C1.
- Examples of such a recording device PROD_C include a DVD recorder, a BD recorder, and an HDD (Hard Disk Drive) recorder (in this case, the input terminal PROD_C4 or the reception unit PROD_C5 is the main supply source of moving images).
- A camcorder (in this case, the camera PROD_C3 is the main supply source of moving images), a personal computer (in this case, the reception unit PROD_C5 or the image processing unit C6 is the main supply source of moving images), and a smartphone (in this case, the camera PROD_C3 or the reception unit PROD_C5 is the main supply source of moving images) are also examples of such a recording device PROD_C.
- FIG. 25B is a block diagram showing a configuration of a playback device PROD_D in which the above-described hierarchical video decoding device 1 is mounted.
- The playback device PROD_D includes a reading unit PROD_D1 that reads encoded data written to the recording medium PROD_M, and a decoding unit PROD_D2 that obtains a moving image by decoding the encoded data read by the reading unit PROD_D1.
- the hierarchical moving image decoding apparatus 1 described above is used as the decoding unit PROD_D2.
- Note that the recording medium PROD_M may be (1) of a type built into the playback device PROD_D, such as an HDD or SSD, (2) of a type connected to the playback device PROD_D, such as an SD memory card or USB flash memory, or (3) loaded into a drive device (not shown) built into the playback device PROD_D, such as a DVD or BD.
- As supply destinations of the moving image output by the decoding unit PROD_D2, the playback device PROD_D may further include a display PROD_D3 that displays the moving image, an output terminal PROD_D4 for outputting the moving image to the outside, and a transmission unit PROD_D5 that transmits the moving image.
- FIG. 25B illustrates a configuration in which the playback apparatus PROD_D includes all of these, but a part of the configuration may be omitted.
- Note that the transmission unit PROD_D5 may transmit an unencoded moving image, or may transmit encoded data encoded with a transmission encoding scheme different from the recording encoding scheme. In the latter case, an encoding unit (not shown) that encodes the moving image with the transmission encoding scheme may be interposed between the decoding unit PROD_D2 and the transmission unit PROD_D5.
- Examples of such a playback device PROD_D include a DVD player, a BD player, and an HDD player (in this case, the output terminal PROD_D4 to which a television receiver or the like is connected is the main supply destination of moving images).
- A television receiver (in this case, the display PROD_D3 is the main supply destination of moving images), a digital signage (also referred to as an electronic signboard or electronic bulletin board; in this case, the display PROD_D3 or the transmission unit PROD_D5 is the main supply destination of moving images), a desktop PC (in this case, the output terminal PROD_D4 or the transmission unit PROD_D5 is the main supply destination of moving images), a laptop or tablet PC (in this case, the display PROD_D3 or the transmission unit PROD_D5 is the main supply destination of moving images), and a smartphone (in this case, the display PROD_D3 or the transmission unit PROD_D5 is the main supply destination of moving images) are also examples of such a playback device PROD_D.
- Each block of the hierarchical video decoding device 1 and the hierarchical video encoding device 2 may be realized in hardware by a logic circuit formed on an integrated circuit (IC chip), or may be realized in software using a CPU (Central Processing Unit).
- In the latter case, each of the devices includes a CPU that executes instructions of a control program realizing each function, a ROM (Read Only Memory) that stores the program, a RAM (Random Access Memory) into which the program is expanded, and a storage device (recording medium) such as a memory that stores the program and various data.
- The object of the present invention can also be achieved by supplying to each of the above devices a recording medium on which the program code (an executable program, an intermediate code program, or a source program) of the control program for each of the above devices, which is software realizing the above-described functions, is recorded in a computer-readable manner, and by the computer (or CPU or MPU (Micro Processing Unit)) reading and executing the program code recorded on the recording medium.
- Examples of the recording medium include tapes such as magnetic tapes and cassette tapes; disks including magnetic disks such as floppy (registered trademark) disks / hard disks and optical discs such as CD-ROM (Compact Disc Read-Only Memory) / MO (Magneto-Optical disc) / MD (Mini Disc) / DVD (Digital Versatile Disc) / CD-R (CD Recordable); cards such as IC cards (including memory cards) / optical cards; semiconductor memories such as mask ROM / EPROM (Erasable Programmable Read-Only Memory) / EEPROM (registered trademark) (Electrically Erasable Programmable Read-Only Memory) / flash ROM; and logic circuits such as PLDs (Programmable Logic Device) and FPGAs (Field Programmable Gate Array).
- each of the above devices may be configured to be connectable to a communication network, and the program code may be supplied via the communication network.
- the communication network is not particularly limited as long as it can transmit the program code.
- The Internet, an intranet, an extranet, a LAN (Local Area Network), an ISDN (Integrated Services Digital Network), a VAN (Value-Added Network), a CATV (Community Antenna Television) communication network, a virtual private network (Virtual Private Network), a telephone line network, a mobile communication network, a satellite communication network, and the like can be used.
- the transmission medium constituting the communication network may be any medium that can transmit the program code, and is not limited to a specific configuration or type.
- For example, wired media such as IEEE (Institute of Electrical and Electronic Engineers) 1394, USB, power line carrier, cable TV lines, telephone lines, and ADSL (Asymmetric Digital Subscriber Line) lines, and wireless media such as infrared (e.g., IrDA (Infrared Data Association) and remote control), Bluetooth (registered trademark), IEEE 802.11 wireless, HDR (High Data Rate), NFC (Near Field Communication), DLNA (Digital Living Network Alliance: registered trademark), mobile phone networks, satellite lines, and terrestrial digital networks can also be used.
- the present invention can also be realized in the form of a computer data signal embedded in a carrier wave in which the program code is embodied by electronic transmission.
- 1 Hierarchical video decoding device (image decoding device)
- 11 NAL demultiplexing unit
- 12 Parameter set decoding unit
- 13 Tile setting unit
- 14 Slice decoding unit
- 141 Slice header decoding unit
- 142 Slice position setting unit
- 144 CTU decoding unit
- 1441 Prediction residual restoration unit
- 1442 Predicted image generation unit
- 1443 CTU decoded image generation unit
- 15 Base decoding unit
- 151 Base NAL demultiplexing unit
- 152 Base parameter set decoding unit
- 153 Base tile setting unit
- 154 Base slice decoding unit
- 156 Base decoded picture management unit
- 16 Decoded picture management unit
- 2 Hierarchical video encoding device (image encoding device)
- 21 NAL multiplexing unit
- 22 Parameter set encoding unit
- 23 Tile setting unit
- 24 Slice encoding unit
- 241 Slice header setting unit
- 242 Slice position setting unit
- 244 CTU encoding unit
- 2441 Prediction residual encoding unit
- 2442 Predicted image encoding unit
- 3 Hierarchical encoded data conversion device (encoded data conversion device)
- 32 Parameter set correction unit
- 34 NAL selection unit
Abstract
Description
The hierarchical video decoding device (image decoding device) 1 according to the present embodiment decodes encoded data hierarchically encoded by the hierarchical video encoding device (image encoding device) 2. Hierarchical coding is a coding scheme that encodes a moving image hierarchically, from low quality to high quality. Hierarchical coding is standardized in, for example, SVC and SHVC. Note that the quality of a moving image here broadly means any element that affects the subjective or objective appearance of the moving image. The quality of a moving image includes, for example, "resolution", "frame rate", "image quality", and "pixel representation accuracy". Accordingly, in the following, a difference in moving image quality illustratively refers to a difference in "resolution" or the like, but is not limited thereto. For example, moving images quantized with different quantization steps (that is, moving images encoded with different coding noise) can also be said to differ in quality.
Encoding and decoding of hierarchically encoded data are now described with reference to FIG. 2. FIG. 2 schematically illustrates a case where a moving image is hierarchically encoded/decoded in three layers: a lower layer L3, a middle layer L2, and an upper layer L1. That is, in the examples shown in FIGS. 2(a) and 2(b), among the three layers, the upper layer L1 is the highest layer and the lower layer L3 is the lowest layer.
In the following, the case where HEVC and its extensions are used as the coding scheme for generating the encoded data of each layer is described as an example. However, the invention is not limited to this, and the encoded data of each layer may be generated with a coding scheme such as MPEG-2 or H.264/AVC.
FIG. 3 illustrates the data structure of encoded data that can be adopted in the base layer (in the example of FIG. 2, the hierarchically encoded data DATA#C). The hierarchically encoded data DATA#C illustratively includes a sequence and a plurality of pictures constituting the sequence.
The sequence layer defines the set of data referred to by the hierarchical video decoding device 1 in order to decode the sequence SEQ to be processed (hereinafter also called the target sequence). As shown in FIG. 3(a), the sequence SEQ includes a video parameter set VPS (Video Parameter Set), a sequence parameter set SPS (Sequence Parameter Set), a picture parameter set PPS (Picture Parameter Set), pictures PICT1 to PICTNP (NP is the total number of pictures included in the sequence SEQ), and supplemental enhancement information SEI (Supplemental Enhancement Information).
The picture layer defines the set of data referred to by the hierarchical video decoding device 1 in order to decode the picture PICT to be processed (hereinafter also called the target picture). As shown in FIG. 3(b), the picture PICT includes slice headers SH1 to SHNS and slices S1 to SNS (NS is the total number of slices included in the picture PICT).
The slice layer defines the set of data referred to by the hierarchical video decoding device 1 in order to decode the slice S to be processed (also called the target slice). As shown in FIG. 3(c), the slice S includes coding tree units CTU1 to CTUNC (NC is the total number of CTUs included in the slice S).
The CTU layer defines the set of data referred to by the hierarchical video decoding device 1 in order to decode the coding tree unit CTU to be processed (hereinafter also called the target CTU). Note that a coding tree unit is also called a coding tree block (CTB: Coding Tree Block) or a largest coding unit (LCU: Largest Coding Unit).
The CTU header CTUH includes the coding parameters referred to by the hierarchical video decoding device 1 in order to determine the decoding method of the target CTU. Specifically, as shown in FIG. 3(d), it includes CTU split information SP_CTU that specifies the pattern for splitting the target CTU into CUs, and a quantization parameter difference Δqp (qp_delta) that specifies the size of the quantization step.
The CU layer defines the set of data referred to by the hierarchical video decoding device 1 in order to decode the CU to be processed (hereinafter also called the target CU).
Next, the specific content of the data included in the CU information CU is described with reference to FIG. 3(e). As shown in FIG. 3(e), the CU information CU specifically includes a skip flag SKIP, prediction tree information (hereinafter abbreviated as PT information) PTI, and transform tree information (hereinafter abbreviated as TT information) TTI.
The PT information PTI is information on the prediction tree (hereinafter abbreviated as PT) included in the CU. In other words, the PT information PTI is a set of information on each of the one or more PUs included in the PT, and is referred to when the hierarchical video decoding device 1 generates a predicted image. As shown in FIG. 3(e), the PT information PTI includes prediction type information PType and prediction information PInfo.
The TT information TTI is information on the transform tree (hereinafter abbreviated as TT) included in the CU. In other words, the TT information TTI is a set of information on each of the one or more transform blocks included in the TT, and is referred to when the hierarchical video decoding device 1 decodes the residual data.
Process 2: quantize the transform coefficients obtained in Process 1;
Process 3: variable-length encode the transform coefficients quantized in Process 2;
Note that the quantization parameter qp described above represents the size of the quantization step QP used when the hierarchical video encoding device 2 quantizes the transform coefficients (QP = 2^(qp/6)).
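As a quick illustration of the relation QP = 2^(qp/6) (a sketch, not part of the encoder itself): the quantization step doubles each time qp increases by 6.

```python
def quant_step(qp):
    """Quantization step size QP corresponding to quantization parameter qp."""
    return 2.0 ** (qp / 6.0)

# The step doubles when qp increases by 6.
assert abs(quant_step(10) / quant_step(4) - 2.0) < 1e-12
```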
The PU split types specified by the PU split information comprise the following eight patterns in total, where the size of the target CU is 2N×2N pixels: the four symmetric splittings of 2N×2N pixels, 2N×N pixels, N×2N pixels, and N×N pixels, and the four asymmetric splittings of 2N×nU pixels, 2N×nD pixels, nL×2N pixels, and nR×2N pixels. Here N = 2^m (m is an arbitrary integer of 1 or more). In the following, a prediction unit obtained by splitting the target CU is called a prediction block or a partition.
For the encoded data included in the layer representation of an enhancement layer (hereinafter, enhancement layer encoded data), a data structure substantially similar to, for example, the data structure shown in FIG. 3 can be adopted. In enhancement layer encoded data, however, additional information can be added and parameters can be omitted, as described below.
Next, pictures, tiles, and slices, which are concepts important to the present invention, are described with reference to FIG. 4 in terms of their mutual relationship and their relationship to the encoded data. FIG. 4 illustrates the relationship between pictures and tiles/slices in hierarchically encoded data. A tile is associated with a rectangular partial region in a picture and with the encoded data relating to that partial region. A slice is associated with a partial region in a picture and with the encoded data relating to that partial region, that is, the slice header and slice data relating to the partial region.
Prior to the description of the hierarchical video decoding device 1, the hierarchical video encoding device 2, and the encoded data conversion device 3 according to the present embodiment, an example of a system that can be realized by combining them is described with reference to FIG. 5. FIG. 5 illustrates a system SYS_ROI1 for transmitting and reproducing hierarchical moving images that can be realized by combining the hierarchical video decoding device 1, the hierarchical video encoding device 2, and the encoded data conversion device 3.
In the following, the configuration of the hierarchical video decoding device 1 according to the present embodiment is described with reference to FIGS. 1 to 15.
The schematic configuration of the hierarchical video decoding device 1 is described with reference to FIG. 6. FIG. 6 is a functional block diagram showing the schematic configuration of the hierarchical video decoding device 1. The hierarchical video decoding device 1 decodes the hierarchically encoded data DATA (the hierarchically encoded data DATAF provided by the hierarchical video encoding device 2, or the hierarchically encoded data DATAR provided by the encoded data conversion device 3) to generate a decoded image POUT#T of the target layer. In the following description, the target layer is assumed to be an enhancement layer whose reference layer is the base layer. The target layer is therefore also an upper layer with respect to the reference layer; conversely, the reference layer is also a lower layer with respect to the target layer.
The parameter set decoding unit 12 decodes the parameter sets (VPS, SPS, PPS) used for decoding the target layer from the input encoded data of the target layer, and outputs them. In general, a parameter set is decoded based on a predefined syntax table. That is, a bit string is read from the encoded data according to the procedure defined by the syntax table, and the syntax values of the syntax elements included in the syntax table are decoded. Variables derived from the decoded syntax values may also be derived as necessary and included in the output parameter set. Therefore, the parameter set output from the parameter set decoding unit 12 can also be described as the set of the syntax values of the syntax elements relating to the parameter sets (VPS, SPS, PPS) included in the encoded data, together with the variables derived from those syntax values.
The parameter set decoding unit 12 decodes picture information from the input target layer encoded data. Roughly, the picture information is information that determines the size of the decoded picture of the target layer. For example, the picture information includes information representing the width and height of the decoded picture of the target layer.
The parameter set decoding unit 12 decodes display region information from the input target layer encoded data. The display region information is included, for example, in the SPS. The display region information decoded from the SPS includes a display region flag (conformance_flag). The display region flag indicates whether information representing the position of the display region (display region position information) is additionally included in the SPS. That is, a display region flag equal to 1 indicates that display region position information is additionally included, and a display region flag equal to 0 indicates that it is not.
The parameter set decoding unit 12 decodes inter-layer position correspondence information from the input target layer encoded data. Roughly, the inter-layer position correspondence information indicates the positional relationship between corresponding regions of the target layer and the reference layer. For example, when a certain object (object A) appears in both the target layer picture and the reference layer picture, the region corresponding to object A on the target layer picture and the region corresponding to object A on the reference layer picture correspond to the corresponding regions of the target layer and the reference layer. The inter-layer position correspondence information does not necessarily have to indicate the positional relationship between these corresponding regions exactly, but in general it does indicate the exact positional relationship between the corresponding regions of the target layer and the reference layer in order to improve the accuracy of inter-layer prediction.
The inter-layer pixel correspondence information is included, for example, in the SPS extension (sps_extension), which is a part of the SPS of the upper layer, and is decoded according to the syntax table shown in FIG. 9. FIG. 9 shows a part of the syntax table referred to by the parameter set decoding unit 12 when decoding the SPS, namely the part relating to the inter-layer pixel correspondence information.
The parameter set decoding unit 12 decodes scale correction information from the input target layer encoded data. The scale correction information is included, for example, in the SPS extension, and is decoded according to the syntax table shown in FIG. 11. FIG. 11 shows a part of the syntax table referred to by the parameter set decoding unit 12 when decoding the SPS, namely the part relating to the scale correction information.
・A0: Number of scale correction information entries (num_scale_adjust_info)
In addition, the SPS includes as many sets of the syntax elements (A1, A2L, A2T, A2R, A2B, A3W, A3H) as indicated by the number of scale correction information entries.
・A1: Scale correction reference layer identifier (scale_adjust_ref_layer_id[i])
・A2L: Extended reference layer left additional offset (srl_add_left_offset[i])
・A2T: Extended reference layer top additional offset (srl_add_top_offset[i])
・A2R: Extended reference layer right additional offset (srl_add_right_offset[i])
・A2B: Extended reference layer bottom additional offset (srl_add_bottom_offset[i])
・A3W: Virtual reference layer width (rl_virtual_width[i])
・A3H: Virtual reference layer height (rl_virtual_height[i])
Note that A2L, A2T, A2R, and A2B are collectively referred to as the extended reference layer additional offsets. A2L and A2T are collectively referred to as the extended reference layer upper left additional offset, and A2R and A2B are collectively referred to as the extended reference layer lower right additional offset. Similarly, A3W and A3H are collectively referred to as the virtual reference layer size. Restated with these definitions, the scale correction information contains as many extended reference layer additional offsets and virtual reference layer sizes as indicated by the number of scale correction information entries.
The tile setting unit 13 derives and outputs the tile information of the picture based on the input parameter set.
FIG. 13 shows a part of the syntax table referred to by the parameter set decoding unit 12 when decoding the PPS included in the parameter set, namely the part relating to tile information.
When the value of uniform_spacing_flag is 1, it indicates that the tile sizes in the picture are uniform, that is, that the width and height of each tile are equal. When the value of uniform_spacing_flag is 0, it indicates that the tile sizes in the picture are non-uniform, that is, that the widths and heights of the tiles in the picture do not necessarily match.
That is, ColWidth[i], the width in CTU units of the i-th tile column, is calculated as the difference between the (i+1)-th and i-th boundary positions obtained by dividing the picture equally by the number of tile columns.
rowBd[j+1] = rowBd[j] + rowHeight[j]
Next, the tile-scan CTU address associated with the CTU identified by a raster-scan CTU address (ctbAddrRs) in the picture is derived by the following procedure.
tbY = ctbAddrRs / PicWidthInCtbsY
Next, the position (tileX, tileY), in tile units, of the tile containing the target CTU within the picture is derived. tileX is set to the largest value of i for which the expression (tbX >= colBd[i]) is true. Similarly, tileY is set to the largest value of j for which the expression (tbY >= rowBd[j]) is true.
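The uniform-spacing column width derivation and the boundary search above can be sketched as follows (a minimal illustration with hypothetical helper names; only the column direction is shown, and the row direction is symmetric):

```python
def uniform_col_widths(pic_width_in_ctbs_y, num_tile_columns):
    """ColWidth[i] as the difference of successive equal-split boundaries."""
    return [((i + 1) * pic_width_in_ctbs_y) // num_tile_columns
            - (i * pic_width_in_ctbs_y) // num_tile_columns
            for i in range(num_tile_columns)]

def tile_index(tb_x, col_bd):
    """Largest i with tb_x >= colBd[i] (colBd is non-decreasing)."""
    tile_x = 0
    for i, bd in enumerate(col_bd):
        if tb_x >= bd:
            tile_x = i
    return tile_x

# Example: 10 CTU columns split into 3 tile columns (boundaries 0, 3, 6, 10).
widths = uniform_col_widths(10, 3)
col_bd = [0]
for w in widths:
    col_bd.append(col_bd[-1] + w)  # cumulative boundaries colBd[i]
```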
(Slice decoding unit 14)
The slice decoding unit 14 generates and outputs a decoded picture based on the input VCL NAL, parameter set, and tile information.
The slice header decoding unit 141 decodes the slice header based on the input VCL NAL and parameter set, and outputs it to the slice position setting unit 142, the skip slice determination unit 143, and the CTU decoding unit 144.
The slice position setting unit 142 identifies the slice position within the picture based on the input slice header and tile information, and outputs it to the CTU decoding unit 144. The slice position within the picture derived by the slice position setting unit 142 includes the position within the picture of each CTU included in the slice.
Roughly speaking, the CTU decoding unit 144 generates the decoded image of the slice by decoding the decoded image of the region corresponding to each CTU included in the slice, based on the input slice header, slice data, and parameter set. The decoded image of the slice is output as a part of the decoded picture at the position indicated by the input slice position. The decoded image of a CTU is generated by the prediction residual restoration unit 1441, the predicted image generation unit 1442, and the CTU decoded image generation unit 1443 inside the CTU decoding unit 144. The prediction residual restoration unit 1441 decodes the prediction residual information (TT information) included in the input slice data, and generates and outputs the prediction residual of the target CTU. The predicted image generation unit 1442 generates and outputs a predicted image based on the prediction method and prediction parameters indicated by the prediction information (PT information) included in the input slice data. At that time, the decoded image of a reference picture and coding parameters are used as necessary. The CTU decoded image generation unit 1443 adds the input predicted image and prediction residual to generate and output the decoded image of the target CTU.
前述の予測画像生成部1442による予測画像生成処理のうち、レイヤ間画像予測が選択された場合の予測画像生成処理の詳細を説明する。
VSRLH = currPicH - SRLTOffset + SRLAddTOffset - SRLBOffset + SRLAddBOffset
Here, currPicW and currPicH are the width and height of the target picture; when the target of the corresponding-reference-position derivation processing is a luma pixel, they match the syntax values pic_width_in_luma_samples and pic_height_in_luma_samples contained in the picture information of the SPS for the target layer. When the target is a chroma pixel, values converted from those syntax values according to the color format are used; for example, when the color format is 4:2:0, half of each syntax value is used. Likewise, refPicW and refPicH are the width and height of the reference picture; when the target is a luma pixel, they match the syntax values pic_width_in_luma_samples and pic_height_in_luma_samples contained in the picture information of the SPS for the reference layer.
scaleY = rlVirtualH / VSRLH
Here, rlVirtualW is the virtual reference layer width and rlVirtualH is the virtual reference layer height, both contained in the scale adjustment information decoded by the parameter set decoding unit 12. That is, the variable rlVirtualW is set to the value of rl_virtual_width[i], and the variable rlVirtualH to the value of rl_virtual_height[i].
scaleIntY = ((rlVirtualH << 16) + (VSRLH >> 1)) / VSRLH
Here, the operator "/" denotes the quotient of integer division. The left shift by 16 ("<< 16") maintains the precision of the integer representation of the derived inter-layer scale; the shift amount need not be 16 and may be a left shift by any other positive integer.
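The fixed-point scale derivation just described can be illustrated as below. The helper name is hypothetical, but the arithmetic (a 16-bit left shift, half-divisor rounding, then integer division) follows the equation above.

```python
def inter_layer_scale(rl_virtual, vsrl):
    """Integer approximation of rl_virtual / vsrl with 16 fractional bits,
    rounded to nearest by adding half the divisor before dividing."""
    return ((rl_virtual << 16) + (vsrl >> 1)) // vsrl

# 2x spatial scalability: reference height 1080, scaled region height 2160,
# so the ratio 0.5 becomes 1 << 15 in 16-bit fixed point
scaleIntY = inter_layer_scale(1080, 2160)
```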
yRef = (yP - SRLTOffset) * scaleY
Here, xP and yP are the horizontal and vertical components, respectively, of the target-layer pixel relative to the top-left pixel of the target-layer picture, expressed in pixel units of the target-layer picture.
yRef16 = ((yP - SRLTOffset) * scaleIntY + (1 << 11) ) >> 12
When the target is a chroma pixel, a correction accounting for the phase difference between luma and chroma may also be applied.
In the processing of the predicted image generation unit 1442 of the hierarchical video decoding device 1 above, and in the processing described in Modification 1 and Modification 2, the reference position is, broadly, derived by multiplying a distance on the enhancement layer (target layer) by the inter-layer scale and adjusting the units. When there is a sub-integer-pixel phase difference between a given reference-layer pixel and a target-layer pixel, the reference position may be computed with an additional parameter that corrects for that phase difference. For example, the 1/16-pixel-accuracy reference position given in the description of the predicted image generation unit 1442 can be computed by the following equation.
yRef16 = ((yP - SRLTOffset) * scaleIntY + addY + (1 << 11) ) >> 12
Here, addX and addY are the amounts corresponding to the phase shifts in the horizontal and vertical directions, respectively, and are computed by the following equation using phaseX, the horizontal phase difference expressed in quarter-pixel accuracy, and likewise phaseY, the vertical phase difference.
addY = ( ScaleFactorY * phaseY + 2 ) >> 2
When there is a phase difference between the target layer and the reference layer, deriving the reference position by the above procedure yields an accurate reference position.
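The 1/16-pel reference-position derivation with phase correction can be sketched as follows; the function and parameter names are assumptions. scaleIntY carries 16 fractional bits, so shifting right by 12 leaves 4 fractional bits (sixteenth-pel units), and adding (1 << 11) rounds to nearest.

```python
def ref_pos_16(yP, srl_t_offset, scale_int_y, phase_y=0):
    """1/16-pel vertical reference position for target-layer row yP,
    with an optional phase correction given in quarter-pel units."""
    addY = (scale_int_y * phase_y + 2) >> 2   # quarter-pel phase -> fixed point
    return ((yP - srl_t_offset) * scale_int_y + addY + (1 << 11)) >> 12

# identity scale (1 << 16), no offset: row 5 maps to 5 * 16 sixteenth-pel units
pos = ref_pos_16(5, 0, 1 << 16)
```

With a half-pel phase (phase_y = 2 quarter-pels) the result shifts by 8 sixteenth-pel units, i.e. half a pixel.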
The hierarchical video decoding device 1 (hierarchical image decoding device) according to the present embodiment described above includes the parameter set decoding unit 12, which decodes parameter sets, and the predicted image generation unit 1442, which generates a predicted image by inter-layer prediction with reference to decoded pixels of a reference layer picture. The parameter set decoding unit 12 decodes scale adjustment information, and the predicted image generation unit 1442 derives, using the inter-layer scale value derived from that scale adjustment information, the corresponding reference position for a pixel on the target layer.
The description of the hierarchical video decoding device 1 above explained an example of computing the inter-layer scale based on the following parameters contained in the scale adjustment information.
- A2TL: scaled reference layer top-left additional offset
- A2BR: scaled reference layer bottom-right additional offset
- A3WH: virtual reference layer size
Parameters different from the above may instead be decoded from the parameter set as scale adjustment information and used to compute the inter-layer scale. For example, the following parameters can be used.
- A4S: alternative inter-layer scale
- A4Sd: alternative inter-layer scale difference
- A5WH: virtual reference layer corresponding-region size
- A5WHd: virtual reference layer corresponding-region size difference
- A3WHd: virtual reference layer size difference
The details of each parameter are described below.
The parameter set decoding unit 12 decodes the scale adjustment information from the input target-layer coded data. The scale adjustment information is contained, for example, in the SPS extension and is decoded according to the syntax table shown in FIG. 16. FIG. 16 shows the part of the syntax table, referenced by the parameter set decoding unit 12 when decoding the SPS, that relates to the scale adjustment information.
scaleY = alt_scale_y[i]
Here, with ref_layer_id denoting the layer identifier of the reference layer, i satisfies ref_layer_id = scale_adjust_ref_layer_id[i].
scaleY = ((rlH << 16) + (srlH >> 1)) / srlH
Here, rlW and rlH denote the width and height of the reference layer picture, respectively, and srlW and srlH denote the width and height of the reference layer corresponding region, respectively.
scaleY = alt_scale_y[i] + minScale
Decoding from the SPS an alternative inter-layer scale from which the value of minScale has been subtracted, as above, narrows the value range of the alternative inter-layer scale and therefore reduces its code amount.
The parameter set decoding unit 12 decodes the scale adjustment information from the input target-layer coded data. The scale adjustment information is contained, for example, in the SPS extension and is decoded according to the syntax table shown in FIG. 17. FIG. 17 shows the part of the syntax table, referenced by the parameter set decoding unit 12 when decoding the SPS, that relates to the scale adjustment information.
scaleY = (((rlH << 16) + (srlH >> 1)) / srlH) + alt_scale_y_diff[i]
That is, the inter-layer scale is set to the sum of the ratio of the reference layer size to the reference layer corresponding-region size and the alternative inter-layer scale difference.
scaleY = (((rlH << 16) + (srlH >> 1)) / srlH) + (alt_scale_y_diff_present_flag[i] ? 1 - alt_scale_y_diff_sign_flag[i]*2 : 0)
"?" is the conditional operator: the value of "a ? b : c" is b if a is true and c if a is false.
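The ±1 scale-difference signalling just described can be sketched as follows (function name assumed): when the present flag is set, the sign flag selects +1 or -1, and otherwise the difference is 0 and the base ratio is used unchanged.

```python
def scale_with_unit_diff(rlH, srlH, present_flag, sign_flag):
    """Base ratio with 16 fractional bits, adjusted by +1/-1 when signalled."""
    base = ((rlH << 16) + (srlH >> 1)) // srlH
    diff = (1 - sign_flag * 2) if present_flag else 0
    return base + diff

# equal sizes give the identity scale 1 << 16; the flags nudge it by one unit
s0 = scale_with_unit_diff(720, 720, 0, 0)
s_plus = scale_with_unit_diff(720, 720, 1, 0)
s_minus = scale_with_unit_diff(720, 720, 1, 1)
```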
The virtual reference layer corresponding-region direct size (A5WH) is a parameter used to compute the virtual reference layer corresponding-region size used for inter-layer scale derivation. Broadly, the virtual reference layer corresponding-region direct size is used directly as the value of the virtual reference layer corresponding-region size. The value of the inter-layer scale is computed, by the same method as described for the predicted image generation unit 1442, as the ratio of the virtual reference layer size to the virtual reference layer corresponding-region size.
VSRLH = srl_virtual_height[i]
If the parameter set contains no virtual reference layer corresponding-region direct size for the reference layer used in the inter-layer prediction image generation processing, the value of the reference layer corresponding-region size is used as the value of the virtual reference layer corresponding-region direct size.
The virtual reference layer corresponding-region difference size (A5WHd) is a parameter used to compute the virtual reference layer corresponding-region size used for inter-layer scale derivation. Broadly, the virtual reference layer corresponding-region size is computed by adding the virtual reference layer corresponding-region difference size to the reference layer corresponding-region size. The value of the inter-layer scale is computed, by the same method as described for the predicted image generation unit 1442, as the ratio of the virtual reference layer size to the virtual reference layer corresponding-region size.
VSRLH = currPicH - SRLTOffset - SRLBOffset + srl_virtual_height_diff[i]
If the parameter set contains no virtual reference layer corresponding-region difference size for the reference layer used in the inter-layer prediction image generation processing, both the width and height of the virtual reference layer corresponding-region difference are set to 0. In that case, the virtual reference layer corresponding-region size derived by the above equation matches the reference layer corresponding-region size.
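The derivation above, including the default of 0 when the difference syntax element is absent, can be sketched as follows; the function and parameter names are illustrative only.

```python
def vsrl_height(cur_pic_h, srl_t_offset, srl_b_offset, virtual_diff=0):
    """Virtual reference layer corresponding-region height: the plain
    corresponding-region height plus the signalled difference, which
    defaults to 0 when absent."""
    return cur_pic_h - srl_t_offset - srl_b_offset + virtual_diff

# with no difference the result equals the corresponding-region height
h_default = vsrl_height(1080, 8, 8)
h_adjusted = vsrl_height(1080, 8, 8, virtual_diff=16)
```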
The virtual reference layer size difference (A3WHd) is a parameter used to compute the virtual reference layer size used for inter-layer scale derivation. Broadly, the virtual reference layer size is computed by adding the virtual reference layer size difference to the reference layer size. The value of the inter-layer scale is computed, by the same method as described for the predicted image generation unit 1442, as the ratio of the virtual reference layer size to the virtual reference layer corresponding-region size.
rlVirtualH = refPicH + rl_virtual_height_diff[i]
Here, refPicW and refPicH correspond to the width and height, respectively, of the reference layer picture size used in the prediction image generation processing.
Having described the various parameters for inter-layer scale derivation contained in the scale adjustment information, suitable combinations of those parameters are listed here.
- A2TL: scaled reference layer top-left additional offset
- A2BR: scaled reference layer bottom-right additional offset
- A3WH: virtual reference layer size
- A3WHd: virtual reference layer size difference
- A4S: alternative inter-layer scale
- A4Sd: alternative inter-layer scale difference
- A5WH: virtual reference layer corresponding-region size
- A5WHd: virtual reference layer corresponding-region size difference
Next, using the above symbols, suitable combinations of parameters constituting the scale adjustment information are listed.
- Comb1: A2TL, A2BR, A3WH
- Comb2: A2TL, A2BR, A3WHd
- Comb3: A5WH, A3WH
- Comb4: A5WH, A3WHd
- Comb5: A5WHd, A3WH
- Comb6: A5WHd, A3WHd
- Comb7: A4S
- Comb8: A4Sd
Including scale adjustment information containing any of the above parameter combinations in the parameter set realizes the function of maintaining the inter-layer scale before and after transformation of the hierarchically coded data.
For the predicted image generation unit 1442 of the hierarchical video decoding device 1, the method of deriving the corresponding reference position using the scale adjustment information contained in the parameter set was described. Here, a method is described that additionally uses inter-layer phase correspondence information, transmitted in order to correct the positional relationship between upper-layer pixels and lower-layer pixels in sub-integer-pixel units, to derive the corresponding reference position.
xRefOrg16 = ((xP - SRLLOffset + srl_add_left_offset[i]) * scaleIntX + (1 << 11)) >> 12
yRefOrg16 = ((yP - SRLTOffset + srl_add_top_offset[i]) * scaleIntY + (1 << 11)) >> 12
According to the above equations, the horizontal component of the corresponding reference position on the virtual reference layer picture (xRefOrg16) is set to the horizontal position of the target pixel on the target layer relative to the top-left pixel of the virtual reference layer corresponding region (xP - SRLLOffset + srl_add_left_offset[i]), multiplied by the horizontal component of the inter-layer scale (scaleIntX) and adjusted to 1/16-pixel accuracy. The vertical component is derived likewise.
xRefOffset = (srl_add_left_offset[i] * scaleIntX + (1 << 15)) >> 16
yRefOffset = (srl_add_top_offset[i] * scaleIntY + (1 << 15)) >> 16
According to the above equations, the horizontal component of the reference layer picture top-left pixel position (xRefOffset) is set to the value of the scaled reference layer left additional offset (srl_add_left_offset[i]), multiplied by the horizontal component of the inter-layer scale (scaleIntX) and adjusted to one-pixel accuracy. The vertical component is derived likewise.
xRef16 = xRefOrg16 - (xRefOffset << 4)
yRef16 = yRefOrg16 - (yRefOffset << 4)
This completes the derivation processing of the corresponding reference position.
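The derivation just completed can be sketched end to end as follows (vertical component only; names assumed). A 1/16-pel position relative to the virtual reference layer is computed first, then the integer-pel position of the actual reference picture's top-left corner is converted to 1/16-pel units (the << 4) and subtracted.

```python
def corresponding_ref_pos(yP, srl_t_offset, add_t_offset, scale_int_y):
    # 1/16-pel position relative to the virtual corresponding region
    y_ref_org16 = ((yP - srl_t_offset + add_t_offset) * scale_int_y
                   + (1 << 11)) >> 12
    # integer-pel position of the reference-picture top-left pixel
    y_ref_offset = (add_t_offset * scale_int_y + (1 << 15)) >> 16
    # shift back so the position is relative to the actual reference picture
    return y_ref_org16 - (y_ref_offset << 4)

# with identity scale, the additional offset cancels out of the final position
pos = corresponding_ref_pos(5, 0, 4, 1 << 16)
```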
The schematic configuration of the hierarchical video coding device 2 is described with reference to FIG. 19. FIG. 19 is a functional block diagram showing the schematic configuration of the hierarchical video coding device 2. The hierarchical video coding device 2 codes the target-layer input image PIN#T while referring to the reference-layer coded data DATA#R, and generates the hierarchically coded data DATA of the target layer. The reference-layer coded data DATA#R is assumed to have already been coded by the hierarchical video coding device corresponding to the reference layer.
Next, the configuration of the slice coding unit 24 is described in detail with reference to FIG. 20. FIG. 20 is a functional block diagram showing the schematic configuration of the slice coding unit 24.
The hierarchical video coding device 2 (hierarchical image coding device) according to the present embodiment described above includes the parameter set coding unit 22, which codes parameter sets, and the predicted image coding unit 2442, which generates a predicted image by inter-layer prediction with reference to decoded pixels of a reference layer picture. The parameter set coding unit 22 codes scale adjustment information, and the predicted image coding unit 2442 derives, using the inter-layer scale value derived from that scale adjustment information, the corresponding reference position for a pixel on the target layer.
The schematic configuration of the hierarchically coded data transformation device 3 is described with reference to FIG. 21. FIG. 21 is a functional block diagram showing the schematic configuration of the hierarchically coded data transformation device 3. The hierarchically coded data transformation device 3 transforms the input hierarchically coded data DATA and generates hierarchically coded data DATA-ROI associated with the input region-of-interest information. The hierarchically coded data DATA is hierarchically coded data generated by the hierarchical video coding device 2. By feeding the hierarchically coded data DATA-ROI to the hierarchical video decoding device 1, the upper-layer video associated with the region-of-interest information can be reproduced.
First, an outline of the parameter set modification performed in the transformation processing by the hierarchically coded data transformation device 3 is described with reference to FIG. 22. FIG. 22 illustrates the relationship among pictures, the region of interest, and tiles in the hierarchically coded data before and after transformation. FIG. 22 shows the relationship between pictures before and after transformation when hierarchically coded data consisting of two layers, an enhancement layer and a base layer (pre-transformation hierarchically coded data), is transformed to generate hierarchically coded data containing the region of interest (post-transformation hierarchically coded data). The enhancement layer of the pre-transformation hierarchically coded data corresponds to the pre-transformation EL picture, and its base layer to the pre-transformation BL picture. Likewise, the enhancement layer of the post-transformation hierarchically coded data corresponds to the post-transformation EL picture, and its base layer to the post-transformation BL picture.
The parameter set modification unit 32 refers to the input region-of-interest information and tile information, and updates the PPS tile information of the enhancement layer so that it contains only the tiles whose corresponding regions partially overlap the region of interest (extraction target tiles). The PPS tile information of the enhancement layer is updated based on the information of the extraction target tiles. First, when there is exactly one extraction target tile, tiles_enabled_flag is modified to 0; when there are two or more extraction target tiles, this modification can be omitted. Next, num_tile_columns_minus1, which represents the number of tile columns, and num_tile_rows_minus1, which represents the number of tile rows, are modified based on the numbers of extraction target tiles contained in the horizontal and vertical directions of the picture. Next, when the tile sizes are non-uniform (uniform_spacing_flag is 0), the bit strings corresponding to the syntax for the widths of tile columns containing no extraction target tile and the heights of tile rows containing no extraction target tile are removed from the parameter set.
The parameter set modification unit 32 modifies the picture information so that the region corresponding to the set of extraction target tiles of the enhancement layer becomes the post-transformation EL picture size. The width and height of the post-transformation EL picture are set as the values of pic_width_in_luma_samples and pic_height_in_luma_samples, respectively, of the enhancement layer SPS.
The parameter set modification unit 32 modifies the inter-layer pixel correspondence information contained in the parameter set to reflect the change in picture size. Specifically, all inter-layer pixel correspondence offsets contained in the inter-layer pixel correspondence information are modified. The scaled reference layer left offset (scaled_ref_layer_left_offset[i]) constituting an inter-layer pixel correspondence offset is set to a value corresponding to the distance between the top-left pixel of the post-transformation reference layer corresponding region and the top-left pixel of the post-transformation EL picture. For example, when the post-transformation reference layer corresponding region contains the post-transformation EL picture, the scaled reference layer left offset is set to a negative value; when the offset is expressed in two-pixel units, the scaled reference layer left offset is set to the negative integer whose absolute value is that distance expressed in two-pixel units. The scaled reference layer offsets corresponding to the top, right, and bottom are set likewise.
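The sign-and-unit convention above can be sketched as follows; this is an illustrative helper (name assumed) for the case, stated in the text, where the offset syntax element is coded in two-pixel units and becomes negative when the corresponding region contains the EL picture.

```python
def offset_syntax_value(distance_px, region_contains_picture):
    """Distance in pixels -> offset value in two-pixel units, negated when
    the post-transformation corresponding region contains the EL picture."""
    units = distance_px // 2
    return -units if region_contains_picture else units

# a 16-pixel distance with the region containing the picture codes as -8
v = offset_syntax_value(16, True)
```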
The parameter set modification unit 32 modifies the scale adjustment information contained in the parameter set to reflect the change in picture size. Broadly, the scale adjustment information is updated so that the inter-layer scales derived when decoding the hierarchically coded data before and after transformation match.
The parameter set modification unit 32 rewrites the display region information of the SPS contained in the input parameter set so that it matches the region of interest indicated by the input region-of-interest information. The display region information is rewritten, for example, by the following steps S301 to S303.
The hierarchically coded data transformation processing by the hierarchically coded data transformation device 3 is realized by sequentially executing the steps shown in S501 to S506.
The hierarchically coded data transformation device 3 according to the present embodiment described above includes the NAL selection unit 34, which modifies part of the video-layer coded data (VCL NAL) contained in the coded data of the target layer (upper layer) based on the region-of-interest information, and the parameter set modification unit 32. Based on the region of interest indicated by the region-of-interest information, the NAL selection unit 34 selects tiles having regions overlapping the region of interest as extraction target tiles, and the video-layer coded data of the slices contained in the selected extraction target tiles is included in the post-transformation hierarchically coded data. The parameter set modification unit 32 modifies the picture information, the PPS tile information, the display information, the inter-layer pixel correspondence information, and the scale adjustment information based on the region-of-interest information and the tile information.
The hierarchical video decoding device 1, hierarchical video coding device 2, and hierarchically coded data transformation device 3 described above can be combined to construct a system that displays a region of interest (region-of-interest display system SYS).
The processing by the region-of-interest display system can be divided into hierarchically coded data generation and storage processing, and region-of-interest data generation and reproduction processing.
The region-of-interest display system SYS according to the present embodiment described above includes a region-of-interest notification unit (ROI notification unit SYS8) that supplies region-of-interest information, a hierarchically coded data transformation unit SYS3 that transforms hierarchically coded data based on the region-of-interest information to generate post-transformation hierarchically coded data, a hierarchical video decoding unit SYS4 that decodes the post-transformation hierarchically coded data and outputs decoded pictures of the upper and lower layers, and a display unit SYS6.
The hierarchical video coding device 2 and hierarchical video decoding device 1 described above can be mounted on and used in various devices that transmit, receive, record, and reproduce video. The video may be natural video captured by a camera or the like, or artificial video (including CG and GUI) generated by a computer or the like.
Finally, each block of the hierarchical video decoding device 1 and the hierarchical video coding device 2 may be realized in hardware by logic circuits formed on an integrated circuit (IC chip), or in software using a CPU (Central Processing Unit).
11 NAL demultiplexing unit
12 parameter set decoding unit
13 tile setting unit
14 slice decoding unit
141 slice header decoding unit
142 slice position setting unit
144 CTU decoding unit
1441 prediction residual reconstruction unit
1442 predicted image generation unit
1443 CTU decoded image generation unit
15 base decoding unit
151 base NAL demultiplexing unit
152 base parameter set decoding unit
153 base tile setting unit
154 base slice decoding unit
156 base decoded picture management unit
16 decoded picture management unit
2 hierarchical video coding device (image coding device)
21 NAL multiplexing unit
22 parameter set coding unit
23 tile setting unit
24 slice coding unit
241 slice header setting unit
242 slice position setting unit
244 CTU coding unit
2441 prediction residual coding unit
2442 predicted image coding unit
3 hierarchically coded data transformation device (coded data transformation device)
32 parameter set modification unit
34 NAL selection unit
Claims (6)
- An image decoding device that decodes upper-layer coded data contained in hierarchically coded data and reconstructs a decoded picture of the upper layer, which is the target layer, the device comprising:
a parameter set decoding unit that decodes a parameter set; and
a predicted image generation unit that generates a predicted image by inter-layer prediction with reference to decoded pixels of a reference layer picture, wherein
the parameter set decoding unit decodes scale adjustment information relating to a reference layer, and
the predicted image generation unit derives, using an inter-layer scale derived based on the scale adjustment information, a corresponding reference position on the reference layer for a pixel of the target layer.
- The image decoding device according to claim 1, wherein a virtual reference layer size difference is derived based on the scale adjustment information.
- The image decoding device according to claim 2, wherein the value of the inter-layer scale derived by the predicted image generation unit is an approximation of the ratio between a virtual reference layer corresponding-region size and a virtual reference layer size, and
the virtual reference layer size is the sum of a reference layer picture size and the virtual reference layer size difference.
- The image decoding device according to claim 3, wherein, when the scale adjustment information is not contained in the parameter set, the value of the scale adjustment information is set so that the virtual reference layer size matches the reference layer picture size.
- An image coding device that generates upper-layer coded data from an input image, the device comprising:
a parameter set coding unit that codes a parameter set; and
a predicted image coding unit that generates a predicted image by inter-layer prediction with reference to decoded pixels of a reference layer picture, wherein
the parameter set coding unit codes scale adjustment information,
the predicted image coding unit derives, using the inter-layer scale value derived from the scale adjustment information, a corresponding reference position for a decoded pixel of the target layer, and
a virtual reference layer size difference is derived based on the scale adjustment information.
- A hierarchically coded data transformation device comprising a parameter set modification unit, the device transforming input hierarchically coded data based on input region-of-interest information and generating and outputting region-of-interest hierarchically coded data, wherein
the parameter set modification unit modifies the scale adjustment information so that the inter-layer scales derived for the hierarchically coded data before and after transformation match.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015551470A JP6229904B2 (ja) | 2013-12-06 | 2014-11-25 | 画像復号装置、画像符号化装置、および、符号化データ変換装置 |
CN201480061744.2A CN105981386B (zh) | 2013-12-06 | 2014-11-25 | 图像解码装置、图像编码装置及编码数据变换装置 |
KR1020167016343A KR101782454B1 (ko) | 2013-12-06 | 2014-11-25 | 이미지 복호화 장치, 이미지 부호화 장치, 및 부호화된 데이터 변환 장치 |
EP14868412.9A EP3068136B1 (en) | 2013-12-06 | 2014-11-25 | Image decoding device |
US15/174,648 US10142653B2 (en) | 2013-12-06 | 2016-06-06 | Image decoding apparatus, image coding apparatus, and coded data transformation apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013253259 | 2013-12-06 | ||
JP2013-253259 | 2013-12-06 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/174,648 Continuation US10142653B2 (en) | 2013-12-06 | 2016-06-06 | Image decoding apparatus, image coding apparatus, and coded data transformation apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015083575A1 true WO2015083575A1 (ja) | 2015-06-11 |
Family
ID=53273338
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/081033 WO2015083575A1 (ja) | 2013-12-06 | 2014-11-25 | 画像復号装置、画像符号化装置、および、符号化データ変換装置 |
Country Status (6)
Country | Link |
---|---|
US (1) | US10142653B2 (ja) |
EP (1) | EP3068136B1 (ja) |
JP (1) | JP6229904B2 (ja) |
KR (1) | KR101782454B1 (ja) |
CN (1) | CN105981386B (ja) |
WO (1) | WO2015083575A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018221368A1 (ja) * | 2017-05-31 | 2018-12-06 | シャープ株式会社 | 動画像復号装置、及び動画像符号化装置 |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006109988A1 (en) * | 2005-04-13 | 2006-10-19 | Lg Electronics Inc. | Method and apparatus for decoding video signal using reference pictures |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20060109247A (ko) | 2005-04-13 | 2006-10-19 | 엘지전자 주식회사 | 베이스 레이어 픽처를 이용하는 영상신호의 엔코딩/디코딩방법 및 장치 |
KR20070084002A (ko) * | 2004-11-05 | 2007-08-24 | 마츠시타 덴끼 산교 가부시키가이샤 | 스케일러블 복호화 장치 및 스케일러블 부호화 장치 |
US20080056373A1 (en) * | 2006-08-29 | 2008-03-06 | Newlin John B | Method and system for dynamic frequency adjustment during video decoding |
WO2008060125A1 (en) * | 2006-11-17 | 2008-05-22 | Lg Electronics Inc. | Method and apparatus for decoding/encoding a video signal |
CN101395922A (zh) * | 2006-11-17 | 2009-03-25 | Lg电子株式会社 | 用于解码/编码视频信号的方法及装置 |
EP2051527A1 (en) | 2007-10-15 | 2009-04-22 | Thomson Licensing | Enhancement layer residual prediction for bit depth scalability using hierarchical LUTs |
US8797903B2 (en) * | 2009-12-01 | 2014-08-05 | Qualcomm Incorporated | Method and apparatus of utilizing uplink synchronization shift command bits in TD-SCDMA uplink transmission |
EP2833633A4 (en) * | 2012-03-29 | 2015-11-11 | Lg Electronics Inc | METHOD FOR PREDICTION BETWEEN LAYERS, AND ENCODING DEVICE AND DECODING DEVICE USING THE SAME |
US10225567B2 (en) | 2013-10-08 | 2019-03-05 | Sharp Kabushiki Kaisha | Image decoder, image encoder, and encoded data converter |
-
2014
- 2014-11-25 WO PCT/JP2014/081033 patent/WO2015083575A1/ja active Application Filing
- 2014-11-25 EP EP14868412.9A patent/EP3068136B1/en active Active
- 2014-11-25 KR KR1020167016343A patent/KR101782454B1/ko active IP Right Grant
- 2014-11-25 JP JP2015551470A patent/JP6229904B2/ja active Active
- 2014-11-25 CN CN201480061744.2A patent/CN105981386B/zh active Active
-
2016
- 2016-06-06 US US15/174,648 patent/US10142653B2/en active Active
Non-Patent Citations (7)
Title |
---|
"MV-HEVC Draft Text 5", JOINT COLLABORATIVETEAM ON 3D VIDEO CODING EXTENSION DEVELOPMENT OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 5TH MEETING, 27 July 2013 (2013-07-27) |
"MV-HEVC/SHVC HLS: On conversion to ROI-capablemulti-layer bitstream", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 15TH MEETING, 23 October 2013 (2013-10-23) |
"SHVC Draft 3", JOINT COLLABORATIVE TEAM ONVIDEO CODING (JCT-VC) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 14TH MEETING, 25 July 2013 (2013-07-25) |
JIANLE CHEN ET AL.: "High efficiency video coding (HEVC) scalable extension Draft 4", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP3 AND ISO/IEC JTC1/SC29/WG11 JCTVC-O1008_V1(VERSIONL), ITU-T, 14 November 2013 (2013-11-14), pages 15,30 - 31,35, XP008181482, Retrieved from the Internet <URL:http://phenix.int-evry.fr/jct/doc_end_user/ current_document.php?id=8527> [retrieved on 20150120] * |
RECOMMENDATION H.265 (04/13, 7 June 2013 (2013-06-07) |
See also references of EP3068136A4 |
TOMOYUKI YAMAMOTO ET AL.: "MV-HEVC/SHVC HLS: On conversion to ROI-oriented multi-layer bitstream", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP3 AND ISO/IECJTC 1/SC 29/WG11 ,15TH MEETING ,JCTVC-00056, 23 October 2013 (2013-10-23), pages 1 - 5, XP030115029 * |
Also Published As
Publication number | Publication date |
---|---|
EP3068136A4 (en) | 2016-11-02 |
US20160286235A1 (en) | 2016-09-29 |
CN105981386B (zh) | 2019-02-26 |
US10142653B2 (en) | 2018-11-27 |
EP3068136A1 (en) | 2016-09-14 |
KR20160085895A (ko) | 2016-07-18 |
KR101782454B1 (ko) | 2017-09-28 |
EP3068136B1 (en) | 2022-01-05 |
JP6229904B2 (ja) | 2017-11-15 |
JPWO2015083575A1 (ja) | 2017-03-16 |
CN105981386A (zh) | 2016-09-28 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14868412 Country of ref document: EP Kind code of ref document: A1 |
REEP | Request for entry into the european phase |
Ref document number: 2014868412 Country of ref document: EP |
WWE | Wipo information: entry into national phase |
Ref document number: 2014868412 Country of ref document: EP |
ENP | Entry into the national phase |
Ref document number: 2015551470 Country of ref document: JP Kind code of ref document: A |
NENP | Non-entry into the national phase |
Ref country code: DE |
ENP | Entry into the national phase |
Ref document number: 20167016343 Country of ref document: KR Kind code of ref document: A |