WO2015098315A1 - Image processing apparatus and image processing method - Google Patents
Image processing apparatus and image processing method
- Publication number
- WO2015098315A1 (PCT/JP2014/079622)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- prediction
- image
- layer
- unit
- parameter
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/105—Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/186—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/136—Incoming video signal characteristics or properties
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/30—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
- H04N19/33—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability in the spatial domain
Definitions
- the present disclosure relates to an image processing apparatus and an image processing method.
- JCTVC (Joint Collaboration Team-Video Coding), a joint standardization body of ITU-T and ISO/IEC, is standardizing an image coding scheme called HEVC (High Efficiency Video Coding), which aims to further improve coding efficiency over H.264/AVC (see, for example, Non-Patent Document 1).
- Like existing image coding schemes such as MPEG2 and AVC (Advanced Video Coding), HEVC provides not only single-layer coding but also scalable coding.
- the HEVC scalable coding technology is also referred to as SHVC (Scalable HEVC) (see, for example, Non-Patent Document 2).
- Scalable coding generally refers to a technique for hierarchically coding a layer for transmitting a coarse image signal and a layer for transmitting a fine image signal.
- In scalable coding, typically the following three types of attributes are layered.
- Spatial scalability: spatial resolution or image size is layered.
- Temporal scalability: frame rates are layered.
- SNR (Signal-to-Noise Ratio) scalability: SN ratios are layered.
- the dynamic range of the pixel is also an important attribute that affects the image quality.
- the maximum brightness of a standard dynamic range (SDR) image supported by many existing displays is 100 nit.
- the maximum brightness of a high dynamic range (HDR) image supported by a high-end display put on the market in recent years reaches, for example, 500 nit or 1000 nit.
- the SDR image is also referred to as an LDR (Low Dynamic Range) image in contrast to the HDR image.
- Non-Patent Document 3 below proposes a technique for hierarchically encoding a layer for transmitting an LDR image and a layer for transmitting a residual for recovering an HDR image.
- However, the technique of Non-Patent Document 3 requires complex algorithms, such as filtering with filter taps composed of pixel values over multiple frames and gamma correction in the RGB domain, to restore an HDR image from an LDR image.
- complex algorithms are not suitable for encoder and decoder applications where versatility and extensibility are important and implementation is desired to be as simple as possible.
- According to the present disclosure, there is provided an image processing apparatus including: a decoding unit that decodes a prediction parameter used when predicting an image of a second layer having a luminance dynamic range larger than that of a first layer from an image of the first layer, the prediction parameter including a gain and an offset by which each color component of the first layer is multiplied; and a prediction unit that predicts the image of the second layer from the image of the first layer using the prediction parameter decoded by the decoding unit.
- the image processing apparatus can be typically realized as an image decoding apparatus that decodes an image.
- Further, according to the present disclosure, there is provided an image processing method including: decoding a prediction parameter used when predicting an image of a second layer having a luminance dynamic range larger than that of a first layer from an image of the first layer, the prediction parameter including a gain and an offset by which each color component of the first layer is multiplied; and predicting the image of the second layer from the image of the first layer using the decoded prediction parameter.
- Further, according to the present disclosure, there is provided an image processing apparatus including: a prediction unit that predicts an image of a second layer from an image of a first layer when encoding the image of the second layer having a luminance dynamic range larger than that of the first layer; and an encoding unit that encodes a prediction parameter used by the prediction unit, the prediction parameter including a gain and an offset by which each color component of the first layer is multiplied.
- the image processing apparatus can be typically realized as an image coding apparatus that codes an image.
- Further, according to the present disclosure, there is provided an image processing method including: predicting an image of a second layer from an image of a first layer when encoding the image of the second layer having a luminance dynamic range larger than that of the first layer; and encoding a prediction parameter used in the prediction, the prediction parameter including a gain and an offset by which each color component of the first layer is multiplied.
- FIG. 13 is an explanatory diagram for describing an example of a syntax related to the method described using FIG.
- FIG. 12 is an explanatory diagram for describing selective use of prediction parameters according to the image region to which a pixel belongs.
- FIG. 16 is an explanatory diagram for describing an example of a syntax related to the method described using FIG. 15.
- A first explanatory diagram for describing the method for controlling the processing cost of inter-layer prediction proposed in JCTVC-O0194.
- A second explanatory diagram for describing the method for controlling the processing cost of inter-layer prediction proposed in JCTVC-O0194.
- A third explanatory diagram for describing the method for controlling the processing cost of inter-layer prediction proposed in JCTVC-O0194.
- A first explanatory diagram for describing a new technique for controlling the processing cost of inter-layer prediction.
- FIG. 19 is an explanatory diagram for describing an example of a syntax related to the method described with reference to FIGS. 18A to 18C.
- A flowchart showing an example of the flow of schematic processing at the time of encoding according to an embodiment.
- A flowchart showing a first example of the flow of DR prediction processing in the encoding process of the enhancement layer.
- A flowchart showing a second example of the flow of DR prediction processing in the encoding process of the enhancement layer.
- In scalable coding, a plurality of layers, each including a series of images, are encoded.
- the base layer is the layer that represents the coarsest image that is initially encoded.
- the base layer coded stream can be decoded independently without decoding the coded streams of other layers.
- Layers other than the base layer are layers which represent finer images, called enhancement layers.
- The enhancement layer coded stream is coded using information contained in the base layer coded stream. Therefore, in order to reproduce the image of the enhancement layer, the coded streams of both the base layer and the enhancement layer are decoded.
- the number of layers handled in scalable coding may be any number of two or more. When three or more layers are encoded, the lowest layer is the base layer, and the remaining layers are the enhancement layers.
- the higher enhancement layer coded stream may be coded and decoded using the information contained in the lower enhancement layer or base layer coded stream.
- FIG. 1 shows three layers L1, L2 and L3 to be subjected to scalable coding.
- Layer L1 is a base layer
- layers L2 and L3 are enhancement layers.
- the ratio of the spatial resolution of layer L2 to layer L1 is 2: 1.
- the ratio of the spatial resolution of layer L3 to layer L1 is 4: 1.
- the resolution ratio is only an example, and a non-integer resolution ratio such as 1.5: 1 may be used.
- the block B1 of the layer L1 is a processing unit of the encoding process in the picture of the base layer.
- the block B2 of the layer L2 is a processing unit of the encoding process in the picture of the enhancement layer showing the scene common to the block B1.
- the block B2 corresponds to the block B1 of the layer L1.
- the block B3 of the layer L3 is a processing unit of the coding process in the picture of the higher enhancement layer in which the scene common to the blocks B1 and B2 is shown.
- the block B3 corresponds to the block B1 of the layer L1 and the block B2 of the layer L2.
- The textures of the images are similar between layers reflecting a common scene. That is, the textures of block B1 in layer L1, block B2 in layer L2, and block B3 in layer L3 are similar. Therefore, for example, if the pixels of block B1 are used as a reference block to predict the pixels of block B2 or block B3, or if the pixels of block B2 are used as a reference block to predict the pixels of block B3, high prediction accuracy may be obtained.
- Such prediction across layers is called inter-layer prediction.
- In Non-Patent Document 2, several techniques for inter-layer prediction are proposed.
- In one such technique, the decoded image (reconstructed image) of the base layer is used as a reference image for predicting the decoded image of the enhancement layer.
- In another technique, a prediction error (residual) image of the base layer is used as a reference image for predicting a prediction error image of the enhancement layer.
- FIG. 2 is an explanatory diagram for explaining the dynamic range of the video format.
- the vertical axis in FIG. 2 represents the luminance [nit].
- the maximum luminance in the natural world may reach 20000 nit, and the luminance of a general subject is, for example, at most about 12000 nit.
- the upper limit of the dynamic range of the image sensor may be lower than the natural maximum brightness, for example, 4000 nit.
- the image signal generated by the image sensor is further recorded in a predetermined video format.
- the dynamic range of the SDR image is indicated by hatched bars in the figure, and the upper limit is 100 nit.
- the dynamic range of luminance is largely compressed, for example, by a technique such as knee compression.
- When the maximum luminance that can be expressed by a display is 1000 nit, scaling by 10 times is performed when displaying the SDR image; as a result of this scaling, degradation of the image quality tends to appear in the displayed image.
- the dynamic range of the HDR image is indicated by a bold bar in the figure, and the upper limit is 800 nit. Therefore, also when recording a captured image as an HDR image, the dynamic range of luminance is compressed by a technique such as knee compression, for example.
- When the maximum luminance that can be expressed by a display is 1000 nit, scaling by 1.25 times is performed when displaying the HDR image, but the degradation of the image quality of the displayed image may be small because the scaling rate is small.
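- The scaling factors above follow directly from the ratio of the display's maximum luminance to the format's upper limit. As a minimal sketch of this arithmetic (illustrative only, not part of the disclosed apparatus):

```python
def display_scale_factor(format_max_nit: float, display_max_nit: float) -> float:
    """Luminance scaling applied when mapping a recorded image to a display."""
    return display_max_nit / format_max_nit

# SDR (upper limit 100 nit) on a 1000 nit display: large scaling,
# so quality degradation tends to become visible.
sdr_scale = display_scale_factor(100.0, 1000.0)   # 10.0

# HDR (upper limit 800 nit) on the same display: small scaling,
# so degradation may be small.
hdr_scale = display_scale_factor(800.0, 1000.0)   # 1.25
```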
- supporting HDR images as a video format has the benefit of being able to provide high quality images to the user.
- However, for reasons such as compatibility with devices that support only SDR images, storage constraints, and support for various transmission bands, a scalable coding technology called dynamic range scalability is useful.
- In dynamic range scalability, an SDR image is transmitted in the base layer, and information for recovering an HDR image from the SDR image is transmitted in an enhancement layer. The key points in recovering the HDR image from the SDR image are to keep the implementation as simple as possible and to ensure the versatility and extensibility of the format.
- Therefore, the technology according to the present disclosure adopts a model that approximates the relationship between the SDR image and the HDR image with an independent linear relationship for each color component, and realizes dynamic range scalability by predicting the HDR image from the SDR image according to that linear model.
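- The per-component linear model amounts to HDR ≈ gain × SDR + offset, with an independent (gain, offset) pair for each of Y, U and V. The function below is an illustrative rendering of that model (parameter values are invented for the example), not the disclosed implementation:

```python
def predict_hdr_from_sdr(sdr_yuv, gains, offsets):
    """Predict HDR pixel values from SDR values with one linear model
    per color component (Y, U, V).

    sdr_yuv: (Y, U, V) pixel values of the SDR (base layer) image.
    gains, offsets: per-component prediction parameters (illustrative names).
    """
    return tuple(g * c + o for c, g, o in zip(sdr_yuv, gains, offsets))

# Hypothetical parameters, equal across components only for brevity
hdr = predict_hdr_from_sdr((60, 110, 130), gains=(8.0, 8.0, 8.0), offsets=(16, 0, 0))
```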
- FIG. 3 is a block diagram showing a schematic configuration of an image coding apparatus 10 according to an embodiment that supports scalable coding.
- The image coding apparatus 10 includes a base layer (BL) coding unit 1a, an enhancement layer (EL) coding unit 1b, a common memory 2, and a multiplexing unit 3.
- the BL encoding unit 1a encodes a base layer image to generate an encoded stream of a base layer.
- the EL encoding unit 1b encodes the enhancement layer image to generate a coded stream of the enhancement layer.
- the common memory 2 stores information commonly used between layers.
- The multiplexing unit 3 multiplexes the coded stream of the base layer generated by the BL coding unit 1a and the coded streams of one or more enhancement layers generated by the EL coding unit 1b to generate a multi-layer multiplexed stream.
- FIG. 4 is a block diagram showing a schematic configuration of an image decoding apparatus 60 according to an embodiment that supports scalable coding.
- the image decoding apparatus 60 includes a demultiplexer 5, a base layer (BL) decoder 6a, an enhancement layer (EL) decoder 6b, and a common memory 7.
- The demultiplexing unit 5 demultiplexes the multi-layer multiplexed stream into the coded stream of the base layer and the coded streams of one or more enhancement layers.
- the BL decoding unit 6a decodes a base layer image from the coded stream of the base layer.
- The EL decoding unit 6b decodes the enhancement layer image from the coded stream of the enhancement layer.
- the common memory 7 stores information commonly used between layers.
- The configuration of the BL encoding unit 1a for encoding the base layer and the configuration of the EL encoding unit 1b for encoding the enhancement layer are similar to each other. Some parameters and images generated or acquired by the BL encoding unit 1a can be buffered using the common memory 2 and reused by the EL encoding unit 1b. In the next section, the configuration of such an EL encoding unit 1b will be described in detail.
- Likewise, the configuration of the BL decoding unit 6a for decoding the base layer and the configuration of the EL decoding unit 6b for decoding the enhancement layer are similar to each other. Some parameters and images generated or acquired by the BL decoding unit 6a can be buffered using the common memory 7 and reused by the EL decoding unit 6b. The configuration of such an EL decoding unit 6b will also be described in detail in a later section.
- FIG. 5 is a block diagram showing an example of the configuration of the EL encoding unit 1b shown in FIG.
- The EL encoding unit 1b includes a rearrangement buffer 11, a subtraction unit 13, an orthogonal transformation unit 14, a quantization unit 15, a lossless encoding unit 16, an accumulation buffer 17, a rate control unit 18, an inverse quantization unit 21, an inverse orthogonal transform unit 22, an addition unit 23, a loop filter 24, a frame memory 25, selectors 26 and 27, an intra prediction unit 30, an inter prediction unit 35, and a dynamic range (DR) prediction unit 40.
- the rearrangement buffer 11 rearranges the images included in the series of image data.
- After rearranging the images according to the GOP (Group of Pictures) structure related to the encoding process, the rearrangement buffer 11 outputs the rearranged image data to the subtraction unit 13, the intra prediction unit 30, the inter prediction unit 35, and the DR prediction unit 40.
- the subtraction unit 13 is supplied with the image data input from the reordering buffer 11 and the prediction image data input from the intra prediction unit 30 or the inter prediction unit 35 described later.
- the subtracting unit 13 calculates prediction error data which is a difference between the image data input from the reordering buffer 11 and the prediction image data, and outputs the calculated prediction error data to the orthogonal transformation unit 14.
- the orthogonal transformation unit 14 performs orthogonal transformation on the prediction error data input from the subtraction unit 13.
- the orthogonal transformation performed by the orthogonal transformation unit 14 may be, for example, Discrete Cosine Transform (DCT) or Karhunen-Loeve Transform.
- In HEVC, orthogonal transformation is performed for each block called a TU (Transform Unit).
- The TU is a block formed by dividing a CU (Coding Unit).
- the orthogonal transform unit 14 outputs transform coefficient data acquired by the orthogonal transform process to the quantization unit 15.
- the quantization unit 15 is supplied with transform coefficient data input from the orthogonal transform unit 14 and a rate control signal from the rate control unit 18 described later.
- the quantization unit 15 quantizes transform coefficient data in a quantization step determined according to the rate control signal.
- the quantization unit 15 outputs the transform coefficient data after quantization (hereinafter, referred to as quantization data) to the lossless encoding unit 16 and the inverse quantization unit 21.
- the lossless encoding unit 16 performs lossless encoding processing on the quantized data input from the quantization unit 15 to generate an encoded stream of the enhancement layer. Also, the lossless encoding unit 16 encodes various parameters referred to when decoding the encoded stream, and inserts the encoded parameter into the header area of the encoded stream.
- the parameters encoded by the lossless encoding unit 16 may include information on intra prediction and information on inter prediction described later. Prediction parameters associated with prediction of HDR images (hereinafter referred to as DR prediction) may also be encoded. Then, the lossless encoding unit 16 outputs the generated encoded stream to the accumulation buffer 17.
- the accumulation buffer 17 temporarily accumulates the encoded stream input from the lossless encoding unit 16 using a storage medium such as a semiconductor memory. Then, the accumulation buffer 17 outputs the accumulated encoded stream to a transmission unit (not shown) (for example, a communication interface or a connection interface with a peripheral device) at a rate according to the band of the transmission path.
- the rate control unit 18 monitors the free space of the accumulation buffer 17. Then, the rate control unit 18 generates a rate control signal according to the free space of the accumulation buffer 17, and outputs the generated rate control signal to the quantization unit 15. For example, when the free space of the accumulation buffer 17 is small, the rate control unit 18 generates a rate control signal for reducing the bit rate of the quantized data. Also, for example, when the free space of the accumulation buffer 17 is sufficiently large, the rate control unit 18 generates a rate control signal for increasing the bit rate of the quantized data.
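- The behavior described for the rate control unit 18 amounts to a simple feedback rule on the accumulation buffer's free space. A hypothetical sketch follows; the thresholds and return values are invented for illustration, since the actual control law is not specified at this level of the description:

```python
def rate_control_signal(free_space: int, buffer_size: int) -> str:
    """Map accumulation-buffer free space to a coarse rate control decision.

    The 25% / 75% thresholds are illustrative assumptions, not values
    taken from the disclosure.
    """
    ratio = free_space / buffer_size
    if ratio < 0.25:
        return "decrease_bitrate"   # little free space: reduce bit rate of quantized data
    if ratio > 0.75:
        return "increase_bitrate"   # ample free space: increase bit rate of quantized data
    return "hold"
```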
- the inverse quantization unit 21, the inverse orthogonal transform unit 22 and the addition unit 23 constitute a local decoder.
- the inverse quantization unit 21 inversely quantizes the quantization data of the enhancement layer in the same quantization step as that used by the quantization unit 15 to restore transform coefficient data. Then, the inverse quantization unit 21 outputs the restored transform coefficient data to the inverse orthogonal transform unit 22.
- the inverse orthogonal transform unit 22 restores prediction error data by performing inverse orthogonal transform processing on the transform coefficient data input from the inverse quantization unit 21. Similar to orthogonal transform, inverse orthogonal transform is performed for each TU. Then, the inverse orthogonal transform unit 22 outputs the restored prediction error data to the addition unit 23.
- The adding unit 23 adds the restored prediction error data input from the inverse orthogonal transformation unit 22 and the predicted image data input from the intra prediction unit 30 or the inter prediction unit 35 to generate decoded image data (a reconstructed image of the enhancement layer). Then, the adding unit 23 outputs the generated decoded image data to the loop filter 24 and the frame memory 25.
- the loop filter 24 includes a group of filters aiming to improve the image quality.
- the deblocking filter (DF) is a filter that reduces block distortion that occurs when encoding an image.
- a sample adaptive offset (SAO) filter is a filter that adds an offset value that is adaptively determined to each pixel value.
- the adaptive loop filter (ALF) is a filter that minimizes the error between the image after SAO and the original image.
- the loop filter 24 filters the decoded image data input from the addition unit 23 and outputs the filtered decoded image data to the frame memory 25.
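- Of the three filters chained inside the loop filter 24 (DF, SAO, ALF), the SAO step has the simplest per-pixel behavior: add an adaptively determined offset to each pixel value according to its class. The sketch below illustrates only that idea; the band classifier and offset values are invented for the example and are not the HEVC-specified SAO:

```python
def sample_adaptive_offset(pixels, offsets, classify):
    """Apply an SAO-style correction: each pixel receives the offset
    assigned to its class (band or edge category).

    classify: maps a pixel value to a class index; a trivial band
    classifier is used below purely for illustration.
    """
    return [p + offsets[classify(p)] for p in pixels]

# Illustrative 4-band classifier over 8-bit sample values
out = sample_adaptive_offset([10, 100, 200, 250], offsets=[2, -1, 0, 3],
                             classify=lambda p: min(p // 64, 3))
```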
- The frame memory 25 stores, using a storage medium, the decoded image data of the enhancement layer input from the adding unit 23, the filtered decoded image data of the enhancement layer input from the loop filter 24, and the reference image data of the base layer input from the DR prediction unit 40.
- The selector 26 reads the decoded image data before filtering used for intra prediction from the frame memory 25, and supplies the read decoded image data to the intra prediction unit 30 as reference image data. Further, the selector 26 reads the filtered decoded image data used for inter prediction from the frame memory 25, and supplies the read decoded image data to the inter prediction unit 35 as reference image data. Furthermore, when inter-layer prediction is performed by the intra prediction unit 30 or the inter prediction unit 35, the selector 26 supplies the reference image data of the base layer to the intra prediction unit 30 or the inter prediction unit 35.
- the selector 27 outputs predicted image data as a result of the intra prediction output from the intra prediction unit 30 to the subtraction unit 13 in the intra prediction mode, and outputs information on the intra prediction to the lossless encoding unit 16.
- In the inter prediction mode, the selector 27 outputs predicted image data as a result of the inter prediction output from the inter prediction unit 35 to the subtraction unit 13, and outputs information on the inter prediction to the lossless encoding unit 16.
- the selector 27 switches between the intra prediction mode and the inter prediction mode according to the size of the cost function value.
- The intra prediction unit 30 performs intra prediction processing for each PU (Prediction Unit) of HEVC based on the original image data and the decoded image data of the enhancement layer. For example, the intra prediction unit 30 evaluates the prediction result of each candidate mode in the prediction mode set using a predetermined cost function. Next, the intra prediction unit 30 selects the prediction mode with the smallest cost function value, that is, the prediction mode with the highest compression ratio, as the optimal prediction mode. In addition, the intra prediction unit 30 generates predicted image data of the enhancement layer according to the optimal prediction mode.
- the intra prediction unit 30 may include intra BL prediction, which is a type of inter layer prediction, in the prediction mode set in the enhancement layer.
- intra BL prediction a co-located block in the base layer corresponding to the prediction target block in the enhancement layer is used as a reference block, and a prediction image is generated based on the decoded image of the reference block.
- the intra prediction unit 30 may include intra residual prediction which is a type of inter-layer prediction.
- intra residual prediction a prediction error of intra prediction is predicted based on a prediction error image of a reference block which is a co-located block in a base layer, and a prediction image to which a predicted prediction error is added is generated.
- the intra prediction unit 30 outputs information on intra prediction including prediction mode information representing the selected optimal prediction mode, a cost function value, and predicted image data to the selector 27.
- the inter prediction unit 35 performs inter prediction processing for each HEVC PU based on the original image data and the decoded image data of the enhancement layer. For example, the inter prediction unit 35 evaluates the prediction result by each candidate mode in the prediction mode set using a predetermined cost function. Next, the inter prediction unit 35 selects the prediction mode in which the cost function value is the smallest, that is, the prediction mode in which the compression ratio is the highest, as the optimum prediction mode. In addition, the inter prediction unit 35 generates prediction image data of the enhancement layer according to the optimal prediction mode.
- the inter prediction unit 35 may include inter residual prediction, which is a type of inter layer prediction, in the prediction mode set in the enhancement layer.
- In inter residual prediction, a prediction error of inter prediction is predicted based on a prediction error image of a reference block, which is a co-located block in the base layer, and a prediction image to which the predicted prediction error has been added is generated.
- the inter prediction unit 35 outputs, to the selector 27, information on inter prediction including the prediction mode information indicating the selected optimum prediction mode and the motion information, the cost function value, and the prediction image data.
- The DR prediction unit 40 upsamples the base layer image (decoded image or prediction error image) buffered in the common memory 2 according to the resolution ratio between the base layer and the enhancement layer. Also, when the enhancement layer image has a luminance dynamic range larger than that of the base layer image, the DR prediction unit 40 converts the dynamic range of the upsampled base layer image to a dynamic range equivalent to that of the enhancement layer image. In the present embodiment, the DR prediction unit 40 converts the dynamic range by approximately predicting the image of the enhancement layer from the image of the base layer, on the basis of an independent linear relationship for each color component between the base layer and the enhancement layer.
- the image of the base layer whose dynamic range has been converted by the DR prediction unit 40 is stored in the frame memory 25 and may be used as a reference image in inter-layer prediction by the intra prediction unit 30 or the inter prediction unit 35.
- the DR prediction unit 40 generates several parameters used for DR prediction.
- the parameter generated by the DR prediction unit 40 includes, for example, a prediction mode parameter indicating a prediction mode.
- the parameters generated by the DR prediction unit 40 include prediction parameters for each color component, that is, gains and offsets.
- the DR prediction unit 40 outputs the prediction mode parameter and the prediction parameter to the lossless coding unit 16.
- FIG. 6A is a table showing three candidates for prediction mode when predicting an HDR image from an SDR image.
- the prediction mode number is one of “0”, “1” and “2”, that is, there are three types of prediction mode candidates.
- the pixel values (Y_HDR, U_HDR, V_HDR) of the HDR image are predicted.
- n_shift represents the number of bits corresponding to the difference in dynamic range.
- Such prediction mode is referred to herein as bit shift mode.
- six prediction parameters (three gains and three offsets) are additionally encoded as prediction parameters.
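The three candidate modes above amount to simple per-pixel linear mappings. Since equations (1) to (6) are not reproduced in this text, the sketch below is a hedged reconstruction: the bit shift mode shifts each pixel left by n_shift, and the fixed and adaptive parameter modes apply a gain and an offset per color component. All concrete values (N_SHIFT, the gains and offsets) are illustrative assumptions, not values from the specification.

```python
# Hedged sketch of the three DR prediction mode candidates of FIG. 6A.
# The concrete gains/offsets below are illustrative, not values from the patent.

N_SHIFT = 2  # assumed bit count for the dynamic range difference

def predict_bit_shift(pixel):
    """Mode 0: bit shift mode -- no extra parameters are encoded."""
    return pixel << N_SHIFT

def predict_fixed(pixel, gain=4, offset=0):
    """Mode 1: fixed parameter mode -- gain/offset are predefined constants."""
    return pixel * gain + offset

def predict_adaptive(pixel, gain_num, log2_denom, offset):
    """Mode 2: adaptive parameter mode -- three gains and three offsets
    (one pair per color component) are additionally encoded."""
    return ((pixel * gain_num) >> log2_denom) + offset

# Example: predict an HDR luma sample from an SDR sample of value 100.
sdr = 100
print(predict_bit_shift(sdr))           # 400
print(predict_fixed(sdr))               # 400
print(predict_adaptive(sdr, 17, 2, 3))  # (100*17)>>2 + 3 = 428
```

In the adaptive mode the gain is carried as a numerator plus a base-2 logarithm of the denominator, which is why the sketch uses a multiply followed by a right shift.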
- FIGS. 6B and 6C are explanatory diagrams describing an example of the syntax of the prediction parameters.
- “use_dr_prediction” on the third line of FIG. 6B is a flag indicating whether the PPS (Picture Parameter Set) includes a syntax for dynamic range prediction.
- “dr_pred_data()” in the fifth line of FIG. 6B is a function of the syntax for DR scalability, and its contents are shown in FIG. 6C.
- “dr_prediction_model” in the first line of FIG. 6C is a parameter indicating the prediction mode to be selected, and takes any one of the values “0”, “1” and “2” exemplified in FIG. 6A.
- “numFractionBits” in the third line specifies the number of bits corresponding to the denominator of the gain, “dr_prediction_gain[I]” in the fifth line specifies the numerator of the gain for the I-th color component, and “dr_prediction_offset[I]” in the sixth line specifies the offset for the I-th color component.
- the adaptive parameter mode is a mode in which the highest prediction accuracy can be expected among the three prediction modes.
- the DR prediction unit 40 may calculate the difference from the past value of the prediction parameter.
- the lossless encoding unit 16 may encode the prediction mode parameter and the difference between the prediction parameters.
- FIG. 7 is a block diagram showing an example of the configuration of DR prediction unit 40 shown in FIG.
- the DR prediction unit 40 includes an upsampling unit 41, a prediction mode setting unit 42, a parameter calculation unit 43, and a dynamic range (DR) conversion unit 44.
- the up-sampling unit 41 up-samples the image of the base layer acquired from the common memory 2 in accordance with the resolution ratio between the base layer and the enhancement layer. More specifically, the up-sampling unit 41 calculates an interpolated pixel value by filtering the base layer image with a filter coefficient defined in advance for each of the interpolated pixels sequentially scanned according to the resolution ratio. Thereby, the spatial resolution of the image of the base layer used as a reference block is increased to a resolution equivalent to that of the enhancement layer.
- the up-sampling unit 41 outputs the up-sampled image to the parameter calculation unit 43 and the DR conversion unit 44. Note that up-sampling may be skipped if resolutions are equal between layers. In this case, the up-sampling unit 41 can output the image of the base layer to the parameter calculation unit 43 and the DR conversion unit 44 as it is.
- the prediction mode setting unit 42 sets, in the DR prediction unit 40, a prediction mode which is predefined or dynamically selected among the prediction mode candidates for DR prediction.
- the prediction mode candidates may include the bit shift mode, the fixed parameter mode and the adaptive parameter mode described above.
- the prediction mode setting unit 42 may set an optimum prediction mode for each picture.
- the prediction mode setting unit 42 may set an optimum prediction mode for each slice.
- One picture may have one or more slices.
- the prediction mode setting unit 42 may set the prediction mode for each sequence, and maintain the same prediction mode across a plurality of pictures and a plurality of slices in one sequence.
- the prediction mode setting unit 42 may evaluate the coding efficiency or the prediction accuracy for each candidate of the prediction mode, and select an optimal prediction mode.
- the prediction mode setting unit 42 outputs a prediction mode parameter indicating the set prediction mode to the lossless encoding unit 16.
- when the adaptive parameter mode is set by the prediction mode setting unit 42, or when the prediction mode setting unit 42 evaluates the coding efficiency or prediction accuracy of the adaptive parameter mode, the parameter calculation unit 43 calculates the prediction parameters to be used in the adaptive parameter mode.
- the subscript i means each of the three color components.
- the gain g_i is a coefficient by which the pixel value of the base layer is multiplied.
- the offset o_i is a numerical value to be added to the product of the pixel value of the base layer and the gain g_i.
- the parameter calculation unit 43 can calculate, for each color component, the gain and offset that bring the up-sampled base layer image input from the up-sampling unit 41 closest to the original image input from the reordering buffer 11.
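The criterion by which the gain and offset bring the up-sampled base layer image "closest" to the original is not fixed in the text; assuming a least-squares criterion, the per-component calculation could be sketched as an ordinary linear regression:

```python
# Hedged sketch: per-component least-squares fit of gain g_i and offset o_i
# so that g_i * base + o_i approximates the enhancement-layer original.
# The least-squares criterion itself is an assumption.
def fit_gain_offset(base, orig):
    n = len(base)
    mean_b = sum(base) / n
    mean_o = sum(orig) / n
    cov = sum((b - mean_b) * (o - mean_o) for b, o in zip(base, orig))
    var = sum((b - mean_b) ** 2 for b in base)
    gain = cov / var                    # slope of the regression line
    offset = mean_o - gain * mean_b     # intercept
    return gain, offset

base = [10, 20, 30, 40]
orig = [45, 85, 125, 165]   # exactly 4 * base + 5
g, o = fit_gain_offset(base, orig)
print(round(g, 6), round(o, 6))  # 4.0 5.0
```

In practice the fractional gain would then be split into a numerator and a power-of-two denominator, as described below.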
- the parameter calculation unit 43 may calculate the difference between the gain and the offset from the past values.
- the past value here may be, for example, a value calculated for the previous picture when gain and offset are calculated for each picture. If gain and offset are calculated for each slice, it may be a value calculated for a slice (co-located slice) of the previous picture.
- the parameter calculation unit 43 can use the values of the gain and the offset corresponding to the bit shift amount as the basis of the difference.
- the parameter calculation unit 43 can use predefined fixed values of the gain and the offset as the basis of the difference.
- the parameter calculator 43 outputs the calculated gain and offset (or their difference) to the lossless encoder 16.
- the value of gain may include fractional values. Therefore, the prediction mode setting unit 42 may divide the value of the gain into a denominator and a numerator, and output the denominator and the numerator (or their difference) to the lossless encoding unit 16.
- the prediction mode setting unit 42 may limit the value of the denominator of the gain to only an integer power of 2 in order to increase the coding efficiency and reduce the calculation cost. In this case, the base 2 logarithm of the value of the denominator may be used as a prediction parameter.
- the DR conversion unit 44 converts the dynamic range of the up-sampled SDR image of the base layer input from the up-sampling unit 41 into the same dynamic range as that of the HDR image, according to the prediction mode set by the prediction mode setting unit 42. For example, when the bit shift mode is set, the DR conversion unit 44 calculates predicted pixel values by shifting the pixel values of the up-sampled base layer to the left by a predetermined bit shift amount according to equations (1) to (3). When the fixed parameter mode is set, the DR conversion unit 44 calculates predicted pixel values by multiplying the pixel values of the up-sampled base layer by a fixed gain and further adding a fixed offset according to equations (4) to (6).
- when the adaptive parameter mode is set, the DR conversion unit 44 calculates predicted pixel values using the gain and offset adaptively calculated by the parameter calculation unit 43 instead of the fixed gain and offset. Thereby, a reference image for inter-layer prediction is generated.
- the DR conversion unit 44 stores the reference image for inter-layer prediction generated in this manner (a base layer image having a wide dynamic range corresponding to an HDR image) in the frame memory 25.
- FIG. 8 is an explanatory diagram for describing an example of the syntax of these coding parameters for DR prediction. Although an example is given here in which the differences of the gain and the offset are encoded, the syntax described here is applicable even when the gain and the offset themselves are encoded (the prefix “delta_” at the beginning of the parameter names can be deleted).
- the syntax shown in FIG. 8 may be included in PPS, for example, or may be included in a slice header.
- the “dr_prediction_flag” in the first line of the syntax is a flag indicating whether or not the PPS or the slice header includes an extended syntax for DR prediction.
- “Dr_prediction_model” on the third line is a prediction mode parameter indicating the prediction mode set by the prediction mode setting unit 42. As described with FIG. 6A, if the prediction mode parameter is equal to "0", then the prediction mode is a bit shift mode. If the prediction mode parameter is equal to "1”, then the prediction mode is a fixed parameter mode. If the prediction mode parameter is equal to "2", then the prediction mode is an adaptive parameter mode.
- the present invention is not limited to these examples, and other types of prediction modes may be adopted.
- the prediction parameters in the fifth line and thereafter are encoded when the prediction mode parameter indicates an adaptive parameter mode.
- “delta_luma_log2_gain_denom” on the tenth line is the difference from the past value of the base-2 logarithm of the denominator of the gain of the luminance component.
- “Delta_luma_gain_dr” on the 11th line is the difference from the past value of the numerator value of the gain of the luminance component.
- the “delta_luma_offset_dr” on the 12th line is the difference from the past value of the value of the offset of the luminance component.
- the denominator of the gain may be made common.
- the “delta_chroma_offset_dr [j]” on line 18 is the difference from the past value of the value of the offset of the j-th color difference component.
- when “chroma_gain_dr_flag” on the seventh line indicates zero, the coding of the differences of the prediction parameters of these color difference components may be omitted. In this case, the past values of the prediction parameters may be used as they are in the latest picture or slice (that is, the difference is zero).
- the prediction mode parameter "dr_prediction_model" is encoded for each PPS or slice header.
- the prediction mode is not necessarily the adaptive parameter mode for the previous picture (or the co-located slice of the previous picture) that serves as the basis of the difference.
- when the bit shift mode was set for the basis of the difference, each difference parameter prefixed with “delta_” in the syntax of FIG. 8 represents the difference calculated by subtracting the gain or offset value corresponding to the bit shift amount from the latest value (gain or offset) of the prediction parameter.
- the value of the corresponding offset may be zero regardless of the bit shift amount n_shift.
- when the fixed parameter mode was set for the basis of the difference, each difference parameter in the syntax of FIG. 8 represents the difference calculated by subtracting the fixed parameter value (gain g_i_fixed or offset o_i_fixed) from the latest value (gain or offset) of the prediction parameter.
- the gain g_i_fixed and the offset o_i_fixed are not coded and are stored in advance at the encoder and the decoder.
- FIG. 9 illustrates in tabular form the bases of the gain and offset differences described herein. Note that if there is no past value, at the beginning of a sequence or the like, the basis of the difference may be zero or may be a fixed parameter value (gain g_i_fixed or offset o_i_fixed).
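The table of FIG. 9 is not reproduced here, but the selection of the basis of the difference described in the preceding paragraphs can be sketched as follows. The function name and the concrete fixed values (n_shift, g_fixed, o_fixed) are illustrative assumptions.

```python
def difference_basis(prev_mode, prev_gain, prev_offset,
                     n_shift=2, g_fixed=4, o_fixed=0):
    """Return (gain_basis, offset_basis) used when encoding delta_ parameters.
    n_shift, g_fixed and o_fixed are illustrative assumed constants."""
    if prev_mode is None:           # start of sequence: no past value
        return g_fixed, o_fixed     # (or zero, depending on the convention)
    if prev_mode == 0:              # previous mode: bit shift mode
        return 1 << n_shift, 0      # gain equivalent to the shift, offset 0
    if prev_mode == 1:              # previous mode: fixed parameter mode
        return g_fixed, o_fixed
    return prev_gain, prev_offset   # adaptive parameter mode: reuse past values

# Encoding the current adaptive gain/offset as a difference from the basis:
gain, offset = 5, 3
gb, ob = difference_basis(prev_mode=0, prev_gain=None, prev_offset=None)
delta_gain, delta_offset = gain - gb, offset - ob
print(delta_gain, delta_offset)  # 1 3
```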
- in the foregoing, an example was explained in which the prediction mode parameter and the prediction parameters for DR prediction are coded for each picture and inserted into the PPS. However, assuming an application in which different luminance dynamic ranges are used for partial areas of an image, it is useful to encode the prediction mode parameter and the differences of the prediction parameters for each slice or tile.
- the base layer image IM_B1 is divided into four tiles T_B1, T_B2, T_B3 and T_B4.
- the enhancement layer image IM_E1 is divided into four tiles T_E1, T_E2, T_E3 and T_E4. Each of the four tiles displays an image captured by a different camera.
- for example, the tiles T_B2 and T_B4 of the base layer image IM_B1 display SDR versions of composite video from cameras installed at four locations.
- the tiles T_E2 and T_E4 of the enhancement layer image IM_E1 can display HDR versions of the same composite video as the tiles T_B2 and T_B4.
- FIG. 11 shows the syntax defined by Non-Patent Document 1 for the weighted prediction related parameters.
- “luma_log2_weight_denom” in the second line of FIG. 11 and “delta_chroma_log2_weight_denom” in the fourth line specify the denominator values of the weights of the luminance component and the color difference component, respectively, which are common to the L0 reference frame and the L1 reference frame that can be used in weighted prediction.
- Lines 5 to 20 specify the remaining weighted prediction related parameters for the L0 reference frame.
- Lines 21 to 38 identify the remaining weighted prediction related parameters for the L1 reference frame, if bi-prediction is possible. The meaning of each parameter is described in Non-Patent Document 1.
- Table 1 shows an example of the mapping between the weighted prediction related parameters shown in FIG. 11 and the parameters of the DR prediction illustrated in FIG.
- all the parameters other than the extension flag “dr_prediction_flag” and the prediction mode parameter “dr_prediction_model” can be mapped to corresponding parameters for weighted prediction.
- the roles of the individual parameters may differ (for example, the values of the weighted prediction related parameters do not necessarily mean differences from past values), but the types of parameters mapped to each other are equivalent. Since only one reference frame (the base layer image) is used in DR prediction, the argument i corresponding to the reference frame number and the variable “num_ref_idx_l0_active_minus1” become unnecessary.
- the lossless encoding unit 16 may encode, for example, a prediction parameter of DR prediction in a header (slice header) having a syntax shared with weighted prediction related parameters.
- the extension flag “dr_prediction_flag” and the prediction mode parameter “dr_prediction_model” may be separately encoded in the SPS, PPS or slice header.
- a flag indicating which of the weighted prediction related parameter and the parameter for DR prediction is coded may be additionally coded.
- the lossless encoding unit 16 does not encode the enhancement layer specific weighted prediction related parameters.
- the syntax of FIG. 11 defined by Non-Patent Document 1 is not used for weighted prediction in the enhancement layer. Therefore, the syntax definition can be efficiently utilized by coding the prediction parameters of DR prediction with the same syntax in place of the weighted prediction related parameters.
- the syntax for parameters of the L1 reference frame (lines 21 to 38 in FIG. 11) may not be used.
- the value of the variable “num_ref_idx_l0_active_minus1” corresponding to the reference frame number (minus 1) may be considered to be zero (that is, the number of base layer images whose dynamic range is to be converted is 1).
- the weighted prediction related parameters may be encoded in the enhancement layer, and a portion thereof may be reused for DR prediction.
- the denominators specified by “luma_log2_weight_denom” and “delta_chroma_log2_weight_denom” illustrated in FIG. 11 may be reused as the denominators of the gains of the luminance component and the color difference component.
- the lossless encoding unit 16 does not encode “delta_luma_log2_gain_denom” and “delta_chroma_log2_gain_denom” illustrated in FIG. 8. Thereby, it is possible to reduce the amount of code additionally required for DR prediction and to improve coding efficiency.
- the DR conversion unit 44 selectively uses the first version and the second version of these prediction parameters in order to predict the image of the enhancement layer, that is, to generate a reference image for inter-layer prediction.
- the parameter calculator 43 may calculate the difference from the past value of the first version of the prediction parameter and the difference from the past value of the second version of the prediction parameter.
- the lossless encoding unit 16 encodes the value (or the difference thereof) calculated for the first version into the portion for the L0 reference frame of the syntax shared with the weighted prediction related parameter. Also, the lossless encoding unit 16 encodes the value (or the difference thereof) calculated for the second version into the portion for the L1 reference frame of the syntax shared with the weighted prediction related parameter.
- the first and second versions of the prediction parameter are selectively used according to the band to which the pixel value belongs.
- the band of pixel values here is not limited, but may correspond to brightness for the luminance component and vividness for the color difference component.
- FIG. 12 is an explanatory diagram for describing selective use of a prediction parameter according to a band to which a pixel value belongs.
- FIG. 12 shows two bars representing the ranges of pixel values of the luminance component (Y) and the color difference components (Cb/Cr). When the bit depth is 8 bits, these ranges are 0 through 255.
- the range of the luminance component is divided into a lower band Pb11 and an upper band Pb12 on the basis of the boundary value, and in the example of FIG. 12, the boundary value of the luminance component is equal to 128 (ie, the center of the range).
- when the pixel value belongs to the band Pb11, the DR conversion unit 44 can use the first version of the prediction parameter when calculating the predicted pixel value from that pixel value.
- when the pixel value belongs to the band Pb12, the DR conversion unit 44 may use the second version of the prediction parameter when calculating the predicted pixel value from that pixel value.
- the range of the color difference component is divided into an inner band Pb21 and an outer band Pb22 on the basis of two boundary values, and in the example of FIG. 12, the boundary values of the color difference component are 64 and 191 (that is, equal to 1/4 and 3/4 of the range).
- when the pixel value belongs to the band Pb21, the DR conversion unit 44 can use the first version of the prediction parameter when calculating the predicted pixel value from that pixel value.
- when the pixel value belongs to the band Pb22, the DR conversion unit 44 may use the second version of the prediction parameter when calculating the predicted pixel value from that pixel value.
- the above boundary value for switching the version to be used may be known in advance to both the encoder and the decoder.
- the lossless encoding unit 16 may further encode the boundary information that specifies the above-mentioned boundary value.
- for the luminance component, the boundary information may indicate, for example, an adjustment value to be added to a reference value at the center of the range (for example, 128 if the bit depth is 8 bits).
- for the chrominance component, the boundary information can indicate an adjustment value that is subtracted from the first reference value equal to 1/4 of the range and added to the second reference value equal to 3/4 of the range.
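Assuming an 8-bit base layer and the association of the first version with the bands Pb11/Pb21 and the second version with the bands Pb12/Pb22 described above, the band-dependent version selection with adjustable boundaries could be sketched as follows; the function names and the sign conventions of the adjustment values are assumptions.

```python
BIT_DEPTH = 8  # assumed bit depth of the base layer

def luma_version(y, d_y=0):
    """First version for the lower band Pb11, second for the upper band Pb12.
    Boundary = center of the range (128 for 8 bits) plus the adjustment d_y."""
    border = (1 << (BIT_DEPTH - 1)) + d_y
    return 1 if y < border else 2

def chroma_version(c, d_c=0):
    """First version for the inner band Pb21, second for the outer band Pb22.
    Boundaries = 1/4 and 3/4 of the range (64 and 191 for 8 bits), moved
    outward by the adjustment d_c."""
    lo = (1 << (BIT_DEPTH - 2)) - d_c           # 64 - d_c
    hi = 3 * (1 << (BIT_DEPTH - 2)) - 1 + d_c   # 191 + d_c
    return 1 if lo <= c <= hi else 2

print(luma_version(100), luma_version(200))    # 1 2
print(chroma_version(128), chroma_version(20)) # 1 2
```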
- FIG. 13 is a graph schematically representing the prediction model realized by the first method for luminance components.
- the horizontal axis of the graph of FIG. 13 corresponds to the pixel value of the luminance component of the base layer, and the pixel value is expressed by SDR.
- the vertical axis corresponds to the pixel value of the luminance component of the enhancement layer, and the pixel value is expressed by HDR.
- a thick line indicates the locus of predicted pixel values of the enhancement layer predicted from the pixel values of the base layer using the gain and offset of the adaptive parameter mode. This locus has a different slope and intercept in the band Pb11 on the left side and the band Pb12 on the right side of the boundary value Y_border on the horizontal axis, and has a polygonal (piecewise linear) shape.
- the boundary value Y_border may be equal to half (Y_max/2) of the maximum value Y_max of the pixel values of the luminance component of the base layer, or may be equal to Y_max/2 plus the adjustment value dY.
- additionally coding the adjustment value dY means that the boundary value Y_border can be controlled adaptively. In this case, as the flexibility of the prediction model of DR prediction is extended, it is possible to further improve the prediction accuracy.
- FIG. 14 is an explanatory diagram for describing an example of syntax related to the method described with reference to FIG.
- the line numbers in the syntax shown in FIG. 14 correspond to the line numbers of the syntax of the weighted prediction related parameter shown in FIG.
- the part for the parameters of the L1 reference frame in the syntax of the weighted prediction related parameters is omitted.
- an additional flag “inter_layer_pred_flag” is defined.
- the flag "inter_layer_pred_flag” is set to True if this syntax is used for DR prediction parameters.
- the parameter "delta_pix_value_luma [i]” after the 13th line is the above-mentioned boundary information about the luminance component.
- the parameter “delta_pix_value_luma [i]” specifies, for the luminance component, an adjustment value for the luminance component to be added to the reference value at the center of the range.
- the parameter “delta_pix_value_chroma [i] [j]” after the 18th line is the above-mentioned boundary information about the color difference component.
- the parameter “delta_pix_value_chroma[i][j]” specifies, for the color difference component, the adjustment value that is subtracted from the first reference value equal to 1/4 of the range and added to the second reference value equal to 3/4 of the range. Note that these additional parameters shown in FIG. 14 may be included in the extension of the slice header instead of the slice header itself.
- the first and second versions of the prediction parameter are selectively used according to the image area to which the pixel belongs.
- An image area here may correspond to an individual area that may be formed by segmenting a picture, slice or tile.
- FIG. 15 is an explanatory diagram for describing selective use of a prediction parameter according to an image region to which a pixel belongs.
- the image IM2 is shown.
- the image IM2 may be an up-sampled image output, for example, from the up-sampling unit 41.
- the image IM2 is segmented into an upper image area PA1 and a lower image area PA2.
- the DR conversion unit 44 can use, for example, the first version of the prediction parameter when calculating predicted pixel values for pixels belonging to the image area PA1, and the second version of the prediction parameter when calculating predicted pixel values for pixels belonging to the image area PA2.
- the area boundary for switching the version to be used may be known in advance to both the encoder and the decoder (for example, a boundary dividing a picture, slice or tile equally into two).
- the lossless encoding unit 16 may further encode the boundary information specifying the region boundary.
- the boundary information may be, for example, information specifying the first LCU (LCU L_border in the figure) following the region boundary in raster scan order.
- the first LCU following the area boundary may be specified by the number of LCUs counted from a given location of the picture, slice or tile, or may be specified by a flag included in the header of the first LCU .
- the predetermined place in the former case may be the beginning of a picture, slice or tile, or may be an intermediate point thereof (eg, a place exactly half the total number of LCUs).
- size information directly indicating the size of a slice is not encoded.
- the decoder usually does not recognize the size of the slice while decoding the slice (before the decoding of the slice is completed). Therefore, additionally coding the boundary information specifying the region boundary is also useful in the case where the region boundary is fixed (for example, a boundary dividing a slice into two equally).
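A decoder-side sketch of this region-based switching, under the assumption that the boundary LCU is specified as an offset from the midpoint of the slice in raster scan order (the “delta_num_ctb”-style convention described below); all names here are illustrative.

```python
def region_version(lcu_index, delta_num_ctb, total_lcus, from_midpoint=True):
    """Return which prediction-parameter version applies to an LCU.
    Assumption: the first LCU after the region boundary is the midpoint of
    the slice shifted by the signed offset delta_num_ctb."""
    base = total_lcus // 2 if from_midpoint else 0
    border = base + delta_num_ctb   # index of the first LCU after the boundary
    return 1 if lcu_index < border else 2

total = 64
print(region_version(10, 0, total))   # 1  (before the midpoint boundary)
print(region_version(40, 0, total))   # 2
print(region_version(40, 12, total))  # 1  (boundary moved to LCU 44)
```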
- FIG. 16 is an explanatory diagram for describing an example of syntax related to the method described with reference to FIG.
- the line numbers in the syntax shown in FIG. 16 correspond to the line numbers of the syntax of the weighted prediction related parameter shown in FIG.
- the part for the parameters of the L1 reference frame in the syntax of the weighted prediction related parameters is omitted.
- an additional flag “inter_layer_pred_flag” similar to that shown in FIG. 14 is defined.
- the flag “inter_layer_pred_flag” is set to True if this syntax is used for DR prediction parameters.
- the parameter “delta_num_ctb” after the flag is the boundary information described above.
- the parameter “delta_num_ctb” is information that specifies the first LCU following the area boundary in raster scan order by the number of LCU. If the number of LCUs is counted from the midpoint of a picture, slice or tile, the parameter “delta_num_ctb” may indicate a positive or negative integer. Note that these additional parameters shown in FIG. 16 may also be included in the extension of the slice header instead of the slice header.
- when different versions of the prediction parameters can be used for each image area, it is possible to apply the optimal prediction model to each of the image areas in DR prediction.
- the optimal combination of gain and offset is different for bright and dark areas in an image.
- by using gains and offsets optimized for each region, prediction errors in DR prediction can be reduced and coding efficiency can be improved.
- the additional coding of the boundary information identifying the region boundaries means that the location of the region boundaries can be adaptively controlled. In this case, it is possible to further reduce the prediction error of the DR prediction by moving the region boundary in accordance with the content of the image.
- the method described in this section to provide two versions of the prediction parameter may be applied only to the luminance component and not to the color difference component.
- in this case, the prediction parameters (typically, the gain and the offset) for the chrominance component are encoded and decoded in the portion for the L0 reference frame in the syntax of the weighted prediction related parameters, regardless of the band to which the pixel value belongs or the image region to which the pixel belongs.
- the parameters for the chrominance components included in the portion for the L1 reference frame may be set to any value (e.g. zero) that can be mapped to the shortest code word by variable length coding, and that value can be ignored in the DR conversion.
- when the chroma format indicates that the resolution of the chrominance component is equal to the resolution of the luminance component, two versions of the prediction parameter may be provided for both the luminance component and the chrominance component.
- when the chroma format indicates that the resolution of the chrominance component is lower than that of the luminance component, only one version of the prediction parameter may be provided for the chrominance component. For example, when the chroma format is 4:2:0, the resolution of the chrominance component is lower than that of the luminance component in both the vertical and horizontal directions. When the chroma format is 4:2:2, the resolution of the chrominance component is lower than that of the luminance component in the horizontal direction. Therefore, the code amount of the prediction parameters can be effectively reduced by executing the prediction more coarsely for only the color difference component in these cases.
- the image size of the base layer is 2K (for example, 1920 ⁇ 1080 pixels), the dynamic range is SDR, and the bit depth is 8 bits.
- the image size of the enhancement layer is 4K (for example, 3840 ⁇ 2160 pixels), the dynamic range is HDR, and the bit depth is 10 bits.
- the up-sampling unit 41 performs bit shift together with up-sampling (step S1). For example, the addition of two terms in the filter operation may correspond to a left shift of one bit, and the addition of four terms may correspond to a left shift of two bits. Thus, bit shifting can be performed substantially simultaneously with upsampling.
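The remark that a two-term or four-term filter addition can absorb a one- or two-bit left shift can be illustrated with a toy two-tap (bilinear) upsampler, which is not the actual HEVC interpolation filter: the normalization right shift of the average is simply reduced by the bit-depth difference m.

```python
M = 2  # assumed bit-depth difference (8-bit base -> 10-bit enhancement)

def upsample_2x_with_shift(row):
    """Horizontal 2x upsampling with the M-bit left shift folded into the
    two-tap filter normalization (toy bilinear filter, not the HEVC one)."""
    out = []
    for i, p in enumerate(row):
        out.append(p << M)                 # co-located sample: plain shift
        q = row[min(i + 1, len(row) - 1)]  # edge samples are replicated
        # (p + q) >> 1 would be the plain average; averaging and then
        # shifting left by M combine into a single left shift by M - 1.
        out.append((p + q) << (M - 1))     # interpolated sample
    return out

print(upsample_2x_with_shift([10, 20]))  # [40, 60, 80, 80]
```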
- the DR conversion unit 44 converts the dynamic range of the up-sampled image input from the up-sampling unit 41 (step S3).
- the transformation of the dynamic range here may be a linear transformation similar to weighted prediction.
- the image size of the base layer is 2K
- the dynamic range is SDR
- the bit depth is 8 bits
- the image size of the enhancement layer is 2K
- the dynamic range is HDR
- the bit depth is 10 bits.
- the up-sampling unit 41 performs only bit shifting because the resolution is the same between layers (step S2).
- the DR conversion unit 44 converts the dynamic range of the up-sampled image input from the up-sampling unit 41 (step S3).
- the image size of the base layer is 2K
- the dynamic range is SDR
- the bit depth is 8 bits.
- the image size of the enhancement layer is 4K
- the dynamic range is SDR
- the bit depth is 10 bits.
- the up-sampling unit 41 performs bit shift together with up-sampling (step S1). Since the dynamic range is the same between layers, the DR conversion unit 44 does not execute DR conversion.
- the upsampling and the bit shift are simultaneously performed, which reduces the processing cost required for the entire inter-layer processing as compared to the case where they are separately performed.
- the bit shift is performed independently from the DR conversion, even though the DR conversion itself includes an operation similar to the bit shift, so there is room for improvement in terms of processing cost.
- the DR conversion unit 44 can execute the bit shift together with the calculation of the DR conversion.
- when the syntax for weighted prediction is reused, the DR conversion operation can be expressed as follows.
- X_k is the pixel value of the k-th color component before conversion, and X_k,Pred is the pixel value of the k-th color component after conversion.
- w_k, n_k and o_k are, respectively, the numerator of the weight (the gain), the base-2 logarithm of the denominator of the weight, and the offset applied to the k-th color component.
- when the difference in bit depth between layers is m bits, the operation in which the DR conversion unit 44 simultaneously performs a bit shift (left shift) of m bits can be expressed as follows.
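Although equations (7) and (8) themselves are not reproduced in this text, the weighted-prediction-style linear transform and its variant with the m-bit shift folded into the denominator exponent can be sketched as follows (assuming n_k >= m); this reconstruction is an assumption based on the surrounding definitions.

```python
def dr_convert(x, w, n, o):
    """Linear DR conversion in weighted-prediction form:
    x_pred = ((w * x) >> n) + o."""
    return ((w * x) >> n) + o

def dr_convert_with_shift(x, w, n, o, m):
    """DR conversion with the m-bit depth shift folded in: equivalent to
    dr_convert(x << m, w, n, o) in a single operation, assuming n >= m."""
    return ((w * x) >> (n - m)) + o

x, w, n, o, m = 100, 20, 4, 3, 2
print(dr_convert(x << m, w, n, o))           # 503 (separate shift, then convert)
print(dr_convert_with_shift(x, w, n, o, m))  # 503 (combined in one step)
```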
- since the bit shift can be performed either simultaneously with up-sampling or simultaneously with DR conversion, the timing for performing the bit shift may differ between the encoder and the decoder (or between decoders with different implementations).
- if, for example, the encoder performs the bit shift simultaneously with DR conversion while the decoder performs the bit shift simultaneously with up-sampling, the accuracy of inter-layer prediction decreases.
- the lossless encoding unit 16 further encodes a bit shift control flag which controls the execution timing of the bit shift.
- the bit shift control flag is, for example, a control parameter indicating whether, when the bit depth of the enhancement layer is larger than the bit depth of the base layer, the bit shift for inter-layer prediction should be performed simultaneously with DR conversion or simultaneously with up-sampling.
- 18A to 18C are explanatory diagrams for describing a new method for suppressing the processing cost of inter-layer prediction.
- the attributes of the base layer and the enhancement layer in the example of FIG. 18A are the same as in FIG. 17A.
- the bit shift control flag indicates "1" (the bit shift is performed simultaneously with the weighted prediction).
- the up-sampling unit 41 performs up-sampling without performing bit shift for increasing the bit depth (step S4).
- the DR conversion unit 44 performs bit shift at the same time as converting the dynamic range of the up-sampled image input from the up-sampling unit 41 as in the above equation (8) (step S6).
- the bit shift control flag indicates "1" (the bit shift is performed simultaneously with the weighted prediction).
- the up-sampling unit 41 performs neither bit shift nor up-sampling.
- the DR conversion unit 44 performs bit shift at the same time as converting the dynamic range of the image of the base layer as in the above equation (8) (step S6).
- the bit shift control flag indicates "0" (the bit shift is performed simultaneously with the up-sampling).
- the up-sampling unit 41 performs bit shift together with up-sampling (step S5). Since the dynamic range is the same between layers, the DR conversion unit 44 does not execute DR conversion.
- it is understood that the new method reduces the processing steps, especially in the second example (FIGS. 17B and 18B) in which the image size does not change between layers.
- the presence of the bit shift control flag makes it possible to adaptively switch the timing of performing the bit shift and minimize the number of processing steps of DR prediction.
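The three cases of FIGS. 18A to 18C can be sketched as a single dispatch on the bit shift control flag. All names here are illustrative: nearest-neighbour repetition stands in for the real interpolation filter, and the DR conversion is the weighted-prediction-style gain/offset operation with the shift folded in.

```python
def upsample(pixels, scale, shift=0):
    # nearest-neighbour repetition stands in for the real interpolation filter
    return [p << shift for p in pixels for _ in range(scale)]

def convert_dr(pixels, w, n, o, shift=0):
    # weighted-prediction style conversion; `shift` folds in the bit shift
    return [(((w * p) << shift) >> n) + (o << shift) for p in pixels]

def inter_layer_predict(pixels, scale, m, do_dr, flag, w=1, n=0, o=0):
    """m = bit-depth difference; flag = bit shift control flag (sketch)."""
    if do_dr:
        if flag:  # "1": bit shift together with DR conversion (steps S4, S6)
            return convert_dr(upsample(pixels, scale), w, n, o, shift=m)
        # "0": bit shift together with up-sampling
        return convert_dr(upsample(pixels, scale, shift=m), w, n, o)
    # same dynamic range between layers: shift with up-sampling (step S5)
    return upsample(pixels, scale, shift=m)
```

Note that the two flag settings can give slightly different rounding, which is why, as stated above, encoder and decoder must agree on the timing.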
- FIG. 19 is an explanatory diagram for describing an example of a syntax related to the method described with reference to FIGS. 18A to 18C.
- the line numbers in the syntax shown in FIG. 19 correspond to the line numbers of the syntax of the weighted prediction related parameter shown in FIG.
- the part of the weighted prediction related parameter syntax for the parameters of the L1 reference frame is omitted.
- two coding parameters “weighted_prediction_and_bit_shift_luma_flag” and “weighted_prediction_and_bit_shift_chroma_flag” are defined, which are encoded when the layer ID is not zero (that is, the layer is an enhancement layer).
- the former is a bit shift control flag for controlling the execution timing of the bit shift of the luminance component.
- the latter is a bit shift control flag for controlling the execution timing of the bit shift of the color difference component.
- these flags are set to True if the bit shift should be performed simultaneously with DR conversion, and to False if it should be performed simultaneously with up-sampling. Since the image size and bit depth can be defined differently for each color component, encoding the bit shift control flag separately for the luminance component and the color difference component makes it possible to control the bit shift execution timing flexibly according to the definition of those attributes.
- the present invention is not limited to this example, and a single bit shift control flag may be encoded for both the luminance component and the color difference component.
- when the syntax of FIG. 19 is used for weighted prediction in inter prediction within a layer rather than for inter-layer prediction, encoding of the bit shift control flag may be omitted, or the flag may be set to a particular value (e.g., zero).
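Under the Fig. 19 syntax, decoding the two flags might look like the following sketch. `read_flag` is a hypothetical one-bit reader, not an identifier from the patent; the default of zero models the "omitted or set to a particular value" case.

```python
def read_bit_shift_control_flags(read_flag, nuh_layer_id, inter_layer_wp):
    """Sketch of the Fig. 19 flags: present only for enhancement layers
    (layer ID != 0) used in inter-layer prediction; otherwise treated as 0."""
    if nuh_layer_id != 0 and inter_layer_wp:
        luma = read_flag('weighted_prediction_and_bit_shift_luma_flag')
        chroma = read_flag('weighted_prediction_and_bit_shift_chroma_flag')
    else:
        luma = chroma = 0  # omitted: behave as if the flag were zero
    return luma, chroma
```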
- FIG. 20 is a flowchart illustrating an example of a schematic processing flow at the time of encoding according to an embodiment. Note that, for the sake of simplicity, processing steps not directly related to the technology of the present disclosure are omitted from the drawings.
- the BL encoding unit 1a executes encoding processing of the base layer to generate an encoded stream of the base layer (step S11).
- the common memory 2 buffers an image of the base layer (one or both of the decoded image and the prediction error image) generated in the coding process of the base layer and parameters to be reused between the layers (step S12).
- the parameters reused between layers may include weighted prediction related parameters.
- the EL encoding unit 1b performs an enhancement layer encoding process to generate an enhancement layer encoded stream (step S13).
- the image of the base layer buffered by the common memory 2 is upsampled by the DR prediction unit 40, and its dynamic range is converted from SDR to HDR. Then, the image of the base layer after DR conversion may be used as a reference image in inter-layer prediction.
- the multiplexing unit 3 multiplexes the encoded stream of the base layer generated by the BL encoding unit 1a and the encoded stream of the enhancement layer generated by the EL encoding unit 1b, and performs multiplexing of multiple layers. To generate an integrated stream (step S14).
- FIG. 21 is a flowchart showing a first example of the flow of the DR prediction process in the encoding process of the enhancement layer. The DR prediction process described here is repeated for each picture or slice.
- the upsampling unit 41 upsamples the image of the base layer acquired from the common memory 2 according to the resolution ratio between the base layer and the enhancement layer (step S20).
- the prediction mode setting unit 42 sets one of prediction mode candidates for DR prediction as a picture (or a slice) (step S22).
- the prediction mode setting unit 42 may set a prediction mode defined in advance, or may dynamically select a prediction mode based on an evaluation of coding efficiency or prediction accuracy for each prediction mode candidate.
- the lossless encoding unit 16 encodes a prediction mode parameter indicating the prediction mode set by the prediction mode setting unit 42 (step S25).
- the prediction mode parameter encoded by the lossless encoding unit 16 is inserted into, for example, PPS or a slice header.
- the subsequent processing branches depending on the prediction mode set by the prediction mode setting unit 42 (steps S26 and S28).
- the parameter calculator 43 calculates values of optimal gains and offsets to be used for DR prediction (conversion) (step S30).
- the parameter calculator 43 may further calculate the difference between the calculated optimum gain and offset values from the past values.
- the lossless encoding unit 16 encodes the gain and the offset (or their difference) calculated by the parameter calculation unit 43 (step S32).
- These prediction parameters encoded by the lossless encoding unit 16 are inserted into, for example, PPS or a slice header.
- the DR conversion unit 44 calculates the predicted pixel value of each pixel by multiplying the pixel values of the base layer after upsampling by the adaptively calculated gain and further adding the offset, according to equations (4) to (6) (step S34).
- the DR conversion unit 44 calculates the predicted pixel value of each pixel by shifting the pixel value of the up-sampled base layer to the left by a predetermined bit shift amount according to equations (1) to (3) (step S36).
- the DR conversion unit 44 stores the image of the base layer after DR conversion, that is, the predicted image as the HDR image, in the frame memory 25 (step S38).
- thereafter, if there is an unprocessed next picture or slice, the process returns to step S20, and the above-described process is repeated for the next picture or slice (step S40).
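The patent leaves open how the optimal gain and offset of step S30 are found. Given the per-component linear model between the layers, one plausible choice (an assumption of this sketch, not a method stated in the text) is an ordinary least-squares fit of the enhancement-layer pixels against the up-sampled base-layer pixels:

```python
def estimate_gain_offset(bl, el):
    """Least-squares fit el ≈ gain * bl + offset for one color component.
    `bl`: up-sampled base-layer pixels; `el`: co-located EL pixels."""
    n = len(bl)
    sx, sy = sum(bl), sum(el)
    sxx = sum(x * x for x in bl)
    sxy = sum(x * y for x, y in zip(bl, el))
    gain = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    offset = (sy - gain * sx) / n
    return gain, offset
```

In practice the gain would then be quantized to the numerator / power-of-two-denominator form used by the weighted-prediction syntax.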
- FIG. 22 is a flowchart illustrating a second example of the flow of the DR prediction process in the encoding process of the enhancement layer.
- the prediction mode setting unit 42 sets one of prediction mode candidates for DR prediction as a sequence (step S21).
- the lossless encoding unit 16 encodes a prediction mode parameter indicating the prediction mode set by the prediction mode setting unit 42 (step S23).
- the prediction mode parameter encoded by the lossless encoding unit 16 is inserted into the SPS.
- steps S24 to S40 are repeated for each picture or slice in the sequence.
- the upsampling unit 41 upsamples the image of the base layer obtained from the common memory 2 in accordance with the resolution ratio between the base layer and the enhancement layer (step S24).
- the processing branches depending on the prediction mode set by the prediction mode setting unit 42 (steps S26 and S28).
- the parameter calculator 43 calculates values of optimal gains and offsets to be used for DR prediction (conversion) (step S30).
- the parameter calculator 43 may further calculate the difference between the calculated optimum gain and offset values from the past values.
- the lossless encoding unit 16 encodes the gain and the offset (or their difference) calculated by the parameter calculation unit 43 (step S32).
- These prediction parameters encoded by the lossless encoding unit 16 are inserted into, for example, PPS or a slice header.
- the DR conversion unit 44 calculates the predicted pixel value of each pixel by multiplying the pixel values of the base layer after upsampling by the adaptively calculated gain and further adding the offset, according to equations (4) to (6) (step S34).
- the DR conversion unit 44 calculates the predicted pixel value of each pixel by shifting the pixel value of the up-sampled base layer to the left by a predetermined bit shift amount according to equations (1) to (3) (step S36).
- the DR conversion unit 44 stores the image of the base layer after DR conversion, that is, the predicted image as the HDR image, in the frame memory 25 (step S38).
- if there is an unprocessed next picture or slice (step S40), the process returns to step S24, and the upsampling and DR conversion are repeated for the next picture or slice.
- it is further determined whether a next sequence exists (step S42); if there is a next sequence, the process returns to step S21, and the above-described process is repeated for the next sequence.
- FIG. 23 is a flowchart illustrating a third example of the flow of the DR prediction process in the encoding process of the enhancement layer.
- the prediction mode setting unit 42 sets one of prediction mode candidates for DR prediction as a sequence (step S21).
- the lossless encoding unit 16 encodes a prediction mode parameter indicating the prediction mode set by the prediction mode setting unit 42 (step S23).
- the prediction mode parameter encoded by the lossless encoding unit 16 is inserted into the SPS.
- steps S24 to S41 are repeated for each slice in the sequence.
- the upsampling unit 41 upsamples the image of the base layer obtained from the common memory 2 in accordance with the resolution ratio between the base layer and the enhancement layer (step S24).
- the upsampling filter operations here may or may not include bit shifts.
- the processing branches depending on the prediction mode set by the prediction mode setting unit 42 (steps S26 and S28).
- the parameter calculator 43 calculates values of optimal gains and offsets to be used for DR prediction (conversion) (step S30).
- the parameter calculator 43 may further calculate the difference between the calculated optimum gain and offset values from the past values.
- the lossless encoding unit 16 encodes the calculated gain and offset (or their difference) by reusing the syntax of the weighted prediction related parameter (step S33).
- These prediction parameters encoded by the lossless encoding unit 16 are inserted into the slice header. If the bit shift control flag described above is employed in the syntax, then the encoded bit shift control flag may also be inserted into the slice header here.
- the DR conversion unit 44 calculates the predicted pixel value of each pixel by multiplying the pixel values of the base layer after upsampling by the adaptively calculated gain and further adding the offset, according to equations (4) to (6) (step S34). If the bit shift is not performed in step S24, the calculation of the predicted pixel value here may include the bit shift.
- the DR conversion unit 44 calculates the predicted pixel value of each pixel by shifting the pixel value of the up-sampled base layer to the left by a predetermined bit shift amount according to equations (1) to (3) (step S36).
- the DR conversion unit 44 stores the image of the base layer after DR conversion, that is, the predicted image as the HDR image, in the frame memory 25 (step S38).
- when there is an unprocessed next slice in the sequence, the process returns to step S24, and upsampling and DR conversion are repeated for the next slice (step S41).
- when DR conversion is completed for all slices in the sequence, it is further determined whether a next sequence exists (step S42). Then, if there is a next sequence, the process returns to step S21, and the above-described process is repeated for the next sequence.
- FIG. 24 is a flowchart showing a fourth example of the flow of the DR prediction process in the encoding process of the enhancement layer.
- the prediction mode setting unit 42 sets one of prediction mode candidates for DR prediction as a sequence (step S21).
- the lossless encoding unit 16 encodes a prediction mode parameter indicating the prediction mode set by the prediction mode setting unit 42 (step S23).
- the prediction mode parameter encoded by the lossless encoding unit 16 is inserted into the SPS.
- steps S24 to S41 are repeated for each slice in the sequence.
- the upsampling unit 41 upsamples the image of the base layer obtained from the common memory 2 in accordance with the resolution ratio between the base layer and the enhancement layer (step S24).
- the upsampling filter operations here may or may not include bit shifts.
- the processing branches depending on the prediction mode set by the prediction mode setting unit 42 (steps S26 and S28).
- the parameter calculator 43 calculates a first version of the gain and offset to be used for DR prediction (conversion) (step S31a).
- the parameter calculator 43 calculates a second version of the gain and the offset (step S31b).
- the first and second versions may each include a set of optimal values to use for the first and second bands of the range of pixel values. Instead, these first and second versions may contain sets of optimal values to be used for the first and second image areas, respectively.
- the parameter calculation unit 43 may further calculate, for the first version and the second version, the differences from the past values of the gain and offset values, respectively.
- the lossless encoding unit 16 encodes the prediction parameters (or their differences) calculated for the first version and the second version in the portion for the L0 reference frame and the portion for the L1 reference frame of the weighted prediction related parameter syntax, respectively (step S33b). These prediction parameters encoded by the lossless encoding unit 16 are inserted into the slice header. If the bit shift control flag described above is employed in the syntax, the encoded bit shift control flag may also be inserted into the slice header here.
- the subsequent process flow may be similar to that of the third example described with reference to FIG. 23, except that the version of the prediction parameter may be switched in step S34 according to the band to which the pixel value belongs or the image region to which the pixel belongs.
- the lossless encoding unit 16 may additionally encode, for example in the slice header or a slice header extension, boundary values between bands for switching versions of the prediction parameters, or boundary information specifying the boundary between image regions.
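Switching between the two parameter versions carried in the L0 and L1 parts of the weighted-prediction syntax could look like the following sketch. The band-based variant is shown; the function name and the tuple layout (weight numerator, log2 denominator, offset) are assumptions.

```python
def predict_pixel(x, versions, band_boundary):
    """Pick the first or second prediction-parameter version by the band
    the pixel value falls in, then apply the gain/offset conversion."""
    w, n, o = versions[0] if x < band_boundary else versions[1]
    return ((w * x) >> n) + o
```

The region-based variant would differ only in testing the pixel position against the signaled region boundary instead of the pixel value.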
- DR conversion is performed after up-sampling (and bit shift as needed).
- the flowcharts of FIGS. 21 to 24 also follow such a processing order.
- the processing cost of DR conversion is proportional to the number of pixels to be converted
- performing DR conversion on pixels increased by upsampling is not optimal in terms of processing cost.
- performing DR conversion on a pixel having an expanded bit depth after bit shifting means that the processing resources required for the DR conversion operation (for example, the required number of bits of registers) are also increased.
- the DR prediction unit 40 may therefore convert the dynamic range of the base layer image first, and then predict the enhancement layer image by up-sampling the converted image.
- FIG. 25A is an explanatory diagram for describing a typical example of a processing order of inter-layer prediction.
- two process steps similar to FIG. 17A are shown as an example.
- the image size and bit depth (for example, 2K / 8 bits) of an image included in slice data of the base layer are increased (for example, to 4K / 10 bits) by upsampling and bit shift, respectively.
- the dynamic range of the up-sampled image is converted to the wider dynamic range of the enhancement layer according to the prediction parameters.
- FIG. 25B is an explanatory diagram for describing an example of a new inter-layer prediction processing order in a modification.
- the DR prediction unit 40 converts the dynamic range of the image included in the slice data of the base layer into the wider dynamic range of the enhancement layer according to the prediction parameter.
- the DR prediction unit 40 increases the image size (for example, 2K) of the image after DR conversion by upsampling (for example, to 4K).
- Bit shifting may be performed simultaneously with DR conversion or may be performed simultaneously with up-sampling.
- the timing at which the bit shift should be performed may be specified by the bit shift control flag. According to such a new processing order, the number of pixels to be converted in DR conversion and the bit depth are reduced as compared to the existing processing order, so that the processing cost of the entire inter-layer processing is further suppressed.
- FIG. 26 is a block diagram showing an example of the configuration of EL decoding unit 6b shown in FIG.
- the EL decoding unit 6b includes an accumulation buffer 61, a lossless decoding unit 62, an inverse quantization unit 63, an inverse orthogonal transformation unit 64, an addition unit 65, a loop filter 66, a rearrangement buffer 67, a D/A (Digital to Analogue) conversion unit 68, a frame memory 69, selectors 70 and 71, an intra prediction unit 80, an inter prediction unit 85, and a DR prediction unit 90.
- the accumulation buffer 61 temporarily accumulates the encoded stream of the enhancement layer input from the demultiplexer 5 using a storage medium.
- the lossless decoding unit 62 decodes quantized data of the enhancement layer from the encoded stream of the enhancement layer input from the accumulation buffer 61 in accordance with the encoding scheme used in the encoding. Further, the lossless decoding unit 62 decodes the information inserted in the header area of the encoded stream.
- the information decoded by the lossless decoding unit 62 may include, for example, information on intra prediction and information on inter prediction. Parameters for DR prediction may also be decoded in the enhancement layer.
- the lossless decoding unit 62 outputs the quantized data to the inverse quantization unit 63. In addition, the lossless decoding unit 62 outputs information on intra prediction to the intra prediction unit 80. In addition, the lossless decoding unit 62 outputs information on inter prediction to the inter prediction unit 85. Also, the lossless decoding unit 62 outputs parameters for DR prediction to the DR prediction unit 90.
- the inverse quantization unit 63 inversely quantizes the quantized data input from the lossless decoding unit 62 in the same quantization step as that used in the encoding, and restores the transform coefficient data of the enhancement layer.
- the inverse quantization unit 63 outputs the restored transform coefficient data to the inverse orthogonal transform unit 64.
- the inverse orthogonal transform unit 64 generates prediction error data by performing inverse orthogonal transform on the transform coefficient data input from the dequantization unit 63 according to the orthogonal transform scheme used at the time of encoding.
- the inverse orthogonal transformation unit 64 outputs the generated prediction error data to the addition unit 65.
- the addition unit 65 adds the prediction error data input from the inverse orthogonal transform unit 64 and the prediction image data input from the selector 71 to generate decoded image data. Then, the addition unit 65 outputs the generated decoded image data to the loop filter 66 and the frame memory 69.
- the loop filter 66 includes a deblocking filter for reducing block distortion, a sample adaptive offset filter for adding an offset value to each pixel value, and an adaptive loop filter for minimizing the error with the original image.
- the loop filter 66 filters the decoded image data input from the adding unit 65, and outputs the decoded image data after filtering to the rearrangement buffer 67 and the frame memory 69.
- the rearrangement buffer 67 rearranges the images input from the loop filter 66 to generate a series of time-series image data. Then, the rearrangement buffer 67 outputs the generated image data to the D / A converter 68.
- the D / A conversion unit 68 converts the digital format image data input from the rearrangement buffer 67 into an analog format image signal. Then, the D / A conversion unit 68 displays the image of the enhancement layer by, for example, outputting an analog image signal to a display (not shown) connected to the image decoding device 60.
- the frame memory 69 stores decoded image data before filtering input from the adding unit 65, decoded image data after filtering input from the loop filter 66, and reference image data of the base layer input from the DR prediction unit 90. Store using a medium.
- the selector 70 switches the output destination of the image data from the frame memory 69 between the intra prediction unit 80 and the inter prediction unit 85 for each block in the image, according to the mode information acquired by the lossless decoding unit 62.
- the selector 70 outputs the decoded image data before filtering supplied from the frame memory 69 to the intra prediction unit 80 as reference image data.
- the selector 70 outputs the decoded image data after filtering to the inter prediction unit 85 as reference image data.
- the selector 70 supplies the reference image data of the base layer to the intra prediction unit 80 or the inter prediction unit 85.
- the selector 71 switches the output source of the predicted image data to be supplied to the addition unit 65 between the intra prediction unit 80 and the inter prediction unit 85 in accordance with the mode information acquired by the lossless decoding unit 62. For example, when the intra prediction mode is designated, the selector 71 supplies predicted image data output from the intra prediction unit 80 to the addition unit 65. Further, when the inter prediction mode is specified, the selector 71 supplies predicted image data output from the inter prediction unit 85 to the addition unit 65.
- the intra prediction unit 80 performs intra prediction processing of the enhancement layer based on the information on intra prediction input from the lossless decoding unit 62 and the reference image data from the frame memory 69 to generate predicted image data. Intra prediction processing is performed for each PU.
- intra BL prediction or intra residual prediction is designated as the intra prediction mode
- the intra prediction unit 80 uses a co-located block in the base layer corresponding to the prediction target block as a reference block.
- the intra prediction unit 80 generates a predicted image based on the decoded image of the reference block.
- intra prediction unit 80 predicts the prediction error of intra prediction based on the prediction error image of the reference block, and generates a predicted image to which the predicted errors are added.
- the intra prediction unit 80 outputs the generated prediction image data of the enhancement layer to the selector 71.
- the inter prediction unit 85 performs inter prediction processing (motion compensation processing) of the enhancement layer based on the information on inter prediction input from the lossless decoding unit 62 and the reference image data from the frame memory 69, and generates predicted image data.
- the inter prediction process is performed for each PU.
- the inter prediction unit 85 uses the co-located block in the base layer corresponding to the target block to be predicted as the reference block.
- the inter prediction unit 85 predicts the prediction error of inter prediction based on the prediction error image of the reference block, and generates a prediction image to which the predicted prediction errors are added.
- the inter prediction unit 85 outputs the generated prediction image data of the enhancement layer to the selector 71.
- the DR prediction unit 90 upsamples the base layer image (decoded image or prediction error image) buffered by the common memory 7 according to the resolution ratio between the base layer and the enhancement layer.
- the DR prediction unit 90 converts the dynamic range of the up-sampled base layer image to a dynamic range equivalent to that of the enhancement layer image.
- in converting the dynamic range, the DR prediction unit 90 approximately predicts the image of the enhancement layer from the image of the base layer on the basis of an independent linear relationship for each color component between the base layer and the enhancement layer.
- the image of the base layer whose dynamic range has been converted by the DR prediction unit 90 is stored in the frame memory 69, and may be used as a reference image in inter-layer prediction by the intra prediction unit 80 or the inter prediction unit 85.
- the DR prediction unit 90 acquires, from the lossless decoding unit 62, a prediction mode parameter indicating a prediction mode for DR prediction. Further, when the prediction mode parameter indicates the adaptive parameter mode, the DR prediction unit 90 further acquires the prediction parameter (or the difference from the past value thereof) from the lossless decoding unit 62. Then, using these parameters acquired from the lossless decoding unit 62, the DR prediction unit 90 predicts an image of the enhancement layer from the image of the base layer after upsampling.
- FIG. 27 is a block diagram showing an example of a configuration of DR prediction unit 90 shown in FIG. Referring to FIG. 27, the DR prediction unit 90 includes an upsampling unit 91, a prediction mode setting unit 92, a parameter setting unit 93, and a DR conversion unit 94.
- the up-sampling unit 91 up-samples the image of the base layer obtained from the common memory 7 in accordance with the resolution ratio between the base layer and the enhancement layer. More specifically, the up-sampling unit 91 calculates an interpolated pixel value by filtering the image of the base layer with filter coefficients defined in advance, for each interpolated pixel sequentially scanned according to the resolution ratio. Thereby, the spatial resolution of the image of the base layer used as a reference block is increased to a resolution equivalent to that of the enhancement layer. The up-sampling unit 91 outputs the up-sampled image to the DR conversion unit 94. Note that up-sampling may be skipped if resolutions are equal between layers; in this case, the up-sampling unit 91 can output the image of the base layer as it is to the DR conversion unit 94.
- the prediction mode setting unit 92 sets, for the DR prediction unit 90, the prediction mode indicated by the prediction mode parameter decoded by the lossless decoding unit 62 from among the prediction mode candidates for DR prediction.
- the prediction mode candidates may include the bit shift mode, the fixed parameter mode and the adaptive parameter mode described above.
- the prediction mode setting unit 92 may set the prediction mode in accordance with the prediction mode parameter decoded from the PPS.
- the prediction mode setting unit 92 may set the prediction mode according to the prediction mode parameter decoded from the slice header.
- the prediction mode setting unit 92 may set the prediction mode according to the prediction mode parameter decoded from the SPS. When prediction mode parameters are decoded from SPS, the same prediction mode may be maintained in one sequence.
- when the adaptive parameter mode is set by the prediction mode setting unit 92, the parameter setting unit 93 sets the prediction parameters to be used for DR prediction according to the parameters decoded by the lossless decoding unit 62.
- the parameter setting unit 93 obtains the difference between the gain and the offset from the lossless decoding unit 62 in the adaptive parameter mode.
- the parameter setting unit 93 can calculate the latest values of the gain and the offset by adding the differences to the past values of the gain and the offset.
- the past value here may be, for example, a value calculated for the previous picture when gain and offset are calculated for each picture. If gain and offset are calculated for each slice, it may be a value calculated for the slice at the same position of the previous picture.
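The differential update performed by the parameter setting unit 93 amounts to adding the decoded deltas to the buffered past values. The parameter names in this sketch are illustrative, not syntax elements from the patent.

```python
def update_parameters(past, diffs):
    """Adaptive parameter mode, decoder side: latest gain/offset values are
    the past values (previous picture, or co-located slice of the previous
    picture) plus the decoded differences; absent diffs default to zero."""
    return {name: past[name] + diffs.get(name, 0) for name in past}
```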
- the parameter setting unit 93 adds the difference decoded by the lossless decoding unit 62 to the prediction parameter value corresponding to the bit shift amount.
- the parameter setting unit 93 adds the difference decoded by the lossless decoding unit 62 to fixed prediction parameter values defined in advance.
- the past values (i.e., the basis of the differences) to which the gain and offset differences should be added are shown in FIG.
- the parameter setting unit 93 outputs the set values of the gain and the offset to the DR conversion unit 94.
- since the value of the gain may contain a fractional part, its denominator and numerator (or their differences) may be decoded separately. The parameter setting unit 93 can thus obtain the denominator and the numerator of the gain (or their differences) from the lossless decoding unit 62, respectively.
- the multiplication of the gain by the DR conversion unit 94 can be performed by multiplication of numerators that are integers and a shift operation corresponding to division by a denominator.
- the range of values of the denominator of the gain may be limited to only integer powers of 2 to reduce computational cost. In this case, the base 2 logarithm of the value of the denominator may be used as a prediction parameter.
- the DR conversion unit 94 converts the dynamic range of the up-sampled SDR image of the base layer input from the upsampling unit 91 into a dynamic range equivalent to that of the HDR image of the enhancement layer, according to the prediction mode set by the prediction mode setting unit 92. For example, when the bit shift mode is set, the DR conversion unit 94 calculates the predicted pixel value by shifting the pixel value of the up-sampled base layer to the left by a predetermined bit shift amount according to equations (1) to (3). When the fixed parameter mode is set, the DR conversion unit 94 calculates the predicted pixel value by multiplying the pixel value of the base layer after upsampling by a fixed gain and further adding a fixed offset according to equations (4) to (6). When the adaptive parameter mode is set, the DR conversion unit 94 calculates the predicted pixel value using the gain and offset set by the parameter setting unit 93 instead of the fixed gain and offset. A reference image for inter-layer prediction is thereby generated.
- the DR conversion unit 94 stores, in the frame memory 69, the reference image (image of the base layer having a wide dynamic range corresponding to the HDR image) for inter-layer prediction generated in this manner.
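The three prediction modes described above can be summarized by the following sketch. Equations (1) to (6) are not reproduced in this excerpt, so a generic "pixel times gain plus offset" form is assumed; all names and parameter values are hypothetical:

```python
def dr_convert(sdr_pixel, mode, params):
    """Sketch of the dynamic-range conversion of the DR conversion unit 94.
    Assumes a generic 'pixel * gain + offset' form for equations (1)-(6)."""
    if mode == "bit_shift":
        # Bit shift mode: shift the upsampled base-layer pixel left by a
        # predetermined amount.
        return sdr_pixel << params["bit_shift"]
    # Fixed-parameter and adaptive-parameter modes share the same formula;
    # only the origin of the gain/offset differs (predefined vs. decoded).
    g_num = params["gain_num"]
    log2_den = params["log2_gain_den"]
    offset = params["offset"]
    return ((sdr_pixel * g_num) >> log2_den) + offset

print(dr_convert(200, "bit_shift", {"bit_shift": 2}))  # -> 800
print(dr_convert(100, "adaptive",
                 {"gain_num": 3, "log2_gain_den": 1, "offset": 16}))  # -> 166
```

The converted image would then play the role of the reference image for inter-layer prediction stored in the frame memory 69.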
- FIG. 8 shows an example of the syntax of the prediction mode parameter and the prediction parameters (gain and offset for each color component) decoded by the lossless decoding unit 62. These parameters may be decoded by the lossless decoding unit 62 from the coded stream of the enhancement layer.
- the syntax shown in FIG. 8 may be included in PPS, for example, or may be included in a slice header.
- the example in which the prediction mode parameter and the prediction parameter are decoded from the slice header is useful in applications where different dynamic ranges are used for each partial region of the image.
- the extension flag "dr_prediction_flag" and the prediction mode parameter "dr_prediction_model" may be decoded from the SPS for each sequence. In this case, the same prediction mode is maintained within one sequence.
- the lossless decoding unit 62 may decode the prediction parameters of the DR prediction from the header (slice header) having the syntax shared with the weighted prediction related parameters, according to the mapping shown in Table 1. Such syntax reuse reduces syntax redundancy and makes it easier to ensure compatibility when implementing and upgrading encoders and decoders.
- the extension flag "dr_prediction_flag" and the prediction mode parameter "dr_prediction_model" may be separately decoded from the SPS, PPS or slice header.
- the lossless decoding unit 62 decodes a flag indicating which of the weighted prediction related parameters and the parameters for DR prediction are encoded, and decodes the parameters for DR prediction according to the decoded flag.
- the lossless decoding unit 62 may decode, instead of enhancement layer specific weighted prediction related parameters, the prediction parameters of the DR prediction with the same syntax.
- the syntax for parameters of the L1 reference frame (lines 21 to 38 in FIG. 11) may not be used.
- the value of the variable "num_ref_idx_l0_active_minus1" corresponding to the number of reference frames (minus 1) may be regarded as zero (i.e., the number of base layer images whose dynamic range is to be converted is one).
- the lossless decoding unit 62 may reuse part of the weighted prediction related parameters for DR prediction.
- the denominators specified by “luma_log2_weight_denom” and “delta_chroma_log2_weight_denom” illustrated in FIG. 11 may be reused as the denominators of the gains of the luminance component and the color difference component.
- the lossless decoding unit 62 does not decode "delta_luma_log2_gain_denom" and "delta_chroma_log2_gain_denom" shown in FIG.
- the lossless decoding unit 62 may decode the first version (or the difference thereof) of the prediction parameters of the DR prediction from the portion for the L0 reference frame of the syntax shared with the weighted prediction related parameters, and may decode the second version (or the difference thereof) of the prediction parameters of the DR prediction from the portion for the L1 reference frame of the syntax.
- the parameter setting unit 93 calculates the first version of the prediction parameters of the DR prediction using the difference decoded for the first version, and calculates the second version of the prediction parameters of the DR prediction using the difference decoded for the second version.
- the DR conversion unit 94 selectively uses the first and second versions of the prediction parameters to predict the image of the enhancement layer, i.e., to generate a reference image for inter-layer prediction.
- the DR conversion unit 94 may select the version to be used among the first version and the second version of the prediction parameter, for example, according to the band to which the pixel value belongs. Boundary values corresponding to the boundaries between bands for switching the version to be used may be known in advance to both the encoder and the decoder, or may be set adaptively.
- the DR conversion unit 94 determines the band to which the pixel value belongs according to the boundary value specified by the boundary information that can be further decoded by the lossless decoding unit 62. Then, based on the determination result, the DR conversion unit 94 can select a version to be used from the first version or the second version of the prediction parameter.
- the DR conversion unit 94 may select the version to be used from the first version and the second version of the prediction parameter, for example, according to the image region to which the pixel belongs.
- the area boundaries for switching the version to be used may be known in advance to both the encoder and the decoder, or may be set adaptively.
- the DR conversion unit 94 determines the image area to which the pixel belongs according to the area boundary specified by the boundary information that can be further decoded by the lossless decoding unit 62. Then, based on the determination result, the DR conversion unit 94 can select a version to be used from the first version or the second version of the prediction parameter.
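The band-based selection between the two decoded parameter versions can be sketched as follows (hypothetical names and values; region-based selection would compare pixel coordinates against an area boundary in the same way):

```python
def select_version(pixel_value, band_boundary, version0, version1):
    # Pixels in the lower band use the first version of (gain, offset);
    # pixels in the upper band use the second. The boundary may be known in
    # advance to encoder and decoder, or signalled as boundary information.
    return version0 if pixel_value < band_boundary else version1

low_band = {"gain_num": 2, "log2_gain_den": 0, "offset": 0}     # hypothetical
high_band = {"gain_num": 3, "log2_gain_den": 1, "offset": 128}  # hypothetical
assert select_version(50, 128, low_band, high_band) is low_band
assert select_version(200, 128, low_band, high_band) is high_band
```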
- the prediction error of the DR prediction is thereby reduced compared with the existing method, the code amount of the prediction error data is reduced, and coding efficiency is enhanced.
- the approach of providing two versions of the prediction parameter may be applied only to the luminance component and not to the chrominance components, as described above.
- when the bit depth of the enhancement layer is larger than the bit depth of the base layer, the DR conversion unit 94 can execute the bit shift simultaneously with the DR conversion when predicting the image of the enhancement layer.
- the lossless decoding unit 62 decodes, as a control parameter of the enhancement layer, a bit shift control flag indicating whether to perform the bit shift simultaneously with the DR conversion in the inter-layer processing. If the bit shift control flag indicates that the bit shift should be performed simultaneously with the DR conversion, the DR conversion unit 94 executes the bit shift simultaneously with the DR conversion; otherwise, it executes the bit shift, for example, simultaneously with the upsampling.
- the lossless decoding unit 62 may decode the bit shift control flag separately for the luminance component and the chrominance component. In this case, more flexible control can be performed according to the setting for each color component (setting of the image size and bit depth).
- the bit shift control flag may typically be decoded from the slice header, as illustrated in FIG. However, not limited to such an example, the bit shift control flag may be decoded from elsewhere such as SPS or PPS.
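The effect of the bit shift control flag on the processing order can be sketched as below (hypothetical names; upsampling itself is omitted for brevity, and the decoded gain/offset values are assumed to match whichever order is chosen):

```python
def inter_layer_process(pixel, shift_with_dr_conversion, bit_depth_shift,
                        gain_num, log2_gain_den, offset):
    """Apply the bit-depth-compensating shift either merged into the DR
    conversion (flag set) or earlier, with the upsampling stage (flag unset)."""
    if not shift_with_dr_conversion:
        pixel <<= bit_depth_shift          # shift during the upsampling stage
    converted = ((pixel * gain_num) >> log2_gain_den) + offset
    if shift_with_dr_conversion:
        converted <<= bit_depth_shift      # shift merged into DR conversion
    return converted

# With a neutral gain (1/1) and zero offset, the two orders coincide:
assert inter_layer_process(100, True, 2, 1, 0, 0) == 400
assert inter_layer_process(100, False, 2, 1, 0, 0) == 400
```

With a nonzero offset the two orders generally differ, which is why the flag (possibly separate for luminance and chrominance) must be signalled to the decoder.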
- FIG. 28 is a flowchart illustrating an example of a schematic processing flow at the time of decoding according to an embodiment. Note that, for the sake of simplicity, processing steps not directly related to the technology of the present disclosure are omitted from the drawings.
- the demultiplexer 5 demultiplexes the multiplexed stream of the multilayer into the coded stream of the base layer and the coded stream of the enhancement layer (step S60).
- the BL decoding unit 6a executes base layer decoding processing to reconstruct a base layer image from the base layer coded stream (step S61).
- the common memory 7 buffers the base layer image (one or both of the decoded image and the prediction error image) generated in the decoding process of the base layer and the parameter to be reused between the layers (step S62).
- the parameters reused between layers may include weighted prediction related parameters.
- the EL decoding unit 6b executes enhancement layer decoding processing to reconstruct an enhancement layer image (step S63).
- the image of the base layer buffered by the common memory 7 is upsampled by the DR prediction unit 90, and its dynamic range is converted from SDR to HDR. Then, the image of the base layer after DR conversion may be used as a reference image in inter-layer prediction.
- FIG. 29 is a flowchart showing a first example of the flow of the DR prediction process in the enhancement layer decoding process. The DR prediction process described here is repeated for each picture or slice.
- the upsampling unit 91 upsamples the image of the base layer acquired from the common memory 7 in accordance with the resolution ratio between the base layer and the enhancement layer (step S70).
- the lossless decoding unit 62 decodes the prediction mode parameter indicating the prediction mode to be set for DR prediction from the PPS or the slice header (step S72). Then, the prediction mode setting unit 92 sets the prediction mode indicated by the decoded prediction mode parameter to a picture (or slice) (step S75).
- the subsequent processing branches depending on the prediction mode set by the prediction mode setting unit 92 (steps S76 and S78). For example, if the set prediction mode is the adaptive parameter mode, the lossless decoding unit 62 decodes the gain and the offset (or the differences from their past values) from the PPS or the slice header (step S80). Then, the parameter setting unit 93 sets the gain and the offset to be used for the latest picture or slice (step S82). When differences of the gain and the offset are decoded, the parameter setting unit 93 can calculate the gain and the offset to be used by adding the decoded differences to their past values.
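The difference-based parameter update of steps S80 to S82 can be sketched as follows (hypothetical names; assumes simple integer gains and offsets):

```python
def update_parameters(past_values, decoded_diffs):
    # Only differences from the previous picture's (or slice's) gain and
    # offset may be coded; adding them back recovers the values to use.
    return {name: past_values[name] + decoded_diffs.get(name, 0)
            for name in past_values}

prev = {"gain_num": 3, "offset": 16}   # hypothetical past values
print(update_parameters(prev, {"gain_num": 1, "offset": -4}))
# -> {'gain_num': 4, 'offset': 12}
```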
- further, the DR conversion unit 94 calculates the predicted pixel value of each pixel by multiplying the pixel value of the base layer after upsampling by the set gain and further adding the set offset, according to equations (4) to (6) (step S84).
- the DR conversion unit 94 shifts the pixel value of the up-sampled base layer to the left by a predetermined bit shift amount according to Expression (1) to Expression (3). By doing this, the predicted pixel value of each pixel is calculated (step S86).
- the DR conversion unit 94 stores the image of the base layer after DR conversion, that is, the predicted image as the HDR image, in the frame memory 69 (step S88).
- thereafter, if there is an unprocessed next picture or slice, the process returns to step S70, and the above-described process is repeated for the next picture or slice (step S90).
- FIG. 30 is a flowchart showing a second example of the flow of the DR prediction process in the enhancement layer decoding process.
- the lossless decoding unit 62 decodes, from the SPS, a prediction mode parameter indicating a prediction mode to be set for DR prediction (step S71). Then, the prediction mode setting unit 92 sets, as a sequence, the prediction mode indicated by the decoded prediction mode parameter (step S73).
- steps S74 to S90 are repeated for each picture or slice in the sequence.
- the upsampling unit 91 upsamples the image of the base layer acquired from the common memory 7 in accordance with the resolution ratio between the base layer and the enhancement layer (step S74).
- the process branches depending on the prediction mode set by the prediction mode setting unit 92 (steps S76 and S78). For example, if the set prediction mode is the adaptive parameter mode, the lossless decoding unit 62 decodes the gain and the offset (or the differences from their past values) from the PPS or the slice header (step S80). Then, the parameter setting unit 93 sets the gain and the offset to be used for the latest picture or slice (step S82). When differences of the gain and the offset are decoded, the parameter setting unit 93 can calculate the gain and the offset to be used by adding the decoded differences to their past values.
- further, the DR conversion unit 94 calculates the predicted pixel value of each pixel by multiplying the pixel value of the base layer after upsampling by the set gain and further adding the set offset, according to equations (4) to (6) (step S84).
- the DR conversion unit 94 shifts the pixel value of the up-sampled base layer to the left by a predetermined bit shift amount according to Expression (1) to Expression (3). By doing this, the predicted pixel value of each pixel is calculated (step S86).
- the DR conversion unit 94 stores the image of the base layer after DR conversion, that is, the predicted image as the HDR image, in the frame memory 69 (step S88).
- if there is an unprocessed next picture or slice, the process returns to step S74, and the upsampling and the DR conversion are repeated for the next picture or slice.
- in step S92, it is further determined whether there is a next sequence. When the next sequence exists, the process returns to step S71, and the above-described process is repeated for the next sequence.
- FIG. 31 is a flowchart showing a third example of the flow of the DR prediction process in the decoding process of the enhancement layer.
- the lossless decoding unit 62 decodes, from the SPS, a prediction mode parameter indicating a prediction mode to be set for DR prediction (step S71). Then, the prediction mode setting unit 92 sets, as a sequence, the prediction mode indicated by the decoded prediction mode parameter (step S73).
- steps S74 to S91 are repeated for each slice in the sequence.
- the upsampling unit 91 upsamples the image of the base layer acquired from the common memory 7 in accordance with the resolution ratio between the base layer and the enhancement layer (step S74).
- the process branches depending on the prediction mode set by the prediction mode setting unit 92 (steps S76 and S78).
- the lossless decoding unit 62 may decode the gains and offsets (or the differences from their past values), which are encoded by reusing the syntax of the weighted prediction related parameters.
- the parameter setting unit 93 sets a gain and an offset to be used for the latest picture or slice (step S82).
- when differences of the gain and the offset are decoded, the parameter setting unit 93 can calculate the gain and the offset to be used by adding the decoded differences to their past values.
- further, the DR conversion unit 94 calculates the predicted pixel value of each pixel by multiplying the pixel value of the base layer after upsampling by the set gain and further adding the set offset, according to equations (4) to (6) (step S84).
- the calculation of the predicted pixel value here may include a bit shift for compensating for a different bit depth between layers. Also, the bit shift may be included in the upsampling filter operation in step S74.
- the DR conversion unit 94 shifts the pixel value of the up-sampled base layer to the left by a predetermined bit shift amount according to Expression (1) to Expression (3). By doing this, the predicted pixel value of each pixel is calculated (step S86).
- the DR conversion unit 94 stores the image of the base layer after DR conversion, that is, the predicted image as the HDR image, in the frame memory 69 (step S88).
- in step S91, if there is an unprocessed next slice, the process returns to step S74, and the upsampling and the DR conversion are repeated for the next slice.
- in step S92, it is further determined whether there is a next sequence. When the next sequence exists, the process returns to step S71, and the above-described process is repeated for the next sequence.
- FIG. 32 is a flowchart showing a fourth example of the flow of the DR prediction process in the enhancement layer decoding process.
- the lossless decoding unit 62 decodes, from the SPS, a prediction mode parameter indicating a prediction mode to be set for DR prediction (step S71). Then, the prediction mode setting unit 92 sets, as a sequence, the prediction mode indicated by the decoded prediction mode parameter (step S73).
- steps S74 to S91 are repeated for each slice in the sequence.
- the upsampling unit 91 upsamples the image of the base layer acquired from the common memory 7 in accordance with the resolution ratio between the base layer and the enhancement layer (step S74).
- the process branches depending on the prediction mode set by the prediction mode setting unit 92 (steps S76 and S78). For example, when the set prediction mode is the adaptive parameter mode, the lossless decoding unit 62 decodes the first version and the second version of the prediction parameters from the portion for the L0 reference frame and the portion for the L1 reference frame of the syntax of the weighted prediction related parameters, respectively (step S81b). Then, the parameter setting unit 93 sets the first version of the gain and the offset to be used (step S83a). Similarly, the parameter setting unit 93 sets the second version of the gain and the offset to be used (step S83b).
- the first and second versions may each include a set of optimal values to use for the first and second bands of the range of pixel values. Instead, these first and second versions may contain sets of optimal values to be used for the first and second image areas, respectively.
- when differences are decoded, the parameter setting unit 93 can calculate the gain and the offset to be used by adding the decoded differences to the past values of the gain and the offset of each version.
- in step S84, switching of the version of the prediction parameters may be performed according to the band to which the pixel value belongs or the image region to which the pixel belongs.
- in step S81b, the lossless decoding unit 62 may additionally decode boundary information specifying the boundary value between bands or the region boundary between image regions for switching the version of the prediction parameters, for example, from the slice header or an extension of the slice header.
- the flowcharts of FIG. 29 to FIG. 32 show an example in which DR conversion is performed after the up-sampling is performed.
- when the spatial resolution (image size) of the enhancement layer is higher than the spatial resolution of the base layer, the DR prediction unit 90 may first convert the dynamic range of the base layer image and then predict the enhancement layer image by upsampling the converted image. With such a processing order, the number of pixels and the bit depth to be processed by the DR conversion are reduced compared to the existing processing order, further reducing the processing cost of the entire inter-layer processing.
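The pixel-count saving from converting before upsampling can be illustrated with simple arithmetic (hypothetical names and example dimensions):

```python
def dr_converted_pixel_count(base_width, base_height, resolution_ratio,
                             convert_before_upsampling):
    # Converting before upsampling touches only base-layer pixels; converting
    # after upsampling touches the larger, enhancement-resolution image.
    if convert_before_upsampling:
        return base_width * base_height
    return (base_width * resolution_ratio) * (base_height * resolution_ratio)

# 2x spatial scalability: converting first processes one quarter of the pixels.
print(dr_converted_pixel_count(960, 540, 2, True))   # -> 518400
print(dr_converted_pixel_count(960, 540, 2, False))  # -> 2073600
```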
- the image encoding device 10 and the image decoding device 60 according to the embodiments described above can be applied to various electronic devices, such as transmitters or receivers for satellite broadcasting, cable broadcasting such as cable TV, distribution over the Internet, or distribution to terminals by cellular communication; recording devices which record images on media such as optical disks, magnetic disks, and flash memories; and reproducing devices which reproduce images from those storage media.
- FIG. 33 illustrates an example of a schematic configuration of a television to which the above-described embodiment is applied.
- the television device 900 includes an antenna 901, a tuner 902, a demultiplexer 903, a decoder 904, a video signal processing unit 905, a display unit 906, an audio signal processing unit 907, a speaker 908, an external interface 909, a control unit 910, a user interface 911, And a bus 912.
- the tuner 902 extracts a signal of a desired channel from a broadcast signal received via the antenna 901, and demodulates the extracted signal. Then, the tuner 902 outputs the coded bit stream obtained by demodulation to the demultiplexer 903. That is, the tuner 902 has a role as a transmission means in the television apparatus 900 for receiving a coded stream in which an image is coded.
- the demultiplexer 903 separates the video stream and audio stream of the program to be viewed from the coded bit stream, and outputs the separated streams to the decoder 904. Also, the demultiplexer 903 extracts auxiliary data such as an EPG (Electronic Program Guide) from the encoded bit stream, and supplies the extracted data to the control unit 910. When the coded bit stream is scrambled, the demultiplexer 903 may perform descrambling.
- the decoder 904 decodes the video stream and audio stream input from the demultiplexer 903. Then, the decoder 904 outputs the video data generated by the decoding process to the video signal processing unit 905. Further, the decoder 904 outputs the audio data generated by the decoding process to the audio signal processing unit 907.
- the video signal processing unit 905 reproduces the video data input from the decoder 904 and causes the display unit 906 to display a video. Also, the video signal processing unit 905 may cause the display unit 906 to display an application screen supplied via the network. Further, the video signal processing unit 905 may perform additional processing such as noise removal on the video data according to the setting. Furthermore, the video signal processing unit 905 may generate an image of a graphical user interface (GUI) such as a menu, a button, or a cursor, for example, and may superimpose the generated image on the output image.
- the display unit 906 is driven by a drive signal supplied from the video signal processing unit 905, and displays a video or an image on the screen of a display device (for example, a liquid crystal display, a plasma display, or an OLED display).
- the audio signal processing unit 907 performs reproduction processing such as D / A conversion and amplification on audio data input from the decoder 904, and causes the speaker 908 to output audio. Further, the audio signal processing unit 907 may perform additional processing such as noise removal on the audio data.
- the external interface 909 is an interface for connecting the television device 900 to an external device or a network.
- a video stream or an audio stream received via the external interface 909 may be decoded by the decoder 904. That is, the external interface 909 also serves as a transmission means in the television apparatus 900 for receiving the coded stream in which the image is coded.
- the control unit 910 includes a processor such as a central processing unit (CPU) and memories such as a random access memory (RAM) and a read only memory (ROM).
- the memory stores a program executed by the CPU, program data, EPG data, data acquired via a network, and the like.
- the program stored by the memory is read and executed by the CPU, for example, when the television device 900 is started.
- the CPU controls the operation of the television apparatus 900 according to an operation signal input from, for example, the user interface 911 by executing a program.
- the user interface 911 is connected to the control unit 910.
- the user interface 911 has, for example, buttons and switches for the user to operate the television device 900, a receiver of remote control signals, and the like.
- the user interface 911 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 910.
- the bus 912 mutually connects the tuner 902, the demultiplexer 903, the decoder 904, the video signal processing unit 905, the audio signal processing unit 907, the external interface 909, and the control unit 910.
- the decoder 904 has the function of the image decoding device 60 according to the above-described embodiment. As a result, it is possible to incorporate into the television apparatus 900 a mechanism for predicting an HDR image from the SDR image with a suitable image quality without requiring a complicated implementation.
- FIG. 34 illustrates an example of a schematic configuration of a mobile phone to which the above-described embodiment is applied.
- the mobile phone 920 includes an antenna 921, a communication unit 922, an audio codec 923, a speaker 924, a microphone 925, a camera unit 926, an image processing unit 927, a multiplexing and separating unit 928, a recording and reproducing unit 929, a display unit 930, a control unit 931, an operation unit 932, and a bus 933.
- the antenna 921 is connected to the communication unit 922.
- the speaker 924 and the microphone 925 are connected to the audio codec 923.
- the operation unit 932 is connected to the control unit 931.
- the bus 933 mutually connects the communication unit 922, the audio codec 923, the camera unit 926, the image processing unit 927, the demultiplexing unit 928, the recording / reproducing unit 929, the display unit 930, and the control unit 931.
- the cellular phone 920 performs operations such as transmission and reception of audio signals, transmission and reception of electronic mail or image data, image capturing, and data recording in various operation modes including a voice call mode, a data communication mode, a shooting mode, and a videophone mode.
- the analog audio signal generated by the microphone 925 is supplied to the audio codec 923.
- the audio codec 923 converts an analog audio signal into audio data, and A / D converts and compresses the converted audio data. Then, the audio codec 923 outputs the compressed audio data to the communication unit 922.
- the communication unit 922 encodes and modulates audio data to generate a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921.
- the communication unit 922 also amplifies and frequency-converts a radio signal received via the antenna 921 to obtain a reception signal.
- the communication unit 922 demodulates and decodes the received signal to generate audio data, and outputs the generated audio data to the audio codec 923.
- the audio codec 923 decompresses and D / A converts audio data to generate an analog audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 to output audio.
- the control unit 931 generates character data constituting an electronic mail in accordance with an operation by the user via the operation unit 932. Further, the control unit 931 causes the display unit 930 to display characters. Further, the control unit 931 generates electronic mail data in response to a transmission instruction from the user via the operation unit 932, and outputs the generated electronic mail data to the communication unit 922.
- the communication unit 922 encodes and modulates the electronic mail data to generate a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921. The communication unit 922 also amplifies and frequency-converts a radio signal received via the antenna 921 to obtain a reception signal.
- the communication unit 922 demodulates and decodes the received signal to restore the e-mail data, and outputs the restored e-mail data to the control unit 931.
- the control unit 931 causes the display unit 930 to display the content of the e-mail, and stores the e-mail data in the storage medium of the recording and reproduction unit 929.
- the recording and reproducing unit 929 includes an arbitrary readable and writable storage medium. The storage medium may be a built-in storage medium such as a RAM or a flash memory, or an externally mounted storage medium such as a hard disk, a magnetic disk, a magneto-optical disk, an optical disk, a USB memory, or a memory card.
- the camera unit 926 captures an image of a subject to generate image data, and outputs the generated image data to the image processing unit 927.
- the image processing unit 927 encodes the image data input from the camera unit 926, and stores the encoded stream in the storage medium of the recording and reproduction unit 929.
- the demultiplexing unit 928 multiplexes the video stream encoded by the image processing unit 927 and the audio stream input from the audio codec 923, and outputs the multiplexed stream to the communication unit 922. The communication unit 922 encodes and modulates the stream to generate a transmission signal.
- the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921.
- the communication unit 922 also amplifies and frequency-converts a radio signal received via the antenna 921 to obtain a reception signal.
- the transmission signal and the reception signal may include a coded bit stream.
- the communication unit 922 demodulates and decodes the received signal to restore the stream, and outputs the restored stream to the demultiplexing unit 928.
- the demultiplexing unit 928 separates the video stream and the audio stream from the input stream, and outputs the video stream to the image processing unit 927 and the audio stream to the audio codec 923.
- the image processing unit 927 decodes the video stream to generate video data.
- the video data is supplied to the display unit 930, and the display unit 930 displays a series of images.
- the audio codec 923 decompresses and D / A converts the audio stream to generate an analog audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 to output audio.
- the image processing unit 927 has the functions of the image encoding device 10 and the image decoding device 60 according to the above-described embodiment. As a result, it is possible to incorporate into the mobile phone 920 a mechanism for predicting the HDR image from the SDR image with a suitable image quality without requiring a complicated implementation.
- FIG. 35 shows an example of a schematic configuration of a recording and reproducing device to which the embodiment described above is applied.
- the recording / reproducing device 940 encodes, for example, audio data and video data of the received broadcast program, and records the encoded data on a recording medium.
- the recording and reproduction device 940 may encode, for example, audio data and video data acquired from another device and record the encoded data on a recording medium.
- the recording / reproducing device 940 reproduces the data recorded on the recording medium on the monitor and the speaker, for example, in accordance with the user's instruction. At this time, the recording / reproducing device 940 decodes the audio data and the video data.
- the recording/reproducing apparatus 940 includes a tuner 941, an external interface 942, an encoder 943, an HDD (Hard Disk Drive) 944, a disk drive 945, a selector 946, a decoder 947, an OSD (On-Screen Display) 948, a control unit 949, and a user interface 950.
- the tuner 941 extracts a signal of a desired channel from a broadcast signal received via an antenna (not shown) and demodulates the extracted signal. Then, the tuner 941 outputs the coded bit stream obtained by demodulation to the selector 946. That is, the tuner 941 has a role as a transmission means in the recording / reproducing device 940.
- the external interface 942 is an interface for connecting the recording and reproducing device 940 to an external device or a network.
- the external interface 942 may be, for example, an IEEE 1394 interface, a network interface, a USB interface, or a flash memory interface.
- video data and audio data received via the external interface 942 are input to the encoder 943. That is, the external interface 942 has a role as a transmission unit in the recording / reproducing device 940.
- the encoder 943 encodes video data and audio data when the video data and audio data input from the external interface 942 are not encoded. Then, the encoder 943 outputs the coded bit stream to the selector 946.
- the HDD 944 records an encoded bit stream obtained by compressing content data such as video and audio, various programs, and other data in an internal hard disk. Also, the HDD 944 reads these data from the hard disk when reproducing video and audio.
- the disk drive 945 records and reads data on the attached recording medium.
- the recording medium mounted on the disk drive 945 may be, for example, a DVD disk (DVD-Video, DVD-RAM, DVD-R, DVD-RW, DVD+R, DVD+RW, etc.) or a Blu-ray (registered trademark) disk.
- the selector 946 selects the coded bit stream input from the tuner 941 or the encoder 943 at the time of recording video and audio, and outputs the selected coded bit stream to the HDD 944 or the disk drive 945. Also, the selector 946 outputs the encoded bit stream input from the HDD 944 or the disk drive 945 to the decoder 947 at the time of reproduction of video and audio.
- the decoder 947 decodes the coded bit stream to generate video data and audio data. Then, the decoder 947 outputs the generated video data to the OSD 948. Also, the decoder 947 outputs the generated audio data to an external speaker.
- the OSD 948 reproduces the video data input from the decoder 947 and displays the video.
- the OSD 948 may superimpose an image of a GUI such as a menu, a button, or a cursor on the video to be displayed.
- the control unit 949 includes a processor such as a CPU, and memories such as a RAM and a ROM.
- the memory stores programs executed by the CPU, program data, and the like.
- the program stored by the memory is read and executed by the CPU, for example, when the recording and reproducing device 940 is started.
- the CPU controls the operation of the recording / reproducing apparatus 940 in accordance with an operation signal input from, for example, the user interface 950 by executing a program.
- the user interface 950 is connected to the control unit 949.
- the user interface 950 includes, for example, buttons and switches for the user to operate the recording and reproducing device 940, a receiver of a remote control signal, and the like.
- the user interface 950 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 949.
- the encoder 943 has the function of the image coding apparatus 10 according to the embodiment described above.
- the decoder 947 has the function of the image decoding device 60 according to the above-described embodiment. This makes it possible to incorporate into the recording / reproducing apparatus 940 a mechanism for predicting an HDR image from the SDR image with a suitable image quality without requiring a complicated implementation.
- FIG. 36 illustrates an example of a schematic configuration of an imaging device to which the above-described embodiment is applied.
- the imaging device 960 captures an object to generate an image, encodes image data, and records the image data in a recording medium.
- the imaging device 960 includes an optical block 961, an imaging unit 962, a signal processing unit 963, an image processing unit 964, a display unit 965, an external interface 966, a memory 967, a media drive 968, an OSD 969, a control unit 970, a user interface 971, and a bus 972.
- the optical block 961 is connected to the imaging unit 962.
- the imaging unit 962 is connected to the signal processing unit 963.
- the display unit 965 is connected to the image processing unit 964.
- the user interface 971 is connected to the control unit 970.
- the bus 972 mutually connects the image processing unit 964, the external interface 966, the memory 967, the media drive 968, the OSD 969, and the control unit 970.
- the optical block 961 has a focus lens, an aperture mechanism, and the like.
- the optical block 961 forms an optical image of a subject on the imaging surface of the imaging unit 962.
- the imaging unit 962 includes an image sensor such as a CCD or a CMOS, and converts an optical image formed on an imaging surface into an image signal as an electrical signal by photoelectric conversion. Then, the imaging unit 962 outputs the image signal to the signal processing unit 963.
- the signal processing unit 963 performs various camera signal processing such as knee correction, gamma correction, and color correction on the image signal input from the imaging unit 962.
- the signal processing unit 963 outputs the image data after camera signal processing to the image processing unit 964.
- the image processing unit 964 encodes the image data input from the signal processing unit 963 to generate encoded data. Then, the image processing unit 964 outputs the generated encoded data to the external interface 966 or the media drive 968. The image processing unit 964 also decodes encoded data input from the external interface 966 or the media drive 968 to generate image data. Then, the image processing unit 964 outputs the generated image data to the display unit 965.
- the image processing unit 964 may output the image data input from the signal processing unit 963 to the display unit 965 to display an image. The image processing unit 964 may superimpose the display data acquired from the OSD 969 on the image to be output to the display unit 965.
- the OSD 969 generates an image of a GUI such as a menu, a button, or a cursor, for example, and outputs the generated image to the image processing unit 964.
- the external interface 966 is configured as, for example, a USB input / output terminal.
- the external interface 966 connects the imaging device 960 and the printer, for example, when printing an image.
- a drive is connected to the external interface 966 as necessary.
- removable media such as a magnetic disk or an optical disk may be attached to the drive, and a program read from the removable media may be installed in the imaging device 960.
- the external interface 966 may be configured as a network interface connected to a network such as a LAN or the Internet. That is, the external interface 966 has a role as a transmission unit in the imaging device 960.
- the recording medium mounted in the media drive 968 may be, for example, any readable / writable removable medium such as a magnetic disk, a magneto-optical disk, an optical disk, or a semiconductor memory.
- the recording medium may instead be fixedly attached to the media drive 968 to form a non-portable storage unit such as, for example, a built-in hard disk drive or a solid state drive (SSD).
- the control unit 970 includes a processor such as a CPU, and memories such as a RAM and a ROM.
- the memory stores programs executed by the CPU, program data, and the like.
- the program stored by the memory is read and executed by the CPU, for example, when the imaging device 960 starts up.
- the CPU controls the operation of the imaging device 960 according to an operation signal input from, for example, the user interface 971 by executing a program.
- the user interface 971 is connected to the control unit 970.
- the user interface 971 includes, for example, buttons and switches for the user to operate the imaging device 960.
- the user interface 971 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 970.
- the image processing unit 964 has the functions of the image encoding device 10 and the image decoding device 60 according to the above-described embodiment. This makes it possible to incorporate into the imaging device 960 a mechanism for predicting the HDR image from the SDR image with a suitable image quality without requiring a complicated implementation.
- the data transmission system 1000 includes a stream storage device 1001 and a distribution server 1002.
- the distribution server 1002 is connected to several terminal devices via the network 1003.
- the network 1003 may be a wired network or a wireless network, or a combination thereof.
- FIG. 37 shows a PC (Personal Computer) 1004, an AV device 1005, a tablet device 1006, and a cellular phone 1007 as an example of a terminal device.
- the stream storage device 1001 stores, for example, stream data 1011 including a multiplexed stream generated by the image coding device 10.
- the multiplexed stream includes the base layer (BL) coded stream and the enhancement layer (EL) coded stream.
- the distribution server 1002 reads the stream data 1011 stored in the stream storage device 1001, and delivers at least a part of the read stream data 1011 to the PC 1004, the AV device 1005, the tablet device 1006, and the mobile phone 1007 via the network 1003.
- the distribution server 1002 selects a stream to be delivered based on some condition such as the capability of the terminal device or the communication environment. For example, the distribution server 1002 may avoid delay, overflow, or processor overload at the terminal device by not distributing an encoded stream whose image quality exceeds what the terminal device can handle. The distribution server 1002 may also avoid occupying the communication band of the network 1003 by not distributing an encoded stream having high image quality. On the other hand, the distribution server 1002 may distribute the entire multiplexed stream to the terminal device if there is no risk to be avoided, or if it is judged appropriate based on a contract with the user or some other condition.
- the distribution server 1002 reads the stream data 1011 from the stream storage device 1001. Then, the distribution server 1002 distributes the stream data 1011 as it is to the PC 1004, which has high processing capability. Since the AV device 1005 has low processing capability, the distribution server 1002 generates stream data 1012 including only the base layer encoded stream extracted from the stream data 1011, and distributes the stream data 1012 to the AV device 1005. Also, the distribution server 1002 distributes the stream data 1011 as it is to the tablet device 1006, which can communicate at a high communication rate. Since the mobile telephone 1007 can communicate only at a low communication rate, the distribution server 1002 distributes the stream data 1012 including only the base layer coded stream to the mobile telephone 1007.
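The per-terminal selection described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function name, the dict model of a multiplexed stream, and the rate threshold are all hypothetical.

```python
# Sketch (hypothetical names and values): how a distribution server like
# device 1002 might choose what to deliver per terminal. A multiplexed
# stream is modeled as a dict mapping layer name to stream size in kbit.

def select_stream(multiplexed, can_decode_el, rate_kbps, min_rate_kbps=2000):
    """Return the layers to deliver for one terminal.

    The full multiplexed stream (base + enhancement layer) is delivered
    only when the terminal can decode the enhancement layer AND its link
    is fast enough; otherwise only the base layer is extracted.
    """
    if can_decode_el and rate_kbps >= min_rate_kbps:
        return dict(multiplexed)          # stream data 1011 as-is
    return {"BL": multiplexed["BL"]}      # stream data 1012 (base layer only)

stream_1011 = {"BL": 4000, "EL": 6000}
pc = select_stream(stream_1011, can_decode_el=True, rate_kbps=8000)
av_dev = select_stream(stream_1011, can_decode_el=False, rate_kbps=8000)
phone = select_stream(stream_1011, can_decode_el=True, rate_kbps=500)
```

Here the PC receives both layers, while the low-capability AV device and the low-rate phone each receive the extracted base layer stream.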
- by using the multiplexed stream in this way, the amount of traffic to be transmitted can be adaptively adjusted.
- since the code amount of the stream data 1011 is reduced compared to the case where each layer is encoded individually, the load on the network 1003 is suppressed even if the entire stream data 1011 is distributed. Furthermore, memory resources of the stream storage device 1001 are also saved.
- the hardware performance of the terminal device varies from device to device. The capabilities of the applications executed in the terminal devices are also various. Furthermore, the communication capacity of the network 1003 varies, and the capacity available for data transmission may change from moment to moment due to other traffic. Therefore, before starting distribution of stream data, the distribution server 1002 acquires, through signaling with the terminal device of the distribution destination, terminal information on the hardware performance and application capability of the terminal device, and network information on the communication capacity of the network 1003 and the like. The distribution server 1002 can then select a stream to be distributed based on the acquired information.
- the extraction of the layer to be decoded may be performed in the terminal device.
- the PC 1004 may display a base layer image extracted and decoded from the received multiplexed stream on its screen. Also, the PC 1004 may extract the base layer coded stream from the received multiplexed stream to generate stream data 1012, and may store the generated stream data 1012 in a storage medium or transfer it to another device.
- the configuration of the data transmission system 1000 shown in FIG. 37 is merely an example.
- the data transmission system 1000 may include any number of stream storage devices 1001, distribution servers 1002, networks 1003, and terminal devices.
- a data transmission system 1100 includes a broadcasting station 1101 and a terminal device 1102.
- the broadcast station 1101 broadcasts the base layer coded stream 1121 on the terrestrial channel 1111. Also, the broadcast station 1101 transmits the encoded stream 1122 of the enhancement layer to the terminal device 1102 via the network 1112.
- the terminal device 1102 has a reception function for receiving terrestrial broadcast broadcasted by the broadcast station 1101, and receives the base layer coded stream 1121 via the terrestrial channel 1111. Further, the terminal device 1102 has a communication function for communicating with the broadcast station 1101, and receives the encoded stream 1122 of the enhancement layer via the network 1112.
- the terminal device 1102 may receive the base layer coded stream 1121 in response to an instruction from the user, decode the base layer image from the received coded stream 1121, and display the base layer image on the screen. Also, the terminal device 1102 may store the decoded base layer image in a storage medium or transfer it to another device.
- the terminal device 1102 may receive the coded stream 1122 of the enhancement layer via the network 1112 according to, for example, an instruction from the user, and may generate a multiplexed stream by multiplexing the coded stream 1121 of the base layer and the coded stream 1122 of the enhancement layer. Also, the terminal device 1102 may decode the enhancement layer image from the encoded stream 1122 of the enhancement layer and display the enhancement layer image on the screen. The terminal device 1102 may also store the decoded enhancement layer image in a storage medium or transfer it to another device.
- the coded stream of each layer included in the multiplexed stream may be transmitted via different communication channels for each layer. Thereby, the load applied to each channel can be distributed, and the occurrence of communication delay or overflow can be suppressed.
- the communication channel to be used for transmission may be dynamically selected according to some conditions.
- for example, the base layer coded stream 1121 having a relatively large amount of data may be transmitted via a wide bandwidth communication channel, and the enhancement layer coded stream 1122 having a relatively small amount of data may be transmitted via a narrow bandwidth communication channel.
- the communication channel through which the coded stream 1122 of a particular layer is transmitted may be switched according to the bandwidth of the communication channel. Thereby, loads applied to the individual channels can be more effectively suppressed.
- the configuration of the data transmission system 1100 shown in FIG. 38 is merely an example.
- the data transmission system 1100 may include any number of communication channels and terminal devices.
- the configuration of the system described herein may be used in applications other than broadcasting.
- the data transmission system 1200 includes an imaging device 1201 and a stream storage device 1202.
- the imaging apparatus 1201 scalable encodes image data generated by imaging the subject 1211, and generates a multiplexed stream 1221.
- the multiplexed stream 1221 includes the coded stream of the base layer and the coded stream of the enhancement layer. Then, the imaging device 1201 supplies the multiplexed stream 1221 to the stream storage device 1202.
- the stream storage device 1202 stores the multiplexed stream 1221 supplied from the imaging device 1201 with different image quality for each mode. For example, the stream storage device 1202 extracts the base layer coded stream 1222 from the multiplexed stream 1221 in the normal mode, and stores the extracted base layer coded stream 1222. On the other hand, the stream storage device 1202 stores the multiplexed stream 1221 as it is in the high image quality mode. As a result, the stream storage device 1202 can record a high quality stream having a large amount of data only when it is desired to record a high quality video. Therefore, memory resources can be saved while suppressing the influence on the user of the deterioration of the image quality.
- suppose, for example, that the imaging device 1201 is a surveillance camera.
- when no monitoring target (for example, an intruder) appears in the captured image, the normal mode is selected. In this case, since the captured image is unlikely to be important, reduction of the data amount is prioritized and the video is recorded with low image quality (i.e., only the base layer coded stream 1222 is stored).
- when a monitoring target (for example, the subject 1211 as an intruder) appears in the captured image, the high image quality mode is selected. In this case, since the captured image is likely to be important, image quality is prioritized and the video is recorded with high image quality (i.e., the multiplexed stream 1221 is stored).
- the mode is selected by the stream storage device 1202 based on, for example, an image analysis result.
- the imaging device 1201 may select the mode. In the latter case, the imaging device 1201 may supply the base layer coded stream 1222 to the stream storage device 1202 in the normal mode, and may supply the multiplexed stream 1221 to the stream storage device 1202 in the high image quality mode.
- the selection criteria for selecting the mode may be any criteria.
- the mode may be switched according to the size of the sound acquired through the microphone or the waveform of the sound.
- the mode may be switched periodically.
- the mode may be switched according to an instruction from the user.
- the number of selectable modes may be any number as long as the number of layers to be hierarchized is not exceeded.
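The mode-dependent recording behaviour described for the stream storage device 1202 can be sketched as below. This is an illustrative model only; the function name and the dict representation of the multiplexed stream are hypothetical.

```python
# Sketch (hypothetical names): mode-dependent recording in a stream
# storage device like 1202. In normal mode only the base layer coded
# stream is kept; in high-image-quality mode the full multiplexed
# stream is kept.

def store(multiplexed, mode):
    if mode == "high_quality":
        return dict(multiplexed)          # keep multiplexed stream 1221 as-is
    if mode == "normal":
        return {"BL": multiplexed["BL"]}  # keep extracted BL stream 1222 only
    raise ValueError(f"unknown mode: {mode}")

stream_1221 = {"BL": 4000, "EL": 6000}
high = store(stream_1221, "high_quality")
normal = store(stream_1221, "normal")
```

The high-data-volume stream is thus recorded only while the high image quality mode is active, which is what saves memory resources in the normal case.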
- the data transmission system 1200 may include any number of imaging devices 1201.
- the system configuration described herein may be used in applications other than surveillance cameras.
- the multi-view codec is a type of multi-layer codec, and is an image coding method for coding and decoding so-called multi-view video.
- FIG. 40 is an explanatory diagram for describing a multiview codec. Referring to FIG. 40, a sequence of frames of three views taken respectively at three viewpoints is shown. Each view is assigned a view ID (view_id). One of these multiple views is designated as a base view. Views other than the base view are called non-base views. In the example of FIG. 40, a view whose view ID is "0" is a base view, and two views whose view ID is "1" or "2" are non-base views.
- each view may correspond to a layer.
- the non-base view image is encoded and decoded with reference to the base view image (other non-base view images may also be referred to).
- FIG. 41 is a block diagram showing a schematic configuration of an image coding device 10v supporting a multiview codec.
- the image coding device 10v includes a first layer coding unit 1c, a second layer coding unit 1d, a common memory 2 and a multiplexing unit 3.
- the function of the first layer coding unit 1c is equivalent to the function of the BL coding unit 1a described using FIG. 3 except that a base view image is received instead of the base layer image as an input.
- the first layer coding unit 1c codes the base view image to generate a coded stream of the first layer.
- the function of the second layer coding unit 1d is equivalent to the function of the EL coding unit 1b described with reference to FIG. 3, except that a non-base view image is received as an input instead of an enhancement layer image.
- the second layer coding unit 1d codes the non-base view image to generate a coded stream of the second layer.
- the common memory 2 stores information commonly used between layers.
- the multiplexing unit 3 multiplexes the coded stream of the first layer generated by the first layer coding unit 1c and the coded stream of the second layer generated by the second layer coding unit 1d to generate a multilayer multiplexed stream.
- FIG. 42 is a block diagram showing a schematic configuration of an image decoding device 60v that supports a multiview codec.
- the image decoding device 60v includes a demultiplexing unit 5, a first layer decoding unit 6c, a second layer decoding unit 6d, and a common memory 7.
- the demultiplexing unit 5 demultiplexes the multilayer multiplexed stream into the coded stream of the first layer and the coded stream of the second layer.
- the function of the first layer decoding unit 6c is equivalent to the function of the BL decoding unit 6a described using FIG. 4, except that it receives as an input an encoded stream in which a base view image is encoded instead of a base layer image.
- the first layer decoding unit 6c decodes a base view image from the coded stream of the first layer.
- the function of the second layer decoding unit 6d is equivalent to the function of the EL decoding unit 6b described with reference to FIG. 4, except that it receives as an input an encoded stream in which a non-base view image is encoded instead of an enhancement layer image.
- the second layer decoding unit 6d decodes the non-base view image from the coded stream of the second layer.
- the common memory 7 stores information commonly used between layers.
- if the luminance dynamic range differs between views, the conversion of the dynamic range between the views may be controlled according to the technology of the present disclosure.
- similarly to scalable coding, a dynamic range conversion mechanism can thus be realized with a simple implementation in a multiview codec as well.
- the technology according to the present disclosure may be applied to a streaming protocol.
- for example, in MPEG-DASH (Dynamic Adaptive Streaming over HTTP), a plurality of encoded streams having different parameters such as resolution are prepared in advance in a streaming server. Then, the streaming server dynamically selects, in units of segments, appropriate data to be streamed from the plurality of encoded streams, and delivers the selected data.
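A minimal sketch of this per-segment selection follows; the bitrate ladder and threshold logic are hypothetical values chosen for illustration, not taken from the MPEG-DASH specification.

```python
# Sketch (hypothetical values): per-segment stream switching in the
# spirit of MPEG-DASH. For each segment, pick the highest-bitrate
# prepared encoded stream that fits the currently measured bandwidth.

REPRESENTATIONS_KBPS = [500, 2000, 8000]  # encoded streams prepared in advance

def pick_representation(bandwidth_kbps):
    fitting = [r for r in REPRESENTATIONS_KBPS if r <= bandwidth_kbps]
    return max(fitting) if fitting else min(REPRESENTATIONS_KBPS)

# one choice per segment, as measured bandwidth fluctuates
chosen = [pick_representation(b) for b in (600, 2500, 9000, 100)]
```

Each segment can thus come from a different encoded stream, which is what makes segment-level adaptation possible.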
- transformation of dynamic range between encoded streams may be controlled in accordance with the techniques of this disclosure.
- prediction mode parameters indicating whether to set the gain and offset adaptively may be encoded and decoded. Therefore, it becomes possible to achieve an optimal balance between the prediction accuracy of DR prediction and the coding/decoding overhead, in accordance with various conditions such as encoder and decoder performance, transmission bandwidth, or required image quality.
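The gain-and-offset prediction model underlying DR prediction can be sketched as below. The parameter values (gain 4.0, offset 16, 10-bit output) are illustrative assumptions, not values from the disclosure.

```python
# Sketch (illustrative parameter values): adaptive-parameter DR
# prediction. Each color component of the SDR (base layer) sample is
# multiplied by a gain and shifted by an offset to predict the HDR
# (enhancement layer) sample, then clipped to the output bit depth.

def predict_hdr(sdr_pixels, gain, offset, max_value=1023):
    """Predict 10-bit HDR samples from SDR samples of one color component."""
    return [min(max(round(p * gain + offset), 0), max_value) for p in sdr_pixels]

pred = predict_hdr([0, 100, 200, 255], gain=4.0, offset=16)
```

With an adaptive mode, the encoder can retune `gain` and `offset` per picture or slice instead of using fixed values.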
- the differences from the past values of these prediction parameters may be encoded and decoded.
- an increase in code amount can be suppressed while achieving high prediction accuracy in dynamic range scalability.
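The difference-based transmission of prediction parameters can be sketched as below; the function names and the per-picture gain values are hypothetical, and real codecs would entropy-code the deltas rather than keep them as plain integers.

```python
# Sketch (hypothetical names): transmitting only the differences of a
# prediction parameter (e.g. the gain) from its previous value. Small
# deltas cost fewer bits than retransmitting full values, and the
# decoder reconstructs each value by accumulating the decoded deltas.

def encode_deltas(values, initial=0):
    deltas, prev = [], initial
    for v in values:
        deltas.append(v - prev)
        prev = v
    return deltas

def decode_deltas(deltas, initial=0):
    values, prev = [], initial
    for d in deltas:
        prev += d
        values.append(prev)
    return values

gains = [64, 66, 66, 70]      # per-picture gain values (illustrative)
deltas = encode_deltas(gains)
```

Because successive gains tend to be close, most transmitted deltas are near zero, which is where the code amount saving comes from.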
- said prediction parameters are decoded from a header having a syntax shared with weighted prediction related parameters.
- syntax redundancy is reduced, and it becomes easy to ensure compatibility when implementing and upgrading encoders and decoders.
- both sets of the prediction parameters for DR prediction can be encoded and decoded using the portion for the L0 reference frame and the portion for the L1 reference frame of the syntax of the weighted prediction related parameters. In this case, since a more flexible prediction model with high prediction accuracy can be used, the coding efficiency of dynamic range scalability can be improved.
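One way such a two-set model can be used, selecting a set per pixel by the band its value belongs to, is sketched below. The threshold and parameter values are hypothetical illustrations; the actual selection rule is not reproduced here.

```python
# Sketch (illustrative values): two sets of DR prediction parameters,
# conceptually carried in the L0 and L1 portions of the shared
# weighted-prediction syntax, with the set chosen per pixel according
# to the band (value range) the pixel value belongs to.

SET_L0 = {"gain": 4.0, "offset": 0}    # e.g. used for low-band (dark) pixels
SET_L1 = {"gain": 5.0, "offset": -64}  # e.g. used for high-band (bright) pixels

def predict_pixel(p, band_threshold=128):
    params = SET_L0 if p < band_threshold else SET_L1
    return round(p * params["gain"] + params["offset"])

pred = [predict_pixel(p) for p in (50, 200)]
```

Allowing dark and bright regions to use different gain/offset pairs is what makes the two-set model more flexible than a single global mapping.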
- control parameters may be encoded and decoded which indicate whether bit shifting in inter-layer processing should be performed simultaneously with DR conversion.
- DR conversion may be performed prior to performing upsampling. In this case, since the number of conversion target pixels for DR conversion is reduced, the processing cost of DR conversion can be further reduced.
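The cost argument for converting before upsampling can be made concrete with a simple pixel count, assuming a 2x spatial resolution ratio (the resolutions below are illustrative).

```python
# Sketch: why performing DR conversion before upsampling reduces its
# processing cost. DR conversion touches every pixel of its input, so
# converting at base layer resolution (before upsampling) processes
# fewer pixels than converting at enhancement layer resolution (after).

def dr_conversion_pixels(base_w, base_h, ratio, convert_first):
    if convert_first:
        return base_w * base_h                   # BL-resolution pixels
    return (base_w * ratio) * (base_h * ratio)   # EL-resolution pixels

ops_before = dr_conversion_pixels(960, 540, 2, convert_first=True)
ops_after = dr_conversion_pixels(960, 540, 2, convert_first=False)
```

With a 2x ratio in each dimension, converting first processes a quarter of the pixels that converting afterwards would.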
- CU, PU, and TU described herein mean a logical unit that also includes syntax associated with individual blocks in HEVC. When focusing only on individual blocks as part of an image, these may be replaced with the terms CB (Coding Block), PB (Prediction Block) and TB (Transform Block) respectively.
- the CB is formed by hierarchically dividing a coding tree block (CTB) into a quad-tree shape. The whole of one quadtree corresponds to CTB, and a logical unit corresponding to CTB is called CTU (Coding Tree Unit).
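The hierarchical quad-tree division of a CTB into CBs can be sketched as below. The split-decision callback here is an arbitrary illustration; a real encoder decides splits by rate-distortion optimization.

```python
# Sketch (illustrative split decisions): quad-tree division of a CTB
# into CBs. Each block either stays a single CB or splits into four
# half-size sub-blocks, so CB size varies with the splitting depth.

def split_ctb(x, y, size, split, min_size=8):
    """Return the list of CBs (x, y, size) for one CTB.

    `split` is a callback deciding whether the block at (x, y, size)
    is divided further.
    """
    if size > min_size and split(x, y, size):
        half = size // 2
        blocks = []
        for dy in (0, half):
            for dx in (0, half):
                blocks.extend(split_ctb(x + dx, y + dy, half, split, min_size))
        return blocks
    return [(x, y, size)]

# Split the whole 64x64 CTB once, then split only its top-left quadrant again:
cbs = split_ctb(0, 0, 64, lambda x, y, s: s == 64 or (s == 32 and x == 0 and y == 0))
```

The result is a mix of 32x32 and 16x16 CBs that together tile the 64x64 CTB exactly, illustrating how CB size follows splitting depth.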
- CTB and CB in HEVC are similar to the macroblock in H.264/AVC in that they are processing units of the encoding process.
- however, CTB and CB differ from the macroblock in that their sizes are not fixed (the macroblock size is always 16 × 16 pixels).
- the size of the CTB is selected from 16 ⁇ 16 pixels, 32 ⁇ 32 pixels and 64 ⁇ 64 pixels, and is specified by parameters in the coding stream.
- the size of CB may vary with the depth of CTB segmentation.
- the method of transmitting such information is not limited to such an example.
- the information may be transmitted or recorded as separate data associated with the coded bit stream without being multiplexed into the coded bit stream.
- the term “associate” means that an image (which may be a part of an image, such as a slice or a block) included in a bitstream can be linked at the time of decoding with information corresponding to the image. That is, the information may be transmitted on a transmission path different from that of the image (or bit stream).
- the information may be recorded on a recording medium (or another recording area of the same recording medium) different from the image (or bit stream).
- the information and the image (or bit stream) may be associated with each other in any unit such as, for example, a plurality of frames, one frame, or a part in a frame.
- the prediction unit predicts the image of the second layer using the prediction parameter when the prediction mode parameter indicates an adaptive parameter mode.
- the decoding unit does not decode the weighted prediction related parameters in the second layer, and the weighted prediction related parameters of the first layer are reused in the second layer. The image processing apparatus according to (4).
- the first set of prediction parameters is decoded from the portion for the L0 reference frame of the syntax shared with the weighted prediction related parameters;
- the second set of prediction parameters is decoded from the portion of the syntax for the L1 reference frame shared with the weighted prediction related parameters;
- the prediction unit selectively uses the first set of prediction parameters and the second set of prediction parameters to predict an image of the second layer.
- the image processing apparatus according to (4) or (5).
- the image processing apparatus according to (6), wherein the prediction unit selects the set to be used, among the first set of prediction parameters and the second set of prediction parameters, according to a band to which a pixel value belongs.
- the image processing apparatus according to (6), wherein the prediction unit selects the set to be used, among the first set of prediction parameters and the second set of prediction parameters, according to an image region to which a pixel value belongs.
- the decoding unit further decodes, when the bit depth of the second layer is larger than the bit depth of the first layer, a control parameter indicating whether the bit shift in predicting the image of the second layer should be performed simultaneously with the dynamic range conversion. The prediction unit executes the bit shift not at the time of upsampling but simultaneously with the dynamic range conversion if the control parameter indicates that the bit shift should be performed simultaneously with the dynamic range conversion. The image processing apparatus according to any one of (1) to (8).
- the decoding unit decodes the control parameter separately for the luminance component and the color difference component.
- the prediction unit converts the dynamic range of the image of the first layer using the prediction parameter, and then predicts the image of the second layer by upsampling the converted image. The image processing apparatus according to any one of (1) to (10).
- the encoding unit further encodes a prediction mode parameter indicating an adaptive parameter mode as a prediction mode when the image of the second layer is predicted using the prediction parameter. The image processing apparatus according to (13) or (14).
- the encoding unit encodes the prediction parameter in a header having a syntax shared with a weighted prediction related parameter.
- the encoding unit does not encode the weighted prediction related parameters in the second layer, and the weighted prediction related parameters of the first layer are reused in the second layer. The image processing apparatus according to (16).
- the prediction unit selectively uses the first set of prediction parameters and the second set of prediction parameters to predict an image of the second layer,
- the encoding unit encodes the first set of prediction parameters into the portion for the L0 reference frame of the syntax shared with the weighted prediction related parameters, and encodes the second set of prediction parameters into the portion for the L1 reference frame of the syntax shared with the weighted prediction related parameters,
- the image processing apparatus according to (16) or (17).
- the image processing apparatus according to (18), wherein the prediction unit selects the set to be used, among the first set of prediction parameters and the second set of prediction parameters, according to a band to which a pixel value belongs.
- the prediction unit converts the dynamic range of the image of the first layer using the prediction parameter, and then predicts the image of the second layer by upsampling the converted image. The image processing apparatus according to any one of (13) to (22).
- (24) An image processing method including: predicting an image of a second layer from an image of a first layer when encoding the image of the second layer having a luminance dynamic range larger than that of the first layer; and encoding a prediction parameter used in the prediction, the prediction parameter including a gain by which each color component of the first layer is multiplied, and an offset.
Abstract
Description
- Spatial scalability: the spatial resolution or image size is layered.
- Temporal scalability: the frame rate is layered.
- SNR (Signal to Noise Ratio) scalability: the SN ratio is layered.
Although not yet adopted in the standards, color gamut scalability, bit depth scalability, and chroma format scalability are also under discussion.
1. Overview
1-1. Scalable coding
1-2. Dynamic range scalability
1-3. Basic configuration example of the encoder
1-4. Basic configuration example of the decoder
2. Configuration example of the EL encoding unit according to an embodiment
2-1. Overall configuration
2-2. Detailed configuration of the DR prediction unit
2-3. Syntax examples
3. Flow of processing during encoding according to an embodiment
3-1. Schematic flow
3-2. DR prediction processing
4. Configuration example of the EL decoding unit according to an embodiment
4-1. Overall configuration
4-2. Detailed configuration of the DR prediction unit
5. Flow of processing during decoding according to an embodiment
5-1. Schematic flow
5-2. DR prediction processing
6. Application examples
6-1. Application to various products
6-2. Various uses of scalable coding
6-3. Others
7. Summary
[1-1. Scalable coding]
In scalable coding, a plurality of layers, each containing a series of images, are encoded. The base layer is the layer that is encoded first and represents the coarsest image. The encoded stream of the base layer can be decoded independently, without decoding the encoded streams of the other layers. A layer other than the base layer is called an enhancement layer and represents a finer image. The encoded stream of an enhancement layer is encoded using information contained in the encoded stream of the base layer. Accordingly, to reproduce the image of an enhancement layer, the encoded streams of both the base layer and the enhancement layer are decoded. The number of layers handled in scalable coding may be any number equal to or greater than two. When three or more layers are encoded, the lowest layer is the base layer and the remaining layers are enhancement layers. The encoded stream of a higher enhancement layer may be encoded and decoded using information contained in the encoded stream of a lower enhancement layer or of the base layer.
In the layer structure illustrated in Fig. 1, image textures are similar between layers that depict a common scene. That is, the textures of block B1 in layer L1, block B2 in layer L2, and block B3 in layer L3 are similar. Therefore, high prediction accuracy may be obtained by predicting, for example, the pixels of block B2 or block B3 using block B1 as a reference block, or the pixels of block B3 using block B2 as a reference block. Such prediction between layers is called inter-layer prediction. Non-Patent Literature 2 proposes several techniques for inter-layer prediction. Among them, in intra BL prediction, the decoded image (reconstructed image) of the base layer is used as a reference image for predicting the decoded image of the enhancement layer. In intra residual prediction and inter residual prediction, the prediction error (residual) image of the base layer is used as a reference image for predicting the prediction error image of the enhancement layer.
Fig. 3 is a block diagram showing a schematic configuration of an image encoding device 10 according to an embodiment that supports scalable coding. Referring to Fig. 3, the image encoding device 10 includes a base layer (BL) encoding unit 1a, an enhancement layer (EL) encoding unit 1b, a common memory 2, and a multiplexing unit 3.
Fig. 4 is a block diagram showing a schematic configuration of an image decoding device 60 according to an embodiment that supports scalable coding. Referring to Fig. 4, the image decoding device 60 includes a demultiplexing unit 5, a base layer (BL) decoding unit 6a, an enhancement layer (EL) decoding unit 6b, and a common memory 7.
[2-1. Overall configuration]
Fig. 5 is a block diagram showing an example configuration of the EL encoding unit 1b shown in Fig. 3. Referring to Fig. 5, the EL encoding unit 1b includes a reordering buffer 11, a subtraction unit 13, an orthogonal transform unit 14, a quantization unit 15, a lossless encoding unit 16, an accumulation buffer 17, a rate control unit 18, an inverse quantization unit 21, an inverse orthogonal transform unit 22, an addition unit 23, a loop filter 24, a frame memory 25, selectors 26 and 27, an intra prediction unit 30, an inter prediction unit 35, and a dynamic range (DR) prediction unit 40.
Fig. 7 is a block diagram showing an example configuration of the DR prediction unit 40 shown in Fig. 5. Referring to Fig. 7, the DR prediction unit 40 includes an upsampling unit 41, a prediction mode setting unit 42, a parameter calculation unit 43, and a dynamic range (DR) conversion unit 44.
The upsampling unit 41 upsamples the base layer image obtained from the common memory 2 in accordance with the resolution ratio between the base layer and the enhancement layer. More specifically, for each interpolated pixel scanned in order according to the resolution ratio, the upsampling unit 41 calculates an interpolated pixel value by filtering the base layer image with predefined filter coefficients. The spatial resolution of the base layer image used as a reference block is thereby raised to a resolution equivalent to that of the enhancement layer. The upsampling unit 41 outputs the upsampled image to the parameter calculation unit 43 and the DR conversion unit 44. When the resolutions of the two layers are equal, the upsampling may be skipped; in that case, the upsampling unit 41 may output the base layer image as-is to the parameter calculation unit 43 and the DR conversion unit 44.
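As an illustration of the data flow in the upsampling unit 41, the following sketch upsamples one row of pixel values by a factor of 2. It assumes a simple 2-tap (bilinear) filter for the interpolated positions; actual codecs use longer predefined multi-tap filters, but the scanning-and-filtering structure is the same.

```python
def upsample_2x(row):
    """Upsample a 1-D row of pixel values by a factor of 2.

    Even output positions copy the co-located source pixel; odd
    positions are interpolated with a 2-tap filter with rounding.
    The 2-tap filter is a simplifying assumption for illustration;
    predefined longer filters are used in practice.
    """
    out = []
    for i, v in enumerate(row):
        out.append(v)                      # co-located sample
        nxt = row[i + 1] if i + 1 < len(row) else v
        out.append((v + nxt + 1) // 2)     # interpolated sample, rounded
    return out
```

When the resolutions of the layers are equal, this step is simply skipped and the row is passed through unchanged.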
The prediction mode setting unit 42 sets, in the DR prediction unit 40, a prediction mode that is predefined or dynamically selected from among the prediction mode candidates for DR prediction. The candidates may include the bit shift mode, fixed parameter mode, and adaptive parameter mode described above. In one example, the prediction mode setting unit 42 may set the optimal prediction mode for each picture. In another example, it may set the optimal prediction mode for each slice; one picture may contain one or more slices. In yet another example, it may set the prediction mode for each sequence and keep the same prediction mode across the pictures and slices within one sequence. The prediction mode setting unit 42 may evaluate the coding efficiency or prediction accuracy of each prediction mode candidate and select the optimal mode. The prediction mode setting unit 42 outputs a prediction mode parameter indicating the set prediction mode to the lossless encoding unit 16.
The parameter calculation unit 43 calculates the prediction parameters to be used in the adaptive parameter mode when the adaptive parameter mode is set by the prediction mode setting unit 42, or when the coding efficiency or prediction accuracy of the adaptive parameter mode is evaluated by the prediction mode setting unit 42. The prediction parameters include the gains g_i and offsets o_i (i = 1, 2, 3) shown in equations (4) to (6), where the subscript i denotes each of the three color components. The gain g_i is a coefficient by which the base layer pixel value is multiplied, and the offset o_i is a value added to the product of the base layer pixel value and the gain g_i. The parameter calculation unit 43 may, for example, calculate, for each color component, the gain and offset that bring the upsampled base layer image input from the upsampling unit 41 closest to the original image input from the reordering buffer 11.
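The gain and offset that bring the upsampled base layer plane closest to the original can be computed, for example, by a least-squares fit. The text does not fix a particular fitting method, so the sketch below, which fits pred = g * base + o for one color component, is one plausible illustration rather than the method of the disclosure.

```python
def fit_gain_offset(base, orig):
    """Fit pred = g * base + o for one color component by least squares.

    `base` is the (upsampled) base layer plane and `orig` is the
    original enhancement layer plane, each as a flat list of pixel
    values. Returns the gain g and offset o minimizing the squared
    prediction error. Least squares is an assumed fitting method
    for illustration only.
    """
    n = len(base)
    mean_b = sum(base) / n
    mean_o = sum(orig) / n
    cov = sum((b - mean_b) * (y - mean_o) for b, y in zip(base, orig))
    var = sum((b - mean_b) ** 2 for b in base)
    g = cov / var if var else 1.0      # degenerate flat plane: keep gain 1
    o = mean_o - g * mean_b
    return g, o
```

The fit is run once per color component (i = 1, 2, 3), yielding the three gain/offset pairs of equations (4) to (6).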
The DR conversion unit 44 converts, in accordance with the prediction mode set by the prediction mode setting unit 42, the dynamic range of the upsampled base layer SDR image input from the upsampling unit 41 into a dynamic range equivalent to that of the enhancement layer HDR image. For example, when the bit shift mode is set, the DR conversion unit 44 calculates predicted pixel values by shifting the upsampled base layer pixel values to the left by a predetermined bit shift amount in accordance with equations (1) to (3). When the fixed parameter mode is set, the DR conversion unit 44 calculates predicted pixel values in accordance with equations (4) to (6) by multiplying the upsampled base layer pixel values by a fixed gain and further adding a fixed offset. When the adaptive parameter mode is set, the DR conversion unit 44 calculates predicted pixel values using the gains and offsets adaptively calculated by the parameter calculation unit 43 instead of the fixed gain and offset. A reference image for inter-layer prediction is thereby generated. The DR conversion unit 44 stores the reference image for inter-layer prediction generated in this way (a base layer image having a wide dynamic range corresponding to an HDR image) in the frame memory 25.
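The three prediction mode candidates can be sketched per pixel as follows. The default shift amount and the fixed gain/offset values shown here are hypothetical placeholders; the actual values follow equations (1) to (6) and may differ per color component.

```python
def dr_convert(pixel, mode, shift=2, gain=4.0, offset=0.0):
    """Predict an HDR pixel value from an (upsampled) SDR pixel value.

    `mode` selects among the three candidates described in the text.
    The defaults (shift=2, gain=4.0, offset=0.0) are illustrative
    assumptions, not values from the disclosure.
    """
    if mode == "bit_shift":           # equations (1)-(3): left shift
        return pixel << shift
    if mode == "fixed_parameter":     # equations (4)-(6) with fixed g, o
        return gain * pixel + offset
    if mode == "adaptive_parameter":  # same model, g and o computed adaptively
        return gain * pixel + offset
    raise ValueError("unknown prediction mode: %s" % mode)
```

The fixed and adaptive modes share the same linear model; they differ only in whether the gain and offset are predefined constants or values supplied by the parameter calculation unit.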
(1) Basic example
The prediction mode parameter output from the prediction mode setting unit 42 and the prediction parameters (gain and offset for each color component) output from the parameter calculation unit 43 may be encoded by the lossless encoding unit 16 shown in Fig. 5 and inserted into the encoded stream of the enhancement layer. Fig. 8 is an explanatory diagram describing an example of the syntax of these encoding parameters for DR prediction. Although the example shown here encodes the differences of the gains and offsets, the syntax described here is also applicable when the gains and offsets themselves are encoded (the "delta_" prefix of the parameter names may be removed).
The extension flag "dr_prediction_flag" in the first line of Fig. 8 and the prediction mode parameter "dr_prediction_model" in the third line may instead be encoded for each sequence and inserted into the SPS (Sequence Parameter Set). In that case, the same prediction mode is maintained within one sequence. If the prediction mode does not change within a sequence, switching the basis of the difference depending on the previous prediction mode, as illustrated in Fig. 9, becomes unnecessary, which reduces the complexity of the difference calculation and makes the device easier to implement. The code amount for the extension flag and the prediction mode parameter can also be reduced.
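The differential coding of the gain/offset pairs can be sketched as below. This is schematic: the real "delta_" syntax elements of Fig. 8 additionally quantize the gains and entropy-code the deltas, which is omitted here.

```python
def encode_deltas(params, prev_params):
    """Differentially encode per-component (gain, offset) pairs.

    Only the differences from the previously encoded values (e.g. of
    the previous picture) are emitted, as with the "delta_" syntax
    elements. `params` and `prev_params` are lists of (gain, offset)
    tuples, one per color component.
    """
    return [(g - pg, o - po)
            for (g, o), (pg, po) in zip(params, prev_params)]

def decode_deltas(deltas, prev_params):
    """Invert encode_deltas by adding the deltas back to the base values."""
    return [(pg + dg, po + do)
            for (dg, do), (pg, po) in zip(deltas, prev_params)]
```

Keeping one prediction mode per sequence means the "previous values" are always of the same kind, which is exactly why the basis-switching of Fig. 9 becomes unnecessary.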
In connection with Figs. 6B and 6C, it was described that the prediction mode parameter and the prediction parameters for DR prediction are encoded for each picture and inserted into the PPS. However, considering uses in which a different luminance dynamic range is used for each partial region of an image, it is beneficial to encode the differences of the prediction mode parameter and the prediction parameters for each slice or tile. For example, in the example shown in Fig. 10, the base layer image IMB1 is divided into four tiles TB1, TB2, TB3, and TB4, and the enhancement layer image IME1 is divided into four tiles TE1, TE2, TE3, and TE4. The four tiles each show video captured by a different camera. For example, the base layer image IMB1 may display an SDR version of composite video from cameras installed at four locations, while tiles TE2 and TE4 of the enhancement layer image IME1 display an HDR version of the same composite video as tiles TB2 and TB4. In this case, encoding the prediction mode parameter and the prediction parameters in the slice headers of the slices corresponding to tiles TE2 and TE4 enables optimal DR prediction for each tile and can improve coding efficiency.
The syntax of the prediction parameters for DR prediction illustrated in Fig. 8 is similar to the syntax of the parameters related to weighted prediction introduced in HEVC. Weighted prediction is a technique introduced to improve the prediction accuracy of inter prediction for video to which effects such as fade-in and fade-out are applied. Fig. 11 shows the syntax defined in Non-Patent Literature 1 for the weighted prediction related parameters.
In the previous section, it was described that when the syntax of the weighted prediction related parameters is reused for the prediction parameters of DR prediction, the syntax for the parameters of the L1 reference frame need not be used. In one modification, however, both the syntax for the parameters of the L0 reference frame and that for the L1 reference frame may be reused, so that two versions of the prediction parameters for DR prediction are provided.
In the first technique, the first version and the second version of the prediction parameters are selectively used according to the band to which a pixel value belongs. The band of a pixel value here may correspond, without limitation, to brightness for the luma component and to vividness for the chroma components.
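The band-dependent selection of the first technique can be sketched as follows. The single threshold and the parameter values used here are hypothetical; the text only states that the set is chosen according to the band to which the pixel value belongs.

```python
def predict_banded(pixel, threshold, params_lo, params_hi):
    """Select one of two (gain, offset) sets by the pixel's band.

    Pixels below `threshold` use the first parameter set (carried in
    the L0 portion of the shared syntax) and pixels at or above it use
    the second set (carried in the L1 portion). The threshold and the
    two-band split are illustrative assumptions.
    """
    g, o = params_lo if pixel < threshold else params_hi
    return g * pixel + o
```

This allows, for example, dark and bright regions of an SDR picture to be mapped to HDR with different linear models.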
In the second technique, the first version and the second version of the prediction parameters are selectively used according to the image region to which a pixel belongs. An image region here may correspond to an individual region formed by segmenting a picture, slice, or tile.
In one modification, the technique described in this section of providing two versions of the prediction parameters may be applied only to the luma component and not to the chroma components. In that case, for the chroma components, the prediction parameters (typically a gain and an offset) that can be encoded into and decoded from the portion for the L0 reference frame of the weighted prediction related parameter syntax are used regardless of the band to which a pixel value belongs or the image region to which a pixel belongs. The chroma parameters contained in the portion for the L1 reference frame may be set to an arbitrary value that maps to the shortest codeword under variable-length coding (for example, zero); that value may be ignored in the DR conversion. Since the contribution of the chroma components to subjective image quality is generally smaller than that of the luma component, simplifying the chroma processing in the DR conversion in this way can reduce the code amount of the prediction parameters at the cost of only a slight loss in image quality.
Not only the dynamic range but also the image size and the bit depth may differ between the base layer image and the enhancement layer image. If the processes converting these three attributes were each executed separately, the processing cost required for the inter-layer processing as a whole would become very large. JCTVC-O0194 ("SCE4: Test 5.1 results on bit-depth and color-gamut scalability", Alireza Aminlou et al., Oct. 23 to Nov. 1, 2013) therefore proposes reducing the processing cost by incorporating the bit shift operation into the filter operation of the upsampling.
[3-1. Schematic flow]
Fig. 20 is a flowchart showing an example of a schematic flow of processing at the time of encoding according to an embodiment. For simplicity of explanation, processing steps not directly related to the technology according to the present disclosure are omitted from the figure.
(1) First example
Fig. 21 is a flowchart showing a first example of the flow of the DR prediction processing in the encoding processing of the enhancement layer. The DR prediction processing described here is repeated for each picture or slice.
Fig. 22 is a flowchart showing a second example of the flow of the DR prediction processing in the encoding processing of the enhancement layer.
Fig. 23 is a flowchart showing a third example of the flow of the DR prediction processing in the encoding processing of the enhancement layer.
Fig. 24 is a flowchart showing a fourth example of the flow of the DR prediction processing in the encoding processing of the enhancement layer.
According to existing techniques, in inter-layer processing, the DR conversion is executed after the upsampling (and, if necessary, the bit shift) has been executed. The flowcharts of Figs. 21 to 24 also follow that processing order. However, since the processing cost of the DR conversion is proportional to the number of pixels to be converted, executing the DR conversion on the pixel count increased by the upsampling is not optimal from the viewpoint of processing cost. Moreover, executing the DR conversion on pixels having the extended bit depth after the bit shift means that the processing resources required for the DR conversion operations (for example, the required number of register bits) also increase. Therefore, as a modification, when the spatial resolution (image size) of the enhancement layer is higher than that of the base layer, the DR prediction unit 40 may predict the enhancement layer image by first converting the dynamic range of the base layer image and then upsampling the converted image.
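The cost argument for converting before upsampling can be made concrete with a toy per-pixel operation count. This model only counts converted pixels; the real cost also depends on the bit depth of the operands, which the modified order likewise reduces.

```python
def dr_conversion_cost(width, height, scale, convert_first):
    """Count per-pixel DR conversion operations under the two orders.

    Converting before upsampling touches only the base layer pixels;
    converting afterwards touches the (scale x scale)-times larger
    upsampled image. A simplified cost model for illustration.
    """
    base_pixels = width * height
    upsampled_pixels = base_pixels * scale * scale
    return base_pixels if convert_first else upsampled_pixels
```

For a 2x spatial ratio, converting first thus performs one quarter of the per-pixel conversions of the existing order.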
[4-1. Overall configuration]
Fig. 26 is a block diagram showing an example configuration of the EL decoding unit 6b shown in Fig. 4. Referring to Fig. 26, the EL decoding unit 6b includes an accumulation buffer 61, a lossless decoding unit 62, an inverse quantization unit 63, an inverse orthogonal transform unit 64, an addition unit 65, a loop filter 66, a reordering buffer 67, a D/A (Digital to Analogue) conversion unit 68, a frame memory 69, selectors 70 and 71, an intra prediction unit 80, an inter prediction unit 85, and a DR prediction unit 90.
Fig. 27 is a block diagram showing an example configuration of the DR prediction unit 90 shown in Fig. 26. Referring to Fig. 27, the DR prediction unit 90 includes an upsampling unit 91, a prediction mode setting unit 92, a parameter setting unit 93, and a DR conversion unit 94.
The upsampling unit 91 upsamples the base layer image obtained from the common memory 7 in accordance with the resolution ratio between the base layer and the enhancement layer. More specifically, for each interpolated pixel scanned in order according to the resolution ratio, the upsampling unit 91 calculates an interpolated pixel value by filtering the base layer image with predefined filter coefficients. The spatial resolution of the base layer image used as a reference block is thereby raised to a resolution equivalent to that of the enhancement layer. The upsampling unit 91 outputs the upsampled image to the DR conversion unit 94. When the resolutions of the two layers are equal, the upsampling may be skipped; in that case, the upsampling unit 91 may output the base layer image as-is to the DR conversion unit 94.
The prediction mode setting unit 92 sets, in the DR prediction unit 90, the prediction mode indicated by the prediction mode parameter decoded by the lossless decoding unit 62, from among the prediction mode candidates for DR prediction. The candidates may include the bit shift mode, fixed parameter mode, and adaptive parameter mode described above. In one example, the prediction mode setting unit 92 may set the prediction mode in accordance with a prediction mode parameter decoded from the PPS. In another example, it may set the prediction mode in accordance with a prediction mode parameter decoded from the slice header. In yet another example, it may set the prediction mode in accordance with a prediction mode parameter decoded from the SPS; when the prediction mode parameter is decoded from the SPS, the same prediction mode may be maintained within one sequence.
The parameter setting unit 93, when the adaptive parameter mode is set by the prediction mode setting unit 92, sets the prediction parameters to be used for DR prediction in accordance with the prediction parameters decoded by the lossless decoding unit 62. The prediction parameters here include the gains g_i and offsets o_i (i = 1, 2, 3) shown in equations (4) to (6).
The DR conversion unit 94 converts, in accordance with the prediction mode set by the prediction mode setting unit 92, the dynamic range of the upsampled base layer SDR image input from the upsampling unit 91 into a dynamic range equivalent to that of the enhancement layer HDR image. For example, when the bit shift mode is set, the DR conversion unit 94 calculates predicted pixel values by shifting the upsampled base layer pixel values to the left by a predetermined bit shift amount in accordance with equations (1) to (3). When the fixed parameter mode is set, the DR conversion unit 94 calculates predicted pixel values in accordance with equations (4) to (6) by multiplying the upsampled base layer pixel values by a fixed gain and further adding a fixed offset. When the adaptive parameter mode is set, the DR conversion unit 94 calculates predicted pixel values using the gains and offsets set by the parameter setting unit 93 instead of the fixed gain and offset. A reference image for inter-layer prediction is thereby generated. The DR conversion unit 94 stores the reference image for inter-layer prediction generated in this way (a base layer image having a wide dynamic range corresponding to an HDR image) in the frame memory 69.
[5-1. Schematic flow]
Fig. 28 is a flowchart showing an example of a schematic flow of processing at the time of decoding according to an embodiment. For simplicity of explanation, processing steps not directly related to the technology according to the present disclosure are omitted from the figure.
(1) First example
Fig. 29 is a flowchart showing a first example of the flow of the DR prediction processing in the decoding processing of the enhancement layer. The DR prediction processing described here is repeated for each picture or slice.
Fig. 30 is a flowchart showing a second example of the flow of the DR prediction processing in the decoding processing of the enhancement layer.
Fig. 31 is a flowchart showing a third example of the flow of the DR prediction processing in the decoding processing of the enhancement layer.
Fig. 32 is a flowchart showing a fourth example of the flow of the DR prediction processing in the decoding processing of the enhancement layer.
The flowcharts of Figs. 29 to 32 show examples in which the DR conversion is executed after the upsampling. However, as described with reference to Figs. 25A and 25B, as a modification, when the spatial resolution (image size) of the enhancement layer is higher than that of the base layer, the DR prediction unit 90 may predict the enhancement layer image by first converting the dynamic range of the base layer image and then upsampling the converted image. With such a processing order, the number of pixels and the bit depth subject to the DR conversion are reduced compared with the existing processing order, so the processing cost of the inter-layer processing as a whole can be further reduced.
[6-1. Application to various products]
The image encoding device 10 and the image decoding device 60 according to the embodiments described above may be applied to various electronic devices: transmitters or receivers for satellite broadcasting, wired broadcasting such as cable TV, distribution over the Internet, and distribution to terminals via cellular communication; recording devices that record images on media such as optical disks, magnetic disks, and flash memory; or reproduction devices that reproduce images from these storage media. Four application examples are described below.
Fig. 33 shows an example of a schematic configuration of a television device to which the embodiments described above are applied. The television device 900 includes an antenna 901, a tuner 902, a demultiplexer 903, a decoder 904, a video signal processing unit 905, a display unit 906, an audio signal processing unit 907, a speaker 908, an external interface 909, a control unit 910, a user interface 911, and a bus 912.
Fig. 34 shows an example of a schematic configuration of a mobile phone to which the embodiments described above are applied. The mobile phone 920 includes an antenna 921, a communication unit 922, an audio codec 923, a speaker 924, a microphone 925, a camera unit 926, an image processing unit 927, a multiplexing/demultiplexing unit 928, a recording/reproduction unit 929, a display unit 930, a control unit 931, an operation unit 932, and a bus 933.
Fig. 35 shows an example of a schematic configuration of a recording/reproduction device to which the embodiments described above are applied. The recording/reproduction device 940 encodes, for example, the audio data and video data of a received broadcast program and records them on a recording medium. The recording/reproduction device 940 may also encode audio data and video data obtained from another device and record them on a recording medium. In addition, the recording/reproduction device 940 reproduces the data recorded on the recording medium on a monitor and a speaker in response to a user instruction, decoding the audio data and video data at that time.
Fig. 36 shows an example of a schematic configuration of an imaging device to which the embodiments described above are applied. The imaging device 960 images a subject to generate an image, encodes the image data, and records it on a recording medium.
The advantages of scalable coding described above can be enjoyed in various uses. Three examples of uses are described below.
In a first example, scalable coding is used for selective transmission of data. Referring to Fig. 37, the data transmission system 1000 includes a stream storage device 1001 and a distribution server 1002. The distribution server 1002 is connected to several terminal devices via a network 1003. The network 1003 may be a wired network, a wireless network, or a combination thereof. Fig. 37 shows a PC (Personal Computer) 1004, an AV device 1005, a tablet device 1006, and a mobile phone 1007 as examples of the terminal devices.
In a second example, scalable coding is used for transmission of data over a plurality of communication channels. Referring to Fig. 38, the data transmission system 1100 includes a broadcast station 1101 and a terminal device 1102. The broadcast station 1101 broadcasts the encoded stream 1121 of the base layer on a terrestrial channel 1111, and transmits the encoded stream 1122 of the enhancement layer to the terminal device 1102 via a network 1112.
In a third example, scalable coding is used for storage of video. Referring to Fig. 39, the data transmission system 1200 includes an imaging device 1201 and a stream storage device 1202. The imaging device 1201 scalably encodes image data generated by imaging a subject 1211 and generates a multiplexed stream 1221. The multiplexed stream 1221 includes the encoded stream of the base layer and the encoded stream of the enhancement layer. The imaging device 1201 then supplies the multiplexed stream 1221 to the stream storage device 1202.
(1) Application to a multiview codec
A multiview codec is a type of multi-layer codec and is an image encoding scheme for encoding and decoding so-called multiview video. Fig. 40 is an explanatory diagram describing a multiview codec. Referring to Fig. 40, sequences of frames of three views captured at three viewpoints are shown. Each view is given a view ID (view_id). One of these views is designated as the base view; views other than the base view are called non-base views. In the example of Fig. 40, the view whose view ID is "0" is the base view, and the two views whose view IDs are "1" and "2" are non-base views. When these views are hierarchically encoded, each view may correspond to a layer. As indicated by the arrows in the figure, the images of a non-base view are encoded and decoded with reference to the images of the base view (images of other non-base views may also be referenced).
The technology according to the present disclosure may also be applied to streaming protocols. For example, in MPEG-DASH (Dynamic Adaptive Streaming over HTTP), a plurality of encoded streams whose parameters, such as resolution, differ from one another are prepared in advance at a streaming server. The streaming server then dynamically selects, segment by segment, the appropriate data to be streamed from the plurality of encoded streams and distributes the selected data. In such a streaming protocol, the conversion of the dynamic range between encoded streams may be controlled in accordance with the technology according to the present disclosure.
Embodiments of the technology according to the present disclosure have been described in detail above with reference to Figs. 1 to 42. According to the embodiments described above, a gain by which each color component of the first layer is multiplied and an offset are defined as the prediction parameters used when predicting an image of a second layer (for example, an enhancement layer) having a luminance dynamic range larger than that of a first layer (for example, a base layer) from an image of the first layer, and the image of the second layer is predicted from the image of the first layer using these gains and offsets. This means that the relationship between images with mutually different luminance dynamic ranges is approximated by a linear relationship that is independent for each color component. Using such a prediction model makes it possible to predict an HDR image from an SDR image with reasonable accuracy, without requiring complex algorithms such as color space conversion or filtering across multiple frames. Adopting such a mechanism in a video format also guarantees the versatility and extensibility of the format and makes it easy to utilize the video format in a wide variety of encoders and decoders. Furthermore, since implementations based on the prediction model described above are relatively simple, increases in circuit scale and rises in power consumption caused by a growing amount of computation are also avoided.
(1)
An image processing apparatus including:
a decoding unit that decodes a prediction parameter used when predicting an image of a second layer having a luminance dynamic range larger than that of a first layer from an image of the first layer, the prediction parameter including a gain by which each color component of the first layer is multiplied and an offset; and
a prediction unit that predicts the image of the second layer from the image of the first layer using the prediction parameter decoded by the decoding unit.
(2)
The image processing apparatus according to (1), wherein
the decoding unit decodes a difference of the prediction parameter from a past value, and
the prediction unit sets the prediction parameter using the difference decoded by the decoding unit.
(3)
The image processing apparatus according to (1) or (2), wherein
the decoding unit further decodes a prediction mode parameter indicating a prediction mode, and
the prediction unit predicts the image of the second layer using the prediction parameter when the prediction mode parameter indicates an adaptive parameter mode.
(4)
The image processing apparatus according to any one of (1) to (3), wherein the decoding unit decodes the prediction parameter from a header having a syntax shared with weighted prediction related parameters.
(5)
The image processing apparatus according to (4), wherein
the decoding unit does not decode the weighted prediction related parameters in the second layer, and
the weighted prediction related parameters of the first layer are reused in the second layer.
(6)
The image processing apparatus according to (4) or (5), wherein
a first set of the prediction parameters is decoded from a portion for an L0 reference frame of the syntax shared with the weighted prediction related parameters,
a second set of the prediction parameters is decoded from a portion for an L1 reference frame of the syntax shared with the weighted prediction related parameters, and
the prediction unit selectively uses the first set of the prediction parameters and the second set of the prediction parameters to predict the image of the second layer.
(7)
The image processing apparatus according to (6), wherein the prediction unit selects the set to be used, from among the first set of the prediction parameters and the second set of the prediction parameters, according to a band to which a pixel value belongs.
(8)
The image processing apparatus according to (6), wherein the prediction unit selects the set to be used, from among the first set of the prediction parameters and the second set of the prediction parameters, according to an image region to which a pixel belongs.
(9)
The image processing apparatus according to any one of (1) to (8), wherein
the decoding unit further decodes, when the bit depth of the second layer is larger than the bit depth of the first layer, a control parameter indicating whether a bit shift in predicting the image of the second layer should be executed simultaneously with dynamic range conversion, and
the prediction unit, when the control parameter indicates that the bit shift in predicting the image of the second layer should be executed simultaneously with dynamic range conversion, executes the bit shift simultaneously with the dynamic range conversion rather than with the upsampling.
(10)
The image processing apparatus according to (9), wherein the decoding unit decodes the control parameter separately for the luma component and the chroma components.
(11)
The image processing apparatus according to any one of (1) to (10), wherein the prediction unit, when the spatial resolution of the second layer is higher than the spatial resolution of the first layer, predicts the image of the second layer by converting the dynamic range of the image of the first layer using the prediction parameter and then upsampling the converted image.
(12)
An image processing method including:
decoding a prediction parameter used when predicting an image of a second layer having a luminance dynamic range larger than that of a first layer from an image of the first layer, the prediction parameter including a gain by which each color component of the first layer is multiplied and an offset; and
predicting the image of the second layer from the image of the first layer using the decoded prediction parameter.
(13)
An image processing apparatus including:
a prediction unit that predicts, when encoding an image of a second layer having a luminance dynamic range larger than that of a first layer, the image of the second layer from an image of the first layer; and
an encoding unit that encodes a prediction parameter used by the prediction unit, the prediction parameter including a gain by which each color component of the first layer is multiplied and an offset.
(14)
The image processing apparatus according to (13), wherein the encoding unit encodes a difference of the prediction parameter from a past value.
(15)
The image processing apparatus according to (13) or (14), wherein the encoding unit further encodes a prediction mode parameter indicating an adaptive parameter mode as the prediction mode when the image of the second layer is predicted using the prediction parameter.
(16)
The image processing apparatus according to any one of (13) to (15), wherein the encoding unit encodes the prediction parameter in a header having a syntax shared with weighted prediction related parameters.
(17)
The image processing apparatus according to (16), wherein
the encoding unit does not encode the weighted prediction related parameters in the second layer, and
the weighted prediction related parameters of the first layer are reused in the second layer.
(18)
The image processing apparatus according to (16) or (17), wherein
the prediction unit selectively uses a first set of the prediction parameters and a second set of the prediction parameters to predict the image of the second layer, and
the encoding unit encodes the first set of the prediction parameters into a portion for an L0 reference frame of the syntax shared with the weighted prediction related parameters, and the second set of the prediction parameters into a portion for an L1 reference frame of the syntax shared with the weighted prediction related parameters.
(19)
The image processing apparatus according to (18), wherein the prediction unit selects the set to be used, from among the first set of the prediction parameters and the second set of the prediction parameters, according to a band to which a pixel value belongs.
(20)
The image processing apparatus according to (18), wherein the prediction unit selects the set to be used, from among the first set of the prediction parameters and the second set of the prediction parameters, according to an image region to which a pixel belongs.
(21)
The image processing apparatus according to any one of (13) to (20), wherein the encoding unit further encodes, when the bit depth of the second layer is larger than the bit depth of the first layer, a control parameter indicating whether a bit shift in predicting the image of the second layer should be executed simultaneously with dynamic range conversion.
(22)
The image processing apparatus according to (21), wherein the encoding unit encodes the control parameter separately for the luma component and the chroma components.
(23)
The image processing apparatus according to any one of (13) to (22), wherein the prediction unit, when the spatial resolution of the second layer is higher than the spatial resolution of the first layer, predicts the image of the second layer by converting the dynamic range of the image of the first layer using the prediction parameter and then upsampling the converted image.
(24)
An image processing method including:
predicting, when encoding an image of a second layer having a luminance dynamic range larger than that of a first layer, the image of the second layer from an image of the first layer; and
encoding a prediction parameter used in the prediction, the prediction parameter including a gain by which each color component of the first layer is multiplied and an offset.
16 lossless encoding unit
40 DR prediction unit
60, 60v image decoding device (image processing apparatus)
62 lossless decoding unit
90 DR prediction unit
Claims (20)
- An image processing apparatus comprising:
a decoding unit that decodes a prediction parameter used when predicting an image of a second layer having a luminance dynamic range larger than that of a first layer from an image of the first layer, the prediction parameter including a gain by which each color component of the first layer is multiplied and an offset; and
a prediction unit that predicts the image of the second layer from the image of the first layer using the prediction parameter decoded by the decoding unit.
- The image processing apparatus according to claim 1, wherein the decoding unit decodes a difference of the prediction parameter from a past value, and the prediction unit sets the prediction parameter using the difference decoded by the decoding unit.
- The image processing apparatus according to claim 1, wherein the decoding unit decodes the prediction parameter from a header having a syntax shared with weighted prediction related parameters.
- The image processing apparatus according to claim 3, wherein the decoding unit does not decode the weighted prediction related parameters in the second layer, and the weighted prediction related parameters of the first layer are reused in the second layer.
- The image processing apparatus according to claim 3, wherein a first set of the prediction parameters is decoded from a portion for an L0 reference frame of the syntax shared with the weighted prediction related parameters, a second set of the prediction parameters is decoded from a portion for an L1 reference frame of the syntax shared with the weighted prediction related parameters, and the prediction unit selectively uses the first set of the prediction parameters and the second set of the prediction parameters to predict the image of the second layer.
- The image processing apparatus according to claim 5, wherein the prediction unit selects the set to be used, from among the first set of the prediction parameters and the second set of the prediction parameters, according to a band to which a pixel value belongs.
- The image processing apparatus according to claim 5, wherein the prediction unit selects the set to be used, from among the first set of the prediction parameters and the second set of the prediction parameters, according to an image region to which a pixel belongs.
- The image processing apparatus according to claim 1, wherein the decoding unit further decodes, when the bit depth of the second layer is larger than the bit depth of the first layer, a control parameter indicating whether a bit shift in predicting the image of the second layer should be executed simultaneously with dynamic range conversion, and the prediction unit, when the control parameter indicates that the bit shift should be executed simultaneously with dynamic range conversion, executes the bit shift simultaneously with the dynamic range conversion rather than with the upsampling.
- The image processing apparatus according to claim 1, wherein the prediction unit, when the spatial resolution of the second layer is higher than the spatial resolution of the first layer, predicts the image of the second layer by converting the dynamic range of the image of the first layer using the prediction parameter and then upsampling the converted image.
- An image processing method including: decoding a prediction parameter used when predicting an image of a second layer having a luminance dynamic range larger than that of a first layer from an image of the first layer, the prediction parameter including a gain by which each color component of the first layer is multiplied and an offset; and predicting the image of the second layer from the image of the first layer using the decoded prediction parameter.
- An image processing apparatus comprising: a prediction unit that predicts, when encoding an image of a second layer having a luminance dynamic range larger than that of a first layer, the image of the second layer from an image of the first layer; and an encoding unit that encodes a prediction parameter used by the prediction unit, the prediction parameter including a gain by which each color component of the first layer is multiplied and an offset.
- The image processing apparatus according to claim 11, wherein the encoding unit encodes a difference of the prediction parameter from a past value.
- The image processing apparatus according to claim 11, wherein the encoding unit encodes the prediction parameter in a header having a syntax shared with weighted prediction related parameters.
- The image processing apparatus according to claim 13, wherein the encoding unit does not encode the weighted prediction related parameters in the second layer, and the weighted prediction related parameters of the first layer are reused in the second layer.
- The image processing apparatus according to claim 13, wherein the prediction unit selectively uses a first set of the prediction parameters and a second set of the prediction parameters to predict the image of the second layer, and the encoding unit encodes the first set of the prediction parameters into a portion for an L0 reference frame of the syntax shared with the weighted prediction related parameters, and the second set of the prediction parameters into a portion for an L1 reference frame of the syntax shared with the weighted prediction related parameters.
- The image processing apparatus according to claim 15, wherein the prediction unit selects the set to be used, from among the first set of the prediction parameters and the second set of the prediction parameters, according to a band to which a pixel value belongs.
- The image processing apparatus according to claim 15, wherein the prediction unit selects the set to be used, from among the first set of the prediction parameters and the second set of the prediction parameters, according to an image region to which a pixel belongs.
- The image processing apparatus according to claim 11, wherein the encoding unit further encodes, when the bit depth of the second layer is larger than the bit depth of the first layer, a control parameter indicating whether a bit shift in predicting the image of the second layer should be executed simultaneously with dynamic range conversion.
- The image processing apparatus according to claim 11, wherein the prediction unit, when the spatial resolution of the second layer is higher than the spatial resolution of the first layer, predicts the image of the second layer by converting the dynamic range of the image of the first layer using the prediction parameter and then upsampling the converted image.
- An image processing method including: predicting, when encoding an image of a second layer having a luminance dynamic range larger than that of a first layer, the image of the second layer from an image of the first layer; and encoding a prediction parameter used in the prediction, the prediction parameter including a gain by which each color component of the first layer is multiplied and an offset.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP14875507.7A EP2938084A4 (en) | 2013-12-27 | 2014-11-07 | IMAGE PROCESSING DEVICE AND IMAGE PROCESSING METHOD |
JP2015554656A JP6094688B2 (ja) | 2013-12-27 | 2014-11-07 | 画像処理装置及び画像処理方法 |
CN201480006074.4A CN104956679B (zh) | 2013-12-27 | 2014-11-07 | 图像处理装置和图像处理方法 |
US14/763,282 US9571838B2 (en) | 2013-12-27 | 2014-11-07 | Image processing apparatus and image processing method |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013271483 | 2013-12-27 | ||
JP2013-271483 | 2013-12-27 | ||
JP2014-050517 | 2014-03-13 | ||
JP2014050517 | 2014-03-13 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015098315A1 true WO2015098315A1 (ja) | 2015-07-02 |
Family
ID=53478193
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/079622 WO2015098315A1 (ja) | 2013-12-27 | 2014-11-07 | 画像処理装置及び画像処理方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US9571838B2 (ja) |
EP (1) | EP2938084A4 (ja) |
JP (1) | JP6094688B2 (ja) |
CN (1) | CN104956679B (ja) |
WO (1) | WO2015098315A1 (ja) |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111343460B (zh) * | 2014-02-07 | 2023-10-20 | 索尼公司 | 接收装置、显示设备及接收方法 |
RU2686642C2 (ru) * | 2014-08-08 | 2019-04-29 | Конинклейке Филипс Н.В. | Способы и устройства для кодирования hdr-изображений |
US10609327B2 (en) * | 2014-12-29 | 2020-03-31 | Sony Corporation | Transmission device, transmission method, reception device, and reception method |
JP6986670B2 (ja) | 2015-09-11 | 2021-12-22 | パナソニックIpマネジメント株式会社 | 映像受信方法及び映像受信装置 |
EP3349474A4 (en) * | 2015-09-11 | 2018-07-25 | Panasonic Intellectual Property Management Co., Ltd. | Video reception method, video transmission method, video reception apparatus, and video transmission apparatus |
JP6830190B2 (ja) * | 2015-10-07 | 2021-02-17 | パナソニックIpマネジメント株式会社 | 映像送信方法、映像受信方法、映像送信装置及び映像受信装置 |
CN106937121B (zh) * | 2015-12-31 | 2021-12-10 | 中兴通讯股份有限公司 | 图像解码和编码方法、解码和编码装置、解码器及编码器 |
CN109076252B (zh) | 2016-02-16 | 2022-07-01 | 弗劳恩霍夫应用研究促进协会 | 使用自适应流传输协议取回视频的设备及其方法 |
GB2547442B (en) * | 2016-02-17 | 2022-01-12 | V Nova Int Ltd | Physical adapter, signal processing equipment, methods and computer programs |
EP3244616A1 (en) * | 2016-05-13 | 2017-11-15 | Thomson Licensing | A method for encoding an input video comprising a luma component and two chroma components, the method comprising reshaping of said input video based on reshaping functions |
JP6852411B2 (ja) * | 2017-01-19 | 2021-03-31 | ソニー株式会社 | 映像信号処理装置、映像信号処理方法およびプログラム |
CN106886804B (zh) * | 2017-01-22 | 2020-04-28 | 陕西外号信息技术有限公司 | 一种光标签的自增强方法 |
CN111801948B (zh) * | 2018-03-01 | 2023-01-03 | 索尼公司 | 图像处理装置和方法、成像元件和成像装置 |
US11533512B2 (en) * | 2020-04-10 | 2022-12-20 | Qualcomm Incorporated | Dynamic range adjustment parameter signaling and enablement of variable bit depth support |
US11652996B2 (en) * | 2021-05-25 | 2023-05-16 | Tencent America LLC | Method and apparatus for video coding |
CN116229870B (zh) * | 2023-05-10 | 2023-08-15 | 苏州华兴源创科技股份有限公司 | 补偿数据的压缩、解压缩方法及显示面板补偿方法 |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011509536A (ja) * | 2008-01-04 | 2011-03-24 | シャープ株式会社 | レイヤー間(inter−layer)画像予測パラメータを決定するための方法及び装置 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20080040543A (ko) * | 2006-11-02 | 2008-05-08 | 엘지전자 주식회사 | 위상천이 기반 프리코딩을 이용한 데이터 전송 방법 및이를 지원하는 송수신기 |
BR122020013607B1 (pt) * | 2011-03-11 | 2023-10-24 | Sony Corporation | Aparelho e método de processamento de imagem |
CN102625102B (zh) * | 2011-12-22 | 2014-02-12 | 北京航空航天大学 | 一种面向h.264/svc mgs编码的率失真模式选择方法 |
CN102740078B (zh) * | 2012-07-12 | 2014-10-22 | 北方工业大学 | 基于hevc标准的自适应空间可伸缩编码 |
WO2014053519A1 (en) * | 2012-10-01 | 2014-04-10 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Scalable video coding using inter-layer prediction of spatial intra prediction parameters |
MX344000B (es) * | 2013-04-05 | 2016-12-02 | Sony Corp | Aparato de procesamiento de imagenes y metodo de procesamiento de imagenes. |
US10368097B2 (en) * | 2014-01-07 | 2019-07-30 | Nokia Technologies Oy | Apparatus, a method and a computer program product for coding and decoding chroma components of texture pictures for sample prediction of depth pictures |
-
2014
- 2014-11-07 JP JP2015554656A patent/JP6094688B2/ja not_active Expired - Fee Related
- 2014-11-07 WO PCT/JP2014/079622 patent/WO2015098315A1/ja active Application Filing
- 2014-11-07 US US14/763,282 patent/US9571838B2/en not_active Expired - Fee Related
- 2014-11-07 EP EP14875507.7A patent/EP2938084A4/en not_active Withdrawn
- 2014-11-07 CN CN201480006074.4A patent/CN104956679B/zh not_active Expired - Fee Related
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011509536A (ja) * | 2008-01-04 | 2011-03-24 | シャープ株式会社 | レイヤー間(inter−layer)画像予測パラメータを決定するための方法及び装置 |
Non-Patent Citations (10)
Title |
---|
BENJAMIN BROSS; WOO-JIN HAN; GARY J. SULLIVAN; JENS-RAINER OHM; GARY J. SULLIVAN; YE-KUI WANG; THOMAS WIEGAND: "High Efficiency Video Coding (HEVC) text specification draft 10 (for FDIS & Consent", JCTVC-L1003, vol. 4, 14 January 2013 (2013-01-14) |
CHEUNG AUYEUNG: "Non-SCE4: Picture and region adaptive gain-offset prediction for color space scalability", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 15TH MEETING, 23 October 2013 (2013-10-23), GENEVA, CH, XP030115235 * |
CHEUNG AUYEUNG: "SCE4: Results of test 5.4 -modell on piecewise linear color space predictor", J OINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 15TH MEETING, 23 October 2013 (2013-10-23), GENEVA, CH, XP030115236 * |
DAVID TOUZE: "High Dynamic Range Video Distribution Using Existing Video Codecs", 30TH PICTURE CODING SYMPOSIUM, 6 December 2013 (2013-12-06) |
DO-KYOUNG KWON ET AL.: "Inter-layer slice header syntax elements prediction in SHVC and MV-HEVC", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 13TH MEETING, 18 April 2013 (2013-04-18), INCHEON, KR, XP030057208 * |
JIANLE CHEN: "Description of scalable video coding technology proposal by Qualcomm (configuration 2", JCTVC-K0036, 10 October 2012 (2012-10-10) |
See also references of EP2938084A4 |
XIANG LI ET AL.: "Non-SCE4: Weighted Prediction Based Color Gamut Scalability", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 15TH MEETING, 23 October 2013 (2013-10-23), GENEVA, CH, XP030115215 * |
YUWEN HE ET AL.: "Non-SCE4/AHG14: Combined bit-depth and color gamut conversion with 3D LUT for SHVC color gamut scalability", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT- VC)OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11 15TH MEETING, 23 October 2013 (2013-10-23), GENEVA, CH, XP030115186 * |
Also Published As
Publication number | Publication date |
---|---|
CN104956679B (zh) | 2019-07-12 |
US20150358617A1 (en) | 2015-12-10 |
EP2938084A1 (en) | 2015-10-28 |
CN104956679A (zh) | 2015-09-30 |
US9571838B2 (en) | 2017-02-14 |
JPWO2015098315A1 (ja) | 2017-03-23 |
EP2938084A4 (en) | 2016-10-26 |
JP6094688B2 (ja) | 2017-03-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6094688B2 (ja) | 画像処理装置及び画像処理方法 | |
US9743100B2 (en) | Image processing apparatus and image processing method | |
JP6345650B2 (ja) | 画像処理装置及び画像処理方法 | |
US20150016522A1 (en) | Image processing apparatus and image processing method | |
WO2015053001A1 (ja) | 画像処理装置及び画像処理方法 | |
US20150036744A1 (en) | Image processing apparatus and image processing method | |
CN105409217B (zh) | 图像处理装置、图像处理方法和计算机可读介质 | |
WO2015005025A1 (ja) | 画像処理装置及び画像処理方法 | |
WO2015146278A1 (ja) | 画像処理装置及び画像処理方法 | |
WO2015098561A1 (ja) | 復号装置および復号方法、並びに、符号化装置および符号化方法 | |
US20150043638A1 (en) | Image processing apparatus and image processing method | |
WO2014148070A1 (ja) | 画像処理装置及び画像処理方法 | |
WO2015005024A1 (ja) | 画像処理装置及び画像処理方法 | |
WO2014097703A1 (ja) | 画像処理装置及び画像処理方法 | |
WO2015053111A1 (ja) | 復号装置および復号方法、並びに、符号化装置および符号化方法 | |
WO2014050311A1 (ja) | 画像処理装置及び画像処理方法 | |
WO2015098231A1 (ja) | 画像処理装置及び画像処理方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 2014875507 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14763282 Country of ref document: US |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14875507 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2015554656 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |