WO2014002901A1 - Image processing apparatus and method - Google Patents
Image processing apparatus and method
- Publication number
- WO2014002901A1 (PCT/JP2013/067114)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- unit
- dynamic range
- characteristic information
- range characteristic
- Prior art date
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
- H04N19/80—Details of filtering operations specially adapted for video compression, e.g. for pixel interpolation
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/86—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
- H04N19/90—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using coding techniques not provided for in groups H04N19/10-H04N19/85, e.g. fractals
- H04N19/98—Adaptive-dynamic-range coding [ADRC]
Definitions
- the present disclosure relates to an image processing apparatus and method, and more particularly, to an image processing apparatus and method capable of accurately reproducing the dynamic range of an image.
- H.264/AVC; HEVC (High Efficiency Video Coding)
- JCTVC (Joint Collaborative Team on Video Coding)
- tone mapping information is transmitted in SEI (Supplemental Enhancement Information) shown in FIG.
- the content of the tone mapping information is the same as that specified in AVC as shown in FIG. 2 (see Non-Patent Document 2).
- in Non-Patent Document 1, however, the dynamic range of the decoded image was not defined.
- the present disclosure has been made in view of such a situation, and makes it possible to accurately reproduce the dynamic range of an image.
- an image processing apparatus includes an encoding unit that encodes an image to generate a bitstream, a setting unit that sets dynamic range characteristic information indicating the characteristics of the dynamic range assigned to a developed image with respect to a captured image, and a transmission unit that transmits the bitstream generated by the encoding unit and the dynamic range characteristic information set by the setting unit.
- the setting unit can set, as the dynamic range characteristic information, code information indicating a dynamic range code assigned to the developed image with respect to the captured image.
- the setting unit can set, as the dynamic range characteristic information, code information indicating a code assigned to the developed image with respect to the white level of the captured image.
- the setting unit can set white level code information indicating a code assigned to the developed image with respect to the white level of the captured image as the dynamic range characteristic information.
- the setting unit can set the maximum white level code information indicating the maximum value of the code assigned to the white level of the developed image as the dynamic range characteristic information.
- the setting unit can set black level code information indicating the black level code of the developed image as the dynamic range characteristic information.
- the setting unit can set gray level code information indicating the gray level code of the developed image as the dynamic range characteristic information.
- the setting unit can set maximum white level information indicating the maximum value of the white level of the captured image as the dynamic range characteristic information.
- the setting unit can set information indicating a luminance range of a region of interest of an image obtained by decoding the bitstream as the dynamic range characteristic information.
- the setting unit can set information indicating the position and offset of a region of interest of an image obtained by decoding the bitstream as the dynamic range characteristic information.
- the transmission unit can transmit the dynamic range characteristic information as auxiliary information used when displaying an image obtained by decoding the bit stream.
- the transmission unit can transmit the dynamic range characteristic information as extended auxiliary information obtained by extending existing auxiliary information.
- the transmission unit can transmit the dynamic range characteristic information as tone_mapping_information SEI (Supplemental enhancement information).
- the transmission unit can transmit the dynamic range characteristic information as SEI by extending the model_id used in tone_mapping_information SEI.
- the transmission unit can transmit the dynamic range characteristic information as VUI (Video Usability Information) indicating the usability of the image in sequence units.
- the encoding unit can encode the image in accordance with an encoding method conforming to the AVC / H.264 standard.
- an image processing method generates a bitstream by encoding an image, sets dynamic range characteristic information indicating the characteristics of the dynamic range assigned to a developed image with respect to a captured image, and transmits the generated bitstream and the set dynamic range characteristic information.
- an image processing apparatus includes a decoding unit that decodes a bitstream to generate an image, and an image adjusting unit that adjusts the dynamic range of the image generated by the decoding unit using dynamic range characteristic information indicating the characteristics of the dynamic range assigned to a developed image with respect to a captured image.
- the image processing apparatus may further include a receiving unit that receives the bitstream and the dynamic range characteristic information. In that case, the decoding unit decodes the bitstream received by the receiving unit, and the image adjusting unit adjusts the dynamic range of the image generated by the decoding unit using the dynamic range characteristic information received by the receiving unit.
- an image processing method decodes a bitstream to generate an image, and adjusts the dynamic range of the generated image using dynamic range characteristic information indicating the characteristics of the dynamic range assigned to a developed image with respect to a captured image.
- in one aspect of the present disclosure, a bitstream is generated by encoding an image, dynamic range characteristic information indicating the characteristics of the dynamic range assigned to a developed image with respect to a captured image is set, and the generated bitstream and the set dynamic range characteristic information are transmitted.
- in another aspect of the present disclosure, an image is generated by decoding a bitstream, and the dynamic range of the generated image is adjusted using the dynamic range characteristic information.
- the above-described image processing apparatus may be an independent apparatus, or may be an internal block constituting one image encoding apparatus or image decoding apparatus.
- according to one aspect of the present disclosure, an image can be encoded, and in particular the dynamic range of the image can be accurately reproduced.
- according to another aspect of the present disclosure, an image can be decoded, and in particular the dynamic range of the image can be accurately reproduced.
- FIG. 17 is a flowchart describing details of the encoding process of FIG. 16.
- FIG. 24 is a block diagram illustrating a configuration example of an encoding unit in FIG. 23. A further block diagram shows a configuration example of a second embodiment of the decoding device to which the present technology is applied, and another shows a configuration example of its decoding unit.
- FIG. 20 is a block diagram illustrating a main configuration example of a computer. A further block diagram shows an example of the schematic configuration of a television apparatus.
- FIG. 3 is a block diagram illustrating a configuration example of the first embodiment of the encoding device as an image processing device to which the present technology is applied.
- the encoding device 1 in FIG. 3 includes an encoding unit 2, a setting unit 3, and a transmission unit 4, and encodes an image such as a captured image using the HEVC method.
- an image such as a captured image in units of frames is input to the encoding unit 2 of the encoding device 1 as an input signal.
- the encoding unit 2 encodes the input signal by the HEVC method, and supplies the encoded data obtained as a result to the setting unit 3.
- the setting unit 3 sets the SPS (Sequence Parameter Set), the PPS (Picture Parameter Set), VUI (Video Usability Information) indicating the characteristics (usability) of the image corresponding to the encoded data for each sequence, SEI (Supplemental Enhancement Information), and the like.
- the setting unit 3 generates an encoded stream from the set SPS, PPS, VUI, and SEI and the encoded data supplied from the encoding unit 2.
- the setting unit 3 supplies the encoded stream to the transmission unit 4.
- the transmission unit 4 transmits the encoded stream supplied from the setting unit 3 to a decoding device to be described later.
- FIG. 4 is a block diagram illustrating a configuration example of the encoding unit 2 of FIG.
- the encoding unit 2 in FIG. 4 includes an A/D conversion unit 11, a screen rearrangement buffer 12, a calculation unit 13, an orthogonal transform unit 14, a quantization unit 15, a lossless encoding unit 16, an accumulation buffer 17, an inverse quantization unit 18, an inverse orthogonal transform unit 19, an addition unit 20, a deblocking filter 21, a frame memory 22, a switch 23, an intra prediction unit 24, a motion prediction/compensation unit 25, a predicted image selection unit 26, and a rate control unit 27.
- an adaptive offset filter 41 and an adaptive loop filter 42 are provided between the deblock filter 21 and the frame memory 22.
- the A / D conversion unit 11 of the encoding unit 2 performs A / D conversion on an image in frame units input as an input signal, and outputs and stores the image in the screen rearrangement buffer 12.
- the screen rearrangement buffer 12 rearranges the stored frame-by-frame images into encoding order in accordance with the GOP (Group of Pictures) structure, and outputs them to the calculation unit 13, the intra prediction unit 24, and the motion prediction/compensation unit 25.
- the calculation unit 13 performs encoding by subtracting the predicted image supplied from the predicted image selection unit 26 from the encoding target image output from the screen rearrangement buffer 12, and outputs the resulting image to the orthogonal transform unit 14 as residual information. When no predicted image is supplied from the predicted image selection unit 26, the calculation unit 13 outputs the image read from the screen rearrangement buffer 12 to the orthogonal transform unit 14 as residual information as it is.
- the orthogonal transform unit 14 performs orthogonal transform on the residual information from the calculation unit 13 and supplies a coefficient obtained as a result of the orthogonal transform to the quantization unit 15.
- the quantization unit 15 quantizes the coefficient supplied from the orthogonal transform unit 14.
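The subtract → transform → quantize path through the calculation unit 13, orthogonal transform unit 14, and quantization unit 15 can be sketched as follows (a minimal illustration only: the orthonormal floating-point DCT and the single quantization step are simplifying assumptions, not the actual HEVC integer transform and quantizer):

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (floating-point; HEVC itself uses
    # an integer approximation of this transform).
    m = np.array([[np.cos(np.pi * (2 * j + 1) * i / (2 * n)) for j in range(n)]
                  for i in range(n)])
    m[0, :] *= 1.0 / np.sqrt(n)
    m[1:, :] *= np.sqrt(2.0 / n)
    return m

def encode_block(block, prediction, qstep):
    residual = block.astype(float) - prediction   # calculation unit 13
    d = dct_matrix(block.shape[0])
    coeff = d @ residual @ d.T                    # orthogonal transform unit 14
    return np.round(coeff / qstep).astype(int)    # quantization unit 15
```

A constant residual concentrates all energy into the DC coefficient, which is why prediction followed by a transform compresses well.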
- the quantized coefficient is input to the lossless encoding unit 16.
- the lossless encoding unit 16 acquires information indicating the optimal intra prediction mode (hereinafter referred to as intra prediction mode information) from the intra prediction unit 24, and acquires information indicating the optimal inter prediction mode (hereinafter referred to as inter prediction mode information), motion vectors, information for specifying reference images, and the like from the motion prediction/compensation unit 25.
- the lossless encoding unit 16 acquires the storage flag, index or offset, and type information from the adaptive offset filter 41 as offset filter information, and acquires filter coefficients from the adaptive loop filter 42.
- the lossless encoding unit 16 performs lossless encoding, such as variable-length coding (for example, CAVLC (Context-Adaptive Variable Length Coding)) or arithmetic coding (for example, CABAC (Context-Adaptive Binary Arithmetic Coding)), on the quantized coefficients supplied from the quantization unit 15.
- the lossless encoding unit 16 losslessly encodes the intra prediction mode information or the inter prediction mode information, the motion vector, the information specifying the reference image, the offset filter information, the filter coefficients, and the like as encoding information related to the encoding.
- the lossless encoding unit 16 supplies the encoding information and the coefficients that have been losslessly encoded to the accumulation buffer 17 as encoded data and accumulates them.
- the losslessly encoded encoding information may be used as header information of the losslessly encoded coefficients.
- the accumulation buffer 17 temporarily stores the encoded data supplied from the lossless encoding unit 16. Further, the accumulation buffer 17 supplies the stored encoded data to the setting unit 3 in FIG.
- the quantized coefficient output from the quantization unit 15 is also input to the inverse quantization unit 18, subjected to inverse quantization, and then supplied to the inverse orthogonal transform unit 19.
- the inverse orthogonal transform unit 19 performs inverse orthogonal transform on the coefficients supplied from the inverse quantization unit 18 and supplies residual information obtained as a result to the addition unit 20.
- the addition unit 20 adds the residual information, supplied as the decoding-target image from the inverse orthogonal transform unit 19, to the predicted image supplied from the predicted image selection unit 26 to obtain a locally decoded image.
- the adding unit 20 sets the residual information supplied from the inverse orthogonal transform unit 19 as a locally decoded image.
- the adder 20 supplies the locally decoded image to the deblocking filter 21 and also supplies it to the frame memory 22 for storage.
- the deblocking filter 21 removes block distortion by filtering the locally decoded image supplied from the adding unit 20.
- the deblocking filter 21 supplies an image obtained as a result to the adaptive offset filter 41.
- the adaptive offset filter 41 performs adaptive offset filter (SAO: Sample adaptive offset) processing, which mainly removes ringing, on the image after the deblocking filter processing by the deblocking filter 21.
- the adaptive offset filter 41 determines the type of adaptive offset filter processing for each LCU (Largest Coding Unit) which is the maximum coding unit, and obtains an offset used in the adaptive offset filter processing.
- the adaptive offset filter 41 performs the determined type of adaptive offset filter processing on the image after the adaptive deblocking filter processing, using the obtained offset. Then, the adaptive offset filter 41 supplies the image after the adaptive offset filter processing to the adaptive loop filter 42.
- the adaptive offset filter 41 has a buffer for storing the offset.
- for each LCU, the adaptive offset filter 41 determines whether the offset used for the adaptive offset filter processing is already stored in the buffer.
- when the offset is already stored in the buffer, the adaptive offset filter 41 sets a storage flag, which indicates whether the offset is stored in the buffer, to a value (here, 1) indicating that the offset is stored in the buffer.
- in this case, the adaptive offset filter 41 supplies, for each LCU, the storage flag set to 1, an index indicating the storage position of the offset in the buffer, and type information indicating the type of the applied adaptive offset filter processing to the lossless encoding unit 16.
- otherwise, the adaptive offset filter 41 stores the offsets in order in the buffer, sets the storage flag to a value (here, 0) indicating that the offset is not stored in the buffer, and supplies the storage flag set to 0, the offset, and the type information to the lossless encoding unit 16 for each LCU.
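The storage-flag signaling described above can be sketched as follows (a hypothetical Python sketch; the function and variable names are illustrative, not the patent's actual implementation):

```python
def signal_sao_offsets(lcu_offsets, buffer):
    # For each LCU, produce (storage_flag, payload, type_info) as the
    # adaptive offset filter hands them to the lossless encoding unit:
    # flag 1 + buffer index when the offset is already stored,
    # flag 0 + the offset itself (which is then buffered) otherwise.
    signals = []
    for offset, type_info in lcu_offsets:
        if offset in buffer:
            signals.append((1, buffer.index(offset), type_info))
        else:
            buffer.append(offset)
            signals.append((0, offset, type_info))
    return signals
```

Sending an index instead of a repeated offset is the point of the storage flag: the third LCU below reuses the first offset and costs only an index.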
- the adaptive loop filter 42 performs an adaptive loop filter (ALF: Adaptive Loop Filter) process on the image after the adaptive offset filter process supplied from the adaptive offset filter 41, for example, for each LCU.
- as the adaptive loop filter processing, for example, processing using a two-dimensional Wiener filter is used; of course, filters other than the Wiener filter may be used.
- for each LCU, the adaptive loop filter 42 calculates filter coefficients that minimize the residual between the original image, which is the image output from the screen rearrangement buffer 12, and the image after the adaptive loop filter processing. Then, the adaptive loop filter 42 performs the adaptive loop filter processing for each LCU on the image after the adaptive offset filter processing, using the calculated filter coefficients.
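The per-LCU coefficient calculation can be illustrated by a least-squares fit, here in a deliberately simplified one-dimensional form (a sketch assuming a 3-tap filter and numpy's `lstsq`; the actual filter is two-dimensional):

```python
import numpy as np

def wiener_coefficients(decoded, original, taps=3):
    # Filter coefficients minimizing the residual between the filtered
    # decoded signal and the original, in the least-squares sense.
    pad = taps // 2
    x = np.pad(decoded, pad, mode='edge')
    # Each row holds the taps-wide neighborhood of one sample.
    a = np.array([x[i:i + taps] for i in range(len(decoded))])
    coeff, *_ = np.linalg.lstsq(a, original, rcond=None)
    return coeff
```

If the original really is a filtered version of the decoded signal, the fit recovers the filter kernel exactly.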
- the adaptive loop filter 42 supplies the image after the adaptive loop filter processing to the frame memory 22.
- the adaptive loop filter 42 supplies the filter coefficient to the lossless encoding unit 16.
- here, the adaptive loop filter processing is performed for each LCU, but the processing unit of the adaptive loop filter processing is not limited to the LCU. Processing can, however, be performed efficiently by aligning the processing units of the adaptive offset filter 41 and the adaptive loop filter 42.
- the image stored in the frame memory 22 is output as a reference image to the intra prediction unit 24 or the motion prediction / compensation unit 25 via the switch 23.
- the intra prediction unit 24 performs intra prediction processing in all candidate intra prediction modes, in units of tiles and slices, using reference images read from the frame memory 22 via the switch 23 that have not been filtered by the deblocking filter 21.
- the intra prediction unit 24 calculates cost function values (described later in detail) for all candidate intra prediction modes based on the image read from the screen rearrangement buffer 12 and the predicted images generated by the intra prediction processing. Then, the intra prediction unit 24 determines the intra prediction mode that minimizes the cost function value as the optimal intra prediction mode.
- the intra prediction unit 24 supplies the predicted image generated in the optimal intra prediction mode and the corresponding cost function value to the predicted image selection unit 26.
- the intra prediction unit 24 supplies the intra prediction mode information to the lossless encoding unit 16 when the prediction image selection unit 26 is notified of selection of a prediction image generated in the optimal intra prediction mode.
- the cost function value is also called RD (Rate Distortion) cost, and is calculated by the method of either High Complexity mode or Low Complexity mode, as defined in the JM (Joint Model), the reference software for the H.264/AVC format.
- in High Complexity mode, Cost(Mode) = D + λ·R, where D is the difference (distortion) between the original image and the decoded image, R is the amount of generated code including up to the orthogonal transform coefficients, and λ is the Lagrange multiplier given as a function of the quantization parameter QP.
- in Low Complexity mode, Cost(Mode) = D + QPtoQuant(QP)·Header_Bit, where D is the difference (distortion) between the original image and the predicted image, Header_Bit is the code amount of header information such as the prediction mode, and QPtoQuant is a function given as a function of the quantization parameter QP.
- the motion prediction / compensation unit 25 performs motion prediction / compensation processing for all candidate inter prediction modes in units of tiles and slices. Specifically, the motion prediction / compensation unit 25 is based on the image supplied from the screen rearrangement buffer 12 and the filtered reference image read from the frame memory 22 via the switch 23 in units of tiles and slices. , Detecting motion vectors of all candidate inter prediction modes. Then, the motion prediction / compensation unit 25 performs compensation processing on the reference image based on the motion vector in units of tiles and slices, and generates a predicted image.
- the motion prediction/compensation unit 25 calculates cost function values for all candidate inter prediction modes based on the image supplied from the screen rearrangement buffer 12 and the predicted images, and determines the inter prediction mode that minimizes the cost function value as the optimal inter prediction mode.
- the motion prediction / compensation unit 25 supplies the cost function value of the optimal inter prediction mode and the corresponding predicted image to the predicted image selection unit 26.
- when notified by the predicted image selection unit 26 of the selection of the predicted image generated in the optimal inter prediction mode, the motion prediction/compensation unit 25 outputs the inter prediction mode information, the corresponding motion vector, and the information for specifying the reference image to the lossless encoding unit 16.
- based on the cost function values supplied from the intra prediction unit 24 and the motion prediction/compensation unit 25, the predicted image selection unit 26 determines whichever of the optimal intra prediction mode and the optimal inter prediction mode has the smaller cost function value as the optimal prediction mode. Then, the predicted image selection unit 26 supplies the predicted image in the optimal prediction mode to the calculation unit 13 and the addition unit 20, and notifies the intra prediction unit 24 or the motion prediction/compensation unit 25 of the selection of the predicted image in the optimal prediction mode.
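The intra/inter decision can be sketched with the JM-style High Complexity cost Cost(Mode) = D + λ·R (an illustrative sketch; the distortion, rate, and λ values used below are hypothetical):

```python
def rd_cost(distortion, rate, lam):
    # High Complexity mode: Cost(Mode) = D + lambda * R
    return distortion + lam * rate

def select_prediction_mode(candidates, lam):
    # candidates: (mode_name, distortion, rate) triples; return the name
    # of the mode with the minimum RD cost, as the predicted image
    # selection unit does when choosing between the optimal intra and
    # optimal inter prediction modes.
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))[0]
```

With λ = 0.5, a mode with distortion 90 and rate 50 (cost 115) beats one with distortion 100 and rate 40 (cost 120), showing how λ trades distortion against bits.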
- the rate control unit 27 controls the rate of the quantization operation of the quantization unit 15 based on the encoded data stored in the accumulation buffer 17 so that overflow or underflow does not occur.
- the horizontal axis represents the white level of the captured image.
- the vertical axis represents a digital code assigned to a developed image.
- the developed image is an image having a given bit depth (gradation).
- 800% on the horizontal axis is the value of the camera sensitivity and the optimum exposure (at the time of shooting and development), and is the maximum brightness at the time of shooting. This value is set as camera_iso_sensitivity and max_image_white_level, which are part of the dynamic range characteristic information, and is transmitted to the decoding side.
- here, the camera sensitivity and optimum exposure values are the same as the maximum brightness at the time of shooting, but they may be different.
- the vertical-axis value (940) corresponding to the maximum white level is the digital value to which the maximum white level is assigned in the developed image; this value is set as max_white_level_code_value, which is one of the dynamic range characteristic information, and is transmitted to the decoding side.
- likewise, the digital value to which the white level (100% white) is assigned in the developed image is set as white_level_code_value, which is one of the dynamic range characteristic information, and is transmitted to the decoding side.
- the 20% on the horizontal axis is a level (gray level) that is generally used as a standard exposure indicating gray, and is often set on the camera side.
- the vertical-axis value for this gray level is the digital value to which the gray level (20% white) is assigned in the developed image; this value is set as gray_level_code_value, which is one of the dynamic range characteristic information, and is transmitted to the decoding side.
- the 0% on the horizontal axis is the black level.
- the vertical-axis value (64) for this black level is the digital value to which the black level (0% white) is assigned in the developed image; this value is set as black_level_code_value, which is one of the dynamic range characteristic information, and is transmitted to the decoding side.
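As a hypothetical sketch of how the signaled code values pin down the mapping, the following interpolates linearly between anchor points such as the (0%, 64) and (800%, 940) pairs above (linear interpolation between anchors is an assumption for illustration; the curve in FIG. 5 may differ):

```python
def code_value(white_percent, anchors):
    # Piecewise-linear map from scene white level (%) to the digital code
    # of the developed image, interpolating between signaled anchor points.
    pts = sorted(anchors.items())
    lo_x, lo_y = pts[0]
    for hi_x, hi_y in pts[1:]:
        if white_percent <= hi_x:
            t = (white_percent - lo_x) / (hi_x - lo_x)
            return round(lo_y + t * (hi_y - lo_y))
        lo_x, lo_y = hi_x, hi_y
    return pts[-1][1]

# black_level_code_value at 0% white, max_white_level_code_value at 800%
anchors = {0: 64, 800: 940}
```

Intermediate anchors such as gray_level_code_value or multiple white_level_code_values simply add more entries to `anchors`, refining the curve the decoder can reconstruct.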
- code information indicating a dynamic range code assigned to a developed image is set as dynamic range characteristic information for a captured image and transmitted to the decoding side. That is, for the captured image, dynamic range characteristic information indicating the dynamic range characteristic assigned to the developed image is set and transmitted to the decoding side.
- with such dynamic range characteristic information, information indicating the quality of the content (for example, that the image has high-quality white-level information, such as a dynamic range wider than that of existing content, and therefore high potential) is clearly conveyed from the content creation side and transmitted to the display side (decoding side).
- as a result, the content creation side is motivated to provide the image intended by the creator.
- on the display side, the dynamic range is expanded (increased) or narrowed (decreased) based on this information. Furthermore, by referring to this information, the following processing can be performed accurately on the display side.
- for example, a display can lower the range using tone mapping or the like in accordance with its own display capability.
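For example, a display that cannot reach the signaled maximum might compress the range linearly (a deliberately simple sketch; real displays would use a perceptual tone-mapping curve, and all parameter values here are hypothetical):

```python
def adapt_to_display(code, black_code, max_white_code, display_max_code):
    # Linearly compress the signaled code range [black_code, max_white_code]
    # into the range the display can actually reproduce.
    t = (code - black_code) / (max_white_code - black_code)
    return round(black_code + t * (display_max_code - black_code))
```

Because the signaled black_level_code_value and max_white_level_code_value bound the meaningful code range, the display can rescale exactly that range instead of guessing.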
- a plurality of white_level_code_values can be set and transmitted in addition to white_level_code_value between black_level_code_value and max_white_level_code_value.
- FIG. 6 shows an example in which white_level_code_value_0 to white_level_code_value_4 are set and transmitted when the white level of the captured image is between 0% and 800%.
- FIG. 7 is a diagram showing an example of dynamic range characteristic information.
- the dynamic range characteristic information is configured to include camera_iso_sensitivity, output_exposure_index, ref_screen_lw, black_level_code_value, gray_level_code_value, white_level_code_value, and max_white_level_code_value.
- Camera_iso_sensitivity specifies the sensitivity of the camera when an image is taken, as described above with reference to FIG.
- the output_exposure_index specifies an exposure index that is set to be used in the image development process (that is, an exposure index at the time of development).
- ref_screen_lw specifies the reference display brightness of the white level set to be used in the image development process.
- black_level_code_value, gray_level_code_value, white_level_code_value, and max_white_level_code_value specify the luminance code data to which the black level, gray level, white level, and maximum white level are respectively assigned.
- the dynamic range characteristic information desirably includes the maximum brightness at the time of capture (of the captured image), the optimum exposure value at the time of capture, the optimum exposure value at the time of development (of the developed image), the digital value assigned to the maximum white level after development, the digital value assigned to the white level (100% white) after development, the digital value assigned to the gray level after development, the digital value assigned to the black level after development, and digital values between 100% white and the maximum white level after development.
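Collected as a record, the fields of FIG. 7 might look like this (a sketch only: the field names follow the syntax elements above, but the types, the container itself, and the sample values are assumptions):

```python
from dataclasses import dataclass

@dataclass
class DynamicRangeCharacteristicInfo:
    camera_iso_sensitivity: int      # camera sensitivity at capture
    output_exposure_index: int       # exposure index used at development
    ref_screen_lw: int               # reference display white level (cd/m2)
    black_level_code_value: int      # code assigned to the black level (0%)
    gray_level_code_value: int       # code assigned to the gray level
    white_level_code_value: int      # code assigned to 100% white
    max_white_level_code_value: int  # code assigned to the maximum white level

# Hypothetical values loosely matching the example of FIG. 5
# (the gray and white codes here are assumed, not from the figure).
info = DynamicRangeCharacteristicInfo(800, 800, 100, 64, 330, 502, 940)
```

Grouping the syntax elements this way makes the encoder-side setting unit and the decoder-side adjustment unit consume the same structure.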
- the above-described dynamic range characteristic information is transmitted to the decoding side by any one of the transmission methods 1 to 4 described below.
- FIG. 8 is a diagram illustrating an example of tone mapping SEI (tone_mapping_information SEI).
- SEI is auxiliary information used when displaying an image obtained by decoding an encoded stream.
- camera_iso_sensitivity and output_exposure_index that are not hatched in the frame are existing information (conventional technology) as camera setting parameters.
- this is different from the prior art in that the information is transmitted in an encoded bit stream or the dynamic range is adjusted using the information.
- TBD (To Be Determined Value) represents a preset value or a parameter set when the content is created.
- FIG. 9 is a diagram showing another example of tone mapping SEI.
- Camera_iso_sensitivity_idc specifies a code indicating the sensitivity that can be acquired by the camera. The meaning of this code is shown in the table of FIG. 10. When camera_iso_sensitivity_idc indicates Extended_ISO, the ISO number is represented by camera_iso_sensitivity on the next line. That is, by setting camera_iso_sensitivity_idc to Extended_ISO, the sensitivity can be set to any value.
- Exposure_index_idc specifies a code indicating the exposure index at the time of shooting. The meaning of this code is shown in the table of FIG. 10. When exposure_index_idc indicates Extended_ISO, the ISO number is represented by exposure_index_rating on the next line. That is, by setting exposure_index_idc to Extended_ISO, the exposure index can be set to any value.
- Sign_image_exposure_value specifies the sign of the exposure at the time of development relative to the exposure value at the time of shooting.
- image_expoure_value0 specifies the numerator of the relative value of the exposure at the time of development with respect to the exposure value at the time of shooting.
- image_expoure_value1 specifies the denominator of the relative value of the exposure at the time of development with respect to the exposure value at the time of shooting.
- using sign_image_exposure_value, image_expoure_value0, and image_expoure_value1, the exposure value at the time of development (the output_exposure_index described above) can be expressed relative to the exposure value at the time of shooting, including as a fractional (decimal) value.
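The sign/numerator/denominator signaling can be sketched as follows (the interpretation of the sign flag as 1 = negative is an assumption; the parameter names follow the syntax elements above):

```python
from fractions import Fraction

def relative_exposure(sign_image_exposure_value, image_expoure_value0,
                      image_expoure_value1):
    # Exposure at development relative to exposure at shooting, carried as
    # a sign flag plus a numerator/denominator pair, so fractional values
    # can be signaled with integers only.
    value = Fraction(image_expoure_value0, image_expoure_value1)
    return -value if sign_image_exposure_value else value
```

Signaling a fraction of integers avoids putting floating-point values in the bitstream while still expressing non-integer exposure ratios such as 1/3.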
- Ref_screen_lw explicitly indicates at how many cd/m2 (candela) of white the content is intended to be displayed.
- Max_image_white_level specifies the dynamic range of the brightness of the image, expressed as an integer percentage with reference to the reference white level.
- Black_level_code_value, white_level_code_value, and max_white_level_code_value specify the code data of the luminance to which the black level, the white level, and the maximum white level are assigned, as in the example of FIG.
- Of these elements, camera_iso_sensitivity, exposure_index_idc, sign_image_exposure, image_expoure_value0, and image_expoure_value1, which are not hatched in the frame, are existing information (prior art).
- In the prior art, however, this information is neither transmitted in an encoded bitstream nor used to adjust the dynamic range.
- FIG. 10 is a diagram showing a table of meanings of values indicated by the sensitivity of the camera and values indicated by the exposure index.
- When the indicated value is 0, the ISO number is not specified.
- When the indicated value is 1, the ISO number is 10.
- When the indicated value is 2 to 30, corresponding ISO numbers are likewise specified (their illustration is omitted).
- For the other values, the ISO number is Reserved.
- When the indicated value is 255, the ISO number is specified to be Extended_ISO. When the ISO number is Extended_ISO, as described above with reference to FIG. 9, camera_iso_sensitivity_idc and exposure_index_idc can each indicate a desired value.
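The table lookup described above can be sketched as follows (the function name is illustrative; the concrete ISO numbers for indicator values 2 to 30 are omitted in the figure, so they are omitted here as well):

```python
EXTENDED_ISO = 255

def iso_number(idc, extended_value=None):
    """Interpret camera_iso_sensitivity_idc / exposure_index_idc.
    extended_value stands for the value of the following syntax element
    (camera_iso_sensitivity or exposure_index_rating)."""
    if idc == 0:
        return None            # ISO number not specified
    if idc == 1:
        return 10
    if 2 <= idc <= 30:
        raise NotImplementedError("per-value ISO numbers omitted in the figure")
    if idc == EXTENDED_ISO:
        return extended_value  # Extended_ISO: any desired ISO number
    return "Reserved"
```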
- FIG. 11 is a diagram illustrating an example of luminance dynamic range SEI (luminance_dynamic_range_information SEI).
- In the luminance dynamic range SEI, luminance_dynamic_range_info (luminance dynamic range information) is described.
- VUI (Video Usability Information)
- FIG. 12 is a diagram illustrating an example of the VUI syntax when linked with the transmission method 1.
- In this VUI, tone_mapping_flag (tone mapping flag) is described.
- The tone mapping flag is a flag indicating the presence or absence of tone mapping SEI.
- The tone mapping flag is 1 when it indicates that tone mapping SEI is present, and 0 when it indicates that tone mapping SEI is absent.
- FIG. 13 is a diagram illustrating an example of VUI syntax when linked with the transmission method 2.
- In this VUI, luminance_dynamic_range_flag (luminance dynamic range flag) is described.
- The luminance dynamic range flag is a flag indicating the presence or absence of the luminance dynamic range SEI.
- The luminance dynamic range flag is set to 1 when it indicates that the luminance dynamic range SEI is present, and to 0 when it indicates that the luminance dynamic range SEI is absent.
- Alternatively, the dynamic range characteristic information may be transmitted as one of the above-described VUI parameters. That is, in this case, instead of the flag shown in FIG. 12 or FIG. 13 (or in addition to the flag), the dynamic range characteristic information itself is transmitted as a VUI parameter.
- When the dynamic range characteristic information is included in SEI, it can be applied not only to the HEVC method but also to the AVC method. On the other hand, since many of the values used on the display side are included in the VUI, including the dynamic range characteristic information in the VUI allows that information to be gathered in one place.
- FIG. 14 is a diagram illustrating an example of VUI syntax in the case of the transmission method 4.
- The tone_mapping_flag (tone mapping flag) of FIG. 12 is described as shown above the frame; the dynamic range characteristic information does not have to come immediately after the flag, as long as it is within the VUI. The flag is 1 when the dynamic range characteristic information is described, and 0 when it is not described.
- When the flag is 1, the dynamic range characteristic information shown in the frame of FIG. 14 is referred to.
- FIG. 15 is a diagram showing an example of dynamic range characteristic information.
- The dynamic range characteristic information is information described in the tone mapping SEI, the luminance dynamic range SEI, the VUI, or the like. In the example of FIG. 15, "xxxxx()" is described at the beginning of the syntax.
- The dynamic range characteristic information in FIG. 15 differs from the dynamic range characteristic information in FIG. 9 in that information indicating the luminance range of the region of interest and/or the position and offset of the region of interest is added below max_white_level_code_value.
- roi_luminance_range_flag is a flag indicating whether or not information indicating the luminance range of the region of interest and/or the position and offset of the region of interest is described.
- When the value of roi_luminance_range_flag is 1, min_roi_luminance_range, max_roi_luminance_range, roi_region_x, roi_region_y, roi_region_x_offset, and roi_region_y_offset are described, as shown in the solid black part.
- Min_roi_luminance_range specifies the minimum value of the luminance range of the region of interest.
- max_roi_luminance_range specifies the maximum value of the luminance range of the region of interest.
- roi_region_x and roi_region_y specify the x-coordinate and the y-coordinate of the upper left of the region of interest, respectively.
- roi_region_x_offset and roi_region_y_offset represent offset values from the upper-left roi_region_x and roi_region_y, respectively. Thereby, the region of interest extending from (roi_region_x, roi_region_y) can be specified.
- By including the luminance range of the region of interest and/or the position of the region of interest and its offsets in the dynamic range characteristic information in this way, the intention to perform tone mapping matched to the region of interest can be conveyed to the decoding side.
- Alternatively, a flag conveying this intention may be added.
- That is, whereas video was conventionally limited to 100% white, video now contains whiter levels, and since display capabilities vary, this information is provided so that each display can output video suited to its own capability.
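For illustration, the region of interest signalled by roi_region_x, roi_region_y and their offsets can be recovered as a rectangle (the function name is an assumption):

```python
def roi_rectangle(roi_region_x, roi_region_y,
                  roi_region_x_offset, roi_region_y_offset):
    """Return the region of interest as (x0, y0, x1, y1): the upper-left
    corner plus the offsets measured from it."""
    return (roi_region_x,
            roi_region_y,
            roi_region_x + roi_region_x_offset,
            roi_region_y + roi_region_y_offset)
```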
- FIG. 16 is a flowchart illustrating the generation process of the encoding device 1 of FIG. 3. In the example of FIG. 16, the transmission method 3 described above is used.
- In step S1 of FIG. 16, the encoding unit 2 of the encoding device 1 performs an encoding process of encoding, by the HEVC method, an image such as a captured image input from the outside in units of frames as an input signal. Details of this encoding process will be described later with reference to FIGS. 17 and 18.
- In step S2, the setting unit 3 sets the SPS.
- In step S3, the setting unit 3 sets the PPS.
- In step S4, the setting unit 3 determines whether the image to be encoded is an HDR (High Dynamic Range) image, based on a user operation on an input unit (not shown).
- Hereinafter, an image having the dynamic range characteristic information described above is referred to as an HDR image.
- When it is determined in step S4 that the image to be encoded is an HDR image, in step S5 the setting unit 3 sets a VUI including 1 as the HDR image flag. In step S6, the setting unit 3 sets SEI such as the HDR image SEI and advances the process to step S8.
- The HDR image flag is tone_mapping_flag described above with reference to FIG. 12 or luminance_dynamic_range_flag described above with reference to FIG. 13.
- The HDR image SEI is the tone mapping SEI described above with reference to FIG. 8 or the luminance dynamic range SEI described above with reference to FIG. 11.
- When it is determined in step S4 that the image to be encoded is not an HDR image, in step S7 the setting unit 3 sets a VUI including 0 as the HDR image flag. In addition, the setting unit 3 sets SEI other than the HDR image SEI as necessary, and advances the process to step S8.
- In step S8, the setting unit 3 generates an encoded stream from the set SPS, PPS, VUI, and SEI and the encoded data supplied from the encoding unit 2, and supplies the encoded stream to the transmission unit 4.
- In step S9, the transmission unit 4 transmits the encoded stream supplied from the setting unit 3 to a decoding device described later, and the process ends.
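The flag/SEI setting of steps S4 to S8 can be summarized in the following sketch (the dictionary-based stream representation and the function name are illustrative, not the device's actual data structures):

```python
def generate_encoded_stream(encoded_data, is_hdr_image, hdr_image_sei=None):
    """Steps S4-S8 in outline: for an HDR image, the HDR image flag
    (tone_mapping_flag or luminance_dynamic_range_flag) is set to 1 in the
    VUI and the HDR image SEI is attached; otherwise the flag is set to 0."""
    vui = {"hdr_image_flag": 1 if is_hdr_image else 0}               # steps S5 / S7
    sei = [hdr_image_sei] if is_hdr_image and hdr_image_sei else []  # step S6
    # Step S8: the encoded stream is generated from SPS, PPS, VUI, SEI
    # and the encoded data.
    return {"sps": {}, "pps": {}, "vui": vui, "sei": sei,
            "data": encoded_data}
```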
- FIGS. 17 and 18 are flowcharts illustrating details of the encoding process in step S1 of FIG. 16.
- In step S11, the A/D conversion unit 11 of the encoding unit 2 performs A/D conversion on the frame-unit image input as the input signal, and outputs and stores the image in the screen rearrangement buffer 12.
- In step S12, the screen rearrangement buffer 12 rearranges the stored frame images, which are in display order, into the order for encoding according to the GOP structure, and supplies the rearranged frame-unit images to the calculation unit 13, the intra prediction unit 24, and the motion prediction/compensation unit 25.
- Note that the processing of steps S13 to S31 is performed, for example, per CU (Coding Unit).
- In step S13, the intra prediction unit 24 performs intra prediction processing for all candidate intra prediction modes. The intra prediction unit 24 also calculates cost function values for all candidate intra prediction modes based on the image read from the screen rearrangement buffer 12 and the prediction images generated as a result of the intra prediction processing. Then, the intra prediction unit 24 determines the intra prediction mode that minimizes the cost function value as the optimal intra prediction mode, and supplies the predicted image generated in the optimal intra prediction mode and the corresponding cost function value to the predicted image selection unit 26.
- Also in step S13, the motion prediction/compensation unit 25 performs motion prediction/compensation processing for all candidate inter prediction modes. Furthermore, the motion prediction/compensation unit 25 calculates cost function values for all candidate inter prediction modes based on the images and prediction images supplied from the screen rearrangement buffer 12, and determines the inter prediction mode with the minimum cost function value as the optimal inter prediction mode. Then, the motion prediction/compensation unit 25 supplies the cost function value of the optimal inter prediction mode and the corresponding predicted image to the predicted image selection unit 26.
- In step S14, based on the cost function values supplied from the intra prediction unit 24 and the motion prediction/compensation unit 25 by the processing of step S13, the predicted image selection unit 26 determines, of the optimal intra prediction mode and the optimal inter prediction mode, the one with the smaller cost function value as the optimal prediction mode. Then, the predicted image selection unit 26 supplies the predicted image in the optimal prediction mode to the calculation unit 13 and the addition unit 20.
- In step S15, the predicted image selection unit 26 determines whether or not the optimal prediction mode is the optimal inter prediction mode. When it is determined that the optimal prediction mode is the optimal inter prediction mode, the predicted image selection unit 26 notifies the motion prediction/compensation unit 25 of the selection of the predicted image generated in the optimal inter prediction mode.
- In step S16, the motion prediction/compensation unit 25 supplies the inter prediction mode information, the corresponding motion vector, and the information for specifying the reference image to the lossless encoding unit 16. Then, the process proceeds to step S18.
- When it is determined in step S15 that the optimal prediction mode is not the optimal inter prediction mode, that is, when the optimal prediction mode is the optimal intra prediction mode, the predicted image selection unit 26 notifies the intra prediction unit 24 of the selection of the predicted image generated in the optimal intra prediction mode.
- In step S17, the intra prediction unit 24 supplies the intra prediction mode information to the lossless encoding unit 16. Then, the process proceeds to step S18.
- In step S18, the calculation unit 13 performs encoding by subtracting the predicted image supplied from the predicted image selection unit 26 from the image supplied from the screen rearrangement buffer 12, and outputs the image obtained as a result to the orthogonal transform unit 14 as residual information.
- In step S19, the orthogonal transform unit 14 performs orthogonal transform on the residual information from the calculation unit 13 and supplies the coefficients obtained as a result to the quantization unit 15.
- In step S20, the quantization unit 15 quantizes the coefficients supplied from the orthogonal transform unit 14. The quantized coefficients are input to the lossless encoding unit 16 and the inverse quantization unit 18.
- In step S21, the inverse quantization unit 18 inversely quantizes the quantized coefficients supplied from the quantization unit 15.
- In step S22, the inverse orthogonal transform unit 19 performs inverse orthogonal transform on the coefficients supplied from the inverse quantization unit 18, and supplies the residual information obtained as a result to the addition unit 20.
- In step S23, the addition unit 20 adds the residual information supplied from the inverse orthogonal transform unit 19 and the predicted image supplied from the predicted image selection unit 26, and obtains a locally decoded image.
- the adding unit 20 supplies the obtained image to the deblocking filter 21 and also supplies it to the frame memory 22.
- In step S24, the deblocking filter 21 performs deblocking filter processing on the locally decoded image supplied from the addition unit 20.
- the deblocking filter 21 supplies an image obtained as a result to the adaptive offset filter 41.
- In step S25, the adaptive offset filter 41 performs adaptive offset filter processing on the image supplied from the deblocking filter 21 for each LCU.
- the adaptive offset filter 41 supplies the resulting image to the adaptive loop filter 42. Further, the adaptive offset filter 41 supplies the storage flag, index or offset, and type information to the lossless encoding unit 16 as offset filter information for each LCU.
- In step S26, the adaptive loop filter 42 performs adaptive loop filter processing for each LCU on the image supplied from the adaptive offset filter 41.
- the adaptive loop filter 42 supplies the resulting image to the frame memory 22.
- the adaptive loop filter 42 supplies the filter coefficient used in the adaptive loop filter process to the lossless encoding unit 16.
- In step S27, the frame memory 22 stores the images before and after filtering. Specifically, the frame memory 22 stores the image supplied from the addition unit 20 and the image supplied from the adaptive loop filter 42. The image stored in the frame memory 22 is output as a reference image to the intra prediction unit 24 or the motion prediction/compensation unit 25 via the switch 23.
- In step S28, the lossless encoding unit 16 losslessly encodes, as encoding information, the intra prediction mode information or the inter prediction mode information, the motion vector, the information specifying the reference image, and the like, as well as the offset filter information and the filter coefficients.
- In step S29, the lossless encoding unit 16 losslessly encodes the quantized coefficients supplied from the quantization unit 15. Then, the lossless encoding unit 16 generates encoded data from the encoding information losslessly encoded in the processing of step S28 and the losslessly encoded coefficients.
- In step S30, the lossless encoding unit 16 supplies the encoded data to the accumulation buffer 17, where it is stored.
- In step S31, the accumulation buffer 17 outputs the accumulated encoded data to the setting unit 3 in FIG. 3. The process then returns to step S1 of FIG. 16 and proceeds to step S2.
- In the above description, the intra prediction processing and the motion prediction/compensation processing are always performed, but in actuality, only one of them may be performed depending on the picture type or the like.
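The mode decision of steps S13 and S14 above reduces to a minimum-cost selection, sketched here (the function name and cost values are illustrative):

```python
def select_optimal_mode(intra_cost, inter_cost):
    """Step S14: of the optimal intra and optimal inter prediction modes
    determined in step S13, choose the one whose cost function value is
    smaller as the optimal prediction mode."""
    if intra_cost <= inter_cost:
        return "intra", intra_cost
    return "inter", inter_cost
```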
- As described above, the encoding device 1 sets the HDR image SEI (the tone mapping SEI or the luminance dynamic range SEI) and the HDR image flag (tone_mapping_flag or luminance_dynamic_range_flag), and transmits them together with the encoded data obtained by encoding the HDR image.
- Thereby, a decoding device that decodes and displays the encoded stream of the HDR image can preferentially use the HDR image SEI and thus reliably reproduce and display the dynamic range of the HDR image. It can therefore be said that the encoding device 1 generates the encoded stream of the HDR image so that, when the encoded stream is decoded and displayed, the dynamic range of the HDR image can be reliably reproduced and displayed.
- FIG. 19 is a block diagram illustrating a configuration example of the first embodiment of a decoding device, as an image processing device to which the present technology is applied, that decodes the encoded stream transmitted from the encoding device 1 in FIG. 3.
- The decoding device 50 in FIG. 19 includes a reception unit 51, an extraction unit 52, a decoding unit 53, an image adjustment unit 54, a display control unit 55, and a display unit 56.
- the receiving unit 51 of the decoding device 50 receives the encoded stream transmitted from the encoding device 1 in FIG. 3 and supplies it to the extracting unit 52.
- the extraction unit 52 extracts SPS, PPS, VUI, SEI, encoded data, and the like from the encoded stream supplied from the receiving unit 51.
- the extraction unit 52 supplies the encoded data to the decoding unit 53.
- the extraction unit 52 also supplies SPS, PPS, VUI, SEI, and the like to the decoding unit 53 and the image adjustment unit 54 as necessary.
- the decoding unit 53 refers to SPS, PPS, VUI, SEI, etc. supplied from the extraction unit 52 as necessary, and decodes the encoded data supplied from the extraction unit 52 by the HEVC method.
- the decoding unit 53 supplies an image such as an HDR image obtained as a result of decoding to the image adjustment unit 54 as an output signal.
- the image adjustment unit 54 adjusts the dynamic range of the HDR image supplied as an output signal from the decoding unit 53 based on SPS, PPS, VUI, SEI and the like supplied from the extraction unit 52 as necessary. For example, the image adjustment unit 54 adjusts the dynamic range of the image in accordance with the display dynamic range. Then, the image adjustment unit 54 supplies the HDR image as an output signal to the display control unit 55.
- The display control unit 55 generates a display image based on the HDR image supplied from the image adjustment unit 54, in accordance with a display method notified from the display unit 56 as necessary.
- the display control unit 55 supplies the generated display image to the display unit 56 for display.
- the display unit 56 displays the display image supplied from the display control unit 55.
- the display unit 56 notifies the display control unit 55 of a display method set in advance or a display method designated by the user among the display methods set in advance.
- FIG. 20 is a block diagram illustrating a configuration example of the decoding unit 53 of FIG.
- The decoding unit 53 in FIG. 20 includes an accumulation buffer 101, a lossless decoding unit 102, an inverse quantization unit 103, an inverse orthogonal transform unit 104, an addition unit 105, a deblocking filter 106, a screen rearrangement buffer 107, a D/A conversion unit 108, a frame memory 109, a switch 110, an intra prediction unit 111, a motion compensation unit 112, and a switch 113.
- Furthermore, an adaptive offset filter 141 and an adaptive loop filter 142 are provided between the deblocking filter 106 on one side and the screen rearrangement buffer 107 and the frame memory 109 on the other.
- the accumulation buffer 101 of the decoding unit 53 receives and accumulates the encoded data from the extraction unit 52 of FIG.
- the accumulation buffer 101 supplies the accumulated encoded data to the lossless decoding unit 102.
- the lossless decoding unit 102 obtains quantized coefficients and encoded information by performing lossless decoding such as variable length decoding and arithmetic decoding on the encoded data from the accumulation buffer 101.
- the lossless decoding unit 102 supplies the quantized coefficient to the inverse quantization unit 103.
- the lossless decoding unit 102 supplies intra prediction mode information or the like as encoded information to the intra prediction unit 111, and provides motion vectors, information for identifying reference images, inter prediction mode information, and the like to the motion compensation unit 112. Supply. Further, the lossless decoding unit 102 supplies intra prediction mode information or inter prediction mode information as encoded information to the switch 113.
- the lossless decoding unit 102 supplies offset filter information as encoded information to the adaptive offset filter 141 and supplies filter coefficients to the adaptive loop filter 142.
- The inverse quantization unit 103, the inverse orthogonal transform unit 104, the addition unit 105, the deblocking filter 106, the frame memory 109, the switch 110, the intra prediction unit 111, and the motion compensation unit 112 perform the same processing as the inverse quantization unit 18, the inverse orthogonal transform unit 19, the addition unit 20, the deblocking filter 21, the frame memory 22, the switch 23, the intra prediction unit 24, and the motion prediction/compensation unit 25 described above, respectively, whereby the image is decoded.
- the inverse quantization unit 103 inversely quantizes the quantized coefficient from the lossless decoding unit 102 and supplies the resulting coefficient to the inverse orthogonal transform unit 104.
- the inverse orthogonal transform unit 104 performs inverse orthogonal transform on the coefficient from the inverse quantization unit 103, and supplies residual information obtained as a result to the addition unit 105.
- the addition unit 105 performs decoding by adding the residual information as the decoding target image supplied from the inverse orthogonal transform unit 104 and the prediction image supplied from the switch 113.
- the adding unit 105 supplies the image obtained as a result of decoding to the deblocking filter 106 and also supplies it to the frame memory 109.
- When no predicted image is supplied from the switch 113, the addition unit 105 supplies the image constituted by the residual information supplied from the inverse orthogonal transform unit 104 to the deblocking filter 106 as the image obtained as a result of decoding, and also supplies it to the frame memory 109, where it is accumulated.
- the deblock filter 106 removes block distortion by filtering the image supplied from the addition unit 105.
- the deblocking filter 106 supplies the resulting image to the adaptive offset filter 141.
- The adaptive offset filter 141 has a buffer for sequentially storing offsets supplied from the lossless decoding unit 102. The adaptive offset filter 141 also performs, for each LCU, adaptive offset filter processing on the image after the deblocking filter processing by the deblocking filter 106, based on the offset filter information supplied from the lossless decoding unit 102.
- Specifically, depending on the storage flag included in the offset filter information, the adaptive offset filter 141 either performs, on the image after the deblocking filter processing in units of LCUs, the type of adaptive offset filter processing indicated by the type information using the offset included in the offset filter information, or reads, for that image, the offset stored in its buffer at the position indicated by the index included in the offset filter information and performs the type of adaptive offset filter processing indicated by the type information using the read offset. The adaptive offset filter 141 then supplies the image after the adaptive offset filter processing to the adaptive loop filter 142.
- the adaptive loop filter 142 performs an adaptive loop filter process for each LCU on the image supplied from the adaptive offset filter 141 using the filter coefficient supplied from the lossless decoding unit 102.
- the adaptive loop filter 142 supplies the image obtained as a result to the frame memory 109 and the screen rearrangement buffer 107.
- the image stored in the frame memory 109 is read as a reference image via the switch 110 and supplied to the motion compensation unit 112 or the intra prediction unit 111.
- the screen rearrangement buffer 107 stores the image supplied from the deblock filter 106 in units of frames.
- the screen rearrangement buffer 107 rearranges the stored frame-by-frame images for encoding in the original display order and supplies them to the D / A conversion unit 108.
- the D / A conversion unit 108 performs D / A conversion on the frame unit image supplied from the screen rearrangement buffer 107, and outputs it as an output signal to the image adjustment unit 54 in FIG.
- The intra prediction unit 111 performs, in units of tiles and slices, intra prediction processing in the intra prediction mode indicated by the intra prediction mode information supplied from the lossless decoding unit 102, using the reference image read from the frame memory 109 via the switch 110 that has not been filtered by the deblocking filter 106. The intra prediction unit 111 supplies the predicted image generated as a result to the switch 113.
- the motion compensation unit 112 is filtered by the deblocking filter 106 from the frame memory 109 via the switch 110 based on the information for specifying the reference image supplied from the lossless decoding unit 102 in units of tiles and slices. Read the reference image.
- the motion compensation unit 112 performs motion compensation processing in the optimal inter prediction mode indicated by the inter prediction mode information using the motion vector and the reference image.
- the motion compensation unit 112 supplies the predicted image generated as a result to the switch 113.
- The switch 113 supplies the predicted image supplied from the intra prediction unit 111 or the motion compensation unit 112 to the addition unit 105.
- FIG. 21 is a flowchart for explaining display processing by the decoding device 50 of FIG.
- First, the reception unit 51 of the decoding device 50 receives the encoded stream transmitted from the encoding device 1 of FIG. 3 and supplies the encoded stream to the extraction unit 52.
- In step S51, the extraction unit 52 extracts the SPS, PPS, VUI, SEI, encoded data, and the like from the encoded stream supplied from the reception unit 51.
- the extraction unit 52 supplies the encoded data to the decoding unit 53.
- the extraction unit 52 also supplies SPS, PPS, VUI, SEI, and the like to the decoding unit 53 and the image adjustment unit 54 as necessary.
- In step S52, the decoding unit 53 refers to the SPS, PPS, VUI, SEI, and the like supplied from the extraction unit 52 as necessary, and performs decoding processing that decodes the encoded data supplied from the extraction unit 52 using the HEVC method. Details of this decoding processing will be described with reference to FIG. 22.
- In step S53, the image adjustment unit 54 determines whether the HDR image flag included in the VUI supplied from the extraction unit 52 is 1. As described above with reference to FIG. 16, the HDR image flag is tone_mapping_flag shown in FIG. 12 or luminance_dynamic_range_flag shown in FIG. 13. When it is determined in step S53 that the HDR image flag is 1, the image adjustment unit 54 determines that the output signal supplied from the decoding unit 53 is an HDR image.
- In step S54, the image adjustment unit 54 acquires the dynamic range characteristic information included in the HDR image SEI supplied from the extraction unit 52. Specifically, as described above with reference to FIG. 16, the dynamic range characteristic information is acquired from the tone mapping SEI shown in FIG. 8 or the luminance dynamic range SEI shown in FIG. 11.
- In step S55, the image adjustment unit 54 adjusts the dynamic range of the image to match the display dynamic range, based on the dynamic range characteristic information acquired in step S54.
- This dynamic range adjustment processing includes, for example, tone mapping processing.
- the image adjustment unit 54 supplies the adjusted image to the display control unit 55.
- The image adjustment in step S55 can be roughly divided into two methods; both are processes that adapt the image to the display capability on the user side.
- If it is determined in step S53 that the HDR image flag is not 1, steps S54 and S55 are skipped, and the process proceeds to step S56. That is, in this case, the image adjustment unit 54 supplies the image from the decoding unit 53 to the display control unit 55 as it is.
- In step S56, the display control unit 55 generates a display image based on the HDR image supplied from the image adjustment unit 54 and supplies the generated display image to the display unit 56, whereby the display image is displayed on the display unit 56, and the process ends.
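Steps S53 to S56 can be outlined as follows (adjust_dynamic_range is a stand-in for the tone mapping of step S55, and the scaling rule shown is an assumption for illustration only):

```python
def adjust_dynamic_range(image, info, display_dynamic_range):
    # Placeholder for step S55: scale pixel values so that the signalled
    # maximum white level fits within the display's capability.
    scale = min(1.0, display_dynamic_range / info["max_image_white_level"])
    return [p * scale for p in image]

def display_process(vui, sei, decoded_image, display_dynamic_range):
    """Steps S53-S56 in outline: when the HDR image flag in the VUI is 1,
    acquire the dynamic range characteristic information (step S54) and
    adjust the image (step S55); otherwise pass the image through."""
    if vui.get("hdr_image_flag") == 1:
        info = sei["dynamic_range_characteristic_info"]    # step S54
        return adjust_dynamic_range(decoded_image, info,
                                    display_dynamic_range)
    return decoded_image   # steps S54 and S55 are skipped
```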
- FIG. 22 is a flowchart for explaining details of the decoding process in step S52 of FIG.
- First, the accumulation buffer 101 of the decoding unit 53 receives and accumulates encoded data in units of frames from the extraction unit 52 of FIG. 19, and supplies the accumulated encoded data to the lossless decoding unit 102. Note that the processing of the following steps S112 to S124 is performed, for example, in units of CUs.
- In step S112, the lossless decoding unit 102 losslessly decodes the encoded data from the accumulation buffer 101 to obtain quantized coefficients and encoding information.
- the lossless decoding unit 102 supplies the quantized coefficient to the inverse quantization unit 103.
- the lossless decoding unit 102 supplies intra prediction mode information or the like as encoded information to the intra prediction unit 111, and provides motion vector, inter prediction mode information, information for specifying a reference image, and the like to the motion compensation unit 112. Supply. Further, the lossless decoding unit 102 supplies intra prediction mode information or inter prediction mode information as encoded information to the switch 113.
- the lossless decoding unit 102 supplies offset filter information as encoded information to the adaptive offset filter 141 and supplies filter coefficients to the adaptive loop filter 142.
- In step S113, the inverse quantization unit 103 inversely quantizes the quantized coefficients from the lossless decoding unit 102, and supplies the coefficients obtained as a result to the inverse orthogonal transform unit 104.
- In step S114, the motion compensation unit 112 determines whether or not the inter prediction mode information has been supplied from the lossless decoding unit 102. If it is determined in step S114 that the inter prediction mode information has been supplied, the process proceeds to step S115.
- In step S115, the motion compensation unit 112 reads the reference image filtered by the deblocking filter 106, based on the motion vector, the inter prediction mode information, and the information for specifying the reference image supplied from the lossless decoding unit 102, and performs motion compensation processing.
- the motion compensation unit 112 supplies the predicted image generated as a result to the addition unit 105 via the switch 113, and the process proceeds to step S117.
- When it is determined in step S114 that the inter prediction mode information has not been supplied, that is, when the intra prediction mode information has been supplied to the intra prediction unit 111, the process proceeds to step S116.
- In step S116, the intra prediction unit 111 performs intra prediction processing in the intra prediction mode indicated by the intra prediction mode information, using the reference image read from the frame memory 109 via the switch 110 that has not been filtered by the deblocking filter 106.
- The intra prediction unit 111 supplies the predicted image generated as a result of the intra prediction processing to the addition unit 105 via the switch 113, and the process proceeds to step S117.
- In step S117, the inverse orthogonal transform unit 104 performs inverse orthogonal transform on the coefficients from the inverse quantization unit 103, and supplies the residual information obtained as a result to the addition unit 105.
- In step S118, the addition unit 105 adds the residual information supplied from the inverse orthogonal transform unit 104 and the predicted image supplied from the switch 113.
- the adding unit 105 supplies the image obtained as a result to the deblocking filter 106 and also supplies it to the frame memory 109.
- In step S119, the deblocking filter 106 performs filtering on the image supplied from the addition unit 105 to remove block distortion.
- the deblocking filter 106 supplies the resulting image to the adaptive offset filter 141.
- In step S120, the adaptive offset filter 141 performs adaptive offset filter processing for each LCU on the image after the deblocking filter processing by the deblocking filter 106, based on the offset filter information supplied from the lossless decoding unit 102.
- the adaptive offset filter 141 supplies the image after the adaptive offset filter processing to the adaptive loop filter 142.
- In step S121, the adaptive loop filter 142 performs adaptive loop filter processing for each LCU on the image supplied from the adaptive offset filter 141, using the filter coefficients supplied from the lossless decoding unit 102.
- the adaptive loop filter 142 supplies the image obtained as a result to the frame memory 109 and the screen rearrangement buffer 107.
- In step S122, the frame memory 109 stores the image before filtering supplied from the addition unit 105 and the image after filtering supplied from the deblocking filter 106.
- the image stored in the frame memory 109 is supplied to the motion compensation unit 112 or the intra prediction unit 111 via the switch 110 as a reference image.
- In step S123, the screen rearrangement buffer 107 stores the image supplied from the deblocking filter 106 in units of frames, rearranges the stored frame-by-frame images from the encoding order back to the original display order, and supplies them to the D/A conversion unit 108.
- In step S124, the D/A conversion unit 108 D/A converts the frame-unit image supplied from the screen rearrangement buffer 107 and supplies it as an output signal to the image adjustment unit 54 in FIG. 19. The process then returns to step S52 of FIG. 21 and proceeds to step S53.
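The data flow of steps S113 through S124 can be traced with toy stand-ins. In the sketch below, a plain Python list of pixel values plays the role of an image, and every function is a hypothetical placeholder for the corresponding unit in FIG. 20, not real HEVC arithmetic:

```python
# Toy stand-ins: an "image" is a flat list of pixel values, so the data flow
# between the units can be traced; none of this is real HEVC arithmetic.

def inverse_quantize(q, step=2):          # S113: rescale quantized coefficients
    return [c * step for c in q]

def inverse_transform(coeff):             # S117: identity stands in for the inverse DCT
    return coeff

def predict(reference):                   # S115/S116: prediction from a reference image
    return list(reference)

def add_units(residual, pred):            # S118: reconstruct = residual + prediction
    return [r + p for r, p in zip(residual, pred)]

def deblock(img):                         # S119: no-op stand-in for the filter
    return img

def decode_picture(quantized, reference):
    residual = inverse_transform(inverse_quantize(quantized))
    pred = predict(reference)             # inter prediction uses the filtered reference
    recon = add_units(residual, pred)
    return deblock(recon)                 # then stored and reordered for display

print(decode_picture([1, -1, 0], [100, 100, 100]))  # [102, 98, 100]
```

The essential ordering shown here matches the steps above: prediction is added to the inverse-transformed residual, and only the reconstructed image is filtered before storage.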
- As described above, the decoding device 50 decodes the encoded data to generate an image and, when the HDR image flag is 1, preferentially uses the HDR image SEI, so that an image that reliably reproduces the dynamic range of the HDR image can be displayed.
- Note that the HDR image flag may be included in another NAL unit such as the SPS instead of the VUI.
- In the above description, the HEVC method is used as the encoding method.
- However, the present technology concerns display and is independent of the encoding method. Therefore, the present technology is not limited to the HEVC method, and other encoding/decoding methods can be applied.
- For example, the present technology can also be applied to an apparatus that performs encoding/decoding processing based on the AVC method, as described below.
- FIG. 23 is a block diagram illustrating a configuration example of a second embodiment of an encoding device as an image processing device to which the present technology is applied.
- The configuration of the encoding device 201 in FIG. 23 differs from the configuration of FIG. 3 in that an encoding unit 211 is provided instead of the encoding unit 2.
- The configuration of the encoding device 201 is common to the configuration of FIG. 3 in that the setting unit 3 and the transmission unit 4 are provided.
- An image such as a captured image in units of frames is input to the encoding unit 211 of the encoding device 201 as an input signal.
- the encoding unit 211 encodes the input signal by the AVC method, and supplies encoded data obtained as a result to the setting unit 3.
- the setting unit 3 sets the dynamic range characteristic information of the image in a format according to the AVC standard.
- the setting unit 3 generates an encoded stream from the set characteristic information and the encoded data supplied from the encoding unit 211.
- the setting unit 3 supplies the encoded stream to the transmission unit 4.
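The pairing performed by the setting unit 3 (and undone by the extraction unit 52 on the decoder side) can be sketched with a hypothetical container. Real AVC/HEVC streams carry such metadata in NAL units (e.g., SEI) rather than a Python dict; the field name `max_white_level` is an illustrative assumption:

```python
def build_encoded_stream(characteristic_info, encoded_data):
    # Hypothetical container pairing the dynamic range characteristic
    # information with the coded picture data, mirroring the setting unit 3.
    return {"dynamic_range_info": characteristic_info, "data": encoded_data}

def extract(stream):
    # Mirror of the extraction unit 52: split the metadata from the coded data.
    return stream["dynamic_range_info"], stream["data"]

info, data = extract(build_encoded_stream({"max_white_level": 800}, b"coded-data"))
print(info)  # {'max_white_level': 800}
```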
- the encoding apparatus 201 is different from the encoding apparatus 1 of FIG. 3 only in that the encoding process by the AVC method is performed.
- FIG. 24 is a block diagram illustrating a configuration example of the encoding unit 211 in FIG.
- The encoding unit 211 in FIG. 24 includes an A/D conversion unit 11, a screen rearrangement buffer 12, a calculation unit 13, an orthogonal transform unit 14, a quantization unit 15, a lossless encoding unit 16, a storage buffer 17, an inverse quantization unit 18, an inverse orthogonal transform unit 19, an addition unit 20, a deblocking filter 21, a frame memory 22, a switch 23, an intra prediction unit 24, a motion prediction/compensation unit 25, a predicted image selection unit 26, and a rate control unit 27.
- The configuration of the encoding unit 211 in FIG. 24 differs from the configuration of FIG. 4 only in that the adaptive offset filter 41 and the adaptive loop filter 42 are removed and that the lossless encoding unit 16 performs encoding by the AVC method instead of the HEVC method. Therefore, the encoding unit 211 performs the encoding process in units of blocks, not in units of CUs.
- The object of the encoding process of the lossless encoding unit 16 is basically the same as that of the lossless encoding unit 16 of FIG. 4, except for the parameters of the adaptive offset filter and the adaptive loop filter. That is, as in the case of the lossless encoding unit 16 of FIG. 4, the lossless encoding unit 16 acquires the intra prediction mode information from the intra prediction unit 24, and acquires the inter prediction mode information, motion vectors, information for specifying a reference image, and the like from the motion prediction/compensation unit 25.
- The lossless encoding unit 16 performs lossless encoding such as variable-length coding (for example, CAVLC (Context-Adaptive Variable Length Coding)) or arithmetic coding (for example, CABAC (Context-Adaptive Binary Arithmetic Coding)) on the quantized coefficients supplied from the quantization unit 15.
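CAVLC's context-adaptive code tables are beyond a short example, but the unsigned Exp-Golomb code that AVC and HEVC use for many syntax elements illustrates the variable-length idea: smaller values receive shorter codewords. A minimal sketch (strings of "0"/"1" stand in for a real bitstream writer):

```python
def exp_golomb_ue(v):
    # ue(v): write v + 1 in binary, preceded by (bit length - 1) zero bits.
    code = bin(v + 1)[2:]
    return "0" * (len(code) - 1) + code

def exp_golomb_decode(bits):
    zeros = len(bits) - len(bits.lstrip("0"))  # length of the leading-zero prefix
    return int(bits[zeros:2 * zeros + 1], 2) - 1

for v in range(5):
    print(v, exp_golomb_ue(v))
# 0 1
# 1 010
# 2 011
# 3 00100
# 4 00101
```

The decoder needs no table: it counts leading zeros to learn the codeword length, which is why this family of codes suits header syntax elements whose small values are most frequent.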
- Similarly to the lossless encoding unit 16 of FIG. 4, the lossless encoding unit 16 losslessly encodes the intra prediction mode information, the inter prediction mode information, the motion vectors, the information for specifying a reference image, the offset filter information, the filter coefficients, and the like as encoding information relating to the encoding.
- the lossless encoding unit 16 supplies the encoding information and the coefficients that have been losslessly encoded to the accumulation buffer 17 as encoded data and accumulates them.
- Note that the losslessly encoded encoding information may be used as header information of the losslessly encoded coefficients.
- the deblocking filter 21 removes block distortion by filtering the locally decoded image supplied from the adding unit 20.
- the deblocking filter 21 supplies the image obtained as a result to the frame memory 22 and accumulates it.
- the image stored in the frame memory 22 is output as a reference image to the intra prediction unit 24 or the motion prediction / compensation unit 25 via the switch 23.
- the present technology can also be applied to such an AVC encoding apparatus 201.
- FIG. 25 is a block diagram illustrating a configuration example of a second embodiment of a decoding device as an image processing device to which the present technology is applied, which decodes an encoded stream transmitted from the encoding device 201 in FIG. 23.
- the configuration of the decoding device 251 is common to the configuration of FIG. 19 in that a receiving unit 51, an extracting unit 52, an image adjusting unit 54, a display control unit 55, and a display unit 56 are provided.
- the receiving unit 51 receives the encoded stream encoded by the AVC method transmitted from the encoding device 201 in FIG. 23 and supplies the encoded stream to the extracting unit 52.
- the extraction unit 52 extracts dynamic range characteristic information, encoded data, and the like set according to the AVC standard from the encoded stream supplied from the receiving unit 51.
- the extraction unit 52 supplies the encoded data to the decoding unit 261.
- the extraction unit 52 also supplies dynamic range characteristic information to the decoding unit 261 and the image adjustment unit 54 as necessary.
- the decoding unit 261 refers to SPS, PPS, VUI, SEI and the like supplied from the extraction unit 52 as necessary, and decodes the encoded data supplied from the extraction unit 52 by the AVC method.
- the decoding unit 261 supplies an image such as an HDR image obtained as a result of decoding to the image adjustment unit 54 as an output signal.
- the image adjustment unit 54 adjusts the dynamic range of the HDR image supplied as an output signal from the decoding unit 261 based on the dynamic range characteristic information supplied from the extraction unit 52 as necessary. Then, the image adjustment unit 54 supplies the HDR image as an output signal to the display control unit 55.
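As a rough illustration of what such an adjustment might do, the sketch below linearly compresses pixel luminance when the content peak taken from the characteristic information exceeds what the display can reproduce. This is only one possible mapping under assumed inputs (normalized luminance values and peak brightness in nits); the description above does not fix a particular tone-mapping curve:

```python
def adjust_dynamic_range(pixels, content_peak_nits, display_peak_nits):
    # Illustrative linear compression: scale luminance so that the content
    # peak fits within the display peak. A real adjustment could instead use
    # any tone-mapping curve derived from the characteristic information.
    if content_peak_nits <= display_peak_nits:
        return list(pixels)          # the display can reproduce the range as-is
    scale = display_peak_nits / content_peak_nits
    return [p * scale for p in pixels]

print(adjust_dynamic_range([100.0, 800.0], 800.0, 400.0))  # [50.0, 400.0]
```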
- the decoding device 251 is different from the decoding device 50 of FIG. 19 only in that the decoding process by the AVC method is performed.
- FIG. 26 is a block diagram illustrating a configuration example of the decoding unit 261 in FIG.
- The decoding unit 261 in FIG. 26 includes an accumulation buffer 101, a lossless decoding unit 102, an inverse quantization unit 103, an inverse orthogonal transform unit 104, an addition unit 105, a deblocking filter 106, a screen rearrangement buffer 107, a D/A conversion unit 108, a frame memory 109, a switch 110, an intra prediction unit 111, a motion compensation unit 112, and a switch 113.
- The configuration of the decoding unit 261 in FIG. 26 differs from the configuration of FIG. 20 only in that the adaptive offset filter 141 and the adaptive loop filter 142 are removed and that the lossless decoding unit 102 performs decoding by the AVC method instead of the HEVC method. Therefore, the decoding unit 261 performs the decoding process in units of blocks, not in units of CUs.
- The object of the decoding process of the lossless decoding unit 102 is basically the same as that of the lossless decoding unit 102 of FIG. 20, except for the parameters of the adaptive offset filter and the adaptive loop filter. That is, similarly to the lossless decoding unit 102 of FIG. 20, the lossless decoding unit 102 obtains quantized coefficients and encoding information by performing lossless decoding such as variable-length decoding or arithmetic decoding on the encoded data from the accumulation buffer 101. The lossless decoding unit 102 supplies the quantized coefficients to the inverse quantization unit 103.
- Similarly to the lossless decoding unit 102 of FIG. 20, the lossless decoding unit 102 supplies the intra prediction mode information and the like as encoding information to the intra prediction unit 111, and supplies the motion vectors, the information for specifying a reference image, the inter prediction mode information, and the like to the motion compensation unit 112. Further, the lossless decoding unit 102 supplies the intra prediction mode information or the inter prediction mode information as encoding information to the switch 113.
- The deblocking filter 106 removes block distortion by filtering the image supplied from the addition unit 105.
- the deblocking filter 106 supplies the resulting image to the frame memory 109 and the screen rearrangement buffer 107.
- the present technology can also be applied to such an AVC decoding device 251.
- The present disclosure can be applied to an image encoding device and an image decoding device used when image information (a bitstream) compressed by orthogonal transform such as discrete cosine transform and motion compensation, as in HEVC, is received via a network medium such as satellite broadcasting, cable television, the Internet, or a mobile phone.
- The present disclosure can also be applied to an image encoding device and an image decoding device used when processing is performed on storage media such as an optical disk, a magnetic disk, or a flash memory.
- FIG. 27 shows an example of a multi-view image encoding method.
- The multi-view image includes images of a plurality of viewpoints, and the image of one predetermined viewpoint among them is designated as the base view image.
- The image of each viewpoint other than the base view image is treated as a non-base view image.
- Dynamic range characteristic information can be set for each view (the same view). In addition, each view (a different view) can share dynamic range characteristic information set in another view.
- the dynamic range characteristic information set in the base view is used in at least one non-base view.
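This fallback behavior can be sketched as a per-view lookup that defaults to the base view's information. The dict structure and the field name `max_white_level` are illustrative assumptions, not the actual bitstream syntax:

```python
def dynamic_range_info_for_view(view_id, per_view_info, base_view_id=0):
    # A view either carries its own dynamic range characteristic information
    # or falls back to the information set in the base view.
    return per_view_info.get(view_id, per_view_info[base_view_id])

info = {0: {"max_white_level": 800}}          # only the base view sets it
print(dynamic_range_info_for_view(1, info))   # {'max_white_level': 800}
```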
- FIG. 28 is a diagram illustrating a multi-view image encoding apparatus that performs the above-described multi-view image encoding.
- the multi-view image encoding apparatus 600 includes an encoding unit 601, an encoding unit 602, and a multiplexing unit 603.
- the encoding unit 601 encodes the base view image and generates a base view image encoded stream.
- the encoding unit 602 encodes the non-base view image and generates a non-base view image encoded stream.
- The multiplexing unit 603 multiplexes the base view image encoded stream generated by the encoding unit 601 and the non-base view image encoded stream generated by the encoding unit 602 to generate a multi-view image encoded stream.
- the encoding device 1 (FIG. 3) and the encoding device 201 (FIG. 23) can be applied to the encoding unit 601 and the encoding unit 602 of the multi-view image encoding device 600.
- the multi-view image encoding apparatus 600 sets and transmits the dynamic range characteristic information set by the encoding unit 601 and the dynamic range characteristic information set by the encoding unit 602.
- the dynamic range characteristic information set by the encoding unit 601 as described above may be set and transmitted so as to be shared by the encoding unit 601 and the encoding unit 602.
- Conversely, the dynamic range characteristic information set by the encoding unit 602 may be set and transmitted so as to be shared by the encoding unit 601 and the encoding unit 602.
- FIG. 29 is a diagram illustrating a multi-view image decoding apparatus that performs the above-described multi-view image decoding.
- the multi-view image decoding device 610 includes a demultiplexing unit 611, a decoding unit 612, and a decoding unit 613.
- The demultiplexing unit 611 demultiplexes the multi-view image encoded stream in which the base view image encoded stream and the non-base view image encoded stream are multiplexed, and extracts the base view image encoded stream and the non-base view image encoded stream.
- the decoding unit 612 decodes the base view image encoded stream extracted by the demultiplexing unit 611 to obtain a base view image.
- the decoding unit 613 decodes the non-base view image encoded stream extracted by the demultiplexing unit 611 to obtain a non-base view image.
- the decoding device 50 (FIG. 19) and the decoding device 251 (FIG. 25) can be applied to the decoding unit 612 and the decoding unit 613 of the multi-view image decoding device 610.
- The multi-view image decoding device 610 performs processing using the dynamic range characteristic information set by the encoding unit 601 and decoded by the decoding unit 612, and the dynamic range characteristic information set by the encoding unit 602 and decoded by the decoding unit 613.
- As described above, the dynamic range characteristic information set by the encoding unit 601 (or the encoding unit 602) may be set and transmitted so as to be shared by the encoding unit 601 and the encoding unit 602.
- In this case, the multi-view image decoding device 610 performs processing using the dynamic range characteristic information set by the encoding unit 601 (or the encoding unit 602) and decoded by the decoding unit 612 (or the decoding unit 613).
- FIG. 30 shows an example of a hierarchical image encoding method.
- The hierarchical image includes images of a plurality of layers (resolutions), and the image of one predetermined layer among them is designated as the base layer image. Images of the layers other than the base layer are treated as non-base layer images.
- Dynamic range characteristic information can be set for each layer (the same layer). In addition, each layer (a different layer) can share dynamic range characteristic information set in another layer.
- the dynamic range characteristic information set in the base layer is used in at least one non-base layer.
- FIG. 31 is a diagram illustrating a hierarchical image encoding apparatus that performs the hierarchical image encoding described above.
- the hierarchical image encoding device 620 includes an encoding unit 621, an encoding unit 622, and a multiplexing unit 623.
- the encoding unit 621 encodes the base layer image and generates a base layer image encoded stream.
- the encoding unit 622 encodes the non-base layer image and generates a non-base layer image encoded stream.
- The multiplexing unit 623 multiplexes the base layer image encoded stream generated by the encoding unit 621 and the non-base layer image encoded stream generated by the encoding unit 622 to generate a hierarchical image encoded stream.
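A toy multiplexer that tags each access unit with a layer id shows the shape of this operation and its inverse on the decoding side. Real streams follow the systems-layer syntax rather than Python tuples; this only demonstrates that the two streams remain separable:

```python
def multiplex(base_stream, non_base_stream):
    # Tag each access unit with its layer id and interleave the two streams.
    out = []
    for base_au, enh_au in zip(base_stream, non_base_stream):
        out.append((0, base_au))   # layer 0: base layer
        out.append((1, enh_au))    # layer 1: non-base layer
    return out

def demultiplex(muxed):
    # Inverse operation, as performed by a demultiplexing unit.
    base = [au for layer, au in muxed if layer == 0]
    enh = [au for layer, au in muxed if layer == 1]
    return base, enh

muxed = multiplex(["b0", "b1"], ["e0", "e1"])
print(demultiplex(muxed))  # (['b0', 'b1'], ['e0', 'e1'])
```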
- the encoding device 1 (FIG. 3) and the encoding device 201 (FIG. 23) can be applied to the encoding unit 621 and the encoding unit 622 of the hierarchical image encoding device 620.
- The hierarchical image encoding device 620 sets and transmits the dynamic range characteristic information set by the encoding unit 621 and the dynamic range characteristic information set by the encoding unit 622.
- the dynamic range characteristic information set by the encoding unit 621 as described above may be set and transmitted so as to be shared by the encoding unit 621 and the encoding unit 622.
- the dynamic range characteristic information set by the encoding unit 622 may be set and transmitted so as to be shared by the encoding unit 621 and the encoding unit 622.
- FIG. 32 is a diagram illustrating a hierarchical image decoding apparatus that performs the hierarchical image decoding described above.
- the hierarchical image decoding device 630 includes a demultiplexing unit 631, a decoding unit 632, and a decoding unit 633.
- The demultiplexing unit 631 demultiplexes the hierarchical image encoded stream in which the base layer image encoded stream and the non-base layer image encoded stream are multiplexed, and extracts the base layer image encoded stream and the non-base layer image encoded stream.
- the decoding unit 632 decodes the base layer image encoded stream extracted by the demultiplexing unit 631 to obtain a base layer image.
- the decoding unit 633 decodes the non-base layer image encoded stream extracted by the demultiplexing unit 631 to obtain a non-base layer image.
- the decoding device 50 (FIG. 19) and the decoding device 251 (FIG. 25) can be applied to the decoding unit 632 and the decoding unit 633 of the hierarchical image decoding device 630.
- The hierarchical image decoding device 630 performs processing using the dynamic range characteristic information set by the encoding unit 621 and decoded by the decoding unit 632, and the dynamic range characteristic information set by the encoding unit 622 and decoded by the decoding unit 633.
- As described above, the dynamic range characteristic information set by the encoding unit 621 (or the encoding unit 622) may be set and transmitted so as to be shared by the encoding unit 621 and the encoding unit 622.
- In this case, the hierarchical image decoding device 630 performs processing using the dynamic range characteristic information set by the encoding unit 621 (or the encoding unit 622) and decoded by the decoding unit 632 (or the decoding unit 633).
- <Fifth embodiment> Computer configuration example
- the series of processes described above can be executed by hardware or can be executed by software.
- a program constituting the software is installed in the computer.
- Here, the computer includes a computer incorporated in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
- FIG. 33 is a block diagram illustrating a configuration example of hardware of a computer that executes the above-described series of processes by a program.
- In the computer 800, a CPU (Central Processing Unit) 801, a ROM (Read Only Memory) 802, and a RAM (Random Access Memory) 803 are connected to each other via a bus 804.
- an input / output interface 805 is connected to the bus 804.
- An input unit 806, an output unit 807, a storage unit 808, a communication unit 809, and a drive 810 are connected to the input / output interface 805.
- the input unit 806 includes a keyboard, a mouse, a microphone, and the like.
- the output unit 807 includes a display, a speaker, and the like.
- the storage unit 808 includes a hard disk, a nonvolatile memory, and the like.
- the communication unit 809 includes a network interface or the like.
- the drive 810 drives a removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
- In the computer 800, the CPU 801 loads, for example, the program stored in the storage unit 808 into the RAM 803 via the input/output interface 805 and the bus 804 and executes it, whereby the above-described series of processes is performed.
- the program executed by the computer 800 can be provided by being recorded in, for example, a removable medium 811 as a package medium or the like.
- the program can be provided via a wired or wireless transmission medium such as a local area network, the Internet, or digital satellite broadcasting.
- the program can be installed in the storage unit 808 via the input / output interface 805 by attaching the removable medium 811 to the drive 810.
- the program can be received by the communication unit 809 via a wired or wireless transmission medium and installed in the storage unit 808.
- the program can be installed in the ROM 802 or the storage unit 808 in advance.
- The program executed by the computer may be a program in which processing is performed in time series in the order described in this specification, or a program in which processing is performed in parallel or at necessary timing, such as when a call is made.
- The steps describing the program recorded on the recording medium include not only processes performed in time series in the described order but also processes executed in parallel or individually, not necessarily in time series.
- In this specification, a system represents an entire apparatus composed of a plurality of devices (apparatuses).
- the configuration described as one device (or processing unit) may be divided and configured as a plurality of devices (or processing units).
- the configurations described above as a plurality of devices (or processing units) may be combined into a single device (or processing unit).
- a configuration other than that described above may be added to the configuration of each device (or each processing unit).
- A part of the configuration of a certain device (or processing unit) may be included in the configuration of another device (or another processing unit). That is, the present technology is not limited to the above-described embodiments, and various modifications can be made without departing from the gist of the present technology.
- The image encoding device and the image decoding device according to the above-described embodiments can be applied to transmitters or receivers in satellite broadcasting, cable broadcasting such as cable TV, distribution on the Internet, distribution to terminals by cellular communication, and the like.
- They can also be applied to various electronic devices such as a recording device that records images on media such as a magnetic disk and a flash memory, or a playback device that reproduces images from these storage media.
- FIG. 34 shows an example of a schematic configuration of a television apparatus to which the above-described embodiment is applied.
- the television apparatus 900 includes an antenna 901, a tuner 902, a demultiplexer 903, a decoder 904, a video signal processing unit 905, a display unit 906, an audio signal processing unit 907, a speaker 908, an external interface 909, a control unit 910, a user interface 911, And a bus 912.
- Tuner 902 extracts a signal of a desired channel from a broadcast signal received via antenna 901, and demodulates the extracted signal. Then, the tuner 902 outputs the encoded bit stream obtained by the demodulation to the demultiplexer 903. In other words, the tuner 902 serves as a transmission unit in the television apparatus 900 that receives an encoded stream in which an image is encoded.
- the demultiplexer 903 separates the video stream and audio stream of the viewing target program from the encoded bit stream, and outputs each separated stream to the decoder 904. Further, the demultiplexer 903 extracts auxiliary data such as EPG (Electronic Program Guide) from the encoded bit stream, and supplies the extracted data to the control unit 910. Note that the demultiplexer 903 may perform descrambling when the encoded bit stream is scrambled.
- EPG Electronic Program Guide
- the decoder 904 decodes the video stream and audio stream input from the demultiplexer 903. Then, the decoder 904 outputs the video data generated by the decoding process to the video signal processing unit 905. In addition, the decoder 904 outputs audio data generated by the decoding process to the audio signal processing unit 907.
- the video signal processing unit 905 reproduces the video data input from the decoder 904 and causes the display unit 906 to display the video.
- the video signal processing unit 905 may cause the display unit 906 to display an application screen supplied via a network.
- the video signal processing unit 905 may perform additional processing such as noise removal on the video data according to the setting.
- the video signal processing unit 905 may generate a GUI (Graphical User Interface) image such as a menu, a button, or a cursor, and superimpose the generated image on the output image.
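Superimposing a GUI image on the output image is typically done by alpha blending. A minimal per-pixel sketch (scalar pixel values stand in for full frames; the blending formula is a common convention, not something the description above prescribes):

```python
def superimpose(background, overlay, alpha):
    # Per-pixel alpha blend of a GUI image (e.g., a menu, button, or cursor)
    # over the decoded video: alpha=1.0 is fully opaque GUI, 0.0 is invisible.
    return [(1 - alpha) * b + alpha * o for b, o in zip(background, overlay)]

print(superimpose([100, 200], [255, 255], 0.5))  # [177.5, 227.5]
```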
- The display unit 906 is driven by a drive signal supplied from the video signal processing unit 905, and displays a video or an image on the video screen of a display device (for example, a liquid crystal display, a plasma display, or an OELD (Organic ElectroLuminescence Display; organic EL display)).
- the audio signal processing unit 907 performs reproduction processing such as D / A conversion and amplification on the audio data input from the decoder 904, and outputs audio from the speaker 908.
- the audio signal processing unit 907 may perform additional processing such as noise removal on the audio data.
- the external interface 909 is an interface for connecting the television apparatus 900 to an external device or a network.
- a video stream or an audio stream received via the external interface 909 may be decoded by the decoder 904. That is, the external interface 909 also has a role as a transmission unit in the television apparatus 900 that receives an encoded stream in which an image is encoded.
- the control unit 910 includes a processor such as a CPU and memories such as a RAM and a ROM.
- the memory stores a program executed by the CPU, program data, EPG data, data acquired via a network, and the like.
- the program stored in the memory is read and executed by the CPU when the television apparatus 900 is activated.
- the CPU executes the program to control the operation of the television device 900 according to an operation signal input from the user interface 911, for example.
- the user interface 911 is connected to the control unit 910.
- the user interface 911 includes, for example, buttons and switches for the user to operate the television device 900, a remote control signal receiving unit, and the like.
- the user interface 911 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 910.
- the bus 912 connects the tuner 902, the demultiplexer 903, the decoder 904, the video signal processing unit 905, the audio signal processing unit 907, the external interface 909, and the control unit 910 to each other.
- the decoder 904 has the function of the image decoding apparatus according to the above-described embodiment.
- the dynamic range of the image can be accurately reproduced when the television device 900 decodes the image.
- FIG. 35 shows an example of a schematic configuration of a mobile phone to which the above-described embodiment is applied.
- The mobile phone 920 includes an antenna 921, a communication unit 922, an audio codec 923, a speaker 924, a microphone 925, a camera unit 926, an image processing unit 927, a demultiplexing unit 928, a recording/reproducing unit 929, a display unit 930, a control unit 931, an operation unit 932, and a bus 933.
- the antenna 921 is connected to the communication unit 922.
- the speaker 924 and the microphone 925 are connected to the audio codec 923.
- the operation unit 932 is connected to the control unit 931.
- the bus 933 connects the communication unit 922, the audio codec 923, the camera unit 926, the image processing unit 927, the demultiplexing unit 928, the recording / reproducing unit 929, the display unit 930, and the control unit 931 to each other.
- The mobile phone 920 performs operations such as transmission and reception of audio signals, transmission and reception of electronic mail and image data, image capturing, and data recording in various operation modes including a voice call mode, a data communication mode, a shooting mode, and a videophone mode.
- the analog voice signal generated by the microphone 925 is supplied to the voice codec 923.
- The audio codec 923 converts the analog audio signal into audio data by A/D conversion and compresses the audio data. Then, the audio codec 923 outputs the compressed audio data to the communication unit 922.
- the communication unit 922 encodes and modulates the audio data and generates a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921. In addition, the communication unit 922 amplifies a radio signal received via the antenna 921 and performs frequency conversion to acquire a received signal.
- the communication unit 922 demodulates and decodes the received signal to generate audio data, and outputs the generated audio data to the audio codec 923.
- the audio codec 923 decompresses the audio data and performs D / A conversion to generate an analog audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 to output audio.
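The compression method of the audio codec 923 is not specified here; as one illustration of the kind of speech companding used in telephony, the continuous mu-law curve compresses a normalized sample logarithmically and expands it back exactly (this is an assumed example, not the phone's actual codec):

```python
import math

MU = 255.0  # companding constant, as in G.711-style mu-law

def compress(x):
    # x is a normalized sample in [-1.0, 1.0]; logarithmic companding keeps
    # more resolution for quiet samples than for loud ones.
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):
    # Exact inverse of compress().
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

sample = -0.25
print(round(expand(compress(sample)), 6))  # -0.25
```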
- the control unit 931 generates character data constituting the e-mail in response to an operation by the user via the operation unit 932.
- the control unit 931 causes the display unit 930 to display characters.
- the control unit 931 generates e-mail data in response to a transmission instruction from the user via the operation unit 932, and outputs the generated e-mail data to the communication unit 922.
- the communication unit 922 encodes and modulates email data and generates a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921.
- the communication unit 922 amplifies a radio signal received via the antenna 921 and performs frequency conversion to acquire a received signal.
- the communication unit 922 demodulates and decodes the received signal to restore the email data, and outputs the restored email data to the control unit 931.
- the control unit 931 displays the content of the e-mail on the display unit 930 and stores the e-mail data in the storage medium of the recording / reproducing unit 929.
- the recording / reproducing unit 929 has an arbitrary readable / writable storage medium.
- the storage medium may be a built-in storage medium such as a RAM or a flash memory, or an externally mounted storage medium such as a hard disk, a magnetic disk, a magneto-optical disk, an optical disk, a USB (Universal Serial Bus) memory, or a memory card.
- the camera unit 926 images a subject to generate image data, and outputs the generated image data to the image processing unit 927.
- the image processing unit 927 encodes the image data input from the camera unit 926 and stores the encoded stream in the storage medium of the recording / reproducing unit 929.
- the demultiplexing unit 928 multiplexes the video stream encoded by the image processing unit 927 and the audio stream input from the audio codec 923, and outputs the multiplexed stream to the communication unit 922.
- the communication unit 922 encodes and modulates the stream and generates a transmission signal. Then, the communication unit 922 transmits the generated transmission signal to a base station (not shown) via the antenna 921.
- the communication unit 922 amplifies a radio signal received via the antenna 921 and performs frequency conversion to acquire a received signal.
- These transmission and reception signals may include an encoded bit stream.
- the communication unit 922 demodulates and decodes the received signal to restore the stream, and outputs the restored stream to the demultiplexing unit 928.
- the demultiplexing unit 928 separates the video stream and the audio stream from the input stream, and outputs the video stream to the image processing unit 927 and the audio stream to the audio codec 923.
- the image processing unit 927 decodes the video stream and generates video data.
- the video data is supplied to the display unit 930, and a series of images is displayed on the display unit 930.
- the audio codec 923 decompresses the audio stream and performs D / A conversion to generate an analog audio signal. Then, the audio codec 923 supplies the generated audio signal to the speaker 924 to output audio.
- the image processing unit 927 has the functions of the image encoding device and the image decoding device according to the above-described embodiment.
- the dynamic range of the image can be accurately reproduced when the mobile phone 920 encodes and decodes the image.
- FIG. 36 shows an example of a schematic configuration of a recording / reproducing apparatus to which the above-described embodiment is applied.
- the recording / reproducing device 940 encodes audio data and video data of a received broadcast program and records the encoded data on a recording medium.
- the recording / reproducing device 940 may encode audio data and video data acquired from another device and record them on a recording medium, for example.
- the recording / reproducing device 940 reproduces data recorded on the recording medium on a monitor and a speaker, for example, in accordance with a user instruction. At this time, the recording / reproducing device 940 decodes the audio data and the video data.
- the recording / reproducing apparatus 940 includes a tuner 941, an external interface 942, an encoder 943, an HDD (Hard Disk Drive) 944, a disk drive 945, a selector 946, a decoder 947, an OSD (On-Screen Display) 948, a control unit 949, and a user interface 950.
- Tuner 941 extracts a signal of a desired channel from a broadcast signal received via an antenna (not shown), and demodulates the extracted signal. Then, the tuner 941 outputs the encoded bit stream obtained by the demodulation to the selector 946. That is, the tuner 941 has a role as a transmission unit in the recording / reproducing apparatus 940.
- the external interface 942 is an interface for connecting the recording / reproducing apparatus 940 to an external device or a network.
- the external interface 942 may be, for example, an IEEE1394 interface, a network interface, a USB interface, or a flash memory interface.
- video data and audio data received via the external interface 942 are input to the encoder 943. That is, the external interface 942 serves as a transmission unit in the recording / reproducing device 940.
- the encoder 943 encodes video data and audio data when the video data and audio data input from the external interface 942 are not encoded. Then, the encoder 943 outputs the encoded bit stream to the selector 946.
- the HDD 944 records an encoded bit stream in which content data such as video and audio is compressed, various programs, and other data on an internal hard disk. Further, the HDD 944 reads out these data from the hard disk when reproducing video and audio.
- the disk drive 945 performs recording and reading of data to and from the mounted recording medium.
- the recording medium mounted on the disk drive 945 may be, for example, a DVD disc (DVD-Video, DVD-RAM, DVD-R, DVD-RW, DVD+R, DVD+RW, etc.) or a Blu-ray (registered trademark) disc.
- the selector 946 selects an encoded bit stream input from the tuner 941 or the encoder 943 when recording video and audio, and outputs the selected encoded bit stream to the HDD 944 or the disk drive 945. In addition, the selector 946 outputs the encoded bit stream input from the HDD 944 or the disk drive 945 to the decoder 947 during video and audio reproduction.
- the decoder 947 decodes the encoded bit stream and generates video data and audio data. Then, the decoder 947 outputs the generated video data to the OSD 948. The decoder 947 also outputs the generated audio data to an external speaker.
- OSD 948 reproduces the video data input from the decoder 947 and displays the video. Further, the OSD 948 may superimpose a GUI image such as a menu, a button, or a cursor on the video to be displayed.
- the control unit 949 includes a processor such as a CPU and memories such as a RAM and a ROM.
- the memory stores a program executed by the CPU, program data, and the like.
- the program stored in the memory is read and executed by the CPU when the recording / reproducing apparatus 940 is activated, for example.
- the CPU controls the operation of the recording / reproducing apparatus 940 in accordance with an operation signal input from the user interface 950, for example, by executing the program.
- the user interface 950 is connected to the control unit 949.
- the user interface 950 includes, for example, buttons and switches for the user to operate the recording / reproducing device 940, a remote control signal receiving unit, and the like.
- the user interface 950 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 949.
- the encoder 943 has the function of the image encoding apparatus according to the above-described embodiment.
- the decoder 947 has the function of the image decoding apparatus according to the above-described embodiment.
- FIG. 37 shows an example of a schematic configuration of an imaging apparatus to which the above-described embodiment is applied.
- the imaging device 960 images a subject to generate an image, encodes the image data, and records it on a recording medium.
- the imaging device 960 includes an optical block 961, an imaging unit 962, a signal processing unit 963, an image processing unit 964, a display unit 965, an external interface 966, a memory 967, a media drive 968, an OSD 969, a control unit 970, a user interface 971, and a bus 972.
- the optical block 961 is connected to the imaging unit 962.
- the imaging unit 962 is connected to the signal processing unit 963.
- the display unit 965 is connected to the image processing unit 964.
- the user interface 971 is connected to the control unit 970.
- the bus 972 connects the image processing unit 964, the external interface 966, the memory 967, the media drive 968, the OSD 969, and the control unit 970 to each other.
- the optical block 961 includes a focus lens and a diaphragm mechanism.
- the optical block 961 forms an optical image of the subject on the imaging surface of the imaging unit 962.
- the imaging unit 962 includes an image sensor such as a CCD (Charge-Coupled Device) or a CMOS (Complementary Metal-Oxide Semiconductor), and converts an optical image formed on the imaging surface into an image signal as an electrical signal by photoelectric conversion. Then, the imaging unit 962 outputs the image signal to the signal processing unit 963.
- the signal processing unit 963 performs various camera signal processing such as knee correction, gamma correction, and color correction on the image signal input from the imaging unit 962.
- the signal processing unit 963 outputs the image data after the camera signal processing to the image processing unit 964.
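The camera signal processing mentioned above (knee correction followed by gamma correction) can be sketched in a few lines. The curve shapes and parameter values below are illustrative assumptions, not values taken from this disclosure.

```python
import numpy as np

def knee_correction(x, knee_point=0.8, knee_slope=0.2):
    # Below the knee point the signal passes through unchanged;
    # above it, highlights are compressed with a gentler slope.
    return np.where(x <= knee_point, x, knee_point + (x - knee_point) * knee_slope)

def gamma_correction(x, gamma=2.2):
    # Encode linear-light values with a power law for display-referred output.
    return np.clip(x, 0.0, 1.0) ** (1.0 / gamma)

# Normalized sensor values; 1.2 represents an overexposed highlight.
signal = np.array([0.0, 0.5, 0.9, 1.2])
out = gamma_correction(knee_correction(signal))
```

Color correction would typically follow as a per-pixel 3x3 matrix multiply; it is omitted here for brevity.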
- the image processing unit 964 encodes the image data input from the signal processing unit 963 and generates encoded data. Then, the image processing unit 964 outputs the generated encoded data to the external interface 966 or the media drive 968. The image processing unit 964 also decodes encoded data input from the external interface 966 or the media drive 968 to generate image data. Then, the image processing unit 964 outputs the generated image data to the display unit 965. In addition, the image processing unit 964 may display the image by outputting the image data input from the signal processing unit 963 to the display unit 965. Further, the image processing unit 964 may superimpose display data acquired from the OSD 969 on an image output to the display unit 965.
- the OSD 969 generates a GUI image such as a menu, a button, or a cursor, and outputs the generated image to the image processing unit 964.
- the external interface 966 is configured as a USB input / output terminal, for example.
- the external interface 966 connects the imaging device 960 and a printer, for example, when printing an image.
- a drive is connected to the external interface 966 as necessary.
- a removable medium such as a magnetic disk or an optical disk is attached to the drive, and a program read from the removable medium can be installed in the imaging device 960.
- the external interface 966 may be configured as a network interface connected to a network such as a LAN or the Internet. That is, the external interface 966 has a role as a transmission unit in the imaging device 960.
- the recording medium mounted on the media drive 968 may be any readable / writable removable medium such as a magnetic disk, a magneto-optical disk, an optical disk, or a semiconductor memory.
- alternatively, a recording medium may be fixedly mounted on the media drive 968 to configure a non-portable storage unit such as an internal hard disk drive or an SSD (Solid State Drive).
- the control unit 970 includes a processor such as a CPU and memories such as a RAM and a ROM.
- the memory stores a program executed by the CPU, program data, and the like.
- the program stored in the memory is read and executed by the CPU when the imaging device 960 is activated, for example.
- the CPU controls the operation of the imaging device 960 according to an operation signal input from the user interface 971 by executing the program.
- the user interface 971 is connected to the control unit 970.
- the user interface 971 includes, for example, buttons and switches for the user to operate the imaging device 960.
- the user interface 971 detects an operation by the user via these components, generates an operation signal, and outputs the generated operation signal to the control unit 970.
- the image processing unit 964 has the functions of the image encoding device and the image decoding device according to the above-described embodiment. Accordingly, the dynamic range of the image can be accurately reproduced when the image is encoded and decoded by the imaging device 960.
- the display control unit 55 and the display unit 56 of FIG. 19 may be provided outside the decoding device 50.
- the present technology can take a cloud computing configuration in which one function is shared and jointly processed by a plurality of devices via a network.
- each step described in the above flowchart can be executed by one device or can be shared by a plurality of devices.
- in a case where a plurality of processes are included in one step, the plurality of processes can be executed by one apparatus or shared and executed by a plurality of apparatuses.
- the method for transmitting such information is not limited to such an example.
- these pieces of information may be transmitted or recorded as separate data associated with the encoded bitstream without being multiplexed into the encoded bitstream.
- the term “associate” means that an image (which may be a part of an image, such as a slice or a block) included in the bitstream and information corresponding to that image can be linked at the time of decoding. That is, the information may be transmitted on a transmission path different from that of the image (or bit stream).
- Information may be recorded on a recording medium (or another recording area of the same recording medium) different from the image (or bit stream). Furthermore, the information and the image (or bit stream) may be associated with each other in an arbitrary unit such as a plurality of frames, one frame, or a part of the frame.
- the flag is not limited to binary alternatives such as presence / absence (0 or 1); it also includes information that can identify a specific item from among a plurality of options.
- the present technology can also be configured as follows.
- (1) An image processing apparatus including: an encoding unit that encodes an image to generate a bitstream; a setting unit that sets dynamic range characteristic information indicating a characteristic of a dynamic range assigned to a developed image with respect to a captured image; and a transmission unit that transmits the bitstream generated by the encoding unit and the dynamic range characteristic information set by the setting unit.
- (2) The image processing apparatus according to (1), wherein the setting unit sets, as the dynamic range characteristic information, code information indicating a code of the dynamic range assigned to the developed image with respect to the captured image.
- (3) The image processing apparatus according to (1) or (2), wherein the setting unit sets, as the dynamic range characteristic information, code information indicating a code assigned to the developed image with respect to a white level of the captured image.
- (4) The image processing apparatus according to any one of (1) to (3), wherein the setting unit sets, as the dynamic range characteristic information, white level code information indicating a code assigned to the developed image with respect to the white level of the captured image.
- (5) The image processing apparatus according to any one of (1) to (4), wherein the setting unit sets, as the dynamic range characteristic information, maximum white level code information indicating a maximum value of a code assigned to a white level of the developed image.
- (6) The image processing apparatus according to any one of (1) to (5), wherein the setting unit sets, as the dynamic range characteristic information, black level code information indicating a code of a black level of the developed image.
- (7) The image processing apparatus according to any one of (1) to (6), wherein the setting unit sets, as the dynamic range characteristic information, gray level code information indicating a code of a gray level of the developed image.
- (8) The image processing apparatus according to any one of (1) to (7), wherein the setting unit sets, as the dynamic range characteristic information, maximum white level information indicating a maximum value of a white level of the captured image.
- (9) The image processing apparatus according to any one of (1) to (8), wherein the setting unit sets, as the dynamic range characteristic information, information indicating a luminance range of a region of interest of an image obtained by decoding the bitstream.
- (10) The image processing apparatus according to any one of (1) to (9), wherein the setting unit sets, as the dynamic range characteristic information, information indicating a position and an offset of a region of interest of an image obtained by decoding the bitstream.
- (11) The image processing apparatus according to any one of (1) to (10), wherein the transmission unit transmits the dynamic range characteristic information as auxiliary information used when displaying an image obtained by decoding the bitstream.
- (12) The image processing apparatus according to any one of (1) to (10), wherein the transmission unit transmits the dynamic range characteristic information as extended auxiliary information obtained by extending existing auxiliary information.
- (13) The image processing apparatus according to any one of (1) to (10), wherein the transmission unit transmits the dynamic range characteristic information as tone_mapping_information SEI (Supplemental Enhancement Information).
- (14) The image processing apparatus according to any one of (1) to (10), wherein the transmission unit extends model_id, which is used when transmitting the dynamic range characteristic information, targeting tone_mapping_information SEI, and transmits the dynamic range characteristic information as SEI.
- (15) The image processing apparatus according to any one of (1) to (10), wherein the transmission unit transmits the dynamic range characteristic information as VUI (Video Usability Information) indicating usability of the image in sequence units.
- (16) The image processing apparatus according to any one of (1) to (15), wherein the encoding unit encodes the image in accordance with an encoding scheme conforming to the AVC/H.264 standard.
- (17) An image processing method including: encoding an image to generate a bitstream; setting dynamic range characteristic information indicating a characteristic of a dynamic range assigned to a developed image with respect to a captured image; and transmitting the generated bitstream and the set dynamic range characteristic information.
- (18) An image processing apparatus including: a receiving unit that receives a bitstream and dynamic range characteristic information indicating a characteristic of a dynamic range for an image obtained by decoding the bitstream; a decoding unit that decodes the bitstream received by the receiving unit to generate an image; and an image adjusting unit that adjusts a dynamic range of the image generated by the decoding unit, using the dynamic range characteristic information received by the receiving unit.
- (19) The image processing apparatus according to (18), further including a receiving unit that receives the bitstream and the dynamic range characteristic information, wherein the decoding unit decodes the bitstream received by the receiving unit, and the image adjusting unit adjusts the dynamic range of the image generated by the decoding unit using the dynamic range characteristic information received by the receiving unit.
- (20) An image processing method including: receiving a bitstream and dynamic range characteristic information indicating a characteristic of a dynamic range for an image obtained by decoding the bitstream; decoding the received bitstream to generate an image; and adjusting a dynamic range of the generated image using the received dynamic range characteristic information.
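To make the decode-side configurations concrete, the sketch below models the dynamic range characteristic information as a small record and shows one way a receiver could adjust decoded code values to a display's capability. All field and function names, the linear mapping, and the 400% peak value are hypothetical illustrations; the disclosure specifies what the information indicates, not a concrete syntax or adjustment algorithm.

```python
from dataclasses import dataclass

@dataclass
class DynamicRangeCharacteristicInfo:
    # Hypothetical field names for illustration only.
    max_white_level_code_value: int  # code assigned to the brightest developed level
    white_level_code_value: int      # code assigned to the reference (100%) white level
    black_level_code_value: int      # code assigned to the black level

def adjust_dynamic_range(code_value, info, display_max_percent=400):
    """Receiving-side sketch: map a decoded code value to a relative luminance
    (in % of reference white) and clamp it to what the display can show."""
    span = info.max_white_level_code_value - info.black_level_code_value
    level = (code_value - info.black_level_code_value) / span  # 0.0 .. 1.0
    # Scale so that the maximum code corresponds to the display's peak white.
    return min(level * display_max_percent, display_max_percent)

# 10-bit codes with black at 0 and peak white at 1023 (illustrative values).
info = DynamicRangeCharacteristicInfo(max_white_level_code_value=1023,
                                      white_level_code_value=255,
                                      black_level_code_value=0)
```

A display with a lower peak (for example, 200% white) would simply receive a smaller `display_max_percent`, clamping highlights instead of scaling them.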
Abstract
Description
1. First embodiment (HEVC encoding/decoding device)
2. Second embodiment (AVC encoding/decoding device)
3. Third embodiment (multi-view image encoding/multi-view image decoding device)
4. Fourth embodiment (hierarchical image encoding/hierarchical image decoding device)
5. Fifth embodiment (computer)
6. Application examples
[Configuration Example of First Embodiment of Encoding Device]
FIG. 3 is a block diagram showing a configuration example of a first embodiment of an encoding device as an image processing device to which the present technology is applied.
FIG. 4 is a block diagram showing a configuration example of the encoding unit 2 in FIG. 3.
Next, the dynamic range characteristic information set by the setting unit 3 in FIG. 3 will be described with reference to FIG. 5. Note that the values on the vertical and horizontal axes in FIG. 5 are examples, and the information is not limited to those values.
FIG. 16 is a flowchart explaining the generation process of the encoding device 1 in FIG. 3. The example of FIG. 16 describes the case of transmission method 3 described above.
FIG. 19 is a block diagram showing a configuration example of a first embodiment of a decoding device, as an image processing device to which the present technology is applied, which decodes the encoded stream transmitted from the encoding device 1 in FIG. 3.
FIG. 20 is a block diagram showing a configuration example of the decoding unit 53 in FIG. 19.
FIG. 21 is a flowchart explaining the display process performed by the decoding device 50 in FIG. 19.
[Configuration Example of Second Embodiment of Encoding Device]
FIG. 23 is a block diagram showing a configuration example of a second embodiment of an encoding device as an image processing device to which the present technology is applied.
FIG. 24 is a block diagram showing a configuration example of the encoding unit 211 in FIG. 23.
FIG. 25 is a block diagram showing a configuration example of a second embodiment of a decoding device, as an image processing device to which the present technology is applied, which decodes the encoded stream transmitted from the encoding device 201 in FIG. 23.
FIG. 26 is a block diagram showing a configuration example of the decoding unit 261 in FIG. 25.
[Application to Multi-view Image Encoding and Multi-view Image Decoding]
The series of processes described above can be applied to multi-view image encoding and multi-view image decoding. FIG. 27 shows an example of a multi-view image encoding scheme.
FIG. 28 is a diagram showing a multi-view image encoding device that performs the multi-view image encoding described above. As shown in FIG. 28, the multi-view image encoding device 600 includes an encoding unit 601, an encoding unit 602, and a multiplexing unit 603.
FIG. 29 is a diagram showing a multi-view image decoding device that performs the multi-view image decoding described above. As shown in FIG. 29, the multi-view image decoding device 610 includes a demultiplexing unit 611, a decoding unit 612, and a decoding unit 613.
[Application to Hierarchical Image Encoding and Hierarchical Image Decoding]
The series of processes described above can be applied to hierarchical image encoding and hierarchical image decoding. FIG. 30 shows an example of a hierarchical image encoding scheme.
FIG. 31 is a diagram showing a hierarchical image encoding device that performs the hierarchical image encoding described above. As shown in FIG. 31, the hierarchical image encoding device 620 includes an encoding unit 621, an encoding unit 622, and a multiplexing unit 623.
FIG. 32 is a diagram showing a hierarchical image decoding device that performs the hierarchical image decoding described above. As shown in FIG. 32, the hierarchical image decoding device 630 includes a demultiplexing unit 631, a decoding unit 632, and a decoding unit 633.
[Configuration Example of Computer]
The series of processes described above can be executed by hardware or by software. When the series of processes is executed by software, a program constituting the software is installed in a computer. Here, the computer includes a computer incorporated in dedicated hardware and, for example, a general-purpose personal computer capable of executing various functions by installing various programs.
[First Application Example: Television Receiver]
FIG. 34 shows an example of a schematic configuration of a television device to which the above-described embodiment is applied. The television device 900 includes an antenna 901, a tuner 902, a demultiplexer 903, a decoder 904, a video signal processing unit 905, a display unit 906, an audio signal processing unit 907, a speaker 908, an external interface 909, a control unit 910, a user interface 911, and a bus 912.
FIG. 35 shows an example of a schematic configuration of a mobile phone to which the above-described embodiment is applied. The mobile phone 920 includes an antenna 921, a communication unit 922, an audio codec 923, a speaker 924, a microphone 925, a camera unit 926, an image processing unit 927, a demultiplexing unit 928, a recording/reproducing unit 929, a display unit 930, a control unit 931, an operation unit 932, and a bus 933.
FIG. 36 shows an example of a schematic configuration of a recording/reproducing device to which the above-described embodiment is applied. The recording/reproducing device 940 encodes, for example, audio data and video data of a received broadcast program and records them on a recording medium. The recording/reproducing device 940 may also, for example, encode audio data and video data acquired from another device and record them on a recording medium. In addition, the recording/reproducing device 940 reproduces, for example, data recorded on a recording medium on a monitor and a speaker in accordance with a user instruction. At this time, the recording/reproducing device 940 decodes the audio data and the video data.
FIG. 37 shows an example of a schematic configuration of an imaging device to which the above-described embodiment is applied. The imaging device 960 images a subject to generate an image, encodes the image data, and records the encoded data on a recording medium.
(1) An image processing apparatus including: an encoding unit that encodes an image to generate a bitstream; a setting unit that sets dynamic range characteristic information indicating a characteristic of a dynamic range assigned to a developed image with respect to a captured image; and a transmission unit that transmits the bitstream generated by the encoding unit and the dynamic range characteristic information set by the setting unit.
(2) The image processing apparatus according to (1), wherein the setting unit sets, as the dynamic range characteristic information, code information indicating a code of the dynamic range assigned to the developed image with respect to the captured image.
(3) The image processing apparatus according to (1) or (2), wherein the setting unit sets, as the dynamic range characteristic information, code information indicating a code assigned to the developed image with respect to a white level of the captured image.
(4) The image processing apparatus according to any one of (1) to (3), wherein the setting unit sets, as the dynamic range characteristic information, white level code information indicating a code assigned to the developed image with respect to the white level of the captured image.
(5) The image processing apparatus according to any one of (1) to (4), wherein the setting unit sets, as the dynamic range characteristic information, maximum white level code information indicating a maximum value of a code assigned to a white level of the developed image.
(6) The image processing apparatus according to any one of (1) to (5), wherein the setting unit sets, as the dynamic range characteristic information, black level code information indicating a code of a black level of the developed image.
(7) The image processing apparatus according to any one of (1) to (6), wherein the setting unit sets, as the dynamic range characteristic information, gray level code information indicating a code of a gray level of the developed image.
(8) The image processing apparatus according to any one of (1) to (7), wherein the setting unit sets, as the dynamic range characteristic information, maximum white level information indicating a maximum value of a white level of the captured image.
(9) The image processing apparatus according to any one of (1) to (8), wherein the setting unit sets, as the dynamic range characteristic information, information indicating a luminance range of a region of interest of an image obtained by decoding the bitstream.
(10) The image processing apparatus according to any one of (1) to (9), wherein the setting unit sets, as the dynamic range characteristic information, information indicating a position and an offset of a region of interest of an image obtained by decoding the bitstream.
(11) The image processing apparatus according to any one of (1) to (10), wherein the transmission unit transmits the dynamic range characteristic information as auxiliary information used when displaying an image obtained by decoding the bitstream.
(12) The image processing apparatus according to any one of (1) to (10), wherein the transmission unit transmits the dynamic range characteristic information as extended auxiliary information obtained by extending existing auxiliary information.
(13) The image processing apparatus according to any one of (1) to (10), wherein the transmission unit transmits the dynamic range characteristic information as tone_mapping_information SEI (Supplemental Enhancement Information).
(14) The image processing apparatus according to any one of (1) to (10), wherein the transmission unit extends model_id, which is used when transmitting the dynamic range characteristic information, targeting tone_mapping_information SEI, and transmits the dynamic range characteristic information as SEI.
(15) The image processing apparatus according to any one of (1) to (10), wherein the transmission unit transmits the dynamic range characteristic information as VUI (Video Usability Information) indicating usability of the image in sequence units.
(16) The image processing apparatus according to any one of (1) to (15), wherein the encoding unit encodes the image in accordance with an encoding scheme conforming to the AVC/H.264 standard.
(17) An image processing method including: encoding an image to generate a bitstream; setting dynamic range characteristic information indicating a characteristic of a dynamic range assigned to a developed image with respect to a captured image; and transmitting the generated bitstream and the set dynamic range characteristic information.
(18) An image processing apparatus including: a receiving unit that receives a bitstream and dynamic range characteristic information indicating a characteristic of a dynamic range for an image obtained by decoding the bitstream; a decoding unit that decodes the bitstream received by the receiving unit to generate an image; and an image adjusting unit that adjusts a dynamic range of the image generated by the decoding unit, using the dynamic range characteristic information received by the receiving unit.
(19) The image processing apparatus according to (18), further including a receiving unit that receives the bitstream and the dynamic range characteristic information, wherein the decoding unit decodes the bitstream received by the receiving unit, and the image adjusting unit adjusts the dynamic range of the image generated by the decoding unit using the dynamic range characteristic information received by the receiving unit.
(20) An image processing method including: receiving a bitstream and dynamic range characteristic information indicating a characteristic of a dynamic range for an image obtained by decoding the bitstream; decoding the received bitstream to generate an image; and adjusting a dynamic range of the generated image using the received dynamic range characteristic information.
Claims (20)
- 1. An image processing apparatus comprising: an encoding unit that encodes an image to generate a bitstream; a setting unit that sets dynamic range characteristic information indicating a characteristic of a dynamic range assigned to a developed image with respect to a captured image; and a transmission unit that transmits the bitstream generated by the encoding unit and the dynamic range characteristic information set by the setting unit.
- 2. The image processing apparatus according to claim 1, wherein the setting unit sets, as the dynamic range characteristic information, code information indicating a code of the dynamic range assigned to the developed image with respect to the captured image.
- 3. The image processing apparatus according to claim 2, wherein the setting unit sets, as the dynamic range characteristic information, code information indicating a code assigned to the developed image with respect to a white level of the captured image.
- 4. The image processing apparatus according to claim 3, wherein the setting unit sets, as the dynamic range characteristic information, white level code information indicating a code assigned to the developed image with respect to the white level of the captured image.
- 5. The image processing apparatus according to claim 4, wherein the setting unit sets, as the dynamic range characteristic information, maximum white level code information indicating a maximum value of a code assigned to a white level of the developed image.
- 6. The image processing apparatus according to claim 1, wherein the setting unit sets, as the dynamic range characteristic information, black level code information indicating a code of a black level of the developed image.
- 7. The image processing apparatus according to claim 1, wherein the setting unit sets, as the dynamic range characteristic information, gray level code information indicating a code of a gray level of the developed image.
- 8. The image processing apparatus according to claim 1, wherein the setting unit sets, as the dynamic range characteristic information, maximum white level information indicating a maximum value of a white level of the captured image.
- 9. The image processing apparatus according to claim 1, wherein the setting unit sets, as the dynamic range characteristic information, information indicating a luminance range of a region of interest of an image obtained by decoding the bitstream.
- 10. The image processing apparatus according to claim 1, wherein the setting unit sets, as the dynamic range characteristic information, information indicating a position and an offset of a region of interest of an image obtained by decoding the bitstream.
- 11. The image processing apparatus according to claim 1, wherein the transmission unit transmits the dynamic range characteristic information as auxiliary information used when displaying an image obtained by decoding the bitstream.
- 12. The image processing apparatus according to claim 1, wherein the transmission unit transmits the dynamic range characteristic information as extended auxiliary information obtained by extending existing auxiliary information.
- 13. The image processing apparatus according to claim 1, wherein the transmission unit transmits the dynamic range characteristic information as tone_mapping_information SEI (Supplemental Enhancement Information).
- 14. The image processing apparatus according to claim 13, wherein the transmission unit extends model_id, which is used when transmitting the dynamic range characteristic information, targeting tone_mapping_information SEI, and transmits the dynamic range characteristic information as SEI.
- 15. The image processing apparatus according to claim 1, wherein the transmission unit transmits the dynamic range characteristic information as VUI (Video Usability Information) indicating usability of the image in sequence units.
- 16. The image processing apparatus according to claim 1, wherein the encoding unit encodes the image in accordance with an encoding scheme conforming to the AVC/H.264 standard.
- 17. An image processing method comprising, by an image processing apparatus: encoding an image to generate a bitstream; setting dynamic range characteristic information indicating a characteristic of a dynamic range assigned to a developed image with respect to a captured image; and transmitting the generated bitstream and the set dynamic range characteristic information.
- 18. An image processing apparatus comprising: a decoding unit that decodes a bitstream to generate an image; and an image adjusting unit that adjusts a dynamic range of the image generated by the decoding unit, using dynamic range characteristic information indicating a characteristic of a dynamic range assigned to a developed image with respect to a captured image.
- 19. The image processing apparatus according to claim 18, further comprising a receiving unit that receives the bitstream and the dynamic range characteristic information, wherein the decoding unit decodes the bitstream received by the receiving unit, and the image adjusting unit adjusts the dynamic range of the image generated by the decoding unit using the dynamic range characteristic information received by the receiving unit.
- 20. An image processing method comprising: decoding a bitstream to generate an image; and adjusting a dynamic range of the generated image using dynamic range characteristic information indicating a characteristic of a dynamic range assigned to a developed image with respect to a captured image.
Priority Applications (13)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
MX2014015818A MX344551B (es) | 2012-06-29 | 2013-06-21 | Dispositivo y metodo de procesamiento de imagenes. |
EP19158894.6A EP3512203B1 (en) | 2012-06-29 | 2013-06-21 | Image processing device and method |
AU2013281949A AU2013281949B2 (en) | 2012-06-29 | 2013-06-21 | Image processing device and method |
CA2875199A CA2875199C (en) | 2012-06-29 | 2013-06-21 | Image processing device and method |
JP2014522599A JP6219823B2 (ja) | 2012-06-29 | 2013-06-21 | 画像処理装置および方法、並びに記録媒体 |
EP13809522.9A EP2869558B1 (en) | 2012-06-29 | 2013-06-21 | Image processing device and method |
BR112014032224-4A BR112014032224B1 (pt) | 2012-06-29 | 2013-06-21 | Dispositivo e método de processamento de imagem, e, meio de armazenamento não transitório legível por computador |
CN201380033132.8A CN104380738B (zh) | 2012-06-29 | 2013-06-21 | 图像处理装置及方法 |
RU2014152106A RU2653464C2 (ru) | 2012-06-29 | 2013-06-21 | Устройство для обработки изображений и способ обработки изображений |
KR1020147035887A KR102161017B1 (ko) | 2012-06-29 | 2013-06-21 | 화상 처리 장치 및 적어도 하나의 컴퓨터 판독가능한 기억 매체 |
US14/497,249 US20150010059A1 (en) | 2012-06-29 | 2014-09-25 | Image processing device and method |
ZA2014/09126A ZA201409126B (en) | 2012-06-29 | 2014-12-19 | Image processing device and method |
US14/601,358 US20150131904A1 (en) | 2012-06-29 | 2015-01-21 | Image processing device and method |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2012-147885 | 2012-06-29 | ||
JP2012147885 | 2012-06-29 | ||
JP2012-183164 | 2012-08-22 | ||
JP2012183164 | 2012-08-22 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/497,249 Continuation US20150010059A1 (en) | 2012-06-29 | 2014-09-25 | Image processing device and method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014002901A1 true WO2014002901A1 (ja) | 2014-01-03 |
Family
ID=49783053
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/067114 WO2014002901A1 (ja) | 2012-06-29 | 2013-06-21 | 画像処理装置および方法 |
Country Status (13)
Country | Link |
---|---|
US (2) | US20150010059A1 (ja) |
EP (2) | EP2869558B1 (ja) |
JP (2) | JP6219823B2 (ja) |
KR (1) | KR102161017B1 (ja) |
CN (2) | CN104380738B (ja) |
AR (1) | AR091515A1 (ja) |
AU (1) | AU2013281949B2 (ja) |
BR (1) | BR112014032224B1 (ja) |
CA (1) | CA2875199C (ja) |
MX (1) | MX344551B (ja) |
RU (1) | RU2653464C2 (ja) |
TW (1) | TWI586150B (ja) |
WO (1) | WO2014002901A1 (ja) |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015118909A1 (ja) * | 2014-02-07 | 2015-08-13 | ソニー株式会社 | 送信装置、送信方法、受信装置、受信方法、表示装置および表示方法 |
WO2015125719A1 (ja) * | 2014-02-21 | 2015-08-27 | ソニー株式会社 | 送信装置、送信方法、受信装置および受信方法 |
JP2015192419A (ja) * | 2014-03-28 | 2015-11-02 | ソニー株式会社 | 再生装置、再生方法、およびプログラム |
WO2015190246A1 (ja) * | 2014-06-13 | 2015-12-17 | ソニー株式会社 | 送信装置、送信方法、受信装置および受信方法 |
WO2015198553A1 (ja) * | 2014-06-26 | 2015-12-30 | パナソニックIpマネジメント株式会社 | データ出力装置、データ出力方法及びデータ生成方法 |
WO2016021120A1 (ja) * | 2014-08-07 | 2016-02-11 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 再生装置、再生方法および記録媒体 |
JP2016034125A (ja) * | 2014-07-28 | 2016-03-10 | ソニー株式会社 | 画像処理装置及び画像処理方法 |
JP2016039627A (ja) * | 2014-08-07 | 2016-03-22 | パナソニック インテレクチュアル プロパティ コーポレーション オブアメリカPanasonic Intellectual Property Corporation of America | 再生装置、再生方法および記録媒体 |
JP2016048888A (ja) * | 2014-08-28 | 2016-04-07 | ソニー株式会社 | 送信装置、送信方法、受信装置および受信方法 |
JP2016058848A (ja) * | 2014-09-08 | 2016-04-21 | ソニー株式会社 | 画像処理装置及び画像処理方法 |
JP2016082498A (ja) * | 2014-10-21 | 2016-05-16 | 三菱電機株式会社 | デジタル放送受信装置及び方法、並びにプログラム及び記録媒体 |
WO2016108268A1 (ja) * | 2014-12-29 | 2016-07-07 | ソニー株式会社 | 送信装置、送信方法、受信装置および受信方法 |
JP2016122985A (ja) * | 2014-12-25 | 2016-07-07 | 株式会社東芝 | 画像処理システム、画像処理装置および画像処理方法 |
JP2017509057A (ja) * | 2014-02-26 | 2017-03-30 | トムソン ライセンシングThomson Licensing | Hdr画像を符号化及び復号する方法及び装置 |
JP2017085481A (ja) * | 2015-10-30 | 2017-05-18 | キヤノン株式会社 | 映像処理装置、映像処理方法、及び映像処理プログラム |
JP2017143546A (ja) * | 2017-03-21 | 2017-08-17 | ソニー株式会社 | 再生装置、記録媒体、表示装置、および情報処理方法 |
JP2018530281A (ja) * | 2015-09-23 | 2018-10-11 | アリス エンタープライジズ エルエルシーArris Enterprises Llc | 高ダイナミックレンジ映像データの再形成および適応のためのシステムおよび方法 |
JP2018530237A (ja) * | 2015-09-23 | 2018-10-11 | アリス エンタープライジズ エルエルシーArris Enterprises Llc | トランスポートストリームにおける高ダイナミックレンジおよび広色域コンテンツの伝達 |
JP2018198458A (ja) * | 2018-08-28 | 2018-12-13 | ソニー株式会社 | 再生装置、表示装置、情報処理方法、および記録媒体 |
US20190075296A1 (en) * | 2014-06-27 | 2019-03-07 | Panasonic Intellectual Property Management Co., Ltd. | Data output apparatus, data output method, and data generation method |
JP2020025277A (ja) * | 2014-12-03 | 2020-02-13 | パナソニックIpマネジメント株式会社 | データ符号化方法、データ復号方法、データ符号化装置及びデータ復号装置 |
CN111899769A (zh) * | 2014-09-12 | 2020-11-06 | 松下电器(美国)知识产权公司 | 非暂时性计算机可读介质、再现装置以及再现方法 |
JP2021192558A (ja) * | 2020-01-16 | 2021-12-16 | ソニーグループ株式会社 | 送信装置、送信方法、受信装置および受信方法 |
JP2022009011A (ja) * | 2014-02-25 | 2022-01-14 | アップル インコーポレイテッド | ビデオ符号化及び復号のための適応的伝達関数 |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6381215B2 (ja) * | 2014-01-29 | 2018-08-29 | キヤノン株式会社 | 画像処理装置、画像処理方法、表示装置、表示装置の制御方法、及び、プログラム |
US10547844B2 (en) * | 2014-12-04 | 2020-01-28 | Lg Electronics Inc. | Broadcasting signal transmission and reception method and device |
US20180070083A1 (en) * | 2015-03-24 | 2018-03-08 | Sony Corporation | Transmission device, transmission method, reception device, and reception method |
CN112040237A (zh) | 2015-07-16 | 2020-12-04 | 杜比实验室特许公司 | 用于hdr和宽色域信号的信号整形和编码 |
JP6986670B2 (ja) * | 2015-09-11 | 2021-12-22 | パナソニックIpマネジメント株式会社 | 映像受信方法及び映像受信装置 |
EP3349474A4 (en) * | 2015-09-11 | 2018-07-25 | Panasonic Intellectual Property Management Co., Ltd. | Video reception method, video transmission method, video reception apparatus, and video transmission apparatus |
US10129558B2 (en) | 2015-09-21 | 2018-11-13 | Qualcomm Incorporated | Supplement enhancement information (SEI) messages for high dynamic range and wide color gamut video coding |
US10244249B2 (en) * | 2015-09-21 | 2019-03-26 | Qualcomm Incorporated | Fixed point implementation of range adjustment of components in video coding |
JP6132006B1 (ja) * | 2015-12-02 | 2017-05-24 | NEC Corporation | Video encoding device, video system, video encoding method, and video encoding program |
BR112017016037A2 (pt) | 2015-12-17 | 2018-03-20 | Koninklijke Philips N.V. | HDR video decoder and encoder; video decoding method; HDR video encoding method; and computer-readable memory |
US10542296B2 (en) * | 2016-05-10 | 2020-01-21 | Dolby Laboratories Licensing Corporation | Chroma reshaping of HDR video signals |
US11102495B2 (en) * | 2016-05-17 | 2021-08-24 | Qualcomm Incorporated | Methods and systems for generating and processing content color volume messages for video |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH057351A (ja) * | 1991-06-26 | 1993-01-14 | Shimadzu Corp | Image signal conversion device |
JP2002542739A (ja) * | 1999-04-15 | 2002-12-10 | Sarnoff Corporation | Standard compression with dynamic range enhancement of image regions |
JP2006013750A (ja) * | 2004-06-24 | 2006-01-12 | Canon Inc | Video processing method and apparatus |
JP2007257641A (ja) * | 2006-03-24 | 2007-10-04 | Sharp Corp | Method, system, image receiving device, image transmitting device, and program for tone mapping messaging |
JP2009538560A (ja) * | 2006-05-25 | 2009-11-05 | Thomson Licensing | Method and system for weighted encoding |
Family Cites Families (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5956674A (en) * | 1995-12-01 | 1999-09-21 | Digital Theater Systems, Inc. | Multi-channel predictive subband audio coder using psychoacoustic adaptive bit allocation in frequency, time and over the multiple channels |
SG116400A1 (en) * | 1997-10-24 | 2005-11-28 | Matsushita Electric Ind Co Ltd | A method for computational graceful degradation in an audiovisual compression system. |
FR2812506B1 (fr) * | 2000-07-25 | 2002-12-20 | Canon Kk | Method and device for alerting during progressive decoding of a digital image encoded with a region of interest |
JP2004112048A (ja) * | 2002-09-13 | 2004-04-08 | Ricoh Co Ltd | Image reading device |
JP2005143032A (ja) * | 2003-11-10 | 2005-06-02 | Fuji Photo Film Co Ltd | Imaging device |
JP2005208817A (ja) * | 2004-01-21 | 2005-08-04 | Konica Minolta Photo Imaging Inc | Image processing method, image processing device, and image recording device |
US8155454B2 (en) * | 2006-07-20 | 2012-04-10 | Qualcomm Incorporated | Method and apparatus for encoder assisted post-processing |
RU2400815C2 (ru) * | 2006-10-09 | 2010-09-27 | Samsung Electronics Co., Ltd. | Method for improving the quality of a digital photographic image |
US8106880B2 (en) * | 2007-03-27 | 2012-01-31 | Avaya Inc. | Sharing a video display between telephony and computing devices |
JP2008263547A (ja) * | 2007-04-13 | 2008-10-30 | Konica Minolta Holdings Inc | Imaging device |
US8144214B2 (en) * | 2007-04-18 | 2012-03-27 | Panasonic Corporation | Imaging apparatus, imaging method, integrated circuit, and storage medium |
JP5133085B2 (ja) * | 2007-04-18 | 2013-01-30 | Panasonic Corporation | Imaging apparatus and imaging method |
US20090317017A1 (en) * | 2008-06-20 | 2009-12-24 | The Hong Kong University Of Science And Technology | Image characteristic oriented tone mapping for high dynamic range images |
JP4977573B2 (ja) * | 2007-10-11 | 2012-07-18 | ON Semiconductor Trading Ltd. | Auto gain control circuit in a video signal processing device |
JP2009223722A (ja) * | 2008-03-18 | 2009-10-01 | Sony Corporation | Image signal processing device, image signal processing method, and program |
JP4618342B2 (ja) * | 2008-05-20 | 2011-01-26 | Texas Instruments Japan Ltd. | Solid-state imaging device |
CN102132566B (zh) * | 2008-10-07 | 2015-05-20 | NTT Docomo, Inc. | Image processing device and method, video encoding device and method, video decoding device and method, and encoding/decoding system and method |
JP5589006B2 (ja) * | 2009-03-13 | 2014-09-10 | Dolby Laboratories Licensing Corporation | Layered compression of high dynamic range, visual dynamic range, and wide color gamut video |
JP5119215B2 (ja) * | 2009-07-07 | 2013-01-16 | NTT Docomo, Inc. | Communication terminal and communication control method |
JPWO2011033669A1 (ja) * | 2009-09-18 | 2013-02-07 | Toshiba Corporation | Image display device |
JP5588022B2 (ja) * | 2010-02-22 | 2014-09-10 | Dolby Laboratories Licensing Corporation | Method and system for providing video data to a display subsystem |
US9509935B2 (en) * | 2010-07-22 | 2016-11-29 | Dolby Laboratories Licensing Corporation | Display management server |
CN103210418B (zh) * | 2010-11-23 | 2016-08-17 | Dolby Laboratories Licensing Corporation | Content metadata enhancement of high dynamic range images |
GB2500835B (en) * | 2010-12-10 | 2014-02-12 | Ibm | High-dynamic range video tone mapping |
US9626730B2 (en) * | 2011-05-05 | 2017-04-18 | Arm Limited | Method of and apparatus for encoding and decoding data |
US8787454B1 (en) * | 2011-07-13 | 2014-07-22 | Google Inc. | Method and apparatus for data compression using content-based features |
RU2643485C2 (ru) * | 2011-09-27 | 2018-02-01 | Koninklijke Philips N.V. | Device and method for transforming the dynamic range of images |
CN104185991B (zh) * | 2011-11-09 | 2018-07-06 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Inter-layer prediction between layers of different dynamic sample value ranges |
WO2013107939A1 (en) * | 2012-01-20 | 2013-07-25 | Nokia Corporation | Method for video coding and an apparatus, a computer-program product, a system, and a module for the same |
-
2013
- 2013-06-19 AR ARP130102183A patent/AR091515A1/es active IP Right Grant
- 2013-06-19 TW TW102121737A patent/TWI586150B/zh active
- 2013-06-21 MX MX2014015818A patent/MX344551B/es active IP Right Grant
- 2013-06-21 EP EP13809522.9A patent/EP2869558B1/en active Active
- 2013-06-21 CA CA2875199A patent/CA2875199C/en active Active
- 2013-06-21 RU RU2014152106A patent/RU2653464C2/ru active
- 2013-06-21 EP EP19158894.6A patent/EP3512203B1/en active Active
- 2013-06-21 BR BR112014032224-4A patent/BR112014032224B1/pt active IP Right Grant
- 2013-06-21 CN CN201380033132.8A patent/CN104380738B/zh active Active
- 2013-06-21 WO PCT/JP2013/067114 patent/WO2014002901A1/ja active Application Filing
- 2013-06-21 JP JP2014522599A patent/JP6219823B2/ja active Active
- 2013-06-21 CN CN201811073750.6A patent/CN108965893B/zh active Active
- 2013-06-21 KR KR1020147035887A patent/KR102161017B1/ko active IP Right Grant
- 2013-06-21 AU AU2013281949A patent/AU2013281949B2/en active Active
-
2014
- 2014-09-25 US US14/497,249 patent/US20150010059A1/en active Pending
-
2015
- 2015-01-21 US US14/601,358 patent/US20150131904A1/en not_active Abandoned
-
2017
- 2017-09-28 JP JP2017188174A patent/JP6580648B2/ja active Active
Non-Patent Citations (3)
Title |
---|
BENJAMIN BROSS; WOO-JIN HAN; JENS-RAINER OHM; GARY J. SULLIVAN; THOMAS WIEGAND: "High efficiency video coding (HEVC) text specification draft 7", JCTVC-I1003 VER5, 12 June 2012 (2012-06-12) |
GARY SULLIVAN ET AL.: "Joint Draft 6 of 'New profiles for professional applications' amendment to ITU-T Rec. H.264 & ISO/IEC 14496-10 (Amendment 2 to 2005 edition)", JOINT VIDEO TEAM (JVT) OF ISO/IEC MPEG & ITU-T VCEG (ISO/IEC JTC 1/SC 29/WG 11 AND ITU-T SG16 Q.6), JVT-V204, 22ND MEETING, January 2007 (2007-01-01), MARRAKECH, MOROCCO, pages 1 - 4, 68-74, XP055176939 * |
SALLY HATTORI ET AL.: "Signalling of Luminance Dynamic Range in Tone mapping information SEI", JOINT COLLABORATIVE TEAM ON VIDEO CODING (JCT-VC) OF ITU-T SG 16 WP 3 AND ISO/IEC JTC 1/SC 29/WG 11, JCTVC-J0149, 10TH MEETING, July 2012 (2012-07-01), STOCKHOLM, SE, pages 1 - 7, XP030053806 * |
Cited By (104)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11418820B2 (en) | 2014-02-07 | 2022-08-16 | Sony Corporation | Transmission device, transmission method, reception device, reception method, display device, and display method |
JP2021083127A (ja) * | 2014-02-07 | 2021-05-27 | Sony Group Corporation | Reception device and reception method |
JPWO2015118909A1 (ja) * | 2014-02-07 | 2017-03-23 | Sony Corporation | Transmission device, transmission method, reception device, reception method, display device, and display method |
WO2015118909A1 (ja) * | 2014-02-07 | 2015-08-13 | Sony Corporation | Transmission device, transmission method, reception device, reception method, display device, and display method |
US11882320B2 (en) | 2014-02-07 | 2024-01-23 | Sony Corporation | Transmission device, transmission method, reception device, reception method, display device, and display method |
US11595704B2 (en) | 2014-02-07 | 2023-02-28 | Sony Group Corporation | Transmission device, transmission method, reception device, reception method, display device, and display method |
US11290754B2 (en) | 2014-02-07 | 2022-03-29 | Sony Corporation | Transmission device, transmission method, reception device, reception method, display device, and display method |
JP7047952B2 (ja) | 2014-02-07 | 2022-04-05 | Sony Group Corporation | Reception device and reception method |
JP2020127223A (ja) * | 2014-02-07 | 2020-08-20 | Sony Corporation | Reception device, reception method, and display device |
US11323752B2 (en) | 2014-02-07 | 2022-05-03 | Sony Corporation | Transmission device, transmission method, reception device, reception method, display device, and display method |
RU2667153C2 (ru) * | 2014-02-07 | 2018-09-17 | Sony Corporation | Transmission device, transmission method, reception device, reception method, display device, and display method |
US20160345032A1 (en) | 2014-02-07 | 2016-11-24 | Sony Corporation | Transmission device, transmission method, reception device, reception method, display device, and display method |
JP7388465B2 (ja) | 2014-02-07 | 2023-11-29 | Sony Group Corporation | Processing device |
US10313709B2 (en) | 2014-02-07 | 2019-06-04 | Sony Corporation | Transmission device, transmission method, reception device, reception method, display device, and display method |
US11716493B2 (en) | 2014-02-07 | 2023-08-01 | Sony Group Corporation | Transmission device, transmission method, reception device, reception method, display device, and display method |
JP2019135871A (ja) * | 2014-02-07 | 2019-08-15 | Sony Corporation | Transmission device, transmission method, reception device, reception method, display device, and display method |
JPWO2015125719A1 (ja) * | 2014-02-21 | 2017-03-30 | Sony Corporation | Transmission device, transmission method, reception device, and reception method |
US10674181B2 (en) | 2014-02-21 | 2020-06-02 | Sony Corporation | Transmission device, transmission method, reception device, and reception method |
WO2015125719A1 (ja) * | 2014-02-21 | 2015-08-27 | Sony Corporation | Transmission device, transmission method, reception device, and reception method |
US11330303B2 (en) | 2014-02-21 | 2022-05-10 | Sony Corporation | Transmission device, transmission method, reception device, and reception method |
US10735771B2 (en) | 2014-02-21 | 2020-08-04 | Sony Corporation | Transmission device, transmission method, reception device, and reception method |
JP7429676B2 (ja) | 2014-02-25 | 2024-02-08 | Apple Inc. | Adaptive transfer function for video encoding and decoding |
JP2022009011A (ja) * | 2014-02-25 | 2022-01-14 | Apple Inc. | Adaptive transfer function for video encoding and decoding |
JP2017509057A (ja) * | 2014-02-26 | 2017-03-30 | Thomson Licensing | Method and apparatus for encoding and decoding HDR images |
US11727548B2 (en) | 2014-02-26 | 2023-08-15 | Interdigital Vc Holdings, Inc. | Method and apparatus for encoding and decoding HDR images |
JP2015192419A (ja) * | 2014-03-28 | 2015-11-02 | Sony Corporation | Reproduction device, reproduction method, and program |
JPWO2015190246A1 (ja) * | 2014-06-13 | 2017-04-20 | Sony Corporation | Transmission device, transmission method, reception device, and reception method |
US11418753B2 (en) | 2014-06-13 | 2022-08-16 | Sony Corporation | Transmission device, transmission method, reception device, and reception method |
WO2015190246A1 (ja) * | 2014-06-13 | 2015-12-17 | Sony Corporation | Transmission device, transmission method, reception device, and reception method |
JP5995032B1 (ja) * | 2014-06-26 | 2016-09-21 | Panasonic Intellectual Property Management Co., Ltd. | Data output device and data output method |
CN106165403A (zh) * | 2014-06-26 | 2016-11-23 | Panasonic Intellectual Property Management Co., Ltd. | Data output device, data output method, and data generation method |
US11140354B2 (en) | 2014-06-26 | 2021-10-05 | Panasonic Intellectual Property Management Co., Ltd. | Method for generating control information based on characteristic data included in metadata |
JP2016208499A (ja) * | 2014-06-26 | 2016-12-08 | Panasonic Intellectual Property Management Co., Ltd. | Data output device and data output method |
WO2015198553A1 (ja) * | 2014-06-26 | 2015-12-30 | Panasonic Intellectual Property Management Co., Ltd. | Data output device, data output method, and data generation method |
JP5906504B1 (ja) * | 2014-06-26 | 2016-04-20 | Panasonic Intellectual Property Management Co., Ltd. | Data output device, data output method, and data generation method |
US10666891B2 (en) | 2014-06-26 | 2020-05-26 | Panasonic Intellectual Property Management Co., Ltd. | Method for generating control information based on characteristic data included in metadata |
CN110708439A (zh) * | 2014-06-26 | 2020-01-17 | Panasonic Intellectual Property Management Co., Ltd. | Display device and data output method |
US10291874B2 (en) | 2014-06-26 | 2019-05-14 | Panasonic Intellectual Property Management Co., Ltd. | Method for generating control information based on characteristic data included in metadata |
CN106165403B (zh) * | 2014-06-26 | 2019-11-29 | Panasonic Intellectual Property Management Co., Ltd. | Data output device, data output method, and data generation method |
US10306175B2 (en) | 2014-06-26 | 2019-05-28 | Panasonic Intellectual Property Management Co., Ltd. | Method for generating control information based on characteristic data included in metadata |
US11310507B2 (en) | 2014-06-27 | 2022-04-19 | Panasonic Intellectual Property Management Co., Ltd. | Data output apparatus, data output method, and data generation method |
US10645390B2 (en) * | 2014-06-27 | 2020-05-05 | Panasonic Intellectual Property Management Co., Ltd. | Data output apparatus, data output method, and data generation method |
US11856200B2 (en) | 2014-06-27 | 2023-12-26 | Panasonic Intellectual Property Management Co., Ltd. | Data output apparatus, data output method, and data generation method |
US20190075296A1 (en) * | 2014-06-27 | 2019-03-07 | Panasonic Intellectual Property Management Co., Ltd. | Data output apparatus, data output method, and data generation method |
JP2016034125A (ja) * | 2014-07-28 | 2016-03-10 | Sony Corporation | Image processing device and image processing method |
US10565695B2 (en) | 2014-07-28 | 2020-02-18 | Sony Corporation | Apparatus and method for transmitting and receiving high dynamic range images |
US10832737B2 (en) | 2014-08-07 | 2020-11-10 | Panasonic Intellectual Property Corporation Of America | Playback device, playback method, and recording medium |
JP2020167724A (ja) * | 2014-08-07 | 2020-10-08 | Panasonic Intellectual Property Corporation of America | Decoder system and decoding method |
US11538502B2 (en) | 2014-08-07 | 2022-12-27 | Panasonic Intellectual Property Corporation Of America | Playback device, playback method, and recording medium |
JP6991279B2 (ja) | 2014-08-07 | 2022-01-12 | Panasonic Intellectual Property Corporation of America | Decoder system and decoding method |
US11929102B2 (en) | 2014-08-07 | 2024-03-12 | Panasonic Intellectual Property Corporation Of America | Playback device, playback method, and recording medium |
CN111276170A (zh) * | 2014-08-07 | 2020-06-12 | Panasonic Intellectual Property Corporation of America | Decoding system and decoding method |
WO2016021120A1 (ja) * | 2014-08-07 | 2016-02-11 | Panasonic Intellectual Property Corporation of America | Reproduction device, reproduction method, and recording medium |
CN105580081A (zh) * | 2014-08-07 | 2016-05-11 | Panasonic Intellectual Property Corporation of America | Reproduction device, reproduction method, and recording medium |
EP3944241A3 (en) * | 2014-08-07 | 2022-05-04 | Panasonic Intellectual Property Corporation of America | Decoding system and decoding method |
JP2019075819A (ja) * | 2014-08-07 | 2019-05-16 | Panasonic Intellectual Property Corporation of America | Decoder system and decoding method |
JP2016039627A (ja) * | 2014-08-07 | 2016-03-22 | Panasonic Intellectual Property Corporation of America | Reproduction device, reproduction method, and recording medium |
CN111276170B (zh) * | 2014-08-07 | 2021-09-07 | Panasonic Intellectual Property Corporation of America | Decoding system and decoding method |
US10255951B2 (en) | 2014-08-07 | 2019-04-09 | Panasonic Intellectual Property Corporation Of America | Playback device, playback method, and recording medium |
US11335382B2 (en) | 2014-08-07 | 2022-05-17 | Panasonic Intellectual Property Corporation Of America | Playback device, playback method, and recording medium |
US10791311B2 (en) | 2014-08-28 | 2020-09-29 | Sony Corporation | Transmitting apparatus, transmitting method, receiving apparatus, and receiving method |
JP2016048888A (ja) * | 2014-08-28 | 2016-04-07 | Sony Corporation | Transmission device, transmission method, reception device, and reception method |
US10225539B2 (en) | 2014-08-28 | 2019-03-05 | Sony Corporation | Transmitting apparatus, transmitting method, receiving apparatus, and receiving method |
US11272149B2 (en) | 2014-08-28 | 2022-03-08 | Sony Corporation | Transmitting apparatus, transmitting method, receiving apparatus, and receiving method |
CN105794216B (zh) * | 2014-09-08 | 2020-12-11 | Sony Corporation | Image processing device and image processing method |
JP2016058848A (ja) * | 2014-09-08 | 2016-04-21 | Sony Corporation | Image processing device and image processing method |
CN105794216A (zh) * | 2014-09-08 | 2016-07-20 | Sony Corporation | Image processing device and image processing method |
US10192294B2 (en) | 2014-09-08 | 2019-01-29 | Sony Corporation | Image processing apparatus and image processing method for display mapping |
CN111899769A (zh) * | 2014-09-12 | 2020-11-06 | Panasonic Intellectual Property Corporation of America | Non-transitory computer-readable medium, reproduction device, and reproduction method |
CN111899769B (zh) * | 2014-09-12 | 2022-07-05 | Panasonic Intellectual Property Corporation of America | Non-transitory computer-readable medium, reproduction device, and reproduction method |
JP2016082498A (ja) * | 2014-10-21 | 2016-05-16 | Mitsubishi Electric Corporation | Digital broadcast receiving device and method, program, and recording medium |
JP2020096392A (ja) * | 2014-12-03 | 2020-06-18 | Panasonic Intellectual Property Management Co., Ltd. | Data generation device |
JP2020127208A (ja) * | 2014-12-03 | 2020-08-20 | Panasonic Intellectual Property Management Co., Ltd. | Data generation device |
JP2020096393A (ja) * | 2014-12-03 | 2020-06-18 | Panasonic Intellectual Property Management Co., Ltd. | Data generation device |
JP2021036713A (ja) * | 2014-12-03 | 2021-03-04 | Panasonic Intellectual Property Management Co., Ltd. | Data generation method and decoding device |
JP2021036716A (ja) * | 2014-12-03 | 2021-03-04 | Panasonic Intellectual Property Management Co., Ltd. | Data generation method and decoding device |
JP2021036714A (ja) * | 2014-12-03 | 2021-03-04 | Panasonic Intellectual Property Management Co., Ltd. | Data generation method and decoding device |
JP2021036715A (ja) * | 2014-12-03 | 2021-03-04 | Panasonic Intellectual Property Management Co., Ltd. | Data generation method and decoding device |
JP2020102884A (ja) * | 2014-12-03 | 2020-07-02 | Panasonic Intellectual Property Management Co., Ltd. | Data generation device |
JP2020025277A (ja) * | 2014-12-03 | 2020-02-13 | Panasonic Intellectual Property Management Co., Ltd. | Data encoding method, data decoding method, data encoding device, and data decoding device |
JP2016122985A (ja) * | 2014-12-25 | 2016-07-07 | Toshiba Corporation | Image processing system, image processing device, and image processing method |
JP7384234B2 (ja) | 2014-12-29 | 2023-11-21 | Sony Group Corporation | Transmission device, transmission method, reception device, and reception method |
CN107113457A (zh) * | 2014-12-29 | 2017-08-29 | Sony Corporation | Transmission device, transmission method, reception device, and reception method |
WO2016108268A1 (ja) * | 2014-12-29 | 2016-07-07 | Sony Corporation | Transmission device, transmission method, reception device, and reception method |
JPWO2016108268A1 (ja) * | 2014-12-29 | 2017-10-12 | Sony Corporation | Transmission device, transmission method, reception device, and reception method |
JP7067652B2 (ja) | 2014-12-29 | 2022-05-16 | Sony Group Corporation | Transmission device, transmission method, reception device, and reception method |
US10609327B2 (en) | 2014-12-29 | 2020-03-31 | Sony Corporation | Transmission device, transmission method, reception device, and reception method |
JP2022097593A (ja) * | 2014-12-29 | 2022-06-30 | Sony Group Corporation | Transmission device, transmission method, reception device, and reception method |
JP2021103898A (ja) * | 2014-12-29 | 2021-07-15 | Sony Group Corporation | Transmission device, transmission method, reception device, and reception method |
US11394920B2 (en) | 2014-12-29 | 2022-07-19 | Sony Corporation | Transmission device, transmission method, reception device, and reception method |
JP2018530281A (ja) * | 2015-09-23 | 2018-10-11 | Arris Enterprises LLC | System and method for reshaping and adaptation of high dynamic range video data |
JP7066786B2 (ja) | 2015-09-23 | 2022-05-13 | Arris Enterprises LLC | Signaling of high dynamic range and wide color gamut content in transport streams |
JP2018530237A (ja) * | 2015-09-23 | 2018-10-11 | Arris Enterprises LLC | Signaling of high dynamic range and wide color gamut content in transport streams |
US11146807B2 (en) | 2015-09-23 | 2021-10-12 | Arris Enterprises Llc | Signaling high dynamic range and wide color gamut content in transport streams |
JP2018530282A (ja) * | 2015-09-23 | 2018-10-11 | Arris Enterprises LLC | System for reshaping and encoding of high dynamic range and wide color gamut sequences |
US11695947B2 (en) | 2015-09-23 | 2023-07-04 | Arris Enterprises Llc | Signaling high dynamic range and wide color gamut content in transport streams |
US10869053B2 (en) | 2015-09-23 | 2020-12-15 | Arris Enterprises Llc | Signaling high dynamic range and wide color gamut content in transport streams |
JP2020182233A (ja) * | 2015-09-23 | 2020-11-05 | Arris Enterprises LLC | Signaling of high dynamic range and wide color gamut content in transport streams |
US10582174B2 (en) | 2015-10-30 | 2020-03-03 | Canon Kabushiki Kaisha | Video processing apparatus, video processing method, and medium |
JP2017085481A (ja) * | 2015-10-30 | 2017-05-18 | Canon Inc. | Video processing device, video processing method, and video processing program |
JP2017143546A (ja) * | 2017-03-21 | 2017-08-17 | Sony Corporation | Reproduction device, recording medium, display device, and information processing method |
JP2018198458A (ja) * | 2018-08-28 | 2018-12-13 | Sony Corporation | Reproduction device, display device, information processing method, and recording medium |
JP2021192558A (ja) * | 2020-01-16 | 2021-12-16 | Sony Group Corporation | Transmission device, transmission method, reception device, and reception method |
JP7205590B2 (ja) | 2020-01-16 | 2023-01-17 | Sony Group Corporation | Transmission device, transmission method, reception device, and reception method |
Also Published As
Publication number | Publication date |
---|---|
EP3512203A1 (en) | 2019-07-17 |
JP6580648B2 (ja) | 2019-09-25 |
MX2014015818A (es) | 2015-03-05 |
JP6219823B2 (ja) | 2017-10-25 |
EP2869558A1 (en) | 2015-05-06 |
CN108965893A (zh) | 2018-12-07 |
KR20150024846A (ko) | 2015-03-09 |
JP2018023154A (ja) | 2018-02-08 |
US20150131904A1 (en) | 2015-05-14 |
CN104380738A (zh) | 2015-02-25 |
CN104380738B (zh) | 2018-10-16 |
RU2014152106A (ru) | 2016-07-10 |
TW201414312A (zh) | 2014-04-01 |
AR091515A1 (es) | 2015-02-11 |
EP2869558A4 (en) | 2016-01-20 |
CA2875199A1 (en) | 2014-01-03 |
RU2653464C2 (ru) | 2018-05-08 |
BR112014032224A2 (pt) | 2017-08-01 |
EP3512203B1 (en) | 2023-10-25 |
JPWO2014002901A1 (ja) | 2016-05-30 |
TWI586150B (zh) | 2017-06-01 |
KR102161017B1 (ko) | 2020-09-29 |
US20150010059A1 (en) | 2015-01-08 |
CN108965893B (zh) | 2021-10-01 |
BR112014032224B1 (pt) | 2022-12-06 |
AU2013281949B2 (en) | 2017-05-11 |
CA2875199C (en) | 2021-02-16 |
EP2869558B1 (en) | 2019-04-10 |
AU2013281949A2 (en) | 2015-01-22 |
AU2013281949A1 (en) | 2015-01-22 |
MX344551B (es) | 2016-12-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6580648B2 (ja) | Image processing device and recording medium | |
JP6358475B2 (ja) | Image decoding device and method, and image encoding device and method | |
WO2014002896A1 (ja) | Encoding device and encoding method, decoding device and decoding method | |
JP6287035B2 (ja) | Decoding device and decoding method | |
WO2015137145A1 (ja) | Image encoding device and method, and image decoding device and method | |
WO2015053116A1 (ja) | Decoding device and decoding method, and encoding device and encoding method | |
JPWO2015105003A1 (ja) | Decoding device and decoding method, and encoding device and encoding method | |
JP2015005899A (ja) | Decoding device and decoding method, and encoding device and encoding method | |
KR102338766B1 (ko) | Image encoding device and method, and recording medium | |
JP6477930B2 (ja) | Encoding device and encoding method | |
WO2014050732A1 (ja) | Encoding device and encoding method, and decoding device and decoding method | |
JPWO2014002900A1 (ja) | Image processing device and image processing method | |
JP6402802B2 (ja) | Image processing device and method, program, and recording medium | |
WO2014103765A1 (ja) | Decoding device and decoding method, and encoding device and encoding method | |
JP6341067B2 (ja) | Image processing device and method | |
JP2015050738A (ja) | Decoding device and decoding method, and encoding device and encoding method | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13809522 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2875199 Country of ref document: CA |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2013809522 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2014522599 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: MX/A/2014/015818 Country of ref document: MX |
|
WWE | Wipo information: entry into national phase |
Ref document number: IDP00201408065 Country of ref document: ID |
|
ENP | Entry into the national phase |
Ref document number: 20147035887 Country of ref document: KR Kind code of ref document: A Ref document number: 2014152106 Country of ref document: RU Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2013281949 Country of ref document: AU Date of ref document: 20130621 Kind code of ref document: A |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112014032224 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 112014032224 Country of ref document: BR Kind code of ref document: A2 Effective date: 20141222 |