WO2018062022A1 - Adjustment device, adjustment method, and program - Google Patents


Info

Publication number
WO2018062022A1
WO2018062022A1 (PCT/JP2017/034235)
Authority
WO
WIPO (PCT)
Prior art keywords
video signal
signal
video
conversion unit
linear
Prior art date
Application number
PCT/JP2017/034235
Other languages
English (en)
Japanese (ja)
Inventor
Kazuhiko Kono
Toshiya Noritake
Original Assignee
Panasonic Intellectual Property Management Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panasonic Intellectual Property Management Co., Ltd.
Priority to US16/335,703 priority Critical patent/US20200035198A1/en
Priority to JP2018542509A priority patent/JP6872693B2/ja
Priority to EP17855976.1A priority patent/EP3522514A4/fr
Publication of WO2018062022A1 publication Critical patent/WO2018062022A1/fr

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/10Intensity circuits
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/36Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/436Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/20Circuitry for controlling amplitude response
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00Control of display operating conditions
    • G09G2320/02Improving the quality of display appearance
    • G09G2320/0271Adjustment of the gradation levels within the range of the gradation scale, e.g. by redistribution or clipping
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/06Colour space transformation
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00Aspects of the architecture of display systems
    • G09G2360/16Calculation or use of calculated indices related to luminance levels in display data

Definitions

  • the present disclosure relates to an adjustment device, an adjustment method, and a program for adjusting a video signal.
  • Patent Document 1 discloses an image signal processing device for improving a displayable luminance level.
  • the present disclosure provides an adjustment device and an adjustment method capable of effectively adjusting the luminance of an image.
  • An adjustment apparatus according to the present disclosure includes: an acquisition unit that acquires a non-linear first video signal generated by using a first OETF (Opto-Electronic Transfer Function); a conversion unit that performs a conversion process in which (i) the first video signal acquired by the acquisition unit is converted into a linear signal by using an inverse characteristic of the first OETF, (ii) an adjustment process including a gain change that makes the relationship between the input value and the output value linear is performed on the linear signal, and (iii) the adjusted linear signal obtained by the adjustment process is converted into a non-linear second video signal by using a second OETF corresponding to a predetermined format; and an output unit that outputs the second video signal obtained by the conversion.
  • With this, the luminance of the video can be adjusted effectively.
  • FIG. 1 is a diagram schematically illustrating an example of a configuration of an AV (Audio Visual) system according to the first embodiment.
  • FIG. 2A is a diagram illustrating an example of an EOTF (Electro-Optical Transfer Function) corresponding to each of HDR (High Dynamic Range) and SDR (Standard Dynamic Range).
  • FIG. 2B is a diagram illustrating an example of an OETF (Opto-Electronic Transfer Function) corresponding to each of HDR and SDR.
  • FIG. 3 is a block diagram schematically illustrating an example of a functional configuration of the adjustment device according to the first embodiment.
  • FIG. 4 is a schematic diagram for explaining conversion processing by the first conversion unit, the second conversion unit, and the third conversion unit of the conversion unit in the first embodiment.
  • FIG. 5 is a diagram illustrating combinations of characteristics of the HDR signal input to the adjustment apparatus and characteristics of the HDR signal output from the adjustment apparatus in the first embodiment.
  • FIG. 6 is a flowchart illustrating an example of the operation (adjustment method) of the adjustment device according to the first embodiment.
  • FIG. 7 is a flowchart illustrating an example of the conversion process according to the first embodiment.
  • FIG. 8 is a diagram for explaining the problem.
  • FIG. 9 is a diagram illustrating an example of the adjustment process according to the first embodiment.
  • FIG. 10 is a block diagram schematically illustrating an example of a functional configuration of the adjustment device according to the second embodiment.
  • FIG. 11 is a flowchart illustrating an example of the operation (adjustment method) of the adjustment device according to the second embodiment.
  • FIG. 12 is a block diagram schematically illustrating an example of a functional configuration of the adjustment device according to the third embodiment.
  • FIG. 13 is a flowchart illustrating an example of the conversion process according to the third embodiment.
  • FIG. 14 is a block diagram schematically illustrating an example of a functional configuration of the adjustment device according to the fourth embodiment.
  • FIG. 15 is a flowchart illustrating an example of the operation (adjustment method) of the adjustment device according to the fourth embodiment.
  • FIG. 16 is a block diagram schematically illustrating an example of a functional configuration of the adjustment device according to the fifth embodiment.
  • In the image signal processing device of Patent Document 1, linear luminance is calculated for each pixel based on linear RGB values calculated from the pixels constituting the subject image. Then, based on the linear RGB values and the linear luminance, a corrected linear luminance for each pixel and a corrected linear RGB value of a combined pixel obtained by combining a plurality of pixels including that pixel are calculated. Then, the display luminance and the display RGB values are calculated by performing gamma correction on the corrected linear luminance and the corrected linear RGB values, respectively. In this way, the image signal processing device improves the displayable luminance level by correcting the linear luminance based on the corrected linear RGB values.
  • As described in Patent Document 1, HDR video corresponds to HDR, which has a wider dynamic range than the conventional dynamic range (SDR). Because HDR video must express a vast dynamic range, high performance and new know-how are required both for content production and for display on a display device. For this reason, there are many problems in displaying HDR video.
  • Depending on the display device, the appearance of displayed HDR video may vary widely and the same content may look different. For this reason, a playback device that plays back HDR video is required to have a function of appropriately adjusting such display variation of the HDR video.
  • HDR video is produced using newly defined transfer characteristics such as PQ (Perceptual Quantization) or HLG (Hybrid Log-Gamma), that is, the inverse characteristic of a newly defined EOTF (Electro-Optical Transfer Function) or a newly defined OETF (Opto-Electronic Transfer Function). For this reason, it is difficult to effectively adjust the brightness of HDR video even if the image quality adjustment used for conventional SDR video is applied.
  • The PQ curve is an EOTF defined in the SMPTE ST 2084 standard.
  • HLG is an OETF defined in the ARIB STD-B67 standard, the ITU-R BT.2100 standard, and the like, and is compatible with the OETF corresponding to SDR in the dark part (low luminance region).
  • FIG. 8 is a diagram for explaining the problem.
  • the luminance of the video is defined by the relative luminance reference (relative value).
  • the SDR signal is created with reference to the maximum luminance of the video of about 100 nits during video production.
  • the maximum luminance that can be displayed by an SDR display device that supports SDR video display is often more than 100 nits (for example, 400 nits).
  • The SDR display device can therefore display the SDR video with its luminance increased, up to the display device's maximum luminance, while maintaining the relative relationship between luminances, according to the display mode. For example, as shown in FIG. 8, suppose that a first display mode displays 100 nits, the maximum luminance of the SDR video, as 100 nits as it is, and that a second display mode displays the SDR video with its maximum luminance expanded to 400 nits. In the SDR display device, by switching from the first display mode to the second display mode, the SDR video can be displayed with increased brightness.
  • Because the SDR signal is a video signal defined on the basis of relative luminance, the SDR display device can brighten the displayed SDR video simply by switching the display mode as shown in FIG. 8. That is, in the SDR signal, the luminance can easily be adjusted by multiplying each luminance by a constant (K times). The display-mode switching function can thus easily display the SDR video with increased brightness, as shown in FIG. 8.
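  • This relative, constant-multiple adjustment can be sketched as follows (the 100-nit and 400-nit peaks follow the example above; the sample values are illustrative):

```python
# Display-mode switch for a relative (SDR) signal: every luminance is
# scaled by the same constant K, so relative relationships are preserved.
def apply_display_mode(relative_luminances, peak_nits):
    # relative_luminances: values in [0.0, 1.0], where 1.0 is the
    # mastering reference (about 100 nits for SDR).
    return [v * peak_nits for v in relative_luminances]

signal = [0.0, 0.25, 0.5, 1.0]
mode1 = apply_display_mode(signal, 100)  # first display mode: 100-nit peak
mode2 = apply_display_mode(signal, 400)  # second display mode: 400-nit peak (K = 4)
```

Because every sample is multiplied by the same constant, the ratio between any two luminances is identical in both modes; only the absolute brightness changes.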
  • On the other hand, the HDR signal is a video signal whose maximum luminance value is 10,000 nits, and its luminance is defined on an absolute luminance basis (absolute values).
  • However, the luminance that an HDR display device can display is often only about 1,000 nits at most, and most HDR display devices cannot display the full range of luminance up to the maximum value of 10,000 nits. That is, when an HDR display device displays the video of the HDR signal in the full range, the luminance saturates in bright regions where the video's luminance exceeds the display device's maximum displayable luminance, and the video ends up being displayed in a distorted state.
  • Since many HDR display devices have a maximum displayable luminance of about 1,000 nits, it is difficult for them to display the maximum luminance value of the HDR signal's video (10,000 nits), and it is likewise difficult to adjust the HDR video to be brighter. For this reason, the HDR video displayed on the HDR display device may appear darker, as shown in FIG. 8, than SDR video displayed from an SDR signal after switching to the second display mode.
  • Furthermore, the HDR signal has a steeper curve representing the correspondence between code value and luminance than the SDR signal. Therefore, when each luminance is multiplied by a coefficient in order to adjust the luminance of the HDR signal, the luminance changes greatly in the bright part (high luminance region), as ΔL1 shown in (a) of FIG. 2A, while in the dark part (low luminance region) the change in luminance is small compared to ΔL1, as ΔL2 shown in (a) of FIG. 2A. In FIG. 2A, ΔL2 is shown enlarged, and even enlarged it can be seen that ΔL2 is smaller than ΔL1.
  • As a result, when the luminance of the HDR signal is adjusted in this way, the luminance change varies between the dark part and the bright part, and the video ends up being distorted.
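  • The imbalance between ΔL1 and ΔL2 can be reproduced numerically with the PQ EOTF (constants from SMPTE ST 2084); the code values 0.75 and 0.25 and the coefficient 1.05 below are illustrative choices, not values taken from the figure:

```python
# Scaling PQ code values by the same coefficient moves the bright part far
# more (in nits) than the dark part, because the PQ curve is so steep.
# PQ EOTF constants from SMPTE ST 2084.
m1, m2 = 2610 / 16384, 2523 / 4096 * 128
c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_eotf(e):
    # e: normalized non-linear code value in [0, 1] -> luminance in nits.
    p = max(e ** (1 / m2) - c1, 0.0) / (c2 - c3 * e ** (1 / m2))
    return 10000.0 * p ** (1 / m1)

k = 1.05                   # coefficient applied directly to the code value
bright, dark = 0.75, 0.25  # illustrative non-linear signal levels
delta_bright = pq_eotf(bright * k) - pq_eotf(bright)  # corresponds to ΔL1
delta_dark = pq_eotf(dark * k) - pq_eotf(dark)        # corresponds to ΔL2
```

With these values, `delta_bright` is on the order of hundreds of nits while `delta_dark` is only about one nit, which is exactly the distortion the passage describes.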
  • In view of the above, the present disclosure provides an adjustment device, an adjustment method, and a program that can effectively adjust an HDR video signal representing HDR video produced using an HDR OETF having a PQ characteristic, an HLG characteristic, or the like.
  • FIG. 1 is a diagram schematically illustrating an example of a configuration of an AV system 1 according to the first embodiment.
  • the AV system 1 includes an adjustment device 100 and a display device 200.
  • the adjustment device 100 and the display device 200 are connected to each other via a cable 300 compatible with a digital interface (for example, HDMI (registered trademark): High-Definition Multimedia Interface) so that they can communicate with each other.
  • the adjustment device 100 and the display device 200 may be connected to each other using a digital interface, and the connection form may be wired or wireless.
  • the adjustment device 100 is a source device, for example, a playback device that plays back video data recorded on an optical disc (for example, Ultra HD (High Definition) Blu-ray (registered trademark) player).
  • Instead of an Ultra HD Blu-ray (registered trademark) player, the adjustment device 100 may be an STB (Set Top Box) that receives HDR-compatible video data distributed through a network by VOD (Video On Demand) and provides the received video data to the display device 200.
  • The adjustment device 100 acquires first video data from the optical disc or the VOD, converts the first video signal included in the acquired first video data into a second video signal, and outputs the second video signal obtained by the conversion to the display device 200 via the cable 300.
  • the display device 200 is a sink device and is, for example, a TV capable of displaying HDR video (hereinafter referred to as “HDR TV”).
  • the display device 200 acquires the second video data via the cable 300, and displays the video (HDR video) of the second video signal included in the acquired second video data.
  • FIG. 2A is a diagram showing an example of EOTF corresponding to each of HDR and SDR.
  • (a) of FIG. 2A shows an example of the EOTF corresponding to HDR (PQ curve), and (b) of FIG. 2A shows an example of the EOTF corresponding to SDR (gamma curve).
  • EOTF indicates the correspondence between code values and luminance values, and converts code values into luminance values. That is, EOTF is relationship information indicating a correspondence relationship between a plurality of code values and luminance values.
  • FIG. 2B is a diagram showing an example of OETF corresponding to each of HDR and SDR.
  • (a) of FIG. 2B shows an example of the OETF corresponding to HDR (the inverse characteristic of the PQ curve), and (b) of FIG. 2B shows an example of the OETF corresponding to SDR (the inverse characteristic of the gamma curve).
  • OETF indicates the correspondence between luminance values and code values, and, contrary to EOTF, converts luminance values into code values. That is, OETF is relationship information indicating a correspondence relationship between a luminance value and a plurality of code values. For example, when the luminance value of video corresponding to HDR is expressed by a 10-bit gradation code value, the luminance values in the HDR luminance range up to 10,000 nits are quantized and mapped to the 1024 integer values from 0 to 1023.
  • That is, the luminance value is quantized based on the OETF, whereby the luminance value in the luminance range up to 10,000 nits (the luminance value of video corresponding to HDR) is converted into an HDR signal, which is a 10-bit code value.
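  • The quantization described above can be sketched with the PQ reference formulas (constants from SMPTE ST 2084); the full-range mapping to integer codes 0–1023 is assumed here for illustration:

```python
# Quantizing absolute luminance (nits) to a 10-bit PQ code value.
# Constants from SMPTE ST 2084.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_oetf(nits):
    # Inverse EOTF of PQ: absolute luminance -> normalized non-linear value.
    y = min(max(nits / 10000.0, 0.0), 1.0)
    return ((C1 + C2 * y ** M1) / (1 + C3 * y ** M1)) ** M2

def to_10bit_code(nits):
    # Full-range mapping to the 1024 integer code values 0..1023.
    return round(pq_oetf(nits) * 1023)
```

The mapping is monotonic, so brighter luminance always yields a larger code value, with 10,000 nits mapping to the top code 1023.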
  • An EOTF corresponding to HDR is hereinafter referred to as "HDR EOTF", an OETF corresponding to HDR as "HDR OETF", an EOTF corresponding to SDR as "SDR EOTF", and an OETF corresponding to SDR as "SDR OETF".
  • the maximum value (peak) of HDR luminance (Luminance) is 10,000 nits. That is, HDR includes all SDR luminances, and the maximum HDR luminance value is larger than the maximum SDR luminance value.
  • HDR is a dynamic range in which the maximum luminance value is expanded from 100 nits, the maximum value of SDR luminance, to the maximum value of HDR luminance (for example, 10,000 nits).
  • the first video signal in the present embodiment is an HDR signal corresponding to HDR.
  • The video after grading is converted into a non-linear first video signal by using the HDR OETF (see (a) of FIG. 2B), and a video stream is generated by performing image coding or the like based on the first video signal.
  • The display device 200 converts the decoding result of the video stream into a linear signal by using the HDR EOTF (see (a) of FIG. 2A), and displays the HDR video based on that signal.
  • FIG. 3 is a block diagram schematically illustrating an example of a functional configuration of the adjustment device 100 according to the first embodiment.
  • the adjustment device 100 includes an acquisition unit 110, a YUV-RGB conversion unit 120, a conversion unit 130, an RGB-YUV conversion unit 140, and an output unit 150.
  • the acquisition unit 110 acquires first video data including a non-linear first video signal generated by the first OETF.
  • the first video signal is an HDR video signal.
  • The acquisition unit 110 may acquire the first video data by playing back the first video data recorded on an optical disc, or by receiving first video data corresponding to HDR distributed via a network or broadcast waves.
  • the first video data may include first metadata indicating the characteristics of the first video signal. That is, the acquisition unit 110 may acquire the first metadata together with the first video signal.
  • the acquisition unit 110 may be realized by, for example, an optical disc drive that reads and reproduces data recorded on an optical disc, or is connected to a content provider via a network such as the Internet. It may be realized by a communication interface.
  • the acquisition unit 110 may be realized by a tuner or the like that receives broadcast waves.
  • the YUV-RGB converter 120 converts a video signal composed of YUV signals into RGB signals.
  • That is, the YUV-RGB conversion unit 120 converts the first video signal from a YUV signal into an RGB signal, and outputs the first R signal, the first G signal, and the first B signal that constitute the first video signal.
  • the YUV-RGB conversion unit 120 may be realized by a processor that executes a program and a memory in which the program is recorded, or a dedicated circuit (for example, an IC (Integrated Circuit) or an LSI (Large-Scale Integration)). May be realized by a circuit including
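  • The YUV-to-RGB conversion is a standard matrix operation. The sketch below uses the BT.2020 non-constant-luminance coefficients (Kr = 0.2627, Kb = 0.0593) commonly paired with HDR video; the text itself does not fix which matrix the YUV-RGB conversion unit 120 uses, so this is an assumed, illustrative choice:

```python
# YCbCr -> RGB using BT.2020 non-constant-luminance coefficients.
# The specific matrix is an assumption; the patent text does not name one.
KR, KB = 0.2627, 0.0593
KG = 1.0 - KR - KB

def yuv_to_rgb(y, cb, cr):
    # y in [0, 1]; cb, cr centered on 0 in [-0.5, 0.5] (normalized, full range).
    r = y + 2.0 * (1.0 - KR) * cr
    b = y + 2.0 * (1.0 - KB) * cb
    g = (y - KR * r - KB * b) / KG
    return r, g, b
```

By construction, recombining the resulting R, G, and B with the same coefficients reproduces the original luma Y, and a zero-chroma input yields a neutral gray (R = G = B = Y).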
  • the conversion unit 130 performs conversion processing for converting the first video signal included in the first video data acquired by the acquisition unit 110 into a non-linear second video signal.
  • The second video signal is a signal obtained by (i) converting the first video signal into a linear signal by using the inverse characteristic of the first OETF, (ii) performing on the linear signal an adjustment process including a gain change that makes the relationship between the input value and the output value linear, and (iii) converting the adjusted linear signal obtained by the adjustment process by using a second OETF corresponding to a predetermined format.
  • The conversion unit 130 performs the conversion process on each of the first R signal, the first G signal, and the first B signal obtained by converting the first video signal into an RGB signal in the YUV-RGB conversion unit 120, thereby converting the first video signal into a second video signal composed of the second R signal, the second G signal, and the second B signal.
  • the conversion unit 130 includes a first conversion unit 131, a second conversion unit 132, and a third conversion unit 133.
  • FIG. 4 is a schematic diagram for explaining the conversion process by the first conversion unit 131, the second conversion unit 132, and the third conversion unit 133 of the conversion unit 130 in the first embodiment.
  • the first conversion unit 131 converts the first video signal acquired by the acquisition unit 110 into a linear signal using the inverse characteristic of the first OETF.
  • Specifically, the first conversion unit 131 includes a first R signal conversion unit 131R, a first G signal conversion unit 131G, and a first B signal conversion unit 131B, which convert the first R signal, the first G signal, and the first B signal into linear signals, respectively, using the inverse characteristic of the first OETF.
  • the second conversion unit 132 performs adjustment processing on the linear signal obtained by the first conversion unit 131, as shown in FIG.
  • The adjustment process is a process of performing a gain change that keeps the relationship between the input value and the output value linear with respect to the linear signal obtained by the first conversion unit 131. That is, in the adjustment process, the gain is changed by multiplying the input value by A (A > 1) or by B (B < 1). In the adjustment process, the gain is changed for all linear signals.
  • The second conversion unit 132 performs the adjustment process on each of the linear signals obtained by converting the first R signal, the first G signal, and the first B signal using the inverse characteristic of the first OETF.
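  • The gain change of the adjustment process can be sketched as a single multiplication applied to every linear sample. The clamp to [0, 1] below is an assumption for illustration; the text does not specify how out-of-range values are handled:

```python
# Adjustment process of the second conversion unit: one gain (A > 1 to
# brighten, B < 1 to darken) applied to every linear sample, keeping the
# input/output relationship linear. Clamping to [0, 1] is an assumption.
def adjust_linear(linear_samples, gain):
    return [min(max(v * gain, 0.0), 1.0) for v in linear_samples]

adjusted = adjust_linear([0.1, 0.2, 0.4], 2.0)  # A = 2 brightens every sample
```

Because the same gain is applied to every sample, the relative relationship between dark and bright parts of the linear signal is preserved, which is what avoids the uneven ΔL1/ΔL2 behavior of adjusting the non-linear signal directly.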
  • The third conversion unit 133 uses the second OETF to convert the adjusted linear signal, obtained by performing the adjustment process in the second conversion unit 132, into the second video signal.
  • Specifically, the third conversion unit 133 includes a third R signal conversion unit 133R, a third G signal conversion unit 133G, and a third B signal conversion unit 133B, which convert the R, G, and B components of the adjusted linear signal composed of RGB signals using the second OETF.
  • the third R signal conversion unit 133R outputs the second R signal
  • the third G signal conversion unit 133G outputs the second G signal
  • the third B signal conversion unit 133B outputs the second B signal.
  • the second R signal, the second G signal, and the second B signal output from the third conversion unit 133 are signals that constitute the second video signal.
  • the first video signal input to the conversion unit 130 may be an HDR signal having a PQ characteristic or an HDR signal having an HLG characteristic.
  • When the first video signal is an HDR signal having the PQ characteristic, the first OETF is the inverse characteristic of the PQ EOTF. Therefore, the first conversion unit 131 converts the non-linear first video signal into a linear signal by using the PQ EOTF as the inverse characteristic of the first OETF.
  • When the first video signal is an HDR signal having the HLG characteristic, the first OETF is the HLG OETF. Therefore, the first conversion unit 131 converts the non-linear first video signal into a linear signal by using the inverse characteristic of the HLG OETF as the inverse characteristic of the first OETF.
  • the second video signal output from the conversion unit 130 may be an HDR signal having a PQ characteristic or an HDR signal having an HLG characteristic.
  • the conversion unit 130 may output an HDR signal having characteristics that the display device 200 supports.
  • When outputting an HDR signal having the PQ characteristic as the second video signal, the third conversion unit 133 generates the non-linear second video signal by using the inverse characteristic of the PQ EOTF as the second OETF.
  • When outputting an HDR signal having the HLG characteristic as the second video signal, the third conversion unit 133 generates the non-linear second video signal by using the HLG OETF as the second OETF. That is, in the present embodiment, the predetermined format may be PQ or HLG, and the second OETF corresponding to the predetermined format may be the inverse characteristic of the PQ EOTF or the HLG OETF.
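  • For reference, the HLG OETF that the third conversion unit would use in the HLG case is defined in ARIB STD-B67 / ITU-R BT.2100; a sketch of the reference formula:

```python
import math

# HLG OETF per ARIB STD-B67 / ITU-R BT.2100: scene-linear signal E in [0, 1]
# -> non-linear signal E' in [0, 1]. Square-root segment below 1/12 makes it
# compatible with an SDR-like OETF in the dark part.
A_HLG, B_HLG, C_HLG = 0.17883277, 0.28466892, 0.55991073

def hlg_oetf(e):
    if e <= 1.0 / 12.0:
        return math.sqrt(3.0 * e)
    return A_HLG * math.log(12.0 * e - B_HLG) + C_HLG
```

The two segments meet at E = 1/12 (where E' = 0.5), and the logarithmic segment reaches E' = 1 at E = 1, so the curve is continuous over the whole input range.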
  • FIG. 5 is a diagram illustrating a combination of the characteristics of the HDR signal input to the adjustment apparatus 100 and the characteristics of the HDR signal output from the adjustment apparatus 100 in the first embodiment.
  • the conversion unit 130 may be realized by, for example, a processor that executes a program and a memory in which the program is recorded, or may be realized by a dedicated circuit (for example, a circuit including an IC or an LSI).
  • the RGB-YUV converter 140 converts a video signal composed of RGB signals into a YUV signal.
  • the RGB-YUV conversion unit 140 converts the second video signal output from the third conversion unit 133 from an RGB signal to a YUV signal. Thereby, the RGB-YUV conversion unit 140 outputs a second video signal composed of the YUV signal obtained by the conversion.
  • The RGB-YUV conversion unit 140 may be realized by, for example, a processor that executes a program and a memory in which the program is recorded, or may be realized by a dedicated circuit (for example, a circuit including an IC or an LSI).
  • the output unit 150 outputs the second video signal obtained by the conversion in the RGB-YUV conversion unit 140.
  • output unit 150 outputs second video data including the second video signal and first metadata.
  • the output unit 150 may be realized by a digital interface, for example.
  • FIG. 6 is a flowchart illustrating an example of the operation (adjustment method) of the adjustment apparatus 100 according to the first embodiment.
  • the acquisition unit 110 acquires the first video signal (step S101).
  • the YUV-RGB conversion unit 120 converts the first video signal acquired by the acquisition unit 110 from a YUV signal to an RGB signal (step S102).
  • Next, the conversion unit 130 performs the conversion process on each of the first R signal, the first G signal, and the first B signal constituting the first video signal, which are obtained by the conversion into RGB signals in the YUV-RGB conversion unit 120, thereby converting the first video signal into a second video signal composed of the second R signal, the second G signal, and the second B signal (step S103).
  • FIG. 7 is a flowchart showing an example of the conversion process in the first embodiment.
  • the first conversion unit 131 of the conversion unit 130 converts the first video signal into a linear signal using the inverse characteristic of the first OETF (step S111).
  • the second conversion unit 132 of the conversion unit 130 performs adjustment processing on the linear signal obtained by the first conversion unit 131 (step S112).
  • the third conversion unit 133 of the conversion unit 130 converts the adjusted linear signal obtained by performing the adjustment process in the second conversion unit 132 into a non-linear second video signal using the second OETF. (Step S113).
  • In this way, the conversion unit 130 obtains the second video signal composed of RGB signals by performing the conversion process of steps S111 to S113 on the first video signal composed of RGB signals.
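  • Steps S111 to S113 can be sketched end to end for the PQ-in / PQ-out combination of FIG. 5 (one color channel, normalized luminance; PQ constants from SMPTE ST 2084; the gain value and the clamp at 1.0 are illustrative assumptions):

```python
# Steps S111-S113 for one channel, PQ in / PQ out:
#   S111: inverse of the first OETF (here the PQ EOTF) -> linear signal
#   S112: adjustment process (gain change on the linear signal)
#   S113: second OETF (here the inverse of the PQ EOTF) -> second video signal
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_eotf(e):  # non-linear [0, 1] -> normalized linear luminance [0, 1]
    p = max(e ** (1 / M2) - C1, 0.0) / (C2 - C3 * e ** (1 / M2))
    return p ** (1 / M1)

def pq_inverse_eotf(y):  # normalized linear luminance [0, 1] -> non-linear [0, 1]
    return ((C1 + C2 * y ** M1) / (1 + C3 * y ** M1)) ** M2

def convert(first_signal, gain):
    linear = [pq_eotf(e) for e in first_signal]       # step S111
    adjusted = [min(v * gain, 1.0) for v in linear]   # step S112 (clamp assumed)
    return [pq_inverse_eotf(v) for v in adjusted]     # step S113

second_signal = convert([0.2, 0.5, 0.8], gain=1.5)
```

With gain = 1 the pipeline is an identity (inverse OETF followed by the same OETF), and with gain > 1 every output code value is raised, which is the balanced brightening described above.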
  • the RGB-YUV conversion unit 140 converts the second video signal from the RGB signal to the YUV signal (step S104).
  • the output unit 150 outputs the second video signal obtained by the conversion in the RGB-YUV conversion unit 140 (step S105).
  • As described above, the adjustment device in the present embodiment includes: an acquisition unit that acquires a non-linear first video signal generated by using a first OETF; a conversion unit that performs a conversion process in which (i) the first video signal acquired by the acquisition unit is converted into a linear signal by using the inverse characteristic of the first OETF, (ii) an adjustment process including a gain change that makes the relationship between the input value and the output value linear is performed on the linear signal, and (iii) the adjusted linear signal obtained by the adjustment process is converted into a non-linear second video signal by using a second OETF corresponding to a predetermined format; and an output unit that outputs the second video signal obtained by the conversion in the conversion unit.
  • the adjustment method in the present embodiment is an adjustment method performed by an adjustment device that converts a non-linear first video signal generated by the first OETF into a second video signal and outputs the second video signal.
  • In the adjustment method, the first video signal is acquired; (i) the acquired first video signal is converted into a linear signal by using the inverse characteristic of the first OETF; (ii) an adjustment process including a gain change that makes the relationship between the input value and the output value linear is performed on the linear signal; (iii) the adjusted linear signal obtained by the adjustment process is converted into a non-linear second video signal by using a second OETF corresponding to a predetermined format; and the second video signal obtained by the conversion is output.
  • the adjustment device 100 is an example of an adjustment device.
  • the acquisition unit 110 is an example of an acquisition unit.
  • the conversion unit 130 is an example of a conversion unit.
  • the output unit 150 is an example of an output unit.
  • the flowchart shown in FIG. 6 is an example of an adjustment method performed by the adjustment device.
  • the adjustment device 100 shown in the first embodiment includes an acquisition unit 110, a conversion unit 130, and an output unit 150.
  • the acquisition unit 110 acquires a non-linear first video signal generated by the first OETF.
  • the conversion unit 130 performs conversion processing for converting the first video signal acquired by the acquisition unit 110 into a non-linear second video signal.
  • The non-linear second video signal is a signal obtained by (i) converting the first video signal into a linear signal by using the inverse characteristic of the first OETF, (ii) performing on the linear signal an adjustment process including a gain change that makes the relationship between the input value and the output value linear, and (iii) converting the adjusted linear signal obtained by the adjustment process by using a second OETF corresponding to a predetermined format.
  • the output unit 150 outputs a second video signal obtained by converting the first video signal in the conversion unit 130.
  • the conversion unit may include a first conversion unit that converts the first video signal acquired by the acquisition unit into a linear signal using the inverse characteristic of the first OETF, a second conversion unit that performs an adjustment process on the linear signal obtained by the first conversion unit, and a third conversion unit that converts the adjusted linear signal obtained by the adjustment process in the second conversion unit into the second video signal using the second OETF.
  • the first conversion unit 131 is an example of a first conversion unit.
  • the second conversion unit 132 is an example of a second conversion unit.
  • the third conversion unit 133 is an example of a third conversion unit.
  • the conversion unit 130 includes a first conversion unit 131, a second conversion unit 132, and a third conversion unit 133.
  • the first conversion unit 131 converts the first video signal acquired by the acquisition unit 110 into a linear signal using the inverse characteristic of the first OETF.
  • the second conversion unit 132 performs adjustment processing on the linear signal obtained by the first conversion unit 131.
  • the third conversion unit 133 converts the adjusted linear signal obtained by performing the adjustment process in the second conversion unit 132 into a second video signal using the second OETF.
  • the adjustment apparatus 100 configured as described above converts the first video signal into a linear signal, adjusts the luminance of the video by changing the gain of the linear signal, and converts the adjusted linear signal into a non-linear second video signal for output. That is, since the adjustment device 100 adjusts the linear signal corresponding to the first video signal, it can keep the luminance variation between the dark part and the bright part small. For this reason, for example, when the dark part of the video is difficult to see in a bright viewing environment, the luminance of the video can be raised in a balanced manner, and the luminance of the video can be increased effectively.
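The three-step pipeline described above (inverse OETF to linear light, gain change in the linear domain, re-encoding with the second OETF) can be sketched as follows. This is only an illustration: the function names, the power-law gamma stand-ins for the actual OETFs, and the gain value are assumptions, not values from this specification.

```python
def adjust_signal(code_value, inverse_oetf, gain, oetf):
    """Convert a non-linear code value to linear light, scale it,
    and re-encode it with the output OETF."""
    linear = inverse_oetf(code_value)   # (i) inverse of first OETF -> linear signal
    adjusted = linear * gain            # (ii) gain change (linear input/output relation)
    return oetf(adjusted)               # (iii) second OETF -> second video signal

# Illustrative power-law transfer functions (NOT the patent's actual OETFs;
# PQ or HLG would be used in practice).
gamma = 2.2
inv_oetf = lambda v: v ** gamma
oetf = lambda l: min(l, 1.0) ** (1.0 / gamma)

brighter = adjust_signal(0.5, inv_oetf, 1.5, oetf)  # gain > 1 raises luminance
```

With gain 1.0 the pipeline round-trips the input; gains above 1 brighten the signal uniformly in linear light, which is what keeps the dark/bright balance intact.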
  • the first video signal and the second video signal may be HDR video signals.
  • the first video signal and the second video signal are HDR video signals.
  • the adjustment device 100 configured as described above can output an HDR signal in which the luminance of the video is appropriately adjusted when the video signal is an HDR signal corresponding to HDR. Thereby, it is possible to easily display the HDR video whose luminance is effectively adjusted on the display device 200.
  • the adjustment apparatus may further include a YUV-RGB conversion unit that converts a video signal composed of YUV signals into RGB signals, and an RGB-YUV conversion unit that converts a video signal composed of RGB signals into YUV signals.
  • the YUV-RGB conversion unit may convert the first video signal from a YUV signal to an RGB signal.
  • the conversion unit may perform the conversion process on each of the first R signal, the first G signal, and the first B signal that constitute the first video signal obtained by the conversion to RGB signals in the YUV-RGB conversion unit, thereby converting the first video signal into a second video signal composed of a second R signal, a second G signal, and a second B signal.
  • the RGB-YUV conversion unit may convert the second video signal obtained by converting the first video signal in the conversion unit from an RGB signal to a YUV signal.
  • the YUV-RGB conversion unit 120 is an example of a YUV-RGB conversion unit.
  • the RGB-YUV conversion unit 140 is an example of an RGB-YUV conversion unit.
  • the adjustment device 100 further includes a YUV-RGB conversion unit 120 and an RGB-YUV conversion unit 140.
  • the YUV-RGB converter 120 converts a video signal composed of YUV signals into RGB signals.
  • the RGB-YUV converter 140 converts a video signal composed of RGB signals into a YUV signal.
  • the YUV-RGB converter 120 converts the first video signal from a YUV signal to an RGB signal.
  • the conversion unit 130 performs the conversion process on each of the first R signal, the first G signal, and the first B signal constituting the first video signal obtained by the conversion to RGB signals in the YUV-RGB conversion unit 120, thereby converting the first video signal into a second video signal composed of a second R signal, a second G signal, and a second B signal.
  • the RGB-YUV converter 140 converts the second video signal obtained by converting the first video signal in the converter 130 from an RGB signal to a YUV signal.
  • because the adjustment apparatus 100 configured as described above performs the above-described conversion processing on the RGB signals, it can adjust the luminance of the video while maintaining the color relationship (RGB ratio) of each pixel constituting the video.
  • the first video signal and the second video signal may have PQ characteristics or HLG characteristics.
  • the first video signal and the second video signal have PQ characteristics or HLG characteristics.
  • the adjustment device 100 configured as described above can easily convert the first video signal having the PQ characteristic or the HLG characteristic into the second video signal having the PQ characteristic or the HLG characteristic. Thereby, the conversion process in the conversion unit 130 can be appropriately performed according to the HDR characteristic of the video signal and the HDR characteristic supported by the display device 200.
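For reference, the PQ characteristic referred to here is the curve standardized in SMPTE ST 2084 (and ITU-R BT.2100); its OETF (the inverse of the EOTF) can be written directly from the published constants. This is general background, not code from the patent.

```python
# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_oetf(nits):
    """Encode absolute luminance (0..10000 nit) as a PQ code value in [0, 1]."""
    y = max(nits, 0.0) / 10000.0
    return ((C1 + C2 * y ** M1) / (1 + C3 * y ** M1)) ** M2

def pq_eotf(code):
    """Decode a PQ code value in [0, 1] back to absolute luminance in nit."""
    p = code ** (1 / M2)
    return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)
```

A first video signal with the PQ characteristic would be linearized with `pq_eotf` and re-encoded with `pq_oetf` after the gain adjustment.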
  • the adjustment device may be connected to the display device through a digital interface.
  • the output unit may output the second video signal to the display device via a digital interface.
  • the digital interface is, for example, HDMI (registered trademark).
  • the adjustment device 100 is connected to the display device 200 by the cable 300.
  • the output unit 150 outputs the second video signal to the display device 200 via the cable 300.
  • the above description has shown a configuration example in which the conversion unit 130 performs, as the adjustment process, a gain change that keeps the relationship between the input value and the output value of the linear signal linear over the entire range of the linear signal.
  • the disclosure is not limited to this configuration example.
  • FIG. 9 is a diagram illustrating an example of adjustment processing according to the first embodiment.
  • when performing conversion that increases the gain, the conversion unit 130 may change the gain so that the relationship between the input value and the output value is linear for input values below a predetermined threshold of the linear signal (for example, 100 nit in the example illustrated in FIG. 9).
  • the conversion unit 130 may perform monotonically increasing conversion in which the relationship between the input value and the output value is monotonously increased with respect to an input value in a range equal to or greater than a predetermined threshold.
  • the monotonically increasing conversion may be, for example, a conversion in which the relationship between the input value and the output value is a straight line as shown in FIG. 9. That is, in the adjustment process, conversion may be performed using a knee curve whose slope is A for input values below the predetermined threshold and whose slope is B, smaller than A, for input values at or above the predetermined threshold.
  • the monotonically increasing conversion may be a conversion in which the relationship between an input value and an output value such as a logarithmic curve is a curve, for example.
  • the maximum output value corresponding to the maximum input value may be the peak luminance of the video (content) of the first video data, the maximum luminance value that the display device 200 can display, or a predetermined luminance (for example, 500 nit or 1,000 nit).
  • when an adjustment process that lowers the output value is performed, the phenomenon in the luminance value of the video that occurs when the gain is set greater than 1 and the output value is raised above the input value is less likely to occur. For this reason, an adjustment process that lowers the output value may be performed for all input values.
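A minimal sketch of the knee-curve adjustment described above: linear slope A below the threshold, a smaller slope B at or above it, with the two segments joined continuously. The 100 nit threshold comes from the FIG. 9 example; the specific slope values are illustrative assumptions.

```python
def knee_curve(x, threshold=100.0, slope_a=2.0, slope_b=0.25):
    """Monotonically increasing knee conversion on a linear-light value x (nit):
    slope A below the threshold, smaller slope B at or above it, with the
    two segments meeting continuously at the threshold."""
    if x < threshold:
        return slope_a * x
    return slope_a * threshold + slope_b * (x - threshold)
```

Because B < A, bright input values are compressed toward the output maximum while dark values still receive the full gain A, which matches the intent of raising dark-region visibility without blowing out highlights.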
  • Embodiment 2: Next, Embodiment 2 will be described with reference to FIGS. 10 and 11.
  • components that are substantially the same as the components described in the first embodiment are given the same reference numerals as those components, and descriptions thereof are omitted.
  • FIG. 10 is a block diagram schematically illustrating an example of a functional configuration of the adjustment device 100a according to the second embodiment.
  • since the configuration of the adjustment device 100a of the second embodiment is substantially the same as that of the adjustment device 100 described in the first embodiment with reference to FIG. 3, detailed description thereof is omitted.
  • the adjustment device 100a of the second embodiment is different from the adjustment device 100 of the first embodiment in that it further includes a changing unit 160.
  • the changing unit 160 changes the first metadata acquired by the acquisition unit 110 to second metadata indicating the characteristics of the second video signal obtained by converting the first video signal in the conversion unit 130. That is, the changing unit 160 changes the first metadata attached to the first video data according to how the conversion unit 130 converts the first video signal.
  • the changing unit 160 outputs the second metadata obtained by changing the first metadata to the output unit 150.
  • the changing unit 160 may be realized by, for example, a processor that executes a program and a memory in which the program is recorded, or may be realized by a dedicated circuit (for example, a circuit including an IC or an LSI).
  • the first metadata may be, for example, MaxCLL (Maximum Content Light Level), or MaxFALL (Maximum Frame-Average Light Level).
  • MaxCLL is a value indicating the maximum luminance of the pixel in all frames in the content.
  • MaxFALL is a value indicating the maximum value of the average luminance in the frame in all frames in the content.
  • the first metadata includes the first maximum luminance value that is the maximum luminance value (MaxCLL) of the pixels included in the entire video of the first video signal (all frames in the content), and the video of the first video signal. It may include at least one of a first maximum frame average luminance value which is a maximum value (MaxFALL) of the average luminance of each of a plurality of frames (all frames in the content) constituting the frame.
  • when changing the first metadata to the second metadata, the changing unit 160 may perform at least one of (i) changing the first maximum luminance value to a second maximum luminance value that is the maximum luminance value of the pixels included in the entire video of the second video signal (all frames in the content), and (ii) changing the first maximum frame average luminance value to a second maximum frame average luminance value that is the maximum value of the average luminance of each of the plurality of frames (all frames in the content) constituting the video of the second video signal.
  • the changing unit 160 changes the metadata if MaxCLL and MaxFALL differ between the first video signal and the second video signal; if they do not, the changing unit 160 need not change the metadata.
  • the first metadata may be static metadata other than MaxCLL and MaxFALL.
  • the first metadata may be, for example, metadata related to the luminance of the video, such as the maximum luminance value of the master monitor used when the master video from which the first video data is generated was produced.
  • the output unit 150 outputs data including the second video signal and the second metadata as second video data.
  • FIG. 11 is a flowchart illustrating an example of the operation (adjustment method) of the adjustment device 100a according to the second embodiment.
  • steps that perform substantially the same operations as the steps shown in the flowchart of FIG. 6 are given the same reference numerals, and descriptions of those steps are omitted.
  • the adjustment method of the second embodiment is substantially the same as the adjustment method of the first embodiment shown in FIG. However, the adjustment method of the second embodiment is different from the adjustment method of the first embodiment in that step S104a is added after step S104. Other steps in the adjustment method are the same as those in the first embodiment.
  • after step S104, the changing unit 160 changes the first metadata acquired by the acquisition unit 110 to second metadata indicating the characteristics of the second video signal obtained by the conversion in the conversion unit 130 (step S104a).
  • when step S104a is completed, step S105 of the adjustment method of the first embodiment is performed.
  • the acquisition unit may further acquire the first metadata indicating the characteristics of the first video signal.
  • the adjustment apparatus may further include a changing unit that changes the first metadata acquired by the acquiring unit to second metadata indicating characteristics of the second video signal.
  • the output unit may output the second video signal and the second metadata.
  • the adjustment device 100a is an example of an adjustment device.
  • the change unit 160 is an example of a change unit.
  • the acquisition unit 110 further acquires first metadata indicating the characteristics of the first video signal.
  • the adjusting device 100a further includes a changing unit 160.
  • the changing unit 160 changes the first metadata acquired by the acquisition unit 110 to second metadata indicating the characteristics of the second video signal obtained by converting the first video signal in the conversion unit 130.
  • the output unit 150 outputs second video data including the second video signal and second metadata.
  • the adjustment device 100a configured as described above changes the first metadata of the first video signal to second metadata according to the result of the conversion process in the conversion unit 130, and outputs the second metadata obtained by the change to the display device 200. That is, since the adjustment device 100a can output second metadata indicating the characteristics of the second video signal to the display device 200, the second video signal can easily be displayed appropriately on the display device 200.
  • the first metadata may include at least one of a first maximum luminance value that is the maximum luminance value of the pixels included in the entire video of the first video signal, and a first maximum frame average luminance value that is the maximum value of the average luminance of each of the plurality of frames constituting the video of the first video signal.
  • the changing unit (i) changes the first maximum luminance value to a second maximum luminance value that is the maximum luminance value of the pixels included in the entire video of the second video signal, and (ii) the first maximum frame average At least one of changing from the luminance value to the second maximum frame average luminance value that is the maximum value of the average luminance of each of the plurality of frames constituting the video of the second video signal may be performed.
  • the first metadata includes at least one of the first maximum luminance value, which is the maximum luminance value (MaxCLL) of the pixels included in the entire video (all frames in the content) of the first video signal, and the first maximum frame average luminance value, which is the maximum value (MaxFALL) of the average luminance of each of the plurality of frames (all frames in the content) constituting the video of the first video signal.
  • the changing unit 160 performs at least one of (i) changing the first maximum luminance value to a second maximum luminance value that is the maximum luminance value of the pixels included in the entire video of the second video signal (all frames in the content), and (ii) changing the first maximum frame average luminance value to a second maximum frame average luminance value that is the maximum value of the average luminance of each of the plurality of frames (all frames in the content) constituting the video of the second video signal.
  • the adjustment device 100a configured in this way can change at least one of MaxCLL and MaxFALL to an appropriate value in the second video signal after the conversion process. Therefore, the adjustment device 100a can easily display the second video signal on the display device 200 appropriately.
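MaxCLL and MaxFALL as defined above can be recomputed from the converted frames so the second metadata matches the second video signal. A sketch under the assumption that each frame is given as a flat list of per-pixel luminance values in nit:

```python
def max_cll_and_fall(frames):
    """Return (MaxCLL, MaxFALL) for the content.
    MaxCLL: the brightest single pixel over all frames in the content.
    MaxFALL: the largest per-frame average luminance over all frames."""
    max_cll = max(max(frame) for frame in frames)
    max_fall = max(sum(frame) / len(frame) for frame in frames)
    return max_cll, max_fall
```

Running this over the frames of the converted (second) video signal would yield the second maximum luminance value and second maximum frame average luminance value for the second metadata.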
  • Embodiment 3: Next, Embodiment 3 will be described with reference to FIGS. 12 and 13.
  • FIG. 12 is a block diagram schematically illustrating an example of a functional configuration of the adjustment device 100b according to the third embodiment.
  • the configuration of the adjustment device 100b in the third embodiment is substantially the same as the configuration of the adjustment device 100 described in the first embodiment with reference to FIG.
  • the adjustment device 100b differs from the adjustment device 100 of the first embodiment in that the first metadata included in the first video data acquired by the acquisition unit 110 is output to the conversion unit 130b, and the conversion unit 130b performs the conversion process according to the first metadata.
  • the conversion unit 130b converts the first video signal into the second video signal according to the first metadata.
  • the first metadata includes the static metadata described in the second embodiment.
  • the first metadata includes at least one of the maximum luminance value of the master monitor used when the master video from which the video of the first video signal is generated was produced, and the first maximum luminance value, which is the maximum luminance value (MaxCLL) of the pixels included in the entire video of the first video signal (all frames in the content).
  • the second video signal is a signal obtained by performing the adjustment process in the second R signal conversion unit 132Rb, the second G signal conversion unit 132Gb, and the second B signal conversion unit 132Bb included in the second conversion unit 132b so that the luminance of the second video signal is equal to or less than one of the maximum luminance value of the master monitor and the first maximum luminance value (MaxCLL) included in the first metadata. That is, when the conversion unit 130b performs the conversion from the first video signal to the second video signal, the adjustment process may be performed in the second conversion unit 132b so that the luminance of the second video signal is equal to or less than one of the maximum luminance value of the master monitor and the first maximum luminance value included in the first metadata.
  • the second conversion unit 132b of the conversion unit 130b may perform conversion using a knee curve whose slope is A for input values below a predetermined threshold and whose slope is B, smaller than A, for input values at or above the predetermined threshold. When the conversion by the knee curve is performed, the second conversion unit 132b of the conversion unit 130b may perform the adjustment process so that the maximum luminance value included in the first metadata is maintained.
  • the first metadata is dynamic metadata, and may include the maximum luminance value of each of a plurality of scenes constituting the video of the first video signal.
  • the second video signal is a signal obtained by the second conversion unit 132b performing the adjustment process for each of the plurality of scenes so that the luminance of the second video signal is equal to or lower than the maximum luminance value of the scene included in the first metadata. That is, when performing the conversion from the first video signal to the second video signal, the conversion unit 130b may perform the adjustment process for each of the plurality of scenes constituting the video of the first video signal so that the luminance of the second video signal is equal to or less than the maximum luminance value of the scene included in the first metadata.
  • the adjustment method of the third embodiment is substantially the same as the adjustment method of the first embodiment shown in FIG.
  • the adjustment method of the third embodiment is different from the adjustment method of the first embodiment in a part of the conversion process in step S103 shown in detail in FIG. Therefore, only the difference in the conversion process will be described here.
  • FIG. 13 is a flowchart showing an example of the conversion process in the third embodiment.
  • steps that perform substantially the same operations as the steps shown in the flowchart of FIG. 7 are given the same reference numerals, and descriptions of those steps are omitted.
  • step S112a is performed instead of step S112 of the conversion process of the first embodiment shown in FIG. 7.
  • the conversion process of the third embodiment is different from the conversion process of the first embodiment in that respect.
  • Other steps of the conversion process are the same as those in the first embodiment.
  • after step S111, the conversion unit 130b performs the adjustment process according to the first metadata (step S112a). In the conversion unit 130b of the third embodiment, the second conversion unit 132b performs the adjustment process.
  • the acquisition unit may further acquire the first metadata indicating the characteristics of the first video signal.
  • the conversion unit may convert the first video signal into a second video signal according to the first metadata acquired by the acquisition unit.
  • the first metadata may include at least one of the maximum luminance value of the master monitor used when the master video from which the video of the first video signal is generated was produced, and the first maximum luminance value, which is the maximum luminance value of the pixels included in the entire video of the first video signal.
  • the second video signal may be a signal obtained by performing the adjustment process so that the luminance of the second video signal is equal to or less than one of the maximum luminance value of the master monitor and the first maximum luminance value included in the first metadata.
  • the adjusting device 100b is an example of an adjusting device.
  • the conversion unit 130b is an example of a conversion unit.
  • the acquisition unit 110 further acquires first metadata indicating the characteristics of the first video signal.
  • the conversion unit 130b converts the first video signal into the second video signal according to the first metadata acquired by the acquisition unit 110.
  • the first metadata includes at least one of the maximum luminance value of the master monitor used when the master video from which the video of the first video signal is generated was produced, and the first maximum luminance value, which is the maximum luminance value (MaxCLL) of the pixels included in the entire video of the first video signal (all frames in the content).
  • the second video signal is a signal obtained by performing the adjustment process so that the luminance of the second video signal is equal to or less than one of the maximum luminance value of the master monitor and the first maximum luminance value (MaxCLL) included in the first metadata.
  • the adjustment device 100b configured as described above can perform the knee curve conversion while maintaining, in the second video signal, the maximum luminance value of the first video signal, so that the luminance of the video can be appropriately adjusted.
  • the first metadata may include the maximum luminance value of each of a plurality of scenes constituting the video of the first video signal.
  • the second video signal may be a signal obtained by performing the adjustment process for each of the plurality of scenes so that the luminance of the second video signal is equal to or lower than the maximum luminance value of the scene included in the first metadata.
  • the first metadata includes the maximum luminance value of each of a plurality of scenes constituting the video of the first video signal.
  • the second video signal is a signal obtained by performing the adjustment process for each of the plurality of scenes so that the luminance of the second video signal is equal to or lower than the maximum luminance value of the scene included in the first metadata.
  • the adjustment device 100b configured as described above can adjust the second video signal to an appropriate luminance for each scene.
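The per-scene adjustment described above can be sketched as a clamp applied scene by scene. Assumptions: each scene arrives as a list of linear-light samples paired with that scene's maximum luminance value from the dynamic metadata, and the adjustment is a simple gain; the function names are illustrative.

```python
def adjust_scene(scene_linear, gain, scene_max_nits):
    """Apply the gain in the linear domain, never letting the output exceed
    the scene's maximum luminance value carried in the first metadata."""
    return [min(v * gain, scene_max_nits) for v in scene_linear]

def adjust_content(scenes, gain):
    """scenes: list of (samples, scene_max_nits) pairs, adjusted one by one
    so each scene is clamped to its own metadata limit."""
    return [adjust_scene(samples, max_nits, ) if False else
            adjust_scene(samples, gain, max_nits)
            for samples, max_nits in scenes]
```

Each scene thus gets an appropriate luminance ceiling of its own, which is the point of using dynamic (per-scene) rather than static metadata.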
  • Embodiment 4 will be described with reference to FIGS. 14 and 15.
  • FIG. 14 is a block diagram schematically illustrating an example of a functional configuration of the adjustment device 100c according to the fourth embodiment.
  • the configuration of the adjustment device 100c in the fourth embodiment is substantially the same as the configuration of the adjustment device 100 described in the first embodiment with reference to FIG.
  • the adjustment device 100c of the fourth embodiment differs from the adjustment device 100 of the first embodiment in that it does not include the YUV-RGB conversion unit 120 and the RGB-YUV conversion unit 140, and in that the conversion unit 130c does not have a conversion unit corresponding to each of the RGB signals.
  • the conversion unit 130c performs conversion processing on the first Y signal indicating the luminance signal in the first video signal composed of YUV signals. Specifically, in the conversion unit 130c, the first conversion unit 131c converts the first Y signal into a linear signal by the same operation as the first conversion unit 131 described in the first embodiment.
  • the second conversion unit 132c performs the adjustment process by the same operation as the second conversion unit 132 described in the first embodiment, and the third conversion unit 133c performs the conversion to a non-linear signal by the same operation as the third conversion unit 133 described in the first embodiment. Thereby, the conversion unit 130c outputs the second Y signal. Note that the first U signal and the first V signal indicating the color-difference signals in the first video signal acquired by the acquisition unit 110 are output to the output unit 150 without undergoing the conversion process in the conversion unit 130c.
  • the output unit 150 outputs a second video signal composed of the second Y signal, the first U signal, and the first V signal.
  • FIG. 15 is a flowchart illustrating an example of the operation (adjustment method) of the adjustment device 100c according to the fourth embodiment.
  • steps that perform substantially the same operations as the steps shown in the flowchart of FIG. 6 are given the same reference numerals, and descriptions of those steps are omitted.
  • the adjustment method of the fourth embodiment is substantially the same as the adjustment method of the first embodiment shown in FIG. However, the adjustment method of the fourth embodiment is different from the adjustment method of the first embodiment in that step S102 and step S104 are omitted. Other steps of the adjustment method are the same as those in the first embodiment.
  • step S101, step S103, and step S105 similar to those of the first embodiment are performed.
  • step S103 in the adjustment method of the fourth embodiment is performed on the first Y signal of the first video signal composed of YUV signals.
  • the adjustment device 100c according to the fourth embodiment performs the same conversion process as the conversion process described in the first embodiment on the first video signal configured with the YUV signal. Therefore, in the adjustment device 100c, the processing load can be reduced as compared with the configuration in which the conversion processing is performed on each of the RGB signals described in the first embodiment.
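The Embodiment-4 variant above touches only the luma channel, which is why it is cheaper than the per-RGB-channel processing of Embodiment 1. A sketch, where the sample layout and the `convert` callable are assumptions for illustration:

```python
def convert_yuv_frame(y_samples, u_samples, v_samples, convert):
    """Run the conversion process on the Y (luminance) samples only;
    the U and V (color-difference) samples pass through unchanged to the
    output unit, exactly one conversion per pixel instead of three."""
    return [convert(y) for y in y_samples], u_samples, v_samples
```

The trade-off noted in the text follows directly: one channel is converted instead of three, but the per-pixel RGB ratio is no longer explicitly preserved as it is in the RGB-domain pipeline.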
  • Embodiment 5 Next, Embodiment 5 will be described with reference to FIG.
  • FIG. 16 is a block diagram schematically illustrating an example of a functional configuration of the adjustment device 100d according to the fifth embodiment.
  • the configuration of the adjustment device 100d in the fifth embodiment is substantially the same as the configuration of the adjustment device 100 described in the first embodiment with reference to FIG.
  • the adjustment device 100d of the fifth embodiment differs from the adjustment device 100 of the first embodiment in that, instead of the first conversion unit 131, the second conversion unit 132, and the third conversion unit 133, the conversion unit 130d has an R signal table conversion unit 130R, a G signal table conversion unit 130G, and a B signal table conversion unit 130B.
  • the conversion unit 130d need not individually perform the conversion to the linear signal, the adjustment process, and the conversion to the non-linear signal described in the first embodiment; instead, it may perform, for each of the RGB signals constituting the first video signal, conversion using a table into which the conversion to the linear signal, the adjustment process, and the conversion to the non-linear signal are combined.
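The table conversion of the fifth embodiment can be sketched as a pre-computed 1-D lookup table that folds the three steps into a single indexed read per channel. The power-law stand-ins for the actual transfer functions and the 256-entry table size are illustrative assumptions.

```python
def build_lut(inverse_oetf, gain, oetf, size=256):
    """Fold inverse OETF, gain change, and second OETF into one table.
    One such table would be built for each of the R, G, and B signals."""
    return [oetf(min(inverse_oetf(i / (size - 1)) * gain, 1.0))
            for i in range(size)]

# Illustrative power-law stand-ins for the actual OETFs.
gamma = 2.2
lut = build_lut(lambda v: v ** gamma, 1.0, lambda l: l ** (1 / gamma))

converted = lut[128]  # per-pixel conversion becomes a single table lookup
```

Building the table once and indexing it per pixel replaces three transcendental evaluations per sample with one array read, which is the usual motivation for a table-based conversion unit.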
  • Embodiments 1 to 5 have been described as examples of the technology disclosed in the present application. However, the technology in the present disclosure is not limited to this, and can also be applied to embodiments in which changes, replacements, additions, omissions, and the like are appropriately performed. Also, it is possible to combine the components described in the first to fifth embodiments to form a new embodiment.
  • the adjustment devices 100 and 100a to 100d may perform the adjustment process according to the maximum luminance that can be displayed on the display device 200.
  • the adjustment devices 100 and 100a to 100d may acquire the maximum displayable luminance from the display device 200 via the cable 300.
  • the adjustment devices 100 and 100a to 100d shown in the first to fifth embodiments may detect the illuminance of the surroundings (viewing environment) with an illuminance sensor and perform the adjustment process according to the magnitude of the detected illuminance. In this case, the adjustment devices 100 and 100a to 100d may perform conversion that increases the gain-change magnification ("A times" in the first embodiment) in the adjustment process as the detected illuminance increases.
  • the illuminance sensor may be provided in the adjusting devices 100, 100a to 100d. Alternatively, the display device 200 may include an illuminance sensor, and the adjustment devices 100 and 100a to 100d may acquire the detection value of the illuminance sensor from the display device 200 via the cable 300.
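The illuminance-driven adjustment above can be sketched as a mapping from sensor lux to the gain magnification A. The linear mapping, its coefficient, and the cap are all assumptions for illustration; the text only states that the magnification grows as the detected illuminance increases.

```python
def gain_from_illuminance(lux, base_gain=1.0, per_lux=0.002, max_gain=2.0):
    """Larger ambient illuminance -> larger gain magnification A, capped so
    the adjustment process does not brighten the video without bound."""
    return min(base_gain + per_lux * lux, max_gain)
```

In a bright room the gain rises toward the cap, raising dark-region luminance; in a dark room it stays near 1 and the video is passed through nearly unchanged.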
  • the adjustment device 100b may perform adjustment processing for each scene of a video using dynamic metadata.
  • the present disclosure is not limited to this operation.
  • the adjustment device 100b according to the third embodiment may dynamically analyze the luminance of the video for each scene and perform adjustment processing according to the analysis result.
  • the conversion unit 130 of the adjustment device 100a described in the second embodiment may perform the conversion process according to the first metadata acquired by the acquisition unit 110, as in the adjustment device 100b described in the third embodiment.
  • the changing unit 160 may change the first metadata to the second metadata according to the content of the conversion process performed according to the first metadata.
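As a loose illustration of how a changing unit might derive second metadata from first metadata according to the conversion that was applied: the field names and the scaling rule below are invented for this sketch and are not taken from the patent.

```python
def change_metadata(first_metadata, gain):
    # Hypothetical derivation of second metadata from the first according to
    # the conversion applied; "peak_luminance_nits" and the scaling rule are
    # assumptions for illustration only.
    second_metadata = dict(first_metadata)  # leave the first metadata untouched
    second_metadata["peak_luminance_nits"] = first_metadata["peak_luminance_nits"] * gain
    return second_metadata

first_metadata = {"peak_luminance_nits": 500.0, "transfer_function": "HLG"}
second_metadata = change_metadata(first_metadata, gain=2.0)  # peak becomes 1000.0
```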
  • the conversion unit may perform the adjustment process so that the luminance of the second video signal is equal to or lower than a predetermined maximum luminance value.
  • the adjusting device may further include a changing unit that changes the first metadata acquired by the acquiring unit to second metadata indicating the characteristics of the second video signal.
  • the output unit may output the second video signal and the second metadata.
  • the conversion unit 130 of the adjustment device 100a shown in the second embodiment may perform the adjustment process so that the luminance of the second video signal is equal to or lower than a predetermined maximum luminance value.
  • the changing unit 160 of the adjustment device 100a may change the first metadata acquired by the acquiring unit 110 to second metadata indicating the characteristics of the second video signal.
  • the output unit 150 may output the second video signal and the second metadata.
  • the predetermined maximum luminance value is, for example, the peak luminance of the first video signal, the maximum luminance value that the display device 200 can display, a predetermined luminance (for example, 500 nit or 1,000 nit), or the first video …
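The cap on the second video signal's luminance described above can be sketched as a simple clip in the linear-light domain; the function name and the choice of nit units are assumptions, not from the patent.

```python
def clip_to_max_luminance(adjusted_linear_nits, max_nits):
    # Hypothetical clip ensuring the second video signal never exceeds the
    # predetermined maximum luminance value (e.g. the display's peak, or a
    # fixed value such as 500 or 1,000 nit).
    return [min(v, max_nits) for v in adjusted_linear_nits]
```

More sophisticated implementations would roll this limit off with a knee instead of a hard clip, but a clip is the minimal form of the stated constraint.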
  • the configuration in which the processing of the conversion unit 130 in the first embodiment is performed using a table has been shown; however, the processing of the conversion unit 130c in the fourth embodiment, for example, may also be performed using a table.
  • each component may be configured by dedicated hardware (electronic circuit), or may be realized by a processor executing a software program suitable for each component.
  • Each component may be realized by a program execution unit such as a CPU (Central Processing Unit) or a processor reading and executing a software program recorded on a recording medium such as a hard disk or a semiconductor memory.
  • a plurality of function blocks may be realized as one function block, one function block may be divided into a plurality of functions, or some functions may be transferred to another function block.
  • the functions of a plurality of functional blocks may be processed in parallel by a single hardware or software, or may be processed in a time division manner.
  • the software that realizes the adjustment method of each of the above embodiments is the following program.
  • this program causes a computer to execute an adjustment method performed by an adjustment device that adjusts the video displayed on the display device by converting the non-linear first video signal generated by using the first OETF into the second video signal.
  • in the adjustment method, the first video signal is converted into a linear signal by using the inverse characteristic of the first OETF, the linear signal is subjected to the adjustment process, the adjusted linear signal is converted into the non-linear second video signal by using the second OETF corresponding to a predetermined format, and the second video signal obtained by the conversion is output.
  • the adjustment method, a computer program that causes a computer to execute the adjustment method, and a computer-readable recording medium on which the program is recorded are included in the scope of the present disclosure.
  • examples of the computer-readable recording medium include a flexible disk, a hard disk, a CD-ROM, an MO, a DVD, a DVD-ROM, a DVD-RAM, a BD (Blu-ray (registered trademark) Disc), and a semiconductor memory.
  • the computer program is not limited to the one recorded on the recording medium, but may be transmitted via an electric communication line, a wireless or wired communication line, a network represented by the Internet, and the like.
  • each of the above-described devices may be configured from an IC card that can be attached to and detached from each device or a single module.
  • LSI (Large Scale Integration)
  • each processing unit is not limited to an LSI or an IC, and may be realized by a dedicated circuit or a general-purpose processor. Alternatively, it may be realized by an FPGA (Field Programmable Gate Array) capable of programming the circuit configuration, or a reconfigurable processor capable of reconfiguring connection and setting of circuit cells inside the LSI.
  • the above program may be recorded on a recording medium and distributed or circulated.
  • by installing the distributed program in the devices and causing the processors of the devices to execute it, it is possible to cause the devices to perform various processes.
  • the computer program or digital signal in the present disclosure may be transmitted via an electric communication line, a wireless or wired communication line, a network such as the Internet, data broadcasting, or the like.
  • the present disclosure may also be implemented by another independent computer system by recording the program or digital signal on a recording medium and transferring it, or by transferring the program or digital signal via a network or the like.
  • each processing described above may be realized by centralized processing by a single device (system), or by distributed processing by a plurality of devices.
  • the present disclosure can be applied to an adjustment device capable of effectively adjusting the luminance of a video.
  • the present disclosure is applicable to an Ultra HD Blu-ray (registered trademark) player, an STB (Set Top Box), and the like.
  • 100, 100a to 100d Adjustment device
    110 Acquisition unit
    120 YUV-RGB conversion unit
    130, 130b, 130c, 130d Conversion unit
    130R R signal table conversion unit
    130G G signal table conversion unit
    130B B signal table conversion unit
    131, 131c First conversion unit
    131R First R signal conversion unit
    131G First G signal conversion unit
    131B First B signal conversion unit
    132, 132b, 132c Second conversion unit
    132R, 132Rb Second R signal conversion unit
    132G, 132Gb Second G signal conversion unit
    132B, 132Bb Second B signal conversion unit
    133, 133c Third conversion unit
    133R Third R signal conversion unit
    133G Third G signal conversion unit
    133B Third B signal conversion unit
    140 RGB-YUV conversion unit
    150 Output unit
    160 Changing unit
    200 Display device
    300 Cable

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Picture Signal Circuits (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The present invention relates to an adjustment device and the like capable of effectively adjusting the luminance of a video. The adjustment device includes: an acquisition unit that acquires a non-linear first video signal generated by using a first opto-electrical transfer function (OETF); a conversion unit that performs a conversion process of converting the first video signal acquired by the acquisition unit into a non-linear second video signal obtained by (i) converting the first video signal into a linear signal by using an inverse characteristic of the first OETF, (ii) subjecting the linear signal to an adjustment process including a gain change in which the relationship between an input value and an output value is linear, and (iii) converting the adjusted linear signal obtained by the adjustment process by using a second OETF corresponding to a predetermined format; and an output unit that outputs the second video signal obtained by the conversion performed by the conversion unit.
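The three-step conversion process described in the abstract can be sketched end to end as follows. The specific transfer-function shapes and the gain value are placeholder assumptions, not the curves of any real standard.

```python
def adjust(first_video_signal, inverse_first_oetf, gain, second_oetf):
    # (i) inverse characteristic of the first OETF -> linear signal,
    # (ii) adjustment process: a gain change, linear in input vs. output,
    # (iii) second OETF corresponding to the predetermined format.
    second_video_signal = []
    for v in first_video_signal:
        linear = inverse_first_oetf(v)
        adjusted = gain * linear
        second_video_signal.append(second_oetf(adjusted))
    return second_video_signal

# Placeholder transfer functions (simple gamma curves) for illustration only.
inv_oetf = lambda v: v ** 2.0
oetf = lambda v: min(v, 1.0) ** 0.5
result = adjust([0.0, 0.5, 1.0], inv_oetf, gain=1.0, second_oetf=oetf)  # → [0.0, 0.5, 1.0]
```

With a gain of 1.0 and matched inverse/forward curves the round trip is the identity, which is a convenient sanity check; a gain above 1.0 brightens the signal in the linear-light domain before re-encoding.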
PCT/JP2017/034235 2016-09-28 2017-09-22 Adjusting device, adjusting method, and program WO2018062022A1 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US16/335,703 US20200035198A1 (en) 2016-09-28 2017-09-22 Adjusting device, adjusting method, and program
JP2018542509A JP6872693B2 (ja) 2016-09-28 2017-09-22 Adjusting device, adjusting method, and program
EP17855976.1A EP3522514A4 (fr) 2016-09-28 2017-09-22 Adjusting device, adjusting method, and program

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201662400803P 2016-09-28 2016-09-28
US62/400803 2016-09-28
JP2016-219296 2016-11-10
JP2016219296 2016-11-10

Publications (1)

Publication Number Publication Date
WO2018062022A1 true WO2018062022A1 (fr) 2018-04-05

Family

ID=61763413

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2017/034235 WO2018062022A1 (fr) 2016-09-28 2017-09-22 Adjusting device, adjusting method, and program

Country Status (1)

Country Link
WO (1) WO2018062022A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019193025A (ja) * 2018-04-20 2019-10-31 Nippon Hoso Kyokai Video luminance conversion device and program therefor
CN113593501A (zh) * 2021-07-29 2021-11-02 Kunshan Govisionox Optoelectronics Co., Ltd. Mura compensation method, compensation device, and display device for a display panel
JP7475187B2 (ja) 2020-04-15 2024-04-26 Canon Inc. Display control device and method, program, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008098814A (ja) * 2006-10-10 2008-04-24 Matsushita Electric Ind Co Ltd Video signal processing device
JP2008167418A (ja) 2006-12-05 2008-07-17 Nippon Hoso Kyokai &lt;NHK&gt; Image signal processing device
WO2015190045A1 (fr) * 2014-06-10 2015-12-17 Panasonic Intellectual Property Management Co., Ltd. Conversion method and conversion device
WO2015190246A1 (fr) * 2014-06-13 2015-12-17 Sony Corporation Transmission device, transmission method, reception device, and reception method


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
IWAMURA, SHUNSUKE; ICHIGAYA, ATSURO; NISHIDA, YUKIHIRO: "Hybrid Log-Gamma Perceptual Quantizer Comparison of Coding Performance between Hybrid Log-Gamma system and Perceptual Quantization system", ITE TECHNICAL REPORT, vol. 40, no. 14, 4 March 2016 (2016-03-04), pages 9 - 12, XP009514537, ISSN: 1342-6893, DOI: 10.11485/itetr.40.14.0_9 *
See also references of EP3522514A4 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019193025A (ja) * 2018-04-20 2019-10-31 Nippon Hoso Kyokai Video luminance conversion device and program therefor
JP7045916B2 (ja) 2018-04-20 2022-04-01 Nippon Hoso Kyokai Video luminance conversion device and program therefor
JP7475187B2 (ja) 2020-04-15 2024-04-26 Canon Inc. Display control device and method, program, and storage medium
CN113593501A (zh) * 2021-07-29 2021-11-02 Kunshan Govisionox Optoelectronics Co., Ltd. Mura compensation method, compensation device, and display device for a display panel

Similar Documents

Publication Publication Date Title
JP7065376B2 (ja) Display device, conversion device, display method, and computer program
JP6872693B2 (ja) Adjusting device, adjusting method, and program
TWI734978B (zh) Method and device for performing tone mapping of high dynamic range video
US10992898B2 (en) Display method and display device
US10891722B2 (en) Display method and display device
CN109661806B (zh) Image processing device, image processing method, and program
JP6891882B2 (ja) Image processing device, image processing method, and program
KR20170129004A (ko) Image processing apparatus and image processing method thereof
CN107533832B (zh) Image processing device, image processing method, and program
WO2016063475A1 (fr) Transmission method and playback device
WO2018062022A1 (fr) Adjusting device, adjusting method, and program
WO2016181584A1 (fr) Display method and display device
WO2021131209A1 (fr) Control device and control method
WO2023095718A1 (fr) Video processing method, video processing device, and program
TW202423104A (zh) Audio-visual system sharing an image processing procedure and video processing method thereof
JP2020195137A (ja) Display method and display device
JP2017220689A (ja) Transmission method and playback device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17855976

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018542509

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017855976

Country of ref document: EP

Effective date: 20190429