WO2017085786A1 - Image conversion method and apparatus - Google Patents
Image conversion method and apparatus
- Publication number
- WO2017085786A1 (PCT/JP2015/082244)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- luminance
- jnd
- width
- corresponding value
- component
- Prior art date
Classifications
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/02—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
- G09G5/026—Control of mixing and/or overlay of colours in general
- G09G5/10—Intensity circuits
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G2320/0626—Adjustment of display parameters for control of overall brightness
- G09G2340/06—Colour space transformation
- G09G2340/10—Mixing of images, i.e. displayed pixel being the result of an operation, e.g. adding, on the corresponding input pixels
- G09G2340/14—Solving problems related to the presentation of information to be displayed
- G09G2360/144—Detecting light within display terminals, e.g. using a single or a plurality of photosensors, the light being ambient light
Definitions
- the present invention relates to an image conversion method and apparatus capable of appropriately reproducing the appearance of an original image even in environments with different brightness.
- The brightness of the environment in which a user uses a display device varies with the usage scene.
- When the external light is bright, the visibility of the original image deteriorates because the external light irradiates the display unit.
- Patent Document 1 discloses an image processing apparatus comprising: a gain derivation unit that, using the illuminance obtained from an illuminance detection unit, derives a compression gain to be applied to a low-frequency component of an input image and an expansion gain to be applied to a high-frequency component of the input image; and a display image generation unit that generates a display image in which the pixel values of the input image are corrected based on the compression gain and the expansion gain derived by the gain derivation unit.
- The present invention has been made in view of such circumstances, and provides an image conversion method and apparatus capable of appropriately reproducing the texture of the original image even when the external light environment or the brightness of the display device itself changes.
- According to the invention, there is provided an image conversion method comprising: a JND corresponding value width acquisition step of acquiring, based on input image data, a JND corresponding value width corresponding to the reflectance component of the input image data; a luminance width acquisition step of acquiring, with reference to a second reference luminance different from the first reference luminance used as a reference when acquiring the JND corresponding value width, a luminance width corresponding to the JND corresponding value width or to a value obtained by converting it according to a predetermined rule; a corrected reflectance component acquisition step of acquiring the gradation width corresponding to the luminance width as a corrected reflectance component; and a synthesis step of generating output image data by synthesizing the illumination light component in the input image data, or its corrected component, with the corrected reflectance component.
- The present inventors examined why the texture of the display image differs from that of the original image, and noted that the human eye is influenced more strongly by the reflectance component, whose frequency variation is relatively large, than by the illumination light component, whose frequency variation is relatively small. They then found that by maintaining the JND corresponding value width corresponding to the reflectance component of the input image data before and after correction of the reflectance component, or by converting it to a value according to a predetermined rule, the texture of the original image can be appropriately reproduced even when the external light environment or the brightness of the display device itself changes, and thus completed the present invention.
- The "JND corresponding value width" is the difference between two JND corresponding values. The "JND corresponding value width corresponding to the reflectance component of the input image data" means the difference between the JND corresponding value for the luminance corresponding to the all-light component of the input image data and the JND corresponding value for the luminance corresponding to the illumination light component of the input image data.
- A JND corresponding value corresponds one-to-one to a luminance; an example is the DICOM-standard JND index based on the Barten model of visual perception.
- The JND index is defined such that a difference of 1 in the index yields one JND (Just-Noticeable Difference), the minimum luminance difference of a given target that an average human observer can discern.
- JND corresponding value data derived by a method other than the Barten model, corresponding to the minimum luminance difference discernible by an observer, can be used instead of the JND index.
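As one concrete example of such a JND corresponding value, the conversion between luminance and the DICOM JND index can be sketched as below. The coefficients come from the DICOM PS3.14 Grayscale Standard Display Function, not from this publication, so this is an illustrative sketch only.

```python
import math

# Conversion between luminance (cd/m^2) and the DICOM JND index.
# Coefficients are from the DICOM PS3.14 Grayscale Standard Display
# Function, valid roughly from 0.05 to 4000 cd/m^2 (JND index 1 to 1023).

def luminance_to_jnd(lum: float) -> float:
    """JND index j(L): 8th-order polynomial in log10(L)."""
    coeffs = [71.498068, 94.593053, 41.912053, 9.8247004, 0.28175407,
              -1.1878455, -0.18014349, 0.14710899, -0.017046845]
    x = math.log10(lum)
    return sum(c * x ** i for i, c in enumerate(coeffs))

def jnd_to_luminance(j: float) -> float:
    """Luminance L(j): rational function of ln(j) per DICOM PS3.14."""
    u = math.log(j)
    num = (-1.3011877 + 8.0242636e-2 * u + 1.3646699e-1 * u ** 2
           - 2.5468404e-2 * u ** 3 + 1.3635334e-3 * u ** 4)
    den = (1.0 - 2.5840191e-2 * u - 1.0320229e-1 * u ** 2
           + 2.8745620e-2 * u ** 3 - 3.1978977e-3 * u ** 4
           + 1.2992634e-4 * u ** 5)
    return 10.0 ** (num / den)
```

With these functions, a JND corresponding value width is simply `luminance_to_jnd(Y1p) - luminance_to_jnd(Y1r)`.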
- Preferably, the predetermined rule uses a predetermined correction function including at least one of: a multiplication or division coefficient, an addition or subtraction constant, a table associating the JND corresponding value width with its converted value, or a mathematical expression.
- Preferably, the JND corresponding value width acquisition step acquires, as the JND corresponding value width, the difference between the JND corresponding value corresponding to the illumination light component of the input image data and the JND corresponding value corresponding to the all-light component of the input image data.
- Preferably, a gradation/luminance conversion step of converting the gradation values of the illumination light component and the all-light component of the input image data into luminance is provided, and the JND corresponding value width acquisition step acquires the JND corresponding value width corresponding to the difference of the converted luminances.
- Preferably, the first reference luminance is a luminance corresponding to the illumination light component or the all-light component of the input image data.
- Preferably, the second reference luminance is a luminance obtained by correcting the first reference luminance based on the intensity of external light, or a luminance set by a user.
- Preferably, the steps are executed on a pixel basis.
- According to another aspect, there is provided an image conversion apparatus comprising: a JND corresponding value width acquisition unit that acquires, based on input image data, a JND corresponding value width corresponding to the reflectance component of the input image data; a luminance width acquisition unit that acquires, with reference to a second reference luminance different from the first reference luminance used as a reference when acquiring the JND corresponding value width, a luminance width corresponding to the JND corresponding value width or to a value obtained by converting it according to a predetermined rule; a corrected reflectance component acquisition unit that acquires the gradation width corresponding to the luminance width as a corrected reflectance component; and a synthesis unit that generates output image data by synthesizing the illumination light component in the input image data, or its corrected component, with the corrected reflectance component.
- FIG. 1 is a block diagram of an image conversion apparatus according to a first embodiment of the present invention; FIGS. 2 to 4 are diagrams for explaining correction of the reflectance component.
- FIG. 1 is a block diagram showing a configuration of an image conversion apparatus 10 according to a first embodiment of the present invention.
- The image conversion apparatus 10 includes a color space conversion unit 1, an extraction unit 3, an illumination light component acquisition unit 5, an illumination light component correction unit 7, a gradation/luminance conversion unit 9, an all-light component acquisition unit 11, a gradation/luminance conversion unit 13, a JND corresponding value width acquisition unit 15, a JND corresponding value width/luminance width conversion unit 17, a luminance width/gradation value width conversion unit 19, and a synthesis unit 21.
- the color space conversion unit 1 converts the color space of the input image data S.
- As the conversion of the color space, for example, conversion from the RGB color space to the HSV color space is performed using a general conversion formula.
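A minimal illustration of such a "general conversion formula", using Python's standard colorsys module (the publication does not prescribe any particular implementation; the pixel value here is hypothetical):

```python
import colorsys

# Standard RGB <-> HSV conversion via the stdlib, with channel values
# normalized to the range 0..1.
r, g, b = 0.8, 0.4, 0.2                      # hypothetical input pixel
h, s, v = colorsys.rgb_to_hsv(r, g, b)       # forward conversion
r2, g2, b2 = colorsys.hsv_to_rgb(h, s, v)    # inverse conversion for output
```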
- the extraction unit 3 is a filter that extracts the illumination light component L from the input image data S.
- an edge-preserving low-pass filter can be used as the filter.
- the edge preserving low-pass filter extracts the illumination light component L of the input image data S by calculating a weighted average of local brightness for the input image data S, and outputs it to the illumination light component acquisition unit 5.
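One common edge-preserving low-pass filter is the bilateral filter. The 1-D sketch below is an illustrative stand-in for extraction unit 3 (the publication does not specify the filter's exact form): a weighted average of local brightness yields the illumination light component L, and the remainder S − L plays the role of the reflectance component R. The signal values are hypothetical.

```python
import math

def bilateral_1d(signal, radius=2, sigma_s=1.5, sigma_r=0.2):
    """Edge-preserving smoothing: spatial * range weighted local average."""
    out = []
    for i, center in enumerate(signal):
        num = den = 0.0
        for k in range(-radius, radius + 1):
            j = min(max(i + k, 0), len(signal) - 1)   # clamp at the borders
            w = (math.exp(-k * k / (2 * sigma_s ** 2)) *                     # spatial weight
                 math.exp(-(signal[j] - center) ** 2 / (2 * sigma_r ** 2)))  # range weight
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

# Illumination component L = filtered signal; reflectance R = S - L
S = [0.1, 0.12, 0.11, 0.9, 0.88, 0.91]   # a step edge (hypothetical data)
L = bilateral_1d(S)
R = [s - l for s, l in zip(S, L)]
```

Because the range weight suppresses contributions from across the step, the edge at index 3 survives in L while small local variation is smoothed into R.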
- the illumination light component acquisition unit 5 acquires the illumination light component L from the extraction unit 3.
- the illumination light component correction unit 7 performs gradation correction on the illumination light component L and outputs a corrected illumination light component L ′.
- The correction method is not particularly limited.
- For example, the correction can be executed using LGain, a parameter that determines the mixing ratio when generating a composite image of the correction component and the original illumination light component L.
- The correction of the illumination light component L may be performed or omitted as necessary.
- The gradation/luminance conversion unit 9a converts the gradation value of the illumination light component L into luminance, and the gradation/luminance conversion unit 9b converts the gradation value of the corrected illumination light component L′ into luminance. Such conversion can be performed according to the characteristics of the display, which is an example of a display device.
- As the conversion method, for example, a mathematical expression defining the relationship between gradation value and luminance, or a lookup table created in advance, can be used. A gradation value can thereby be converted into a luminance and, conversely, a luminance into a gradation value.
- the all-light component acquisition unit 11 acquires the all-light component A, which is the sum of the illumination light component L and the reflectance component R included in the input image data S, and outputs it to the gradation / luminance conversion unit 13.
- The gradation/luminance conversion unit 13 acquires the all-light component A and converts its gradation value into luminance in the same manner as the gradation/luminance conversion unit 9, obtains the first luminance Y1p, and outputs the first luminance Y1p to the JND corresponding value width acquisition unit 15.
- The JND corresponding value width acquisition unit 15 acquires a JND corresponding value width ΔR corresponding to the reflectance component R based on the input image data S. Specifically, the JND corresponding value width ΔR is acquired using the first luminance Y1p acquired from the gradation/luminance conversion unit 13 and the first reference luminance Y1r acquired from the gradation/luminance conversion unit 9. This will be described with reference to FIG. 2.
- FIG. 2 is a graph showing the correspondence between JND corresponding values and luminance; a step of 1 in the JND corresponding value corresponds to the minimum luminance difference of a given target that an average human observer can discern. As FIG. 2 shows, the average human observer can sensitively discern a change in luminance when the luminance is low, but is insensitive to luminance changes when the luminance is high. To simplify the description, it is assumed that correction by the illumination light component correction unit 7 is not executed. First, the point corresponding to the first reference luminance Y1r obtained by the gradation/luminance conversion unit 9a is plotted on the graph. This point A corresponds to the illumination light component L of the input image data S.
- Next, the point corresponding to the first luminance Y1p obtained by the gradation/luminance conversion unit 13 is plotted on the graph.
- This point B corresponds to the all-light component A of the input image data S.
- The JND corresponding value width acquisition unit 15 obtains the JND corresponding value R1r corresponding to point A and the JND corresponding value R1p corresponding to point B, and acquires the difference between the JND corresponding value R1r and the JND corresponding value R1p.
- This difference corresponds to the JND corresponding value width ΔR corresponding to the reflectance component R.
- The JND corresponding value width ΔR can also be regarded as the JND corresponding value width corresponding to the luminance width ΔY1, which is the difference between the first reference luminance Y1r and the first luminance Y1p.
- the JND correspondence value width acquisition unit 15 can obtain the JND correspondence value from the luminance based on the following conversion formula.
- The JND corresponding value width/luminance width conversion unit 17 acquires a second reference luminance Y2r different from the first reference luminance Y1r from the second reference luminance acquisition unit 30. Then, using the second reference luminance Y2r as a reference, it acquires the luminance width ΔY2 corresponding to the JND corresponding value width ΔR or to a value obtained by converting ΔR according to a predetermined rule.
- Here, the second reference luminance Y2r is a luminance obtained by adding the luminance Yp due to external light to the first reference luminance Y1r.
- the second reference luminance acquisition unit 30 can use, for example, an illuminance sensor that measures the surface luminance of the display.
- Next, the point corresponding to the second reference luminance Y2r is plotted on the graph.
- This point A′ corresponds to the illumination light component L whose luminance is raised by external light or by a display setting made by the user.
- From the JND corresponding value R2r corresponding to point A′, the JND corresponding value R2p corresponding to the value that maintains the JND corresponding value width ΔR is calculated, and the point corresponding to the JND corresponding value R2p is plotted on the graph.
- This point B′ corresponds to the all-light component A after the reflectance component R is corrected.
- Next, the luminance Y2r corresponding to point A′ and the luminance Y2p corresponding to point B′ are obtained, and the difference between the luminance Y2r and the luminance Y2p is acquired.
- This difference corresponds to the luminance width ΔY2 corresponding to the corrected reflectance component R.
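The steps above can be sketched as follows. For brevity, a simple logarithmic (Weber-law) stand-in replaces the real JND-corresponding-value curve, and all luminance values are hypothetical; with a purely logarithmic curve, keeping the JND width fixed amounts to keeping the luminance ratio fixed.

```python
import math

# Illustrative stand-in for the JND curve: j(L) = 50 * log10(L / 0.05).
def j(lum):
    return 50.0 * math.log10(lum / 0.05)

def j_inv(val):
    return 0.05 * 10.0 ** (val / 50.0)

Y1r = 80.0            # first reference luminance (illumination component L)
Y1p = 100.0           # first luminance (all-light component A)

dR = j(Y1p) - j(Y1r)  # JND corresponding value width of the reflectance component

Y2r = 80.0 + 40.0     # second reference luminance: external-light luminance Yp = 40 added
R2p = j(Y2r) + dR     # keep the JND corresponding value width unchanged
Y2p = j_inv(R2p)      # second luminance (point B')
dY2 = Y2p - Y2r       # corrected luminance width
```

Note that in the brighter environment the same JND width maps to a larger luminance width (dY2 = 30 versus the original 20 here), which is exactly the compensation the method aims for.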
- When the JND corresponding value is the JND index defined by the DICOM standard, the JND corresponding value width/luminance width conversion unit 17 can obtain the luminance from the JND corresponding value based on the following conversion formula.
- The luminance width/gradation value width conversion unit 19 acquires the luminance width ΔY2 from the JND corresponding value width/luminance width conversion unit 17 and converts the luminance width ΔY2 into a gradation width. The converted gradation width is acquired as the corrected reflectance component R′.
- Besides calculating, from the JND corresponding value R2r, the JND corresponding value R2p corresponding to the value that maintains the JND corresponding value width ΔR, the reflectance component R can also be corrected by the method shown in FIG. 3.
- In this method, a luminance width corresponding to a value obtained by converting the JND corresponding value width ΔR according to a predetermined rule, with the second reference luminance Y2r as a reference, is acquired.
- An arbitrary correction function can be used as the predetermined rule. That is, the value after conversion according to the predetermined rule can be a value obtained by multiplying the JND corresponding value width by a predetermined value or adding a predetermined value to it, or a value obtained by dividing the JND corresponding value width by a predetermined value or subtracting a predetermined value from it. It can also be the output value obtained by inputting the JND corresponding value width into a predetermined correction function, and a table associating the JND corresponding value width with predetermined values can also be used.
- the correction coefficient ⁇ is preferably 0.01 to 10, more preferably 0.1 to 5, and still more preferably 0.5 to 1.5.
- the correction coefficient ⁇ can be an arbitrary value between these two values.
- the correction coefficient ⁇ is 1, that is, when the JND corresponding value R 2p corresponding to the value maintaining the JND corresponding value width ⁇ R is calculated from the JND corresponding value R 2r , the reflectance component in the original environment. Since the JND-corresponding value width ⁇ R of R is maintained even in an environment where the brightness is different from the original environment, the “look” seen from the human eye is appropriately reproduced.
- the second luminance Y 2p has a smaller value, so that an image with reduced brightness of the corrected reflectance component R ′ can be obtained.
- the correction coefficient ⁇ is larger than 1, the second luminance Y 2p has a larger value, so that the contrast of the corrected reflectance component R ′ is enhanced.
- The synthesis unit 21 acquires the corrected illumination light component L′ from the illumination light component correction unit 7 and the corrected reflectance component R′ from the luminance width/gradation value width conversion unit 19, synthesizes them, and outputs the output image data S′.
- The above processing is performed on a pixel basis.
- The synthesis unit 21 may acquire the illumination light component L from the illumination light component acquisition unit 5 instead of acquiring the corrected illumination light component L′ from the illumination light component correction unit 7.
- The range of the output image data S′ may be corrected by a range correction unit (not shown), and the color space of the range-corrected output image data S′ may be converted from the HSV color space back to the RGB color space by a color space inverse conversion unit (not shown).
- When correction of the illumination light component is performed, the gradation/luminance conversion unit 9b converts the gradation value of the corrected illumination light component L′ into luminance and outputs this luminance to the JND corresponding value width/luminance width conversion unit 17.
- The JND corresponding value width/luminance width conversion unit 17 obtains a second reference luminance Y2r′ by adding the luminance input from the gradation/luminance conversion unit 9b and the luminance acquired from the second reference luminance acquisition unit 30.
- The process by which the JND corresponding value width/luminance width conversion unit 17 calculates the luminance from the JND corresponding value may then be performed with the above-described second reference luminance Y2r replaced by the second reference luminance Y2r′.
- The synthesis unit 21 acquires the corrected illumination light component L′ from the illumination light component correction unit 7 and the corrected reflectance component R′ from the luminance width/gradation value width conversion unit 19, and synthesizes them.
- As described above, the JND corresponding value width of the reflectance component R, which is based on human sensory evaluation, is maintained before and after correction, or is converted according to a predetermined rule, so the relative characteristics of the image are maintained; the appearance of the original image can therefore be appropriately reproduced even when the external light environment or the luminance of the display device itself changes. In this way, by exploiting the characteristic that the human eye reacts more strongly to the relative characteristics of an image than to its absolute characteristics, the appearance of the texture in the details of the original image can be appropriately reproduced.
- FIG. 4 is a diagram for explaining correction of the reflectance component according to the second embodiment of the present invention.
- In the first embodiment, the luminance corresponding to the illumination light component L or the corrected illumination light component L′ of the input image data S is used as the first reference luminance Y1r; the second embodiment differs in that the luminance corresponding to the all-light component A is used as the first reference luminance. The configuration of the image conversion apparatus 10 is the same as in the first embodiment, so its description is omitted.
- In the second embodiment, the point B′ corresponding to the second reference luminance corresponds to the all-light component whose luminance is raised by external light or by a display setting made by the user.
- From the JND corresponding value R2p corresponding to point B′, the JND corresponding value R2r that maintains the JND corresponding value width ΔR is calculated, and the point corresponding to the JND corresponding value R2r is plotted on the graph.
- This point A′ corresponds to the illumination light component L after the reflectance component R is corrected.
- Next, the luminance Y2r corresponding to point A′ and the luminance Y2p corresponding to point B′ are obtained, and the difference between them is acquired.
- This difference corresponds to the luminance width ΔY2 corresponding to the corrected reflectance component R.
- the subsequent processing is the same as in the first embodiment.
- Also in the second embodiment, the luminance width corresponding to a value obtained by converting the JND corresponding value width ΔR according to a predetermined rule may be acquired.
- Also in the second embodiment, the relative characteristics of the image are maintained before and after correction by maintaining the JND corresponding value width of the reflectance component R before and after correction, or by converting it according to a predetermined rule.
- the JND corresponding value width ⁇ R corresponding to the reflectance component R there is a method using linear approximation. For example, in FIG. 2 to FIG. 4, 1 is added to the JND corresponding value R 1r corresponding to the point A corresponding to the illumination light component L, and the point on the graph corresponding to the JND corresponding value and the point A 1 is subtracted from the JND corresponding value R 1r to be calculated, and the “slope” between the points on the graph corresponding to the JND corresponding value is calculated. Then, a unit luminance corresponding to 1 JND correspondence is obtained, and an LUT in which the luminance and the JND correspondence value are associated is created. Using such an LUT, the difference luminance between the illumination light component and the reflected light component is expressed by a unit luminance corresponding to 1 JND.
- The image conversion apparatus 10 may be built into the display device, or provided as an external conversion box (set-top box) for the display device. It may also be provided as an ASIC (application-specific integrated circuit), an FPGA (field-programmable gate array), or a DRP (dynamically reconfigurable processor) that implements the functions of the image conversion apparatus 10.
- 1 color space conversion unit
- 3 extraction unit
- 5 illumination light component acquisition unit
- 7 illumination light component correction unit
- 9 gradation / luminance conversion unit
- 11 all-light component acquisition unit
- 13 gradation/luminance conversion unit
- 15 JND corresponding value width acquisition unit
- 17 JND corresponding value width/luminance width conversion unit
- 19 luminance width/gradation value width conversion unit
- 21 synthesis unit
- 10 image conversion apparatus
- 30 second reference luminance acquisition unit
Description
- Examples of the predetermined rule applied to the JND corresponding value width ΔR (α, β, and γ are predetermined constants; point A is the point corresponding to the illumination light component in FIG. 2):
- 1. "JND corresponding value width ΔR × α = corrected JND corresponding value width" (JND corresponding value of point A > 200, correction coefficient = α); "JND corresponding value width ΔR × β = corrected JND corresponding value width" (JND corresponding value of point A < 200, correction coefficient = β)
- 2. "JND corresponding value width ΔR × (α − (JND corresponding value of point A − γ)) = corrected JND corresponding value width" (correction coefficient = (α − (JND corresponding value of point A − γ)))
- 3. "JND corresponding value width ΔR + 0.1 = corrected JND corresponding value width" (correction constant = 0.1)
- 4. "JND corresponding value width ΔR + 0.1(JND corresponding value of point A − γ) = corrected JND corresponding value width" (correction coefficient = 0.1(JND corresponding value of point A − γ))
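The example correction rules above can be sketched as simple functions applied to the JND corresponding value width; the threshold of 200 follows the examples, while the concrete α and β values here are hypothetical.

```python
# Example rules 1 and 3 as correction functions for the JND corresponding
# value width dR. alpha = 1.2 and beta = 0.8 are illustrative choices.
def rule1(dR, jnd_at_point_a, alpha=1.2, beta=0.8):
    # Rule 1: multiply by alpha where the JND corresponding value of
    # point A exceeds 200, and by beta where it is below 200.
    return dR * (alpha if jnd_at_point_a > 200 else beta)

def rule3(dR):
    # Rule 3: add a fixed correction constant of 0.1 to the width.
    return dR + 0.1
```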
Claims (8)
- 1. An image conversion method comprising: a JND corresponding value width acquisition step of acquiring, based on input image data, a JND corresponding value width corresponding to a reflectance component of the input image data; a luminance width acquisition step of acquiring, with reference to a second reference luminance different from a first reference luminance used as a reference when acquiring the JND corresponding value width, a luminance width corresponding to the JND corresponding value width or to a value obtained by converting it according to a predetermined rule; a corrected reflectance component acquisition step of acquiring a gradation width corresponding to the luminance width as a corrected reflectance component; and a synthesis step of generating output image data by synthesizing an illumination light component in the input image data, or a corrected component thereof, with the corrected reflectance component.
- 2. The image conversion method according to claim 1, wherein the predetermined rule uses a predetermined correction function including at least one of a multiplication or division coefficient, an addition or subtraction constant, a table associating the JND corresponding value width with its converted value, or a mathematical expression.
- 3. The image conversion method according to claim 1 or 2, wherein the JND corresponding value width acquisition step acquires, as the JND corresponding value width, a difference between a JND corresponding value corresponding to the illumination light component of the input image data and a JND corresponding value corresponding to an all-light component of the input image data.
- 4. The image conversion method according to claim 3, further comprising a gradation/luminance conversion step of converting gradation values of the illumination light component and the all-light component of the input image data into luminance, wherein the JND corresponding value width acquisition step acquires the JND corresponding value width corresponding to the difference of the converted luminances.
- 5. The image conversion method according to any one of claims 1 to 4, wherein the first reference luminance is a luminance corresponding to the illumination light component or the all-light component of the input image data.
- 6. The image conversion method according to any one of claims 1 to 5, wherein the second reference luminance is a luminance obtained by correcting the first reference luminance based on an intensity of external light, or a luminance set by a user.
- 7. The image conversion method according to any one of claims 1 to 6, wherein the steps are executed on a pixel basis.
- 8. An image conversion apparatus comprising: a JND corresponding value width acquisition unit that acquires, based on input image data, a JND corresponding value width corresponding to a reflectance component of the input image data; a luminance width acquisition unit that acquires, with reference to a second reference luminance different from a first reference luminance used as a reference when acquiring the JND corresponding value width, a luminance width corresponding to the JND corresponding value width or to a value obtained by converting it according to a predetermined rule; a corrected reflectance component acquisition unit that acquires a gradation width corresponding to the luminance width as a corrected reflectance component; and a synthesis unit that generates output image data by synthesizing an illumination light component in the input image data, or a corrected component thereof, with the corrected reflectance component.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/773,047 US10332485B2 (en) | 2015-11-17 | 2015-11-17 | Image converting method and device |
JP2017551423A JP6342085B2 (ja) | 2015-11-17 | 2015-11-17 | Image conversion method and apparatus |
PCT/JP2015/082244 WO2017085786A1 (ja) | 2015-11-17 | 2015-11-17 | Image conversion method and apparatus |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2015/082244 WO2017085786A1 (ja) | 2015-11-17 | 2015-11-17 | Image conversion method and apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017085786A1 true WO2017085786A1 (ja) | 2017-05-26 |
Family
ID=58718540
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2015/082244 WO2017085786A1 (ja) | 2015-11-17 | 2015-11-17 | 画像変換方法及び装置 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10332485B2 (ja) |
JP (1) | JP6342085B2 (ja) |
WO (1) | WO2017085786A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- WO2019159266A1 (ja) * | 2018-02-14 | 2019-08-22 | Eizo株式会社 | Display system and program |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN111445394B (zh) * | 2019-12-10 | 2023-06-20 | 西南技术物理研究所 | Adaptive enhancement method for visible-light images in air-to-ground observation |
- CN111415608B (zh) * | 2020-04-13 | 2021-10-26 | 深圳天德钰科技股份有限公司 | Driving method, driving module, and display device |
- KR20230109764A (ko) * | 2021-02-02 | 2023-07-20 | 에이조 가부시키가이샤 | Image display system, image display device, image display method, and computer program |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012198464A (ja) * | 2011-03-23 | 2012-10-18 | Fujitsu Ten Ltd | Display control device, image display system, and display control method |
JP2012256168A (ja) * | 2011-06-08 | 2012-12-27 | Sharp Corp | Image processing device and imaging device |
JP2013152334A (ja) * | 2012-01-25 | 2013-08-08 | Olympus Corp | Microscope system and microscope observation method |
WO2013145388A1 (ja) * | 2012-03-30 | 2013-10-03 | Eizo Corporation | Gradation correction method, and device or method for determining epsilon filter threshold |
JP2013246265A (ja) * | 2012-05-25 | 2013-12-09 | Hitachi Consumer Electronics Co Ltd | Video display device |
WO2014027569A1 (ja) * | 2012-08-15 | 2014-02-20 | Fujifilm Corporation | Display device |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7433540B1 (en) * | 2002-10-25 | 2008-10-07 | Adobe Systems Incorporated | Decomposing natural image sequences |
US20060146193A1 (en) * | 2004-12-30 | 2006-07-06 | Chaminda Weerasinghe | Method and system for variable color saturation |
KR100879536B1 (ko) * | 2006-10-30 | 2009-01-22 | Samsung Electronics Co., Ltd. | Method and system for improving image quality |
JP5105209B2 (ja) * | 2007-12-04 | 2012-12-26 | Sony Corporation | Image processing device and method, program, and recording medium |
US8411990B1 (en) * | 2009-08-28 | 2013-04-02 | Adobe Systems Incorporated | System and method for decomposing an image into reflectance and shading components |
JP5609080B2 (ja) | 2009-11-30 | 2014-10-22 | Fujitsu Limited | Image processing device, image display device, image processing program, and image processing method |
KR20120118383A (ko) * | 2011-04-18 | 2012-10-26 | Samsung Electronics Co., Ltd. | Image correction device, image processing device using same, and methods therefor |
US9842532B2 (en) * | 2013-09-09 | 2017-12-12 | Nvidia Corporation | Remote display rendering for electronic devices |
US10089913B2 (en) * | 2014-07-25 | 2018-10-02 | Eizo Corporation | Picture conversion method, picture conversion device, computer program for picture conversion, and picture display system |
US9373162B2 (en) * | 2014-10-10 | 2016-06-21 | Ncku Research And Development Foundation | Auto-contrast enhancement system |
WO2016088162A1 (ja) * | 2014-12-01 | 2016-06-09 | Eizo Corporation | Image converting method |
US9621767B1 (en) * | 2015-11-24 | 2017-04-11 | Intel Corporation | Spatially adaptive tone mapping for display of high dynamic range (HDR) images |
US10074162B2 (en) * | 2016-08-11 | 2018-09-11 | Intel Corporation | Brightness control for spatially adaptive tone mapping of high dynamic range (HDR) images |
2015
- 2015-11-17 US US15/773,047 patent/US10332485B2/en active Active
- 2015-11-17 JP JP2017551423A patent/JP6342085B2/ja active Active
- 2015-11-17 WO PCT/JP2015/082244 patent/WO2017085786A1/ja active Application Filing
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019159266A1 (ja) * | 2018-02-14 | 2019-08-22 | Eizo Corporation | Display system and program |
KR20200087865A (ko) * | 2018-02-14 | 2020-07-21 | Eizo Corporation | Display system and program |
CN111557028A (zh) * | 2018-02-14 | 2020-08-18 | Eizo Corporation | Display system and program |
JPWO2019159266A1 (ja) * | 2018-02-14 | 2021-01-28 | Eizo Corporation | Display system and program |
US11056079B2 (en) | 2018-02-14 | 2021-07-06 | Eizo Corporation | Display system and program |
KR102334881B1 (ko) * | 2018-02-14 | 2021-12-02 | Eizo Corporation | Display system and program |
CN111557028B (zh) * | 2018-02-14 | 2023-02-03 | Eizo Corporation | Display system and computer-readable recording medium |
Also Published As
Publication number | Publication date |
---|---|
US20180322847A1 (en) | 2018-11-08 |
US10332485B2 (en) | 2019-06-25 |
JP6342085B2 (ja) | 2018-06-13 |
JPWO2017085786A1 (ja) | 2018-08-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10134359B2 (en) | Device or method for displaying image | |
US9336581B2 (en) | Method for correcting gradations and device or method for determining threshold of epsilon filter | |
JP6342085B2 (ja) | Image converting method and device | |
JP5596075B2 (ja) | Gradation correction device or method therefor | |
JP2008244591A (ja) | Image processing device and method therefor | |
JP2009111541A (ja) | Image processing device and image processing method | |
JP6550138B2 (ja) | Video processing device | |
JP2017187994A (ja) | Image processing device, image processing method, image processing system, and program | |
US20150071562A1 (en) | Image processing apparatus | |
JP2006114005A (ja) | Gradation conversion device, program, electronic camera, and method therefor | |
WO2016088162A1 (ja) | Image converting method | |
KR101389932B1 (ko) | Image tone mapping device and method | |
JP2011128325A (ja) | Display device | |
JP6160426B2 (ja) | Image processing device and program | |
KR20180094949A (ko) | Method, device, terminal equipment, and associated computer program for processing a digital image | |
KR101585187B1 (ko) | Image processing method and device performing integrated multi-scale Retinex in the CIELAB color space | |
JP4359662B2 (ja) | Exposure correction method for color images | |
JP5624896B2 (ja) | Image processing device, image processing program, and image processing method | |
JP2011120299A (ja) | Gradation conversion device, program, electronic camera, and method therefor | |
JP2020123138A (ja) | Image processing device, image processing method, program, and storage medium | |
Haro et al. | Visual acuity in day for night
JP5050141B2 (ja) | Exposure evaluation method for color images | |
US10565756B2 (en) | Combining drawing media texture and image data for display while maintaining the dynamic range of the original image | |
JP5300791B2 (ja) | Image processing device, image processing method, and image processing program | |
JPWO2016199234A1 (ja) | Image processing device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the EPO has been informed by WIPO that EP was designated in this application |
Ref document number: 15908726 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2017551423 Country of ref document: JP Kind code of ref document: A |
WWE | Wipo information: entry into national phase |
Ref document number: 15773047 Country of ref document: US |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: PCT application non-entry in European phase |
Ref document number: 15908726 Country of ref document: EP Kind code of ref document: A1 |