WO2005027042A1 - Visual processing device, visual processing method, visual processing program, and semiconductor device - Google Patents
Visual processing device, visual processing method, visual processing program, and semiconductor device
- Publication number
- WO2005027042A1, PCT/JP2004/013602 (JP2004013602W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- signal
- value
- visual processing
- processing device
- function
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
- G06T5/75—Unsharp masking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/92—Dynamic range modification of images or parts thereof based on global image properties
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/40—Picture signal circuits
- H04N1/409—Edge or detail enhancement; Noise or error suppression
- H04N1/4092—Edge or detail enhancement
Definitions
- Description: Visual processing device, visual processing method, visual processing program, and semiconductor device (Technical Field)
- the present invention relates to a visual processing device, and more particularly to a visual processing device that performs visual processing such as spatial processing or gradation processing of an image signal.
- As visual processing of an image signal, spatial processing and gradation processing are known. Spatial processing applies a filter to a pixel of interest using the pixels surrounding that pixel.
- Techniques are known that use a spatially processed image signal to perform contrast enhancement of the original image and dynamic range (DR) compression.
- In contrast enhancement, the difference between the original image and the blur signal (the sharp component of the image) is added to the original image to sharpen it.
- In DR compression, a part of the blur signal is subtracted from the original image to compress the dynamic range.
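- A minimal sketch of these two operations, assuming a simple box blur as the low-pass spatial filter and an 8-bit image (the gain values k and the filter size are illustrative, not taken from the background art):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def unsharp_contrast_enhance(img, k=0.5, size=5):
    """OS = IS + k * (IS - US): add the sharp component to the original image."""
    us = uniform_filter(img.astype(np.float64), size=size)  # blur (unsharp) signal US
    return np.clip(img + k * (img - us), 0, 255)

def dynamic_range_compress(img, k=0.3, size=5):
    """OS = IS - k * US: subtract part of the blur signal, compressing the
    low-frequency dynamic range while retaining high-frequency detail."""
    us = uniform_filter(img.astype(np.float64), size=size)
    return np.clip(img - k * us, 0, 255)
```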
- the gradation process is a process of converting a pixel value using a look-up table (LUT) for each pixel of interest irrespective of pixels around the pixel of interest, and is sometimes referred to as gamma correction.
- For example, pixel values are converted using a LUT that assigns more gradation to the gradation levels that appear with high frequency (occupy a large area) in the original image.
- As gradation processing using LUTs, two approaches are known: gradation processing that determines and uses one LUT for the entire original image (histogram equalization), and gradation processing that determines a LUT for each image area obtained by dividing the original image into multiple parts (local histogram equalization); see, for example, Japanese Patent Application Laid-Open No. 2000-57373 (page 3, FIGS. 13 to 16).
- FIG. 48 shows a visual processing device 400 that performs edge enhancement and contrast enhancement using unsharp masking.
- The visual processing device 400 shown in FIG. 48 includes a spatial processing unit 401 that performs spatial processing on an input signal IS and outputs an unsharp signal US, and a unit that subtracts the unsharp signal US from the input signal IS and outputs a difference signal DS.
- FIG. 49 shows the enhancement functions R1 to R3.
- the horizontal axis represents the difference signal DS
- the vertical axis represents the emphasis processing signal TS.
- The enhancement function R1 is a linear enhancement function for the difference signal DS.
- The enhancement function R2 is a nonlinear enhancement function for the difference signal DS, and is a function that suppresses excessive contrast.
- That is, a larger suppression effect (suppression by a larger suppression ratio) is exerted on an input X having a large absolute value (X being the value of the difference signal DS).
- The enhancement function R2 is therefore represented by a graph whose slope is smaller for an input X having a larger absolute value.
- The enhancement function R3 is a nonlinear enhancement function for the difference signal DS that suppresses small-amplitude noise components. That is, a larger suppression effect (suppression by a larger suppression ratio) is exerted on an input X having a small absolute value.
- The enhancement function R3 is therefore represented by a graph whose slope is larger for an input X having a larger absolute value.
- The emphasis processing unit 403 performs emphasis processing using one of these enhancement functions R1 to R3.
- The difference signal DS is the sharp component of the input signal IS.
- The intensity of the difference signal DS is converted by the enhancement function and added to the input signal IS; as a result, the edges and contrast of the input signal IS are enhanced in the output signal OS.
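- A minimal sketch of the data flow of the visual processing device 400, with illustrative shapes for the enhancement functions R1 to R3 (the particular gains and the tanh form are assumptions; only the qualitative slope behavior is described above):

```python
import numpy as np

def r1(ds):                      # linear enhancement of the difference signal DS
    return 0.8 * ds

def r2(ds):                      # suppress excessive contrast: smaller slope for large |DS|
    return 64.0 * np.tanh(ds / 64.0)

def r3(ds, t=8.0):               # suppress small-amplitude noise: smaller gain for small |DS|
    return ds * np.abs(ds) / (np.abs(ds) + t)

def device_400(IS, US, R=r2):
    """IS: input signal, US: unsharp signal from the spatial processing unit 401."""
    DS = IS - US                      # difference signal (sharp component)
    TS = R(DS)                        # emphasis-processed signal
    return np.clip(IS + TS, 0, 255)   # output signal OS
```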
- FIG. 50 shows a visual processing device 406 for improving local contrast (intensity) (for example, see Japanese Patent No. 2832954 (page 2, FIG. 5)).
- The visual processing device 406 shown in FIG. 50 includes a spatial processing unit 407, a subtraction unit 408, a first conversion unit 409, a multiplication unit 410, a second conversion unit 411, and an addition unit 412.
- The spatial processing unit 407 performs spatial processing on the input signal IS and outputs an unsharp signal US.
- the subtraction unit 408 subtracts the unsharp signal US from the input signal IS and outputs a difference signal DS.
- the first converter 409 outputs an amplification coefficient signal GS for locally amplifying the difference signal DS based on the intensity of the unsharp signal US.
- the multiplier 410 multiplies the difference signal DS by the amplification coefficient signal GS, and outputs a contrast emphasis signal HS obtained by locally amplifying the difference signal DS.
- The second conversion unit 411 locally corrects the intensity of the unsharp signal US and outputs a corrected unsharp signal AS.
- the adder 412 adds the contrast enhancement signal HS and the corrected unsharp signal AS, and outputs an output signal OS.
- the amplification coefficient signal GS is a nonlinear weighting coefficient for locally optimizing the contrast of a portion where the contrast is not appropriate in the input signal IS. Therefore, in the input signal IS, an appropriate portion of the contrast is output as it is, and an inappropriate portion is output after being optimized.
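- A minimal data-flow sketch of the visual processing device 406 (r5 and r6 stand for the curves of the first and second conversion units; their actual shapes are not given here):

```python
import numpy as np

def device_406(IS, US, r5, r6):
    """OS = R6(US) + R5(US) * (IS - US)."""
    DS = IS - US        # subtraction unit 408: difference signal
    GS = r5(US)         # first conversion unit 409: amplification coefficient signal
    HS = GS * DS        # multiplication unit 410: contrast emphasis signal
    AS = r6(US)         # second conversion unit 411: corrected unsharp signal
    return np.clip(HS + AS, 0, 255)   # addition unit 412: output signal OS
```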
- FIG. 51 shows a visual processing device 416 that performs dynamic range compression (for example, see Japanese Patent Application Laid-Open No. 2001-298619 (page 3, FIG. 9)).
- The visual processing device 416 shown in FIG. 51 includes a spatial processing unit 417 that performs spatial processing on the input signal IS and outputs an unsharp signal US, an LUT operation unit 418 that inverts the unsharp signal US using an LUT and outputs an LUT-processed signal LS, and an addition unit 419 that adds the input signal IS and the LUT-processed signal LS and outputs an output signal OS.
- The LUT-processed signal LS is added to the input signal IS to compress the dynamic range of the low-frequency components of the input signal IS (the frequency components below the cutoff frequency of the spatial processing unit 417). As a result, the high-frequency components are retained while the dynamic range of the input signal IS is compressed. (Disclosure of the Invention)
- It is required that a person who views a visually processed image be able to obtain an image with a higher visual effect. For example, when a visually processed image is displayed, the displayed image is viewed under the influence of the display environment. It is therefore an object of the present invention to enable a person who views a visually processed image to obtain an image with a higher visual effect.
- the visual processing device includes a parameter output unit and a conversion unit.
- the parameter output means outputs a brightness adjustment parameter based on a parameter representing the ambient light.
- The conversion unit converts the luminance of the target pixel based on the brightness adjustment parameter output from the parameter output unit, the luminance of the target pixel to be subjected to visual processing, and the luminance of the peripheral pixels located around the target pixel.
- the parameter representing the ambient light is measured by, for example, an optical sensor that detects the intensity of the light, and is input to the parameter output unit.
- the parameter representing the ambient light is created by the user's judgment and input to the parameter output means.
- The brightness adjustment parameter is, for example, a look-up table (LUT) that stores the converted brightness of the target pixel for the brightness of the target pixel and the brightness of the peripheral pixels (or for the result of an operation on those values), or coefficient matrix data for converting the brightness of the target pixel, the brightness of the peripheral pixels, or the result of an operation on those values. The brightness adjustment parameter may further include the parameter representing the ambient light.
- The parameter output means, for example, selects and outputs a brightness adjustment parameter corresponding to the parameter representing the ambient light from a plurality of brightness adjustment parameters, or generates and outputs the brightness adjustment parameter by an operation corresponding to the parameter representing the ambient light.
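- A minimal sketch of this parameter output / conversion structure; the lux thresholds, the table contents, and the gain-based conversion formula are illustrative assumptions, not values specified by the invention:

```python
import numpy as np

# Hypothetical brightness adjustment parameters keyed by ambient-light level;
# each entry is a contrast gain used by the conversion unit below.
BRIGHTNESS_PARAMS = {"dark": 0.4, "normal": 0.7, "bright": 1.0}

def parameter_output(ambient_lux):
    """Select a brightness adjustment parameter from the parameter representing
    the ambient light (here a lux reading from an optical sensor)."""
    if ambient_lux < 50:
        return BRIGHTNESS_PARAMS["dark"]
    if ambient_lux < 500:
        return BRIGHTNESS_PARAMS["normal"]
    return BRIGHTNESS_PARAMS["bright"]

def conversion_unit(target_luma, surround_luma, gain):
    """Convert the luminance of the target pixel from its own luminance, the
    luminance of the peripheral pixels, and the brightness adjustment parameter."""
    return np.clip(surround_luma + gain * (target_luma - surround_luma), 0, 255)
```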
- In the visual processing device according to claim 1, the parameter output unit may output a brightness adjustment parameter based on the parameter representing the ambient light and an external parameter input from outside.
- the external parameter is, for example, a parameter representing a visual effect required by a user who views an image. More specifically, it is a value such as contrast required by the user who views the image (the same applies to this section below).
- the parameter output means outputs a brightness adjustment parameter based on a parameter representing environmental light and an external parameter.
- the brightness adjustment parameter may further include, for example, an external parameter.
- In the visual processing device according to claim 1, the parameter output unit may have a first mode for outputting a brightness adjustment parameter based on the parameter representing the ambient light and a second mode for outputting a brightness adjustment parameter based on the parameter representing the ambient light and an external parameter input from outside, the two modes being switched based on a switching signal.
- In the first mode, a brightness adjustment parameter corresponding to the ambient light is output; in the second mode, a brightness adjustment parameter corresponding to the ambient light and the external parameter is output.
- In the first mode, for example, a predetermined brightness adjustment parameter preset in the system is output; in the second mode, for example, the user sets values such as the contrast required when viewing the image, and a brightness adjustment parameter corresponding to the set values and the ambient light is output.
- With the visual processing device according to the present invention, it becomes possible to switch between using a value such as a contrast set by the user who views the image and using a default value preset in the system.
- In the visual processing device according to claim 1, the conversion unit may perform an operation that enhances the difference or the ratio between the luminance of the target pixel and the luminance of the peripheral pixels.
- the emphasis operation includes not only the emphasis in the positive direction but also the emphasis in the negative direction.
- it includes a process of smoothing the brightness of the target pixel and the brightness of the peripheral pixels, and a process of enhancing local contrast.
- With the visual processing device according to the present invention, it is possible, for example, to enhance local contrast and thereby maintain the contrast that is perceived in an environment where ambient light is present.
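- A minimal sketch of a conversion unit that enhances the ratio between the target-pixel luminance and the average luminance of its peripheral pixels; the exponent-based enhancement and the window size are illustrative choices:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def enhance_local_contrast(luma, ambient_gain, size=9, eps=1.0):
    """ambient_gain would be supplied by the parameter output unit according
    to the ambient light; larger values enhance local contrast more strongly."""
    luma = luma.astype(np.float64)
    surround = uniform_filter(luma, size=size)     # smoothed peripheral luminance
    ratio = (luma + eps) / (surround + eps)        # local contrast
    return np.clip(surround * ratio ** (1.0 + ambient_gain), 0, 255)
```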
- In the visual processing device according to claim 1, a time change adjusting unit may control the time change of the parameter representing the ambient light or of the brightness adjustment parameter.
- the time change adjusting unit controls a time change such as, for example, slowing down the response of the parameter over time or delaying the response of the parameter over time.
- The time change adjusting unit may be configured using, for example, a smoothing filter such as an IIR filter, or a means that outputs a value obtained by integrating the successive parameter values or an average of the integrated values.
- With the visual processing device of the present invention, controlling the time change of the parameter representing the ambient light or of the brightness adjustment parameter makes it possible, for example, to suppress a sudden change in the parameter and thereby suppress flickering of the display screen.
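- A minimal sketch of a time change adjusting unit built from a first-order IIR smoothing filter; the coefficient alpha is an illustrative value:

```python
class TimeChangeAdjuster:
    """Smooth the ambient-light parameter over time to suppress sudden changes."""
    def __init__(self, alpha=0.05):
        self.alpha = alpha
        self.state = None

    def update(self, ambient_param):
        if self.state is None:
            self.state = float(ambient_param)
        else:
            # y[n] = (1 - alpha) * y[n-1] + alpha * x[n]
            self.state = (1.0 - self.alpha) * self.state + self.alpha * float(ambient_param)
        return self.state
```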
- the visual processing method includes a parameter output step and a conversion step.
- the parameter output step outputs a brightness adjustment parameter based on a parameter representing the ambient light.
- In the conversion step, the luminance of the target pixel is converted based on the brightness adjustment parameter output in the parameter output step, the luminance of the target pixel to be subjected to visual processing, and the luminance of the peripheral pixels located around the target pixel.
- the parameter representing the ambient light is measured by, for example, an optical sensor that detects the intensity of light.
- the parameter representing the ambient light is created by the user's judgment.
- The brightness adjustment parameter is, for example, a look-up table (LUT) that stores the converted brightness of the target pixel for the brightness of the target pixel and the brightness of the peripheral pixels (or for the result of an operation on those values), or coefficient matrix data for converting the brightness of the target pixel, the brightness of the peripheral pixels, or the result of an operation on those values. The brightness adjustment parameter may further include the parameter representing the ambient light.
- In the parameter output step, for example, a brightness adjustment parameter corresponding to the parameter representing the ambient light is selected from a plurality of brightness adjustment parameters and output, or the brightness adjustment parameter is generated and output by an operation corresponding to the parameter representing the ambient light.
- The visual processing method according to the present invention makes it possible to realize visual processing according to the ambient light, that is, visual processing with a higher visual effect.
- a visual processing program is a program for causing a computer to perform a visual processing method.
- the visual processing method includes a parameter output step and a conversion step.
- the parameter output step outputs a brightness adjustment parameter based on the parameter representing the ambient light.
- In the conversion step, the brightness of the target pixel is converted based on the brightness adjustment parameter output in the parameter output step, the brightness of the target pixel to be subjected to visual processing, and the brightness of the peripheral pixels located around the target pixel.
- the parameter representing the ambient light is measured by, for example, an optical sensor that detects the intensity of light.
- the parameter representing the ambient light is created by the user's judgment.
- The brightness adjustment parameter is, for example, a look-up table (LUT) that stores the converted brightness of the target pixel for the brightness of the target pixel and the brightness of the peripheral pixels (or for the result of an operation on those values), or coefficient matrix data for converting the brightness of the target pixel, the brightness of the peripheral pixels, or the result of an operation on those values. The brightness adjustment parameter may further include the parameter representing the ambient light.
- In the parameter output step, for example, a brightness adjustment parameter corresponding to the parameter representing the ambient light is selected from a plurality of brightness adjustment parameters and output, or the brightness adjustment parameter is generated and output by an operation corresponding to the parameter representing the ambient light.
- the visual processing program according to the present invention makes it possible to realize visual processing according to ambient light. That is, it is possible to realize visual processing with a higher visual effect.
- the semiconductor device includes a parameter output unit and a conversion unit.
- the parameter output unit outputs a brightness adjustment parameter based on a parameter representing the ambient light.
- The conversion unit converts the luminance of the target pixel based on the luminance adjustment parameter output from the parameter output unit, the luminance of the target pixel to be subjected to visual processing, and the luminance of the peripheral pixels located around the target pixel.
- The parameter representing the ambient light is measured by, for example, an optical sensor that detects the intensity of light and is input to the parameter output unit. Alternatively, the parameter representing the ambient light is created by the user's judgment and input to the parameter output unit.
- The brightness adjustment parameter is, for example, a look-up table (LUT) that stores the converted brightness of the target pixel for the brightness of the target pixel and the brightness of the peripheral pixels (or for the result of an operation on those values), or coefficient matrix data for converting the brightness of the target pixel, the brightness of the peripheral pixels, or the result of an operation on those values. The brightness adjustment parameter may further include the parameter representing the ambient light.
- The parameter output unit, for example, selects and outputs a brightness adjustment parameter corresponding to the parameter representing the ambient light from a plurality of brightness adjustment parameters, or generates and outputs the brightness adjustment parameter by an operation corresponding to the parameter representing the ambient light.
- visual processing according to environmental light can be realized. That is, it is possible to realize visual processing with a higher visual effect.
- With the visual processing device of the present invention, it is possible for a person who views a visually processed image to obtain an image with a higher visual effect.
- FIG. 1 is a block diagram (first embodiment) for explaining the structure of the visual processing device 1.
- FIG. 2 is an example of profile data (first embodiment).
- FIG. 3 is a flowchart (first embodiment) for explaining the visual processing method.
- FIG. 4 is a block diagram (first embodiment) illustrating the structure of the visual processing unit 500.
- FIG. 5 is an example of profile data (first embodiment).
- FIG. 6 is a block diagram (first embodiment) for explaining the structure of the visual processing device 520.
- FIG. 7 is a block diagram (first embodiment) for explaining the structure of the visual processing device 525.
- FIG. 8 is a block diagram (first embodiment) for explaining the structure of the visual processing device 530.
- FIG. 9 is a block diagram (first embodiment) for explaining the structure of the profile data registration device 701.
- FIG. 10 is a flowchart (first embodiment) for explaining the visual processing profile creation method.
- FIG. 11 is a block diagram (first embodiment) for explaining the structure of the visual processing device 901.
- FIG. 12 is a graph (first embodiment) showing the relationship between the input signal IS' and the output signal OS' when the change degree function fk(z) is changed.
- FIG. 13 is a graph (first embodiment) showing the change degree functions f1(z) and f2(z).
- FIG. 14 is a block diagram (first embodiment) illustrating the structure of the visual processing device 905.
- FIG. 15 is a block diagram (first embodiment) illustrating the structure of the visual processing device 11.
- FIG. 16 is a block diagram (first embodiment) for explaining the structure of the visual processing device 21.
- FIG. 17 is an explanatory diagram (first embodiment) illustrating the dynamic range compression function F4.
- FIG. 18 is an explanatory diagram (first embodiment) illustrating the enhancement function F5.
- FIG. 19 is a block diagram (first embodiment) illustrating the structure of the visual processing device 31.
- FIG. 20 is a block diagram (first embodiment) illustrating the structure of the visual processing device 41.
- FIG. 21 is a block diagram (first embodiment) illustrating the structure of the visual processing device 51.
- FIG. 22 is a block diagram (first embodiment) illustrating the structure of the visual processing device 61.
- FIG. 23 is a block diagram (first embodiment) illustrating the configuration of the visual processing device 71.
- FIG. 24 is a block diagram (second embodiment) illustrating the configuration of the visual processing device 600.
- FIG. 25 is a graph (second embodiment) for explaining the conversion by the expression M20.
- FIG. 26 is a graph (second embodiment) for explaining the conversion by the expression M2.
- FIG. 27 is a graph (second embodiment) illustrating the conversion by the equation M21.
- FIG. 28 is a flowchart (second embodiment) for explaining the visual processing method.
- FIG. 29 is a graph (second embodiment) showing the tendency of the function α1(A).
- FIG. 30 is a graph (second embodiment) showing the tendency of the function α2(A).
- FIG. 31 is a graph (second embodiment) showing the tendency of the function α3(A).
- FIG. 32 is a graph (second embodiment) showing the tendency of the function α4(A, B).
- FIG. 33 is a block diagram (second embodiment) illustrating the structure of an actual contrast setting unit 605 as a modification.
- FIG. 34 is a block diagram (second embodiment) illustrating the structure of the actual contrast setting unit 605 as a modification.
- FIG. 35 is a flowchart (second embodiment) for explaining the operation of the control unit 605e.
- FIG. 36 is a block diagram (second embodiment) for explaining the structure of a visual processing device 600 including the color difference correction processing unit 608.
- FIG. 37 is an explanatory diagram for explaining the outline of the color difference correction process (second embodiment).
- FIG. 38 is a flowchart (second embodiment) for explaining the estimation calculation in the color difference correction processing section 608.
- FIG. 39 is a block diagram (second embodiment) for explaining the structure of a visual processing device 600 as a modified example.
- FIG. 40 is a block diagram (third embodiment) illustrating the structure of the visual processing device 910.
- FIG. 41 is a block diagram (third embodiment) illustrating the structure of the visual processing device 920.
- FIG. 42 is a block diagram (third embodiment) illustrating the structure of the visual processing device 920'.
- FIG. 43 is a block diagram (third embodiment) illustrating the structure of the visual processing device 920″.
- FIG. 44 is a block diagram (fourth embodiment) illustrating the entire configuration of the content supply system.
- FIG. 45 is an example (fourth embodiment) of a mobile phone equipped with the visual processing device of the present invention.
- FIG. 46 is a block diagram (fourth embodiment) illustrating the configuration of a mobile phone.
- FIG. 47 is an example (fourth embodiment) of a digital broadcasting system.
- FIG. 48 is a block diagram (background art) illustrating the structure of the visual processing device 400 using unsharp masking.
- FIG. 49 is an explanatory diagram (background art) illustrating the enhancement functions R1 to R3.
- FIG. 50 is a block diagram (background art) illustrating the structure of a visual processing device 406 that improves local contrast.
- FIG. 51 is a block diagram (background art) illustrating the structure of a visual processing device 416 that compresses a dynamic range.
- first to fourth embodiments as the best mode of the present invention will be described.
- a visual processing device using two-dimensional LUT will be described.
- a visual processing device that corrects ambient light when ambient light is present in the environment for displaying an image will be described.
- A visual processing device 1 using a two-dimensional LUT according to a first embodiment of the present invention will be described with reference to the drawings. Modified examples of the visual processing device will be described with reference to FIGS. 11 to 14. Further, visual processing devices that realize visual processing equivalent to the visual processing device 1 will also be described.
- the visual processing device 1 is a device that performs visual processing such as spatial processing and gradation processing of an image signal.
- the visual processing device 1 constitutes an image processing device together with a device that performs color processing of an image signal in a device that handles images such as a computer, a television, a digital camera, a mobile phone, a PDA, a printer, and a scanner.
- FIG. 1 shows the basic configuration of the visual processing device 1, which performs visual processing on an image signal (input signal IS) and outputs a visually processed image (output signal OS).
- The visual processing device 1 includes a spatial processing unit 2 that acquires the input signal IS, performs spatial processing on the luminance value of each pixel of the original image, and outputs an unsharp signal US, and a visual processing unit 3 that performs visual processing on the input signal IS and the unsharp signal US for the same pixel and outputs an output signal OS.
- The spatial processing unit 2 obtains the unsharp signal US using, for example, a low-pass spatial filter that passes only the low-frequency components of the input signal IS.
- As the low-pass spatial filter, an FIR (Finite Impulse Response) low-pass spatial filter or an IIR (Infinite Impulse Response) low-pass spatial filter, commonly used to generate unsharp signals, may be used.
- The visual processing unit 3 has a two-dimensional LUT 4 that gives the relationship among the input signal IS, the unsharp signal US, and the output signal OS; it refers to the two-dimensional LUT 4 using the input signal IS and the unsharp signal US and outputs the output signal OS.
- the profile data has a row (or column) corresponding to each pixel value of the input signal IS and a column (or row) corresponding to each pixel value of the unsharp signal US.
- the pixel value of the output signal OS corresponding to the combination of the input signal IS and the unsharp signal US is stored.
- The profile data is registered in the two-dimensional LUT 4 by a profile data registration device 8 that is built into or connected to the visual processing device 1.
- The profile data registration device 8 stores a plurality of profile data created in advance by a personal computer (PC) or the like. For example, a plurality of profile data that realize contrast enhancement, dynamic range compression processing, or gradation correction (for details, see <Profile Data> below) are stored.
- the visual processing device 1 can change the registered contents of the profile data of the two-dimensional LUT 4 using the profile data registration device 8 and realize various visual processes.
- An example of profile data is shown in Figure 2.
- The profile data shown in FIG. 2 is profile data for causing the visual processing device 1 to realize processing equivalent to that of the visual processing device 400 shown in FIG. 48.
- The profile data is represented in a 64 × 64 matrix format. The column direction (vertical direction) shows the upper 6 bits of the luminance value of the input signal IS expressed in 8 bits, and the row direction (horizontal direction) shows the upper 6 bits of the luminance value of the unsharp signal US expressed in 8 bits. The value of the output signal OS is shown in 8 bits as the matrix element for each pair of luminance values.
- The value C (the value of the output signal OS) of each element of the profile data shown in FIG. 2 is obtained from the value A of the input signal IS (for example, the value obtained by truncating the lower 2 bits of the input signal IS expressed in 8 bits) and the value B of the unsharp signal US (for example, the value obtained by truncating the lower 2 bits of the unsharp signal US expressed in 8 bits) as C = A + 0.5 * (A - B) (hereinafter, expression M11).
- The value C obtained by expression M11 may be negative; in that case, the element of the profile data corresponding to the value A of the input signal IS and the value B of the unsharp signal US may be set to 0. The value C obtained by expression M11 may also saturate, that is, exceed 255, the maximum value that can be expressed in 8 bits; in that case, the element of the profile data corresponding to the value A of the input signal IS and the value B of the unsharp signal US may be set to 255.
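- A minimal sketch of building the 64 x 64 profile data of FIG. 2 from expression M11 and of the table lookup of step S12; restoring A and B to the 8-bit scale by shifting the upper 6 bits back is an assumption about how the truncated values enter M11:

```python
import numpy as np

def build_profile_m11():
    lut = np.zeros((64, 64), dtype=np.uint8)
    for a_hi in range(64):
        for b_hi in range(64):
            A = a_hi << 2                 # upper 6 bits of IS, lower 2 bits set to 0
            B = b_hi << 2                 # upper 6 bits of US, lower 2 bits set to 0
            C = A + 0.5 * (A - B)         # expression M11
            lut[a_hi, b_hi] = int(np.clip(C, 0, 255))   # negative -> 0, saturated -> 255
    return lut

def apply_profile(lut, IS, US):
    """Refer to the two-dimensional LUT with the upper 6 bits of IS and US."""
    return lut[IS >> 2, US >> 2]
```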
- The function R5 is a function that outputs the amplification coefficient signal GS from the unsharp signal US in the first conversion unit 409, and the function R6 is a function that outputs the corrected unsharp signal AS from the unsharp signal US in the second conversion unit 411.
- Similarly, processing equivalent to that of the visual processing device 416 shown in FIG. 51 can be realized.
- The function R8 is a function that outputs the LUT-processed signal LS from the unsharp signal US.
- When the value C of an element of the profile data obtained by expression M12 or M13 falls outside the range 0 ≤ C ≤ 255, the value C of that element may be set to 0 or 255, respectively.
- FIG. 3 is a flowchart illustrating a visual processing method in the visual processing device 1.
- the visual processing method shown in FIG. 3 is realized by hardware in the visual processing device 1 and performs visual processing of the input signal I S (see FIG. 1).
- the input signal IS is spatially processed by a low-pass spatial filter (step S11), and an unsharp signal US is obtained. Further, the value of the two-dimensional LUT 4 for the input signal IS and the unsharp signal US is referred to, and the output signal OS is output (step S12). The above processing is performed for each pixel input as the input signal IS.
- Each step of the visual processing method shown in FIG. 3 may be realized as a visual processing program by a computer or the like.
- The visual processing device 1 performs visual processing using profile data created based on a two-dimensional function of the value A of the input signal IS and the value B of the unsharp signal US. For this reason, pixels of the same density at different locations in the image are not converted uniformly; their brightness can be adjusted, taking the surrounding information into account, so that they are brightened or darkened individually. More specifically, a background of the same density can be brightened without changing the density of, for example, a person's hair in the image.
- the visual processing device 1 uses 2D LUT 4 to perform visual processing of the input signal IS.
- Further, the visual processing device 1 has a hardware configuration that does not depend on the visual processing effect to be realized. That is, the visual processing device 1 can be composed of general-purpose hardware, which is effective in reducing hardware costs.
- the profile data registered in the two-dimensional LUT 4 can be changed by the profile data registration device 8. Therefore, the visual processing device 1 can realize various visual processes by changing the profile data without changing the hardware configuration of the visual processing device 1. More specifically, the visual processing device 1 can simultaneously perform spatial processing and gradation processing.
- The profile data registered in the two-dimensional LUT 4 can be calculated in advance, and no matter how complicated the profile data is, the time required for visual processing using it is constant. Therefore, even if the visual processing would have a complicated configuration when implemented in hardware or software, the processing time of the visual processing device 1 does not depend on the complexity of the visual processing, so the visual processing can be sped up. <Modification>
- the profile data of the 64 ⁇ 64 matrix format has been described.
- the effect of the present invention does not depend on the size of the profile data.
- the two-dimensional LUT 4 can have profile data corresponding to all possible combinations of values of the input signal IS and the unsharp signal US.
- The profile data may also be in the form of a 256 x 256 matrix. In this case, the memory capacity required for the two-dimensional LUT 4 increases, but more accurate visual processing can be realized.
- In the above, it was explained that the profile data stores the value of the output signal OS for the upper 6 bits of the luminance value of the input signal IS expressed in 8 bits and the upper 6 bits of the luminance value of the unsharp signal US expressed in 8 bits.
- The visual processing device 1 may further include an interpolation unit that linearly interpolates the value of the output signal OS based on adjacent profile data elements and the magnitudes of the lower 2 bits of the input signal IS and the unsharp signal US.
- the interpolation unit may be provided in the visual processing unit 3 and may output a value obtained by linearly interpolating the value stored in the two-dimensional LUT 4 as the output signal OS.
- FIG. 4 shows a visual processing unit 500 including an interpolation unit 501 as a modification of the visual processing unit 3.
- The visual processing unit 500 includes a two-dimensional LUT 4 that gives the relationship among the input signal IS, the unsharp signal US, and a pre-interpolation output signal NS, and an interpolation unit 501 that receives the pre-interpolation output signal NS, the input signal IS, and the unsharp signal US and outputs the output signal OS.
- The two-dimensional LUT 4 stores the value of the pre-interpolation output signal NS for the upper 6 bits of the luminance value of the input signal IS expressed in 8 bits and the upper 6 bits of the luminance value of the unsharp signal US expressed in 8 bits.
- the value of the output signal NS before interpolation is stored as an 8-bit value, for example.
- More specifically, the values of the pre-interpolation output signal NS are stored for the four lattice points surrounding the interval that contains the input: (the upper 6-bit value of the input signal IS, the upper 6-bit value of the unsharp signal US), (the smallest 6-bit value exceeding the upper 6-bit value of the input signal IS, the upper 6-bit value of the unsharp signal US), (the upper 6-bit value of the input signal IS, the smallest 6-bit value exceeding the upper 6-bit value of the unsharp signal US), and (the smallest 6-bit value exceeding the upper 6-bit value of the input signal IS, the smallest 6-bit value exceeding the upper 6-bit value of the unsharp signal US).
- The lower 2 bits of the input signal IS and the lower 2 bits of the unsharp signal US are input to the interpolation unit 501, which uses them to linearly interpolate the four values of the pre-interpolation output signal NS output by the two-dimensional LUT 4. More specifically, a weighted average of the four values of the pre-interpolation output signal NS is calculated using the lower 2 bits of the input signal IS and the lower 2 bits of the unsharp signal US, and is output as the output signal OS.
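- A minimal sketch of the interpolation unit 501: the upper 6 bits of IS and US select four neighbouring pre-interpolation values NS, and the lower 2 bits give the weights of the weighted average (the edge clamping is an added assumption):

```python
def interpolated_lookup(lut64, IS, US):
    a_hi, b_hi = IS >> 2, US >> 2
    wa, wb = (IS & 3) / 4.0, (US & 3) / 4.0        # weights from the lower 2 bits
    a1, b1 = min(a_hi + 1, 63), min(b_hi + 1, 63)  # clamp at the table edge
    n00 = float(lut64[a_hi, b_hi])
    n10 = float(lut64[a1, b_hi])
    n01 = float(lut64[a_hi, b1])
    n11 = float(lut64[a1, b1])
    top = n00 * (1 - wa) + n10 * wa
    bottom = n01 * (1 - wa) + n11 * wa
    return top * (1 - wb) + bottom * wb            # output signal OS
```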
- interpolation unit 501 may perform linear interpolation only on either the input signal I S or the unsharp signal U S.
- In the spatial processing of the input signal IS of the target pixel, the average value (simple average or weighted average), the maximum value, the minimum value, or the median value of the input signals IS of the target pixel and its peripheral pixels may be output as the unsharp signal US.
- Alternatively, the average value, the maximum value, the minimum value, or the median value of only the peripheral pixels of the target pixel may be output as the unsharp signal US.
- In the above, the value C of each element of the profile data is created based on the linear expression M11 of the value A of the input signal IS and the value B of the unsharp signal US.
- However, the value C of each element of the profile data may instead be created based on a nonlinear function of the value A of the input signal IS, or based on a two-dimensional nonlinear function of the value A of the input signal IS and the value B of the unsharp signal US.
- FIG. 5 shows an example of such profile data.
- the profile data shown in FIG. 5 is profile data for causing the visual processing device 1 to realize contrast enhancement suited to visual characteristics.
- The profile data is expressed in a 64 × 64 matrix format; the column direction (vertical direction) shows the upper 6 bits of the luminance value of the input signal IS expressed in 8 bits, and the row direction (horizontal direction) shows the upper 6 bits of the luminance value of the unsharp signal US expressed in 8 bits. The value of the output signal OS is shown in 8 bits as the matrix element for each pair of luminance values.
- The value C (the value of the output signal OS) of each element of the profile data shown in FIG. 5 is obtained from the value A of the input signal IS (for example, the value obtained by truncating the lower 2 bits of the input signal IS expressed in 8 bits) and the value B of the unsharp signal US (for example, the value obtained by truncating the lower 2 bits of the unsharp signal US expressed in 8 bits), using a conversion function F1, the inverse conversion function F2 of the conversion function F1, and an enhancement function F3, as C = F2(F1(A) + F3(F1(A) - F1(B))) (hereinafter, expression M14).
- For example, the conversion function F1 is a common logarithmic function, and the inverse conversion function F2 is an exponential function (antilogarithm), the inverse of the common logarithmic function. The enhancement function F3 is any one of the enhancement functions R1 to R3 described with reference to FIG. 49.
- The value C obtained by expression M14 may be negative; in that case, the element of the profile data corresponding to the value A of the input signal IS and the value B of the unsharp signal US may be set to 0. The value C obtained by expression M14 may also saturate, that is, exceed 255, the maximum value that can be expressed in 8 bits; in that case, the element of the profile data corresponding to the value A of the input signal IS and the value B of the unsharp signal US may be set to 255.
- In FIG. 5, the elements of the profile data obtained in this way are displayed as contour lines. A more detailed explanation of the nonlinear profile data is given in <Profile Data> below.
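- A minimal sketch of building profile data from expression M14 with F1 = log10, F2 = antilog, and a simple linear F3 standing in for one of R1 to R3; clamping A and B to at least 1 to avoid log10(0) and the gain of F3 are illustrative assumptions:

```python
import numpy as np

def f3(x, gain=0.5):                      # illustrative enhancement function F3
    return gain * x

def build_profile_m14():
    lut = np.zeros((64, 64), dtype=np.uint8)
    for a_hi in range(64):
        for b_hi in range(64):
            A = max(a_hi << 2, 1)
            B = max(b_hi << 2, 1)
            c = 10.0 ** (np.log10(A) + f3(np.log10(A) - np.log10(B)))   # expression M14
            lut[a_hi, b_hi] = int(np.clip(c, 0, 255))
    return lut
```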
- the profile data provided in the two-dimensional LUT 4 may include a plurality of gradation conversion curves (gamma curves) that realize gradation correction of the input signal IS.
- Each tone conversion curve is a monotonically increasing function, such as a gamma function having a different gamma coefficient, and is associated with the value of the unsharp signal US. The association is performed such that, for example, a gamma function having a large gamma coefficient is selected for a small value of the unsharp signal US.
- the unsharp signal US plays a role as a selection signal for selecting at least one gradation conversion curve from a group of gradation conversion curves included in the profile data.
- the gradation conversion of the value A of the input signal IS is performed using the gradation conversion curve selected by the value B of the unsharp signal US.
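- A minimal sketch of profile data built as a family of gradation conversion (gamma) curves selected by the unsharp signal US; the gamma range and the form output = input^(1/gamma) are illustrative, chosen so that a small US value selects a curve with a larger gamma coefficient:

```python
import numpy as np

def build_gamma_profile(gammas=np.linspace(2.2, 0.6, 64)):
    lut = np.zeros((64, 64), dtype=np.uint8)
    a = np.arange(64) / 63.0                          # normalized input signal IS
    for b_hi in range(64):                            # unsharp signal US selects the curve
        g = gammas[b_hi]
        lut[:, b_hi] = np.clip(255.0 * a ** (1.0 / g), 0, 255).astype(np.uint8)
    return lut
```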
- the profile data registration device 8 is built in or connected to the visual processing device 1, stores a plurality of profile data created in advance by a PC or the like, and changes the registration content of the two-dimensional LUT 4.
- the profile data stored in the profile data registration device 8 is created by a PC installed outside the visual processing device 1.
- the profile data registration device 8 acquires profile data from a PC via a network or a recording medium.
- The profile data registration device 8 registers a plurality of stored profile data in the two-dimensional LUT 4 according to predetermined conditions. This will be described in detail with reference to FIGS. 6 to 8. Portions having substantially the same functions as those of the visual processing device 1 described with reference to FIG. 1 are given the same reference numerals.
- FIG. 6 shows a block diagram of a visual processing device 520 that determines the image of the input signal IS and switches the profile data registered in the two-dimensional LUT 4 based on the determination result.
- The visual processing device 520 has the same structure as the visual processing device 1 shown in FIG. 1, and further includes a profile data registration unit 521 having the same function as the profile data registration device 8 and an image determination unit 522.
- The image determination unit 522 receives the input signal IS and outputs a determination result SA for the input signal IS.
- The profile data registration unit 521 receives the determination result SA and outputs profile data PD selected based on the determination result SA.
- The image determination unit 522 determines the image of the input signal IS; in this determination, the brightness of the input signal IS is determined by acquiring pixel values such as the luminance and brightness of the input signal IS.
- The profile data registration unit 521 acquires the determination result SA and switches and outputs the profile data PD based on the determination result SA. More specifically, for example, when the input signal IS is determined to be bright, a profile for compressing the dynamic range is selected. As a result, it is possible to maintain the contrast even for an overall bright image. In addition, considering the characteristics of the device that displays the output signal OS, a profile is selected so that an output signal OS with an appropriate dynamic range is output.
- As described above, the visual processing device 520 can realize appropriate visual processing according to the input signal IS.
- The image determination unit 522 may determine not only pixel values such as the luminance and brightness of the input signal IS but also image characteristics such as the spatial frequency.
- FIG. 7 shows a block diagram of a visual processing device 525 that switches the profile data registered in the two-dimensional LUT 4 based on the input result from the input device for inputting the condition regarding the brightness.
- The visual processing device 525 has the same structure as the visual processing device 1 shown in FIG. 1, and further includes a profile data registration unit 526 having the same function as the profile data registration device 8 and an input device 527 connected by wire or wirelessly. More specifically, the input device 527 is realized as an input button provided on the image processing device itself (such as a computer that outputs the output signal OS, a television, a digital camera, a mobile phone, a PDA, a printer, or a scanner), or as a remote control for such a device.
- the input device 527 is an input device for inputting a condition relating to brightness, and includes, for example, switches such as “bright” and “dark”.
- the input device 527 outputs an input result SB by operation of a user.
- The profile data registration unit 526 acquires the input result SB and switches and outputs the profile data PD based on the input result SB. More specifically, for example, when the user inputs "bright", a profile for compressing the dynamic range of the input signal IS is selected and output as the profile data PD. This makes it possible to maintain the contrast even when the environment in which the device displaying the output signal OS is placed is in the "bright" state.
- As described above, the visual processing device 525 can realize appropriate visual processing in accordance with the input from the input device 527.
- The conditions related to brightness are not limited to conditions related to the brightness of the ambient light around the medium that outputs the output signal, such as a computer, television, digital camera, mobile phone, or PDA; they may also be conditions related to the brightness of the medium itself that outputs the output signal, such as printer paper, or conditions related to the brightness of the medium itself from which the input signal is input, such as scanner paper.
- These may be input automatically not only by a switch but also by a photo sensor.
- the input device 527 may be a device for not only inputting the condition regarding the brightness but also for directly operating the profile switching to the profile data registration unit 526.
- the input device 5 27 may display a list of profile data and allow the user to select in addition to the condition regarding brightness.
- the input device 527 may be a device for identifying a user.
- the input device 527 may be a camera for identifying a user or a device for inputting a user name.
- In this case, for example, profile data that suppresses an excessive change in luminance is selected according to the identified user.
- FIG. 8 shows a block diagram of a visual processing device 530 that switches the profile data registered in the two-dimensional LUT 4 based on the detection results from a brightness detection unit that detects two types of brightness.
- The visual processing device 530 has the same structure as the visual processing device 1 shown in FIG. 1, and further includes a profile data registration unit 531 having the same function as the profile data registration device 8 and a brightness detection unit 532.
- The brightness detection unit 532 is composed of the image determination unit 522 and the input device 527.
- The image determination unit 522 and the input device 527 are the same as described with reference to FIGS. 6 and 7.
- The brightness detection unit 532 receives the input signal IS and outputs, as detection results, the determination result SA from the image determination unit 522 and the input result SB from the input device 527.
- The profile data registration unit 531 receives the determination result SA and the input result SB, and switches and outputs the profile data PD based on them. More specifically, for example, when the ambient light is in a "bright" state and the input signal IS is also determined to be bright, a profile that compresses the dynamic range of the input signal IS is selected and output as the profile data PD. This makes it possible to maintain the contrast when displaying the output signal OS.
- the visual processing device 530 can realize appropriate visual processing.
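- A minimal sketch of profile selection from the two detection results; only the first branch (bright ambient light and a bright image select a dynamic-range-compressing profile) is described above, and the other branches and the profile names are placeholders:

```python
def select_profile(sa_image_is_bright: bool, sb_ambient: str) -> str:
    """SA: determination result of the image determination unit 522;
    SB: input result of the input device 527 ('bright' or 'dark')."""
    if sb_ambient == "bright" and sa_image_is_bright:
        return "dynamic_range_compression"
    if sb_ambient == "bright":
        return "contrast_enhancement"
    return "gradation_correction"
```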
- each profile data registration unit may not be provided integrally with the visual processing device.
- the profile data registration unit may be a server having a plurality of profile data, or a plurality of servers having the respective profile data, connected to the visual processing device via a network.
- the network is a connection means capable of communication, such as a dedicated line, a public line, the Internet, or a LAN, and may be wired or wireless.
- the determination result SA and the input result SB are also transmitted from the visual processing device to the profile data registration unit via the same network.
- the profile data registration device 8 has a plurality of profile data, and realizes different visual processing by switching registration with the two-dimensional LUT 4.
- Alternatively, the visual processing device 1 may include a plurality of two-dimensional LUTs in which profile data realizing different visual processing are registered. In this case, the visual processing device 1 may realize different visual processing by switching the input to each two-dimensional LUT or by switching the output from each two-dimensional LUT.
- the profile data registration device 8 may be a device that generates new profile data based on a plurality of profile data and registers the generated profile data in the two-dimensional LUT 4.
- FIG. 9 is a block diagram mainly illustrating a profile data registration device 701 as a modified example of the profile data registration device 8.
- The profile data registration device 701 is a device for switching the profile data registered in the two-dimensional LUT 4 of the visual processing device 1.
- The profile data registration device 701 includes a profile data registration unit 702 that registers a plurality of profile data, a profile creation execution unit 703 that creates new profile data based on a plurality of profile data, a parameter input unit 706 to which parameters for creating the new profile data are input, and a control unit 705 that controls each unit.
- A plurality of profile data are registered in the profile data registration unit 702, similarly to the profile data registration device 8 or the respective profile data registration units shown in FIGS. 6 to 8.
- The selected profile data specified by the control signal c10 from the control unit 705 are read out. Two pieces of selected profile data are read from the profile data registration unit 702, and they are referred to as first selected profile data d10 and second selected profile data d11, respectively.
- The profile data read from the profile data registration unit 702 are determined by the input of the parameter input unit 706.
- The parameter input unit 706 inputs, as parameters, the desired visual processing effect, the degree of the processing, information on the viewing environment of the processed image, and the like, either manually or automatically from a sensor or the like.
- The control unit 705 designates, by the control signal c10, the profile data to be read according to the parameters input from the parameter input unit 706, and specifies the degree of synthesis of each profile data by the control signal c12.
- The profile creation execution unit 703 includes a profile generation unit 704 that creates generated profile data d6 as new profile data from the first selected profile data d10 and the second selected profile data d11.
- The profile generation unit 704 acquires the first selected profile data d10 and the second selected profile data d11 from the profile data registration unit 702, and acquires from the control unit 705 the control signal c12 that specifies the degree of synthesis of each selected profile data.
- The profile generation unit 704 creates the value [l] of the generated profile data d6 from the value [m] of the first selected profile data d10 and the value [n] of the second selected profile data d11, using the value [k] of the degree of synthesis specified by the control signal c12. When the value [k] satisfies 0 ≤ k ≤ 1, the first selected profile data d10 and the second selected profile data d11 are internally divided; when the value [k] lies outside that range, they are externally divided.
- the two-dimensional LUT 4 acquires the generated profile data d6 generated by the profile generation unit 704, and stores the acquired value at the address specified by the count signal c11 of the control unit 705.
- here, the generated profile data d6 is associated with the same image signal values as each of the selected profile data used to create it.
- new profile data that realizes further different visual processing can be created based on profile data that realizes different visual processing.
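- For illustration only (not part of the patent text), the profile blending described above can be sketched in Python as follows; the blend formula [l] = (1 - k) * [m] + k * [n] is an assumption consistent with the internal/external division behaviour described, and all names and array shapes are hypothetical.

```python
import numpy as np

def generate_profile(profile_m: np.ndarray, profile_n: np.ndarray, k: float) -> np.ndarray:
    """Blend two selected profile data sets with degree of synthesis k.

    Assumes the blend [l] = (1 - k) * [m] + k * [n]: values of k in [0, 1]
    interpolate (internal division), values outside that range extrapolate
    (external division), matching the behaviour described for the profile
    generation unit 704.
    """
    if profile_m.shape != profile_n.shape:
        raise ValueError("selected profile data must share the same table shape")
    return (1.0 - k) * profile_m + k * profile_n

# Hypothetical 2D LUT profiles indexed by (input signal value, unsharp signal value).
d10 = np.random.rand(256, 256)            # first selected profile data
d11 = np.random.rand(256, 256)            # second selected profile data
d6 = generate_profile(d10, d11, k=0.3)    # generated profile data written to the 2D LUT 4
```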
- a visual processing profile creation method executed in the visual processing device including the profile data registration device 701 will be described with reference to FIG.
- first, the address of the profile data registration unit 702 is designated at a fixed count cycle, and the image signal value stored at the designated address is read (step S701). More specifically, the control unit 705 outputs the count signal c10 according to the parameters input by the parameter input unit 706.
- the count signal c 10 specifies the addresses of two profile data that realize different visual processes in the profile data registration unit 702. As a result, the first selected profile data d10 and the second selected profile data d11 are read from the profile data registration unit 702.
- the profile generation unit 704 obtains a control signal c12 specifying the degree of synthesis from the control unit 705 (step S702).
- the profile generation unit 704 creates the generated profile data d6 of value [l] from the value [m] of the first selected profile data d10 and the value [n] of the second selected profile data d11, using the degree of synthesis [k] specified by the control signal c12 (step S703).
- the generated profile data d6 is written to the two-dimensional LUT 4 (step S704). The write destination address is specified by the count signal c11 applied from the control unit 705 to the two-dimensional LUT 4.
- the control unit 705 determines whether or not the processing has been completed for all data of the selected profile data (step S705), and the processing from step S701 to step S705 is repeated until it is completed.
- the new profile data thus stored in the two-dimensional LUT 4 is used to execute visual processing.
- in this way, the profile data registration unit 702 can realize visual processing of an arbitrary degree while holding only a small number of profile data, which makes it possible to reduce the storage capacity of the profile data registration unit 702.
- the profile data registration device 701 may be provided not only in the visual processing device 1 shown in FIG. 1 but also in the visual processing devices shown in FIGS. 6 to 8. In this case, the profile data registration unit 702 and the profile creation execution unit 703 are used in place of the respective profile data registration units 521, 526, and 531 shown in FIGS. 6 to 8, and the parameter input unit 706 and the control unit 705 may be used in place of the image determination unit 522 in FIG. 6, the input device 527 in FIG. 7, and the brightness detection unit 532 in FIG. 8.
- the visual processing device may be a device that converts the brightness of the input signal IS.
- a visual processing device 901 that converts brightness will be described with reference to FIG.
- the visual processing device 901 is a device that converts the brightness of the input signal IS', and includes a processing unit 902 that performs predetermined processing on the input signal IS' and outputs a processed signal US', and a conversion unit 903 that converts the input signal IS' using the input signal IS' and the processed signal US'.
- the processing unit 902 operates similarly to the spatial processing unit 2 (see FIG. 1), and performs spatial processing of the input signal I S ′.
- the spatial processing as described in the above ⁇ Modification> (3) may be performed.
- the conversion unit 903 includes a two-dimensional LUT, similarly to the visual processing unit 3, and outputs an output signal OS' (value [y]) based on the input signal IS' (value [x]) and the processed signal US' (value [z]).
- the value of each element of the two-dimensional LUT included in the conversion unit 903 is determined by applying, to the value [x], a gain or offset determined according to the value of the function fk(z) relating to the degree of change in brightness with respect to the input signal IS'.
- hereinafter, the function fk(z) relating to the degree of change in brightness is referred to as a "degree-of-change function", and the function that determines the value [y] from the value [x] and the degree-of-change function is called a "conversion function"; the conversion functions (a) to (d) are shown below as examples.
- Figures 12 (a) to 12 (d) show the relationship between the input signal IS 'and the output signal OS' when the degree of change function fk (z) is changed.
- (a) the degree-of-change function f1(z) acts as a gain of the input signal IS'. Therefore, the gain of the input signal IS' changes according to the value of the degree-of-change function f1(z), and the value [y] of the output signal OS' changes.
- FIG. 12(a) shows the change in the relationship between the input signal IS' and the output signal OS' when the value of the degree-of-change function f1(z) changes.
- when the degree-of-change function f1(z) becomes larger (f1(z) > 1), the value [y] of the output signal becomes larger; that is, the converted image becomes brighter. When the degree-of-change function f1(z) becomes smaller (f1(z) < 1), the value [y] of the output signal becomes smaller; that is, the converted image becomes darker.
- here, the degree-of-change function f1(z) is a function whose minimum value over the domain of the value [z] does not become less than the value [0].
- if the value [y] of the output signal exceeds the range of values it can take, it may be clipped to that range. For example, the value [y] of the output signal may be clipped to the value [1] if it exceeds the value [1], and clipped to the value [0] if it is less than the value [0]. The same applies to the following conversion functions (b) to (d).
- (b) the degree-of-change function f2(z) acts as an offset of the input signal IS'. Therefore, the offset of the input signal IS' changes according to the value of the degree-of-change function f2(z), and the value [y] of the output signal OS' changes.
- FIG. 12(b) shows the change in the relationship between the input signal IS' and the output signal OS' when the value of the degree-of-change function f2(z) changes.
- as the degree-of-change function f2(z) becomes larger (f2(z) > 0), the value [y] of the output signal becomes larger; that is, the converted image becomes brighter. As the degree-of-change function f2(z) becomes smaller (f2(z) < 0), the value [y] of the output signal becomes smaller; that is, the converted image becomes darker.
- (c) the degree-of-change function f1(z) acts as a gain of the input signal IS', and the degree-of-change function f2(z) acts as an offset of the input signal IS'. Therefore, the gain of the input signal IS' changes according to the value of the degree-of-change function f1(z), and the offset of the input signal IS' changes according to the value of the degree-of-change function f2(z).
- FIG. 12(c) shows the change in the relationship between the input signal IS' and the output signal OS' when the values of the degree-of-change functions f1(z) and f2(z) change.
- as the degree-of-change functions f1(z) and f2(z) become larger, the value [y] of the output signal becomes larger; that is, the converted image becomes brighter. As the degree-of-change functions f1(z) and f2(z) become smaller, the value [y] of the output signal becomes smaller; that is, the converted image becomes darker.
- (d) the degree-of-change function f2(z) determines the "power" of a "power function". Therefore, the input signal IS' changes according to the value of the degree-of-change function f2(z), and the value [y] of the output signal OS' changes.
- FIG. 12(d) shows the change in the relationship between the input signal IS' and the output signal OS' when the value of the degree-of-change function f2(z) changes.
- as the degree-of-change function f2(z) becomes larger (f2(z) > 0), the value [y] of the output signal becomes larger; that is, the converted image becomes brighter. As the degree-of-change function f2(z) becomes smaller (f2(z) < 0), the value [y] of the output signal becomes smaller; that is, the converted image becomes darker.
- here, the value [x] is a value obtained by normalizing the value of the input signal IS' to the range of [0] to [1].
- visual processing of the input signal IS' is performed by a two-dimensional LUT whose elements are determined using any one of the conversion functions (a) to (d) described above. Each element of the two-dimensional LUT stores the value [y] for the value [x] and the value [z]. Therefore, visual processing that converts the brightness of the input signal IS' is realized based on the input signal IS' and the processed signal US'.
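- As an informal sketch only, the two-dimensional LUT described above could be tabulated as follows, assuming the gain form y = f1(z) * x for conversion function (a) and the offset form y = x + f2(z) for conversion function (b); the specific degree-of-change functions below are placeholders chosen to be monotonically decreasing, not functions taken from the patent.

```python
import numpy as np

def build_lut(conv, levels: int = 256) -> np.ndarray:
    """Tabulate y = conv(x, z) over normalized x, z in [0, 1], clipped to [0, 1]."""
    x = np.linspace(0.0, 1.0, levels)[:, None]   # input signal IS' value [x]
    z = np.linspace(0.0, 1.0, levels)[None, :]   # processed signal US' value [z]
    return np.clip(conv(x, z), 0.0, 1.0)

# Placeholder monotonically decreasing degree-of-change functions (assumptions):
f1 = lambda z: 1.5 - z            # gain: > 1 for small z, so dark large areas are brightened
f2 = lambda z: 0.25 * (0.5 - z)   # offset: positive for small z, negative for large z

lut_a = build_lut(lambda x, z: f1(z) * x)   # assumed form of conversion function (a)
lut_b = build_lut(lambda x, z: x + f2(z))   # assumed form of conversion function (b)

y = lut_a[128, 32]   # lookup of the output value [y] for one (x, z) pair
```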
- FIGS. 13(a) and 13(b) show examples of the monotonically decreasing degree-of-change functions f1(z) and f2(z). In each figure, three graphs are shown (a1 to a3 and b1 to b3), all of which are examples of monotonically decreasing functions.
- the degree-of-change function f1(z) is a function whose range spans the value [1] and whose minimum value over the domain of the value [z] does not become less than the value [0]. The degree-of-change function f2(z) is a function whose range spans the value [0].
- the value [z] of the processed signal US' is small in a dark and large-area portion of the image, and the value of the degree-of-change function for small values of [z] is large. That is, if a two-dimensional LUT created based on the conversion functions (a) to (d) is used, a dark and large-area portion of the image is converted to be brighter. Therefore, for example, in an image photographed against backlight, the darkness of a dark and large-area portion is improved, and the visual effect is improved.
- conversely, the value [z] of the processed signal US' is large in a bright and large-area portion of the image, and the value of the degree-of-change function for large values of [z] is small. That is, if a two-dimensional LUT created based on the conversion functions (a) to (d) is used, a bright and large-area portion of the image is converted to be darker. Thus, for example, in an image having a bright portion such as the sky, overexposure of a bright and large-area portion is suppressed, and the visual effect is improved.
- the above-described conversion function is an example, and may be any function as long as the conversion has the same properties.
- the two-dimensional LUT may store values clipped to the range of values that can be handled as the output signal OS'.
- the conversion unit 903 may also output the output signal OS' by directly calculating the conversion functions (a) to (d) on the input signal IS' and the processed signal US'.
- the visual processing device may include a plurality of spatial processing units and perform visual processing using a plurality of unsharp signals having different degrees of spatial processing.
- FIG. 14 shows the configuration of the visual processing device 905.
- the visual processing device 905 is a device that performs visual processing of the input signal IS", and includes a first processing unit 906a that performs a first predetermined process on the input signal IS" and outputs a first processed signal U1, a second processing unit 906b that performs a second predetermined process on the input signal IS" and outputs a second processed signal U2, and a conversion unit 908 that converts the input signal IS" using the input signal IS", the first processed signal U1, and the second processed signal U2.
- the first processing unit 906a and the second processing unit 906b operate in the same manner as the spatial processing unit 2 (see FIG. 1), and perform spatial processing of the input signal IS". The spatial processing described in the above <Modification> may also be performed.
- here, the first processing unit 906a and the second processing unit 906b differ in the size of the peripheral pixel region used in the spatial processing. The first processing unit 906a uses peripheral pixels included in a region of 30 pixels vertically and 30 pixels horizontally around the pixel of interest (small unsharp signal), whereas the second processing unit 906b uses peripheral pixels included in a region of 90 pixels vertically and 90 pixels horizontally around the pixel of interest (large unsharp signal).
- the peripheral pixel area described here is merely an example, and the present invention is not limited to this. It is preferable to generate an unsharp signal from a fairly large area in order to achieve the full effect of visual processing.
- the conversion unit 908 includes a LUT, and outputs an output signal OS" (value [y]) based on the input signal IS" (value [x]), the first processed signal U1 (value [z1]), and the second processed signal U2 (value [z2]).
- the LUT included in the conversion unit 908 is a three-dimensional LUT that stores the value [y] of the output signal OS" corresponding to the value [x] of the input signal IS", the value [z1] of the first processed signal U1, and the value [z2] of the second processed signal U2. The value of each element of this three-dimensional LUT (the value [y] of the output signal OS") is determined based on the value [x] of the input signal IS", the value [z1] of the first processed signal U1, and the value [z2] of the second processed signal U2.
- This three-dimensional LUT can realize the processing described in the above embodiment and the following embodiment.
- here, the case where the three-dimensional LUT performs a conversion that changes the brightness of the input signal IS" and the case where it performs a conversion that emphasizes the input signal IS" will be described.
- the conversion unit 908 performs conversion so as to brighten the input signal IS"; however, when the value [z2] of the second processed signal U2 is also small, the degree of brightening is suppressed.
- the value of each element of the three-dimensional LUT provided in the conversion unit 908 is determined based on the following conversion function (e) or (f).
- (e) the degree-of-change functions f11(z1) and f12(z2) are the same kind of functions as the degree-of-change function f1(z) described in the above <Modification> (8). The degree-of-change function f11(z1) and the degree-of-change function f12(z2) are different functions.
- [f11(z1)/f12(z2)] acts as a gain of the input signal IS". As a result, the gain of the input signal IS" changes according to the value of the first processed signal U1 and the value of the second processed signal U2, and the value [y] of the output signal OS" changes.
- (f) the degree-of-change functions f21(z1) and f22(z2) are the same kind of functions as the degree-of-change function f2(z) described in the above <Modification> (8). The degree-of-change function f21(z1) and the degree-of-change function f22(z2) are different functions.
- the processing in the conversion unit 908 is not limited to processing using the three-dimensional LUT; the same calculation as the conversion functions (e) and (f) may be performed instead. Moreover, each element of the three-dimensional LUT need not be strictly determined based on the conversion functions (e) and (f).
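- Purely as an illustrative sketch, a three-dimensional LUT for the conversion unit 908 could be tabulated as below, assuming that conversion function (e) takes the gain form y = x * f11(z1) / f12(z2) suggested by the description above; the degree-of-change functions and table resolution are placeholders.

```python
import numpy as np

def build_3d_lut(f11, f12, levels: int = 64) -> np.ndarray:
    """Tabulate y = x * f11(z1) / f12(z2) over normalized x, z1, z2 (assumed form of (e))."""
    axis = np.linspace(0.0, 1.0, levels)
    x  = axis[:, None, None]   # input signal IS" value [x]
    z1 = axis[None, :, None]   # first processed signal U1 value [z1] (small unsharp signal)
    z2 = axis[None, None, :]   # second processed signal U2 value [z2] (large unsharp signal)
    return np.clip(x * f11(z1) / f12(z2), 0.0, 1.0)

# Placeholder degree-of-change functions; f11 and f12 are deliberately different functions.
f11 = lambda z1: 1.4 - 0.8 * z1
f12 = lambda z2: 1.2 - 0.4 * z2     # stays above zero, so no division by zero

lut_e = build_3d_lut(f11, f12)      # three-dimensional LUT used by the conversion unit 908
y = lut_e[32, 10, 50]               # output value [y] for given (x, z1, z2) indices
```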
- when the conversion in the conversion unit 908 is a conversion that emphasizes the input signal IS", it is possible to emphasize a plurality of frequency components independently. For example, a conversion that further emphasizes the first processed signal U1 makes it possible to emphasize shading components of relatively high frequency, and a conversion that further emphasizes the second processed signal U2 makes it possible to emphasize shading components of relatively low frequency.
- the visual processing device 1 can include profile data that realizes various visual processing in addition to those described above.
- hereinafter, expressions for the first to seventh profile data that realize various visual processing, and the configurations of visual processing devices that realize visual processing equivalent to the visual processing device 1 provided with each profile data, will be described.
- Each profile data is determined based on a mathematical expression including an operation for enhancing a value calculated from the input signal I S and the unsharp signal U S.
- the calculation to be emphasized is, for example, a calculation using a nonlinear enhancement function.
- the first profile data is determined based on an operation including a function that emphasizes a difference between respective conversion values obtained by performing predetermined conversion on the input signal IS and the unsharp signal us. This makes it possible to convert the input signal IS and the unsharp signal US into different spaces and then emphasize the difference between them. Thereby, for example, it is possible to realize emphasis or the like that matches the visual characteristics.
- the value C (the value of the output signal OS) of each element of the first profile data is expressed, using the value A of the input signal IS, the value B of the unsharp signal US, the conversion function F1, the inverse conversion function F2 of the conversion function F1, and the enhancement function F3, as C = F2(F1(A) + F3(F1(A) - F1(B))) (hereinafter referred to as equation M1).
- the conversion function F 1 is a common logarithmic function.
- the inverse transformation function F 2 is an exponential function (antilog) as an inverse function of the common logarithmic function.
- the enhancement function F3 is any one of the enhancement functions R1 to R3 described with reference to FIG. 49.
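- As a non-authoritative sketch of equation M1, the profile values could be computed as below, with F1 taken as the common logarithm and F2 as its inverse (antilog), as stated above; the simple linear gain used for F3 is only a placeholder for the enhancement functions R1 to R3.

```python
import numpy as np

def profile_m1(A: np.ndarray, B: np.ndarray, gain: float = 0.5) -> np.ndarray:
    """C = F2(F1(A) + F3(F1(A) - F1(B))) with F1 = log10 and F2 = 10**x.

    F3 is modelled here as F3(d) = gain * d, a stand-in for the enhancement
    functions R1 to R3 of the patent.
    """
    eps = 1e-6                               # avoid log of zero
    tA, tB = np.log10(A + eps), np.log10(B + eps)
    return 10.0 ** (tA + gain * (tA - tB))

A = np.array([0.8, 0.2, 0.5])                # input signal IS values
B = np.array([0.6, 0.3, 0.5])                # unsharp signal US values
C = profile_m1(A, B)                         # first profile data values (output signal OS)
```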
- FIG. 15 shows a visual processing device 11 equivalent to the visual processing device 1 in which the first profile data is registered in the two-dimensional LUT 4.
- the visual processing device 11 is a device that outputs an output signal OS based on a calculation that emphasizes a difference between respective conversion values obtained by performing predetermined conversion on the input signal IS and the unsharp signal us. This makes it possible to enhance the difference between the input signal IS and the unsharp signal US after converting them to a separate space. For example, it is possible to realize enhancement suited to visual characteristics.
- the visual processing device 11 shown in FIG. 15 includes a spatial processing unit that performs spatial processing on the luminance value of each pixel of the original image acquired as the input signal IS and outputs an unsharp signal US, and a visual processing unit 13 that performs visual processing of the original image using the input signal IS and the unsharp signal US and outputs an output signal OS.
- the visual processing unit 13 includes a signal space conversion unit 14 that converts the signal space of the input signal IS and the unsharp signal US and outputs a converted input signal TIS and a converted unsharp signal TUS; a subtraction unit 17 that receives the converted input signal TIS as a first input and the converted unsharp signal TUS as a second input and outputs a difference signal DS that is the difference between the two; an enhancement processing unit 18 that receives the difference signal DS as an input and outputs an enhancement processing signal TS obtained by enhancing the difference signal DS; an addition unit 19 that receives the converted input signal TIS as a first input and the enhancement processing signal TS as a second input and outputs an addition signal PS obtained by adding the two; and an inverse conversion unit 20 that receives the addition signal PS as an input and outputs the output signal OS.
- the signal space conversion unit 14 further includes a first conversion unit 15 that receives the input signal IS and outputs the converted input signal TIS, and a second conversion unit 16 that receives the unsharp signal US and outputs the converted unsharp signal TUS.
- the first conversion unit 15 converts the input signal IS of value A into the converted input signal TIS of value F1(A) using the conversion function F1. The second conversion unit 16 converts the unsharp signal US of value B into the converted unsharp signal TUS of value F1(B) using the conversion function F1.
- the subtraction unit 17 calculates the difference between the converted input signal TIS of value F1(A) and the converted unsharp signal TUS of value F1(B), and outputs the difference signal DS of value F1(A) - F1(B). The enhancement processing unit 18 uses the enhancement function F3 to output the enhancement processing signal TS of value F3(F1(A) - F1(B)) from the difference signal DS of value F1(A) - F1(B).
- the addition unit 19 adds the converted input signal TIS of value F1(A) and the enhancement processing signal TS of value F3(F1(A) - F1(B)), and outputs the addition signal PS of value F1(A) + F3(F1(A) - F1(B)). The inverse conversion unit 20 inversely converts the addition signal PS of value F1(A) + F3(F1(A) - F1(B)) using the inverse conversion function F2, and outputs the output signal OS of value F2(F1(A) + F3(F1(A) - F1(B))).
- the calculation using the conversion function F1, the inverse conversion function F2, and the enhancement function F3 may be performed using a one-dimensional LUT for each function, or may be performed without using a LUT.
- <<Effect>>
- the visual processing device 1 and the visual processing device 11 having the first profile data have the same visual processing effect.
- Visual processing is realized using the converted input signal TIS and the converted unsharp signal TUS that have been converted to logarithmic space by the conversion function F1.
- Human visual characteristics are logarithmic, and visual processing suitable for the visual characteristics is realized by performing processing after converting to logarithmic space.
- Each visual processing device realizes contrast enhancement in logarithmic space.
- the conventional visual processing device 400 shown in FIG. 48 is generally used for enhancing a contour (edge) using an unsharp signal US with a small degree of blur.
- in the visual processing device 400, enhancement is insufficient in the bright portions of the original image and excessive in the dark portions, so the processing is not suited to the visual characteristics. In other words, corrections in the brightening direction tend to be under-emphasized, while corrections in the darkening direction tend to be over-emphasized.
- on the other hand, when visual processing is performed using the visual processing device 1 or the visual processing device 11, visual processing suited to the visual characteristics can be performed from dark portions to bright portions, and the enhancement in the brightening direction and the enhancement in the darkening direction can be balanced.
- the output signal OS after the visual processing becomes negative and may fail.
- the transformation function F 1 is not limited to a logarithmic function.
- for example, a conversion that removes the gamma correction applied to the input signal IS (for example, a gamma coefficient of [0.45]) may be used as the conversion function F1, and a conversion that applies that gamma correction may be used as the inverse conversion function F2.
- the visual processing unit 13 may be one that calculates the above formula M1 based on the input signal IS and the unsharp signal US without using the two-dimensional LUT4.
- one-dimensional LUT may be used in the calculation of each of the functions F1 to F3.
- the second profile data is determined based on an operation including a function that emphasizes the ratio between the input signal IS and the unsharp signal US. This makes it possible to realize, for example, visual processing that emphasizes the sharp component.
- the second profile data is determined based on a calculation that performs dynamic range compression on the ratio between the emphasized input signal IS and the unsharp signal US. This makes it possible to realize visual processing that compresses the dynamic range while enhancing the sharp component, for example.
- the value C (the value of the output signal OS) of each element of the second profile data is expressed, using the value A of the input signal IS, the value B of the unsharp signal US, the dynamic range compression function F4, and the enhancement function F5, as C = F4(A) * F5(A/B) (hereinafter referred to as equation M2).
- here, the dynamic range compression function F4 is, for example, an upward convex monotonically increasing function such as a power function F4(x) = x^γ (0 < γ < 1). The enhancement function F5 is also a power function, F5(x) = x^α (0 < α < 1).
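- For reference only, equation M2 with the power-function forms of F4 and F5 given above can be sketched as follows; the exponent values are illustrative choices within (0, 1), not values specified by the patent.

```python
import numpy as np

def profile_m2(A: np.ndarray, B: np.ndarray, gamma: float = 0.6, alpha: float = 0.4) -> np.ndarray:
    """C = F4(A) * F5(A / B) with F4(x) = x**gamma and F5(x) = x**alpha (0 < gamma, alpha < 1)."""
    eps = 1e-6                                   # guard against division by zero
    return (A ** gamma) * ((A / (B + eps)) ** alpha)

A = np.linspace(0.1, 1.0, 5)                     # input signal IS
B = np.full_like(A, 0.5)                         # unsharp signal US (locally smoothed IS)
C = np.clip(profile_m2(A, B), 0.0, 1.0)          # compressed dynamic range, enhanced local ratio
```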
- FIG. 16 shows a visual processing device 21 equivalent to the visual processing device 1 in which the second profile data is registered in the two-dimensional LUT 4.
- the visual processing device 21 is a device that outputs an output signal OS based on a calculation that emphasizes a ratio between the input signal I S and the unsharp signal U S. Thereby, for example, it is possible to realize visual processing for emphasizing the sharp component.
- the visual processing device 21 outputs the output signal OS based on a calculation that performs dynamic range compression on the ratio of the emphasized input signal IS and unsharp signal US. This makes it possible to realize visual processing that compresses the dynamic range while enhancing the sharp component, for example.
- the visual processing device 21 shown in FIG. 16 includes a spatial processing unit 22 that performs spatial processing on the luminance value of each pixel of the original image acquired as the input signal IS and outputs an unsharp signal US, and a visual processing unit 23 that performs visual processing of the original image using the input signal IS and the unsharp signal US and outputs an output signal OS.
- since the spatial processing unit 22 performs the same operation as the spatial processing unit 2 included in the visual processing device 1, description thereof is omitted.
- the visual processing unit 23 includes a division unit 25 that receives the input signal IS as a first input and the unsharp signal US as a second input and outputs a division signal RS obtained by dividing the input signal IS by the unsharp signal US; an enhancement processing unit 26 that receives the division signal RS as an input and outputs an enhancement processing signal TS; and an output processing unit 27 that receives the input signal IS as a first input and the enhancement processing signal TS as a second input and outputs the output signal OS.
- the output processing unit 27 includes a DR compression unit 28 that receives the input signal IS as an input and outputs a DR compressed signal DRS whose dynamic range (DR) has been compressed, and a multiplication unit 29 that receives the DR compressed signal DRS as a first input and the enhancement processing signal TS as a second input and outputs the output signal OS.
- the division unit 25 divides the input signal IS of value A by the unsharp signal US of value B, and outputs the division signal RS of value A/B. The enhancement processing unit 26 uses the enhancement function F5 to output the enhancement processing signal TS of value F5(A/B) from the division signal RS of value A/B.
- the DR compression unit 28 uses the dynamic range compression function F4 to output the DR compressed signal DRS of value F4(A) from the input signal IS of value A. The multiplication unit 29 multiplies the DR compressed signal DRS of value F4(A) by the enhancement processing signal TS of value F5(A/B), and outputs the output signal OS of value F4(A) * F5(A/B).
- the calculation using the dynamic range compression function F4 and the enhancement function F5 may be performed using a one-dimensional LUT for each function, or may be performed without using a LUT.
- the visual processing device 1 and the visual processing device 21 having the second profile data have the same visual processing effect.
- the gradation level is compressed using the dynamic range compression function F4 shown in FIG. 17 without saturation from the dark portions to the highlights. That is, assuming that the black level of the reproduction target in the image signal before compression is L0 and the maximum white level is L1, the dynamic range L1:L0 before compression is compressed to the dynamic range Q1:Q0 after compression.
- due to the compression of the dynamic range, the ratio between image signal levels is reduced to (Q1/Q0) * (L0/L1) times.
- here, the dynamic range compression function F4 is an upward convex power function or the like.
- meanwhile, the division signal RS of value A/B, that is, the sharp signal, is enhanced by the enhancement function F5. When the value of the division signal RS is larger than 1, the enhancement is performed in the brightening direction, and when it is smaller than 1, the enhancement is performed in the darkening direction.
- human vision has the property of seeing the same contrast if the local contrast is maintained, even if the overall contrast is reduced.
- therefore, the visual processing device 1 and the visual processing device 21 provided with the second profile data can realize visual processing that does not visually reduce the contrast while compressing the dynamic range.
- C is proportional to A because the value of B can be considered constant in the local range. That is, the ratio of the amount of change in the value C to the amount of change in the value A is 1, and the local contrast does not change in the input signal IS and the output signal OS.
- the visual processing device 1 and the visual processing device 21 including the second profile data can realize visual processing that does not visually lower the contrast while compressing the dynamic range.
- the visual processing unit 23 may calculate the above equation M2 based on the input signal IS and the unsharp signal US without using the two-dimensional LUT4.
- one-dimensional LUT may be used in the calculation of each of the functions F4 and F5.
- if the value C of a certain element of the profile data obtained by equation M2 satisfies C > 255, the value C of that element may be set to 255.
- the third profile data is determined based on a calculation including a function that emphasizes the ratio between the input signal IS and the unsharp signal US. This makes it possible to realize, for example, visual processing that emphasizes sharp components.
- the dynamic range compression function F4 may be a direct proportional function having a proportional coefficient of 1.
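- For reference (an inference, since the explicit expression for equation M3 is not reproduced above): substituting a proportional F4 with coefficient 1 into equation M2 gives, consistently with the output of the multiplication unit 33 described below,

```latex
C = A \cdot F5\!\left(\frac{A}{B}\right) \qquad \text{(assumed form of equation M3)}
```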
- FIG. 19 shows a visual processing device 31 equivalent to the visual processing device 1 in which the third profile data is registered in the two-dimensional LUT 4.
- the visual processing device 31 is a device that outputs the output signal OS based on a calculation that emphasizes the ratio between the input signal IS and the unsharp signal US. Thereby, for example, it is possible to realize visual processing for emphasizing the sharp component.
- the visual processing device 31 shown in FIG. 19 differs from the visual processing device 21 shown in FIG. 16 in that it includes a visual processing unit 32 instead of the visual processing unit 23.
- portions performing the same operations as those of the visual processing device 21 shown in FIG. 16 will be assigned the same reference numerals, and detailed description thereof will be omitted.
- the visual processing device 31 includes a spatial processing unit 22 that performs spatial processing on the luminance value of each pixel of the original image acquired as the input signal IS and outputs the unsharp signal US, and a visual processing unit 32 that performs visual processing of the original image using the input signal IS and the unsharp signal US and outputs an output signal OS.
- since the spatial processing unit 22 performs the same operation as the spatial processing unit 2 included in the visual processing device 1, description thereof is omitted.
- the visual processing unit 32 includes a division unit 25 that receives the input signal IS as a first input and the unsharp signal US as a second input and outputs a division signal RS obtained by dividing the input signal IS by the unsharp signal US; an enhancement processing unit 26 that receives the division signal RS as an input and outputs an enhancement processing signal TS; and a multiplication unit 33 that receives the input signal IS as a first input and the enhancement processing signal TS as a second input and outputs the output signal OS.
- the division unit 25 and the enhancement processing unit 26 perform the same operations as described for the visual processing device 21 shown in FIG.
- the multiplication unit 33 multiplies the input signal IS having the value A by the enhancement processing signal TS having the value F5 (A / B), and outputs an output signal OS having a value A * F5 (A / B).
- the enhancement function F5 is the same as that shown in FIG.
- the visual processing device 1 and the visual processing device 31 having the third profile data have the same visual processing effect.
- in the visual processing device 31, the enhancement processing unit 26 performs enhancement processing on the sharp signal (division signal RS) expressed as the ratio of the input signal IS to the unsharp signal US, and the enhanced sharp signal is multiplied by the input signal IS.
- Emphasizing the sharp signal represented as the ratio between the input signal I S and the unsharp signal U S is equivalent to calculating the difference between the input signal I S and the unsharp signal us in a logarithmic space. That is, visual processing suitable for logarithmic human visual characteristics is realized.
- the amount of enhancement by the enhancement function F5 increases when the input signal IS is large (when it is bright) and decreases when it is small (when it is dark). Also, the amount of enhancement in the direction of brightening is greater than the amount of enhancement in the direction of darkening. For this reason, visual processing suitable for visual characteristics can be realized, and natural visual processing with good balance can be realized.
- if the value C of a certain element of the profile data obtained by equation M3 satisfies C > 255, the value C of that element may be set to 255.
- in the visual processing device 31, the dynamic range of the input signal IS itself is not compressed, but the local contrast can be enhanced, so that the dynamic range can be visually compressed or expanded.
- the fourth profile data is determined based on an operation including a function for enhancing the difference between the input signal IS and the unsharp signal US according to the value of the input signal IS. Thereby, for example, it becomes possible to emphasize the sharp component of the input signal IS according to the value of the input signal IS. For this reason, it is possible to properly enhance the input signal IS from the dark part to the bright part. Further, the fourth profile data is determined based on an operation of adding a value obtained by compressing a dynamic range of the input signal IS to the emphasized value. This makes it possible to compress the dynamic range while enhancing the sharp component of the input signal IS according to the value of the input signal IS.
- the value C (the value of the output signal OS) of each element of the fourth profile data is expressed, using the value A of the input signal IS, the value B of the unsharp signal US, the enhancement amount adjustment function F6, the enhancement function F7, and the dynamic range compression function F8, as C = F8(A) + F6(A) * F7(A - B) (hereinafter referred to as equation M4).
- the enhancement amount adjustment function F 6 is a function that monotonically increases with respect to the value of the input signal IS. That is, when the value A of the input signal IS is small, the value of the enhancement amount adjustment function F6 is also small, and when the value A of the input signal IS is large, the value of the enhancement amount adjustment function F6 is also large.
- the enhancement function F7 is any one of the enhancement functions R1 to R3 described with reference to FIG. 49.
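- As an illustrative sketch of equation M4 only: the specific choices of F6, F7, and F8 below are placeholders, since the text above only requires F6 to be monotonically increasing, F7 to be one of the enhancement functions R1 to R3, and F8 to be a dynamic range compression function.

```python
import numpy as np

def profile_m4(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """C = F8(A) + F6(A) * F7(A - B) with placeholder function choices."""
    F6 = lambda a: 0.2 + 0.8 * a    # monotonically increasing enhancement amount adjustment
    F7 = lambda d: d                # identity stand-in for enhancement functions R1 to R3
    F8 = lambda a: a ** 0.6         # upward convex dynamic range compression
    return F8(A) + F6(A) * F7(A - B)

A = np.array([0.9, 0.4, 0.1])                 # input signal IS
B = np.array([0.7, 0.5, 0.2])                 # unsharp signal US
C = np.clip(profile_m4(A, B), 0.0, 1.0)       # clipped to the valid signal range
```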
- FIG. 20 shows a visual processing device 41 equivalent to the visual processing device 1 in which the fourth profile data is registered in the two-dimensional LUT 4.
- the visual processing device 41 is a device that outputs an output signal OS based on a calculation that emphasizes the difference between the input signal IS and the unsharp signal US according to the value of the input signal IS.
- the sharp component of the input signal IS can be emphasized according to the value of the input signal IS. For this reason, it is possible to appropriately enhance the input signal IS from the dark part to the bright part.
- the visual processing device 41 outputs an output signal OS based on an operation of adding a value obtained by performing dynamic range compression on the input signal IS to the emphasized value. This makes it possible to compress the dynamic range while enhancing the sharp component of the input signal I S according to the value of the input signal I S.
- the visual processing device 41 shown in FIG. 20 includes a spatial processing unit 42 that performs spatial processing on the luminance value of each pixel of the original image acquired as the input signal IS and outputs an unsharp signal US, and a visual processing unit 43 that performs visual processing of the original image using the input signal IS and the unsharp signal US and outputs an output signal OS.
- since the spatial processing unit 42 performs the same operation as the spatial processing unit 2 included in the visual processing device 1, description thereof is omitted.
- the visual processing unit 43 includes a subtraction unit 44 that receives the input signal IS as a first input and the unsharp signal US as a second input and outputs a difference signal DS that is the difference between the two; an enhancement processing unit 45 that receives the difference signal DS as an input and outputs an enhancement processing signal TS; an enhancement amount adjustment unit 46 that receives the input signal IS as an input and outputs an enhancement amount adjustment signal IC; a multiplication unit 47 that receives the enhancement amount adjustment signal IC as a first input and the enhancement processing signal TS as a second input and outputs a multiplication signal MS obtained by multiplying the two; and an output processing unit 48 that receives the input signal IS as a first input and the multiplication signal MS as a second input and outputs the output signal OS.
- the output processing unit 48 includes a DR compression unit 49 that receives the input signal IS as an input and outputs a DR compressed signal DRS whose dynamic range (DR) has been compressed, and an addition unit 50 that receives the DR compressed signal DRS as a first input and the multiplication signal MS as a second input and outputs the output signal OS.
- the subtraction unit 44 calculates the difference between the input signal IS of value A and the unsharp signal US of value B, and outputs the difference signal DS of value A - B. The enhancement processing unit 45 uses the enhancement function F7 to output the enhancement processing signal TS of value F7(A - B) from the difference signal DS of value A - B.
- the enhancement amount adjustment unit 46 uses the enhancement amount adjustment function F6 to output the enhancement amount adjustment signal IC of value F6(A) from the input signal IS of value A. The multiplication unit 47 multiplies the enhancement amount adjustment signal IC of value F6(A) by the enhancement processing signal TS of value F7(A - B), and outputs the multiplication signal MS of value F6(A) * F7(A - B).
- the DR compression unit 49 uses the dynamic range compression function F8 to output the DR compressed signal DRS of value F8(A) from the input signal IS of value A. The addition unit 50 adds the DR compressed signal DRS of value F8(A) and the multiplication signal MS of value F6(A) * F7(A - B), and outputs the output signal OS of value F8(A) + F6(A) * F7(A - B).
- the calculations using the enhancement amount adjustment function F6, the enhancement function F7, and the dynamic range compression function F8 may be performed using a one-dimensional LUT for each function, or may be performed without using a LUT.
- the visual processing device 1 and the visual processing device 41 having the fourth profile data have the same visual processing effect.
- the value A of the input signal IS is used to adjust the amount of enhancement of the difference signal DS. For this reason, it is possible to maintain the local contrast from dark areas to bright areas while performing dynamic range compression.
- the enhancement amount adjustment function F 6 is a monotonically increasing function, but can be a function in which the amount of increase in the function value decreases as the value A of the input signal I S increases. In this case, the value of the output signal OS is prevented from being saturated.
- if the enhancement function F7 is the enhancement function R2 described with reference to FIG. 49, the enhancement amount when the absolute value of the difference signal DS is large can be suppressed. For this reason, it is possible to prevent the enhancement amount in highly detailed portions from being saturated, and visually natural visual processing can be executed.
- the visual processing unit 43 may calculate the above equation M4 based on the input signal IS and the unsharp signal US without using the two-dimensional LUT 4. In that case, a one-dimensional LUT may be used in the calculation of each of the functions F6 to F8.
- the enhancement processing unit 45 does not need to be provided.
- if the value C of a certain element of the profile data obtained by equation M4 exceeds the range of 0 ≤ C ≤ 255, the value C of that element may be set to 0 or 255.
- the fifth profile data is determined based on an operation including a function of enhancing a difference between the input signal IS and the unsharp signal US according to the value of the input signal IS.
- the dynamic range compression function F8 may be a direct proportional function having a proportional coefficient of 1.
- that is, the value C of each element of the fifth profile data is expressed as C = A + F6(A) * F7(A - B) (hereinafter referred to as equation M5).
- FIG. 21 shows a visual processing device 51 equivalent to the visual processing device 1 in which the fifth profile data is registered in the two-dimensional LUT 4.
- the visual processing device 51 is a device that outputs an output signal OS based on a calculation that emphasizes the difference between the input signal IS and the unsharp signal US according to the value of the input signal IS.
- the sharp component of the input signal IS can be emphasized according to the value of the input signal IS. For this reason, it is possible to appropriately enhance the input signal Is from the dark part to the bright part.
- the visual processing device 51 shown in FIG. 21 differs from the visual processing device 41 shown in FIG. 20 in that it includes a visual processing unit 52 instead of the visual processing unit 43.
- portions performing the same operations as those of the visual processing device 41 shown in FIG. 20 will be assigned the same reference numerals, and detailed description thereof will be omitted.
- the visual processing device 51 includes a spatial processing unit 42 that performs spatial processing on the luminance value of each pixel of the original image acquired as the input signal IS and outputs an unsharp signal US, and a visual processing unit 52 that performs visual processing of the original image using the input signal IS and the unsharp signal US and outputs an output signal OS.
- since the spatial processing unit 42 performs the same operation as the spatial processing unit 2 included in the visual processing device 1, description thereof is omitted.
- the visual processing unit 52 includes a subtraction unit 44 that receives the input signal IS as a first input and the unsharp signal US as a second input and outputs a difference signal DS that is the difference between the two; an enhancement processing unit 45 that receives the difference signal DS as an input and outputs an enhancement processing signal TS; an enhancement amount adjustment unit 46 that receives the input signal IS as an input and outputs an enhancement amount adjustment signal IC; a multiplication unit 47 that receives the enhancement amount adjustment signal IC as a first input and the enhancement processing signal TS as a second input and outputs a multiplication signal MS obtained by multiplying the two; and an addition unit 53 that receives the input signal IS as a first input and the multiplication signal MS as a second input and outputs the output signal OS.
- the subtraction unit 44, the enhancement processing unit 45, the enhancement amount adjustment unit 46, and the multiplication unit 47 perform the same operations as those described for the visual processing device 41 shown in FIG.
- the addition unit 53 adds the input signal IS of value A and the multiplication signal MS of value F6(A) * F7(A - B), and outputs the output signal OS of value A + F6(A) * F7(A - B).
- the visual processing device 1 and the visual processing device 51 having the fifth profile data have the same visual processing effect.
- they also have substantially the same visual processing effect as the visual processing device 1 and the visual processing device 41 provided with the fourth profile data.
- here, the enhancement amount of the difference signal DS is adjusted by the value A of the input signal IS. For this reason, it is possible to equalize the amount of contrast enhancement from dark portions to bright portions.
- <<Modification>>
- the enhancement processing unit 45 need not be provided.
- if the value C of a certain element of the profile data obtained by equation M5 exceeds the range of 0 ≤ C ≤ 255, the value C of that element may be set to 0 or 255.
- the sixth profile data is determined on the basis of an operation for gradation correction of a value obtained by adding the value of the input signal IS to the value obtained by enhancing the difference between the input signal IS and the unsharp signal US.
- the value C (the value of the output signal OS) of each element of the sixth profile data is expressed, using the value A of the input signal IS, the value B of the unsharp signal US, the enhancement function F9, and the gradation correction function F10, as C = F10(A + F9(A - B)) (hereinafter referred to as equation M6).
- the enhancement function F9 is any one of the enhancement functions R1 to R3 described with reference to FIG. 49.
- the gradation correction function F10 is a function used in normal gradation correction, such as a gamma correction function, an S-shaped gradation correction function, and an inverse S-shaped gradation correction function.
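- The following sketch of equation M6 is illustrative only: F10 is modelled as a gamma correction curve (one of the gradation correction functions mentioned above) and F9 as a simple linear gain standing in for the enhancement functions R1 to R3.

```python
import numpy as np

def profile_m6(A: np.ndarray, B: np.ndarray, gamma: float = 1.0 / 2.2) -> np.ndarray:
    """C = F10(A + F9(A - B)) with a gamma curve as F10 and a linear stand-in for F9."""
    F9 = lambda d: 0.5 * d
    PS = np.clip(A + F9(A - B), 0.0, 1.0)   # addition signal PS, kept in [0, 1] before gamma
    return PS ** gamma                      # gradation-corrected output signal OS

A = np.array([0.8, 0.3])                    # input signal IS
B = np.array([0.6, 0.4])                    # unsharp signal US
C = profile_m6(A, B)
```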
- FIG. 22 shows a visual processing device 61 equivalent to the visual processing device 1 in which the sixth profile data is registered in the two-dimensional LUT 4.
- the visual processing device 61 is a device that outputs an output signal OS based on a calculation that performs gradation correction on a value obtained by adding the value of the input signal IS to a value in which the difference between the input signal IS and the unsharp signal US is emphasized. Thereby, for example, it is possible to realize visual processing that performs gradation correction on the input signal IS in which the sharp component has been emphasized.
- the visual processing device 61 shown in FIG. 22 includes a spatial processing unit 62 that performs spatial processing on the luminance value of each pixel of the original image acquired as the input signal IS and outputs an unsharp signal US, and a visual processing unit 63 that performs visual processing of the original image using the input signal IS and the unsharp signal US and outputs an output signal OS.
- since the spatial processing unit 62 performs the same operation as the spatial processing unit 2 included in the visual processing device 1, description thereof is omitted.
- the visual processing unit 63 includes a subtraction unit 64 that receives the input signal IS as a first input and the unsharp signal US as a second input and outputs a difference signal DS that is the difference between the two; an enhancement processing unit 65 that receives the difference signal DS as an input and outputs an enhancement processing signal TS obtained by enhancing the difference signal DS; an addition unit 66 that receives the input signal IS as a first input and the enhancement processing signal TS as a second input and outputs an addition signal PS obtained by adding the two; and a gradation correction unit 67 that receives the addition signal PS as an input and outputs the output signal OS.
- the subtraction unit 64 calculates the difference between the input signal IS of value A and the unsharp signal US of value B, and outputs the difference signal DS of value A - B. The enhancement processing unit 65 uses the enhancement function F9 to output the enhancement processing signal TS of value F9(A - B) from the difference signal DS of value A - B.
- the addition unit 66 adds the input signal IS of value A and the enhancement processing signal TS of value F9(A - B), and outputs the addition signal PS of value A + F9(A - B). The gradation correction unit 67 uses the gradation correction function F10 to output the output signal OS of value F10(A + F9(A - B)) from the addition signal PS of value A + F9(A - B).
- the calculation using the enhancement function F9 and the gradation correction function F10 may be performed using a one-dimensional LUT for each function, or may be performed without using a LUT.
- the visual processing device 1 and the visual processing device 61 provided with the sixth profile data have the same visual processing effect.
- the difference signal DS is enhanced by the enhancement function F9 and added to the input signal IS, so the contrast of the input signal IS can be enhanced. Further, the gradation correction unit 67 executes gradation correction processing on the addition signal PS. For this reason, for example, it is possible to further emphasize the contrast in halftones that appear frequently in the original image, or, for example, to brighten the entire addition signal PS. As described above, it is possible to realize spatial processing and gradation processing simultaneously.
- the visual processing unit 63 may calculate the above expression M6 without using the two-dimensional LUT 4 based on the input signal IS and the unsharp signal US.
- one-dimensional LUT may be used in the calculation of the functions F9 and F10.
- if the value C of a certain element of the profile data obtained by equation M6 exceeds the range of 0 ≤ C ≤ 255, the value C of that element may be set to 0 or 255.
- the seventh profile data is determined based on an operation of adding a value obtained by tone-correcting the input signal IS to a value obtained by enhancing the difference between the input signal IS and the unsharp signal US.
- enhancement of the sharp component and gradation correction of the input signal IS are performed independently. For this reason, it is possible to enhance a constant sharp component regardless of the gradation correction amount of the input signal IS.
- the enhancement function F11 is any one of the enhancement functions R1 to R3 described with reference to FIG. 49.
- the gradation correction function F12 is, for example, a gamma correction function, an S-shaped gradation correction function, an inverse S-shaped gradation correction function, or the like.
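- The sketch below is only an inference for the seventh profile data, written to match the adder output F12(A) + F11(A - B) described later in this section; the concrete F11 and F12 are placeholders for the functions named above.

```python
import numpy as np

def profile_m7(A: np.ndarray, B: np.ndarray, gamma: float = 1.0 / 2.2) -> np.ndarray:
    """C = F12(A) + F11(A - B): gradation correction and sharp enhancement added independently."""
    F11 = lambda d: 0.5 * d         # stand-in for enhancement functions R1 to R3
    F12 = lambda a: a ** gamma      # stand-in gradation correction (gamma) function
    return F12(A) + F11(A - B)

A = np.array([0.8, 0.3])            # input signal IS
B = np.array([0.6, 0.4])            # unsharp signal US
C = np.clip(profile_m7(A, B), 0.0, 1.0)
```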
- <<Equivalent visual processing device 71>>
- FIG. 23 shows a visual processing device 71 equivalent to the visual processing device 1 in which the seventh profile data is registered in the two-dimensional LUT 4.
- the visual processing device 71 is a device that outputs an output signal OS based on an operation of adding a value obtained by tone-correcting the input signal IS to a value in which the difference between the input signal IS and the unsharp signal US is emphasized.
- the enhancement of the sharp component and the gradation correction of the input signal IS are performed independently. For this reason, it is possible to emphasize a certain sharp component regardless of the gradation correction amount of the input signal IS.
- the visual processing device 71 shown in FIG. 23 includes a spatial processing unit 72 that performs spatial processing on the luminance value of each pixel of the original image acquired as the input signal IS and outputs an unsharp signal US, and a visual processing unit 73 that performs visual processing of the original image using the input signal IS and the unsharp signal US and outputs an output signal OS.
- the spatial processing unit 72 performs the same operation as the spatial processing unit 2 included in the visual processing device 1, and thus the description is omitted.
- the visual processing unit 73 includes a subtraction unit 74 that receives the input signal IS as a first input and the unsharp signal US as a second input and outputs a difference signal DS that is the difference between the two; an enhancement processing unit 75 that receives the difference signal DS as an input and outputs an enhancement processing signal TS obtained by enhancing the difference signal DS; a gradation correction unit 76 that receives the input signal IS as an input and outputs a gradation correction signal GC obtained by performing gradation correction; and an addition unit 77 that receives the gradation correction signal GC as a first input and the enhancement processing signal TS as a second input and outputs the output signal OS.
- the subtraction unit 74 calculates the difference between the input signal IS of value A and the unsharp signal US of value B, and outputs the difference signal DS of value A - B. The enhancement processing unit 75 uses the enhancement function F11 to output the enhancement processing signal TS of value F11(A - B) from the difference signal DS of value A - B.
- the gradation correction unit 76 uses the gradation correction function F12 to output the gradation correction signal GC of value F12(A) from the input signal IS of value A. The addition unit 77 adds the gradation correction signal GC of value F12(A) and the enhancement processing signal TS of value F11(A - B), and outputs the output signal OS of value F12(A) + F11(A - B).
- the calculation using the enhancement function F 11 and the gradation correction function F 12 may be performed using a one-dimensional LUT for each function, or may be performed without using the LUT.
- the visual processing device 1 and the visual processing device 71 having the seventh profile data have the same visual processing effect.
- the input signal IS is subjected to gradation correction by the gradation correction unit 76 and then added to the enhancement processing signal TS. Therefore, even in a region where the gradation correction function F12 has a small gradation change, that is, in a region where the contrast is reduced, the local contrast can be enhanced by adding the enhancement processing signal TS thereafter. .
- the visual processing unit 73 may calculate the above equation M7 based on the input signal IS and the unsharp signal US without using the two-dimensional LUT4.
- in that case, a one-dimensional LUT may be used in the calculation of each of the functions F11 and F12.
- if the value C of a certain element of the profile data obtained by equation M7 exceeds the range of 0 ≤ C ≤ 255, the value C of that element may be set to 0 or 255.
- each element of the first to seventh profile data stores a value calculated based on the expressions M1 to M7.
- if a value calculated by equations M1 to M7 exceeds the range of values that can be stored in the profile data, the value of that element may be limited in each profile data.
- in addition, some of the values stored in the profile data may be arbitrary. For example, in a case where the value of the input signal IS is large but the value of the unsharp signal US is small, such as a small bright part in a dark night scene (a neon part in a night view), the value of the input signal IS after visual processing has little effect on the image quality.
- for such elements, the value stored in the profile data may be an approximate value of the value calculated by equations M1 to M7, or an arbitrary value.
- a visual processing device 600 as a second embodiment of the present invention will be described with reference to FIGS. 24 to 39.
- the visual processing device 600 is a visual processing device that performs visual processing on an image signal (input signal IS) and outputs a visually processed image (output signal OS), and performs visual processing according to the environment in which a display device (not shown) that displays the output signal OS is installed (hereinafter referred to as the display environment).
- specifically, the visual processing device 600 is a device that improves the reduction in the "visual contrast" of the displayed image caused by ambient light in the display environment, by means of visual processing that uses human visual characteristics.
- the visual processing device 600 constitutes an image processing device, together with a device that performs color processing of image signals, in devices that handle images such as a computer, a television, a digital camera, a mobile phone, a PDA, a printer, and a scanner.
- FIG. 24 shows the basic configuration of the visual processing device 600.
- the visual processing device 600 includes a target contrast conversion unit 601, a conversion signal processing unit 602, an actual contrast conversion unit 603, a target contrast setting unit 604, and an actual contrast setting unit 605.
- the target contrast conversion unit 601 receives the input signal IS as a first input and the target contrast C1 set in the target contrast setting unit 604 as a second input, and outputs a target contrast signal JS. Note that the definition of the target contrast C1 will be described later.
- the conversion signal processing unit 602 receives the target contrast signal JS as a first input, the target contrast C1 as a second input, and the actual contrast C2 set in the actual contrast setting unit 605 as a third input, and outputs a visual processing signal KS obtained by visually processing the target contrast signal JS. Note that the definition of the actual contrast C2 will be described later.
- the actual contrast conversion unit 603 uses the visual processing signal KS as the first input, the actual contrast C2 as the second input, and the output signal OS as the output.
- the target contrast setting unit 604 and the actual contrast setting unit 605 allow the user to set the values of the target contrast C1 and the actual contrast C2 via an input interface or the like.
- the target contrast conversion unit 601 converts the input signal IS input to the visual processing device 600 into a target contrast signal JS suitable for contrast expression.
- the luminance value of the image input to the visual processing device 600 is represented by gradations in the value range [0.0 to 1.0].
- the target contrast conversion unit 601 converts the input signal IS (value [P]) using Expression M20 and outputs the target contrast signal JS (value [A]).
- the value [m] of the target contrast C1 is set to the contrast value at which the display image displayed by the display device looks best.
- FIG. 25 is a graph showing the relationship between the value of the input signal IS (horizontal axis) and the value of the target contrast signal JS (vertical axis).
- As shown in FIG. 25, the target contrast conversion unit 601 converts the input signal IS having a value in the range [0.0 to 1.0] into a target contrast signal JS having a value in the range [1/m to 1.0].
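- Expression M20 itself is not reproduced in this passage; the following is a minimal sketch assuming a linear mapping of [0.0, 1.0] onto [1/m, 1.0], which is consistent with FIG. 25's endpoints but may differ from the patent's exact expression.

```python
def target_contrast_convert(P, m):
    """Assumed linear form of Expression M20: map an input value P in
    [0.0, 1.0] onto the target contrast range [1/m, 1.0]."""
    return (1.0 + (m - 1.0) * P) / m

# e.g. with target contrast C1 = m = 100:
A = target_contrast_convert(0.5, m=100.0)   # -> 0.505
```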
- the conversion signal processing section 602 compresses the dynamic range while maintaining the local contrast of the input target contrast signal JS, and outputs the visual processing signal KS.
- the conversion signal processing unit 602 has the same configuration, operation, and effect as the visual processing device 21 shown in the first embodiment (see FIG. 16) when the input signal IS is regarded as the target contrast signal JS and the output signal OS is regarded as the visual processing signal KS.
- the conversion signal processing unit 602 outputs the visual processing signal KS based on a calculation that emphasizes the ratio between the target contrast signal JS and the unsharp signal US. This makes it possible to realize, for example, visual processing that emphasizes the sharp component.
- Further, the conversion signal processing unit 602 outputs the visual processing signal KS based on a calculation that performs dynamic range compression on the target contrast signal JS together with the enhanced ratio between the target contrast signal JS and the unsharp signal US. This enables, for example, visual processing that compresses the dynamic range while enhancing the sharp component.
- the conversion signal processing unit 602 includes a spatial processing unit 622 that performs spatial processing on the luminance value of each pixel of the target contrast signal JS and outputs an unsharp signal US, and a visual processing unit 623 that performs visual processing on the target contrast signal JS using the target contrast signal JS and the unsharp signal US and outputs a visual processing signal KS.
- the spatial processing unit 622 performs the same operation as the spatial processing unit 2 included in the visual processing device 1 (see FIG. 1), and thus a detailed description is omitted.
- the visual processing unit 623 includes a division unit 625, an enhancement processing unit 626, and an output processing unit 627 including a compression unit 628 and a multiplication unit 629.
- the division unit 625 receives the target contrast signal JS as a first input and the unsharp signal US as a second input, and outputs a division signal RS obtained by dividing the target contrast signal JS by the unsharp signal US.
- the enhancement processing unit 626 receives the division signal RS as a first input, the target contrast C1 as a second input, and the actual contrast C2 as a third input, and outputs an enhancement processing signal TS.
- the output processing unit 627 receives the target contrast signal JS as a first input, the enhancement processing signal TS as a second input, the target contrast C1 as a third input, and the actual contrast C2 as a fourth input, and outputs the visual processing signal KS.
- the DR compression unit 628 receives the target contrast signal JS as a first input, the target contrast C1 as a second input, and the actual contrast C2 as a third input, and outputs a DR compressed signal DRS whose dynamic range (DR) has been compressed.
- the multiplication unit 629 receives the DR compressed signal DRS as a first input and the enhancement processing signal TS as a second input, and outputs the visual processing signal KS.
- the conversion signal processing unit 602 converts the target contrast signal JS (value [A]) by Expression M2 and outputs the visual processing signal KS (value [C]).
- the value [B] is the value of the unsharp signal US obtained by spatially processing the target contrast signal JS.
- the spatial processing unit 622 performs spatial processing on the target contrast signal JS having the value [A], and outputs an unsharp signal US having the value [B].
- the division unit 625 divides the target contrast signal JS having the value [A] by the unsharp signal US having the value [B], and outputs a division signal RS having the value [A/B].
- the enhancement processing unit 626 uses the enhancement function F5 to output the enhancement processing signal TS having the value [F5(A/B)] from the division signal RS having the value [A/B].
- the DR compression unit 628 outputs the DR compressed signal DRS of value [F4(A)] from the target contrast signal JS of value [A] using the dynamic range compression function F4.
- the multiplication unit 629 multiplies the DR compressed signal DRS having the value [F4(A)] by the enhancement processing signal TS having the value [F5(A/B)], and outputs the visual processing signal KS having the value [F4(A)*F5(A/B)].
- the calculation using the dynamic range compression function F4 and the enhancement function F5 may be performed using a one-dimensional LUT for each function, or may be performed without using the LUT.
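- As a sketch of how Expression M2 behaves, the snippet below takes F4 and F5 as power functions whose common exponent is derived from the target contrast C1 (value m) and the actual contrast C2 (value n); this is one consistent choice matching the compression from 1:m to 1:n and the preservation of local contrast described below, not necessarily the patent's exact definition.

```python
import math

def m2_visual_processing(A, B, m, n):
    """Expression M2 sketch: KS = F4(A) * F5(A / B), with F4 and F5 assumed to
    be power functions. gamma = log(n) / log(m) maps the range [1/m, 1.0] onto
    [1/n, 1.0] while keeping the ratio of local changes in KS and JS equal to 1."""
    gamma = math.log(n) / math.log(m)
    F4 = A ** gamma                 # dynamic range compression of JS
    F5 = (A / B) ** (1.0 - gamma)   # enhancement of the ratio JS / US
    return F4 * F5

# e.g. target contrast m = 100, actual contrast n = 20; A = JS value, B = US value
KS = m2_visual_processing(A=0.25, B=0.30, m=100.0, n=20.0)
```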
- the visual dynamic range of the visual processing signal KS is determined by the value of the dynamic range compression function F4.
- FIG. 26 is a graph showing the relationship between the value of the target contrast signal JS (horizontal axis) and the value obtained by applying the dynamic range compression function F4 to the target contrast signal JS (vertical axis).
- the dynamic range of the target contrast signal JS is compressed by the dynamic range compression function F4. More specifically, the target contrast signal JS having a value in the range [1/m to 1.0] is converted into a value in the range [1/n to 1.0] by the dynamic range compression function F4.
- the value [n] of the actual contrast C2 is set as the visual contrast value of the display image under the ambient light of the display environment. That is, the value [n] of the actual contrast C2 can be determined as the value obtained by reducing the value [m] of the target contrast C1 by the influence of the luminance of the ambient light in the display environment.
- the dynamic range of the target contrast signal JS is compressed from 1: m to 1: n by Expression M2.
- the “dynamic range” means the ratio between the minimum value and the maximum value of a signal.
- the local change in contrast in the visual processing signal KS is expressed as the ratio between the amount of change in the value [A] of the target contrast signal JS and the amount of change in the value [C] of the visual processing signal KS before and after the conversion.
- the value [B] of the unsharp signal US in a local, that is, narrow, range can be regarded as constant. Therefore, the ratio between the amount of change in the value C and the amount of change in the value A in Expression M2 is 1, and the local contrast does not change between the target contrast signal JS and the visual processing signal KS.
- Thus, the conversion signal processing unit 602 can realize visual processing that does not reduce the visual contrast while compressing the dynamic range of the target contrast signal JS.
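- Under the power-function reading of Expression M2 sketched above, this local-contrast property can be written out in one line; the derivation below is offered as a consistency check under that assumption.

```latex
C = A^{\gamma}\left(\tfrac{A}{B}\right)^{1-\gamma},\qquad \gamma=\frac{\log n}{\log m}
\;\Longrightarrow\;
\ln C = \ln A - (1-\gamma)\ln B
\;\Longrightarrow\;
\left.\frac{\mathrm{d}\ln C}{\mathrm{d}\ln A}\right|_{B\ \mathrm{const}} = 1 .
```
- In other words, the relative (local) change of KS equals that of JS, while the A^γ factor still compresses the overall range from [1/m, 1.0] to [1/n, 1.0].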
- the actual contrast conversion unit 603 converts the visual processing signal KS into image data in a range that can be input to a display device (not shown).
- the image data within the range that can be input to the display device is, for example, image data in which the luminance value of an image is represented by a gradation of a value [0.0 to 1.0].
- the actual contrast conversion unit 603 converts the visual processing signal KS (value [C]) using Expression M21 with the actual contrast C2 (value [n]), and outputs the output signal OS (value [Q]).
- FIG. 27 is a graph showing the relationship between the value of the visual processing signal KS (horizontal axis) and the value of the output signal OS (vertical axis).
- As shown in FIG. 27, the actual contrast conversion unit 603 converts the visual processing signal KS in the value range [1/n to 1.0] into the output signal OS in the value range [0.0 to 1.0].
- Here, the value of the output signal OS decreases with respect to the value of each visual processing signal KS. This decrease corresponds to the influence of the ambient light on the brightness of the display image.
- In the actual contrast conversion unit 603, when a visual processing signal KS having a value of [1/n] or less is input, the output signal OS is converted to the value [0]. Further, when a visual processing signal KS having a value of [1] or more is input, the output signal OS is converted to the value [1].
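- Expression M21 is likewise not reproduced here; the following minimal sketch assumes a linear mapping of [1/n, 1.0] onto [0.0, 1.0] with the clipping just described, which matches FIG. 27's endpoints but may differ from the patent's exact expression.

```python
def actual_contrast_convert(C, n):
    """Assumed linear form of Expression M21: map a visual processing signal
    value C in [1/n, 1.0] onto [0.0, 1.0], clipping values outside the range."""
    Q = (n * C - 1.0) / (n - 1.0)
    return min(max(Q, 0.0), 1.0)

# e.g. actual contrast n = 20:
actual_contrast_convert(1.0 / 20.0, n=20.0)   # -> 0.0
actual_contrast_convert(1.0, n=20.0)          # -> 1.0
```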
- the visual processing device 600 has the same effect as the visual processing device 21 described in the first embodiment. Hereinafter, effects characteristic of the visual processing device 600 will be described.
- When ambient light is present in the display environment in which the output signal OS of the visual processing device 600 is displayed, the output signal OS is viewed under the influence of that ambient light.
- the output signal OS is a signal that has been subjected to processing for correcting the influence of ambient light by the actual contrast converter 603.
- the output signal OS displayed on the display device is viewed as a display image having the characteristics of the visual processing signal KS.
- the visual processing signal KS has the characteristic that the dynamic range of the entire image is compressed while the local contrast is maintained. In other words, the visual processing signal KS is a signal compressed to a dynamic range that can be displayed under the influence of the ambient light (corresponding to the actual contrast C2) while locally maintaining the target contrast C1 at which the display image is optimally displayed.
- Therefore, the visual processing device 600 corrects the contrast that would otherwise be reduced by the presence of ambient light, and can maintain the visual contrast through processing that uses visual characteristics.
- a visual processing method that produces the same effect as the visual processing device 600 will be described with reference to FIG. Note that the specific processing of each step is the same as the processing in the visual processing device 600, and the description thereof is omitted.
- the set target contrast C1 and actual contrast C2 are obtained (step S601).
- conversion is performed on the input signal I S (step S602), and the target contrast signal J S is output.
- spatial processing is performed on the target contrast signal JS (step S603), and an unsharp signal US is output.
- the target contrast signal J S is divided by the unsharp signal US (step S604), and a division signal RS is output.
- the division signal RS is emphasized by an emphasis function F5 which is a “power function” having an index determined by the target contrast C1 and the actual contrast C2 (step S605), and an emphasis processing signal TS is output.
- the target contrast signal JS is dynamic-range-compressed by the dynamic range compression function F4, which is a "power function" having an exponent determined by the target contrast C1 and the actual contrast C2 (step S606), and the DR compressed signal DRS is output.
- the enhancement processing signal TS output in step S605 and the DR compression signal DRS output in step S606 are multiplied (step S607), and a visual processing signal K S is output.
- the visual processing signal KS is converted using the actual contrast C2 (step S608), and the output signal OS is output.
- the processing from step S602 to step S608 is repeated for all the pixels of the input signal IS (step S609).
- Each step of the visual processing method shown in FIG. 28 may be realized as a visual processing program in the visual processing device 600 or another computer. Further, the processing from step S604 to step S607 may be performed by calculating equation M2.
- the conversion signal processing unit 602 outputs the visual processing signal KS based on the equation M2.
- the conversion signal processing unit 602 may output the visual processing signal KS based only on the dynamic range compression function F4.
- the conversion signal processing unit 602 as a modification need not include the spatial processing unit 622, the division unit 625, the enhancement processing unit 626, and the multiplication unit 629, and need only include the DR compression unit 628.
- the converted signal processing unit 602 as a modified example can output a visual processing signal KS compressed to a dynamic range that can be displayed under the influence of ambient light.
- Alternatively, the exponent of the enhancement function F5 may be a function of the value [A] of the target contrast signal JS or the value [B] of the unsharp signal US, as in the following <<1>> to <<4>>.
- <<1>> The exponent of the enhancement function F5 is a function of the value [A] of the target contrast signal JS, and is a monotonically decreasing function when the value [A] of the target contrast signal JS is larger than the value [B] of the unsharp signal US. More specifically, the exponent of the enhancement function F5 is expressed as α1(A)*(1−γ), and the function α1(A), as shown in FIG. 29, decreases monotonically with respect to the value [A] of the target contrast signal JS. Note that the maximum value of the function α1(A) is [1.0].
- In this case, the enhancement function F5 reduces the amount of local contrast enhancement in the high-luminance part. For this reason, when the luminance of the target pixel is higher than the luminance of the surrounding pixels, overemphasis of the local contrast in the high-luminance part is suppressed. That is, the luminance value of the target pixel is prevented from being saturated to high luminance, and the so-called overexposed state is suppressed.
- <<2>> The exponent of the enhancement function F5 is a function of the value [A] of the target contrast signal JS, and is a monotonically increasing function when the value [A] of the target contrast signal JS is smaller than the value [B] of the unsharp signal US. More specifically, the exponent of the enhancement function F5 is expressed as α2(A)*(1−γ), and the function α2(A), as shown in FIG. 30, increases monotonically with respect to the value [A] of the target contrast signal JS. Note that the maximum value of the function α2(A) is [1.0].
- In this case, the enhancement function F5 reduces the amount of local contrast enhancement in the low-luminance part. For this reason, when the luminance of the target pixel is lower than the luminance of the surrounding pixels, overemphasis of the local contrast in the low-luminance part is suppressed. That is, the luminance value of the target pixel is prevented from being saturated to low luminance, and the so-called blackened state is suppressed.
- <<3>> The exponent of the enhancement function F5 is a function of the value [A] of the target contrast signal JS, and is a monotonically increasing function when the value [A] of the target contrast signal JS is larger than the value [B] of the unsharp signal US. More specifically, the exponent of the enhancement function F5 is expressed as α3(A)*(1−γ), and the function α3(A), as shown in FIG. 31, increases monotonically with respect to the value [A] of the target contrast signal JS. Note that the maximum value of the function α3(A) is [1.0].
- In this case, the enhancement function F5 reduces the amount of local contrast enhancement in the low-luminance part. For this reason, when the luminance of the target pixel is higher than the luminance of the surrounding pixels, overemphasis of the local contrast in the low-luminance part is suppressed. Since the low-luminance part of an image has a relatively low signal level, its ratio of noise is relatively high, and performing such processing makes it possible to suppress deterioration of the SN ratio.
- <<4>> The exponent of the enhancement function F5 is a function of the value [A] of the target contrast signal JS and the value [B] of the unsharp signal US, and is a function that decreases monotonically with respect to the absolute value of the difference between the values [A] and [B].
- In other words, the exponent of the enhancement function F5 is a function that increases as the ratio between the value [A] and the value [B] approaches 1. More specifically, the exponent of the enhancement function F5 is expressed as α4(A,B)*(1−γ), and the function α4(A,B), as shown in FIG. 32, decreases monotonically with respect to the absolute value of [A−B].
- <<5>> An upper limit or a lower limit may be set for the calculation result of the enhancement function F5 in the above <<1>> to <<4>>. Specifically, when the value [F5(A/B)] exceeds a predetermined upper limit, the predetermined upper limit is adopted as the calculation result of the enhancement function F5; when the value [F5(A/B)] falls below a predetermined lower limit, the predetermined lower limit is adopted as the calculation result of the enhancement function F5.
- In this way, the amount of local contrast enhancement by the enhancement function F5 can be limited to an appropriate range, and excessive or insufficient contrast enhancement is suppressed.
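- The sketch below combines the adaptive exponent of <<1>> with the clamping of <<5>>; the concrete shape chosen for α1 and the limit values are illustrative assumptions, not values fixed by the text.

```python
def enhance_f5(A, B, gamma, alpha1, lower=0.2, upper=5.0):
    """Enhancement function F5 with an exponent alpha1(A) * (1 - gamma) that
    shrinks in bright regions (<<1>>), and with the result clamped to
    [lower, upper] (<<5>>)."""
    exponent = alpha1(A) * (1.0 - gamma)
    ts = (A / B) ** exponent
    return min(max(ts, lower), upper)

# alpha1: monotonically decreasing in A with maximum value 1.0 (assumed shape)
alpha1 = lambda A: 1.0 - 0.5 * A
TS = enhance_f5(A=0.8, B=0.5, gamma=0.65, alpha1=alpha1)
```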
- the above <<1>> to <<5>> can be similarly applied to the case where the calculation using the enhancement function F5 is performed in the first embodiment (for example, <Profile data> (2) or (3) of the first embodiment).
- In that case, the value [A] is the value of the input signal IS, and the value [B] is the value of the unsharp signal US obtained by spatially processing the input signal IS.
- In the above, the conversion signal processing unit 602 has the same configuration as the visual processing device 21 shown in the first embodiment.
- the conversion signal processing unit 602 as a modification may instead have a configuration similar to that of the visual processing device 31 (see FIG. 19) shown in the first embodiment.
- the conversion signal processing unit 602 as a modification performs a calculation based on Expression M3 on the target contrast signal JS (value [A]) and the unsharp signal US (value [B]), and outputs a visual processing signal KS (value [C]).
- the dynamic range of the input signal IS is not compressed, but local contrast can be enhanced.
- the effect of this local contrast enhancement can give the impression that the dynamic range is “compressed or expanded” “visually”.
- the enhancement function F5 is a "power function", and its exponent may be a function having the same tendency as the functions α1(A), α2(A), α3(A), or α4(A,B) described above. Further, as described in the above <Modification> (ii) <<5>>, an upper limit or a lower limit may be set for the calculation result of the enhancement function F5.
- the target contrast setting unit 604 and the actual contrast setting unit 605 allow the user to set the values of the target contrast C1 and the actual contrast C2 via the input interface or the like.
- the target contrast setting section 604 and the actual contrast setting section 605 may be capable of automatically setting the values of the target contrast C1 and the actual contrast C2.
- the display device that displays the output signal OS is a display such as a PDP, LCD, or CRT, and the white luminance (white level) and black luminance (black level) that can be displayed without ambient light are known.
- the actual contrast setting unit 605 that automatically sets the value of the actual contrast C2 will be described.
- Figure 33 shows the actual contrast setting unit 605 that automatically sets the value of actual contrast C2.
- the actual contrast setting unit 605 includes a luminance measurement unit 605a, a storage unit 605b, and a calculation unit 605c.
- the luminance measurement unit 605a is a luminance sensor that measures a luminance value of ambient light in a display environment of a display that displays the output signal OS.
- the storage unit 605b stores the white luminance (white level) at which the display for displaying the output signal OS can be displayed without ambient light.
- the calculation unit 605c acquires the values from the luminance measurement unit 605a and the storage unit 605b, respectively, and calculates the value of the actual contrast C2.
- the calculation unit 605c adds the luminance value of the ambient light acquired from the luminance measurement unit 605a to each of the black-level luminance value and the white-level luminance value stored in the storage unit 605b. Further, the calculation unit 605c outputs, as the value [n] of the actual contrast C2, the value obtained by dividing the result of the addition to the white-level luminance value by the result of the addition to the black-level luminance value. As a result, the value [n] of the actual contrast C2 indicates the contrast value displayed by the display in a display environment where ambient light is present.
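- Written out, the computation of the calculation unit 605c is simply the ratio of the ambient-light-raised white and black levels; the luminance unit (for example, cd/m^2) and the sample numbers below are illustrative.

```python
def actual_contrast_from_ambient(white_level, black_level, ambient_luminance):
    """Value [n] of the actual contrast C2: the ambient-light luminance is
    added to both the stored white-level and black-level luminances, and the
    white result is divided by the black result."""
    return (white_level + ambient_luminance) / (black_level + ambient_luminance)

# e.g. a display with white 500, black 0.5, and ambient light adding 5:
n = actual_contrast_from_ambient(500.0, 0.5, 5.0)   # about 91.8
```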
- the storage unit 605b shown in FIG. 33 may also store, as the value [m] of the target contrast C1, the ratio between the white luminance (white level) and the black luminance (black level) that the display can produce in the absence of ambient light.
- In this case, the actual contrast setting unit 605 also serves the function of the target contrast setting unit 604 of automatically setting the target contrast C1.
- Alternatively, the storage unit 605b need not store the ratio, and the ratio may instead be calculated by the calculation unit 605c.
- Next, a description will be given of the actual contrast setting unit 605 that automatically sets the value of the actual contrast C2 in the case where the display device that displays the output signal OS is, for example, a projector, whose displayable white luminance (white level) and black luminance (black level) without ambient light depend on the distance to the screen.
- Figure 34 shows the actual contrast setting section 605 that automatically sets the value of the actual contrast C2.
- the actual contrast setting unit 605 includes a luminance measurement unit 605d and a control unit 605e.
- the luminance measurement unit 605d is a luminance sensor that measures the luminance value of the output signal OS displayed by the projector in the display environment.
- the control unit 605e causes the projector to display the white level and the black level. Further, it acquires the luminance value measured when each level is displayed from the luminance measurement unit 605d, and calculates the value of the actual contrast C2.
- An example of the operation of the control unit 605e will be described with reference to FIG. 35. First, the control unit 605e operates the projector in a display environment where ambient light is present, and performs white level display (step S620). The control unit 605e acquires the measured white level luminance from the luminance measurement unit 605d (step S621).
- control unit 605e operates the projector in a display environment where ambient light is present, and performs black level display (step S622).
- the control unit 605e acquires the measured black level luminance from the luminance measurement unit 605d (step S623).
- the control unit 605e calculates the ratio between the acquired luminance value of the white level and the luminance value of the black level, and outputs the result as the value of the actual contrast C2.
- the value [n] of the actual contrast C 2 indicates the contrast value displayed by the projector in a display environment where ambient light exists.
- the value [m] of the target contrast C 1 can be derived by calculating the ratio between the white level and the black level in a display environment in which no ambient light exists.
- the actual contrast setting unit 605 simultaneously performs the function of the target contrast setting unit 604 for automatically setting the target contrast C1.
- the processing in the visual processing device 600 is performed on the luminance of the input signal IS.
- the present invention is effective not only when the input signal IS is expressed in the YCbCr color space.
- the input signal IS may be represented by a YUV color space, a Lab color space, a LuV color space, a YIQ color space, an XYZ color space, a YPbPr color space, or the like.
- the processing described in the above embodiment can be executed for the luminance and lightness of each color space.
- When the input signal IS is expressed in the RGB color space, the processing in the visual processing device 600 may be performed independently for each of the RGB components.
- That is, the processing by the target contrast conversion unit 601 is performed independently on each of the RGB components of the input signal IS, and the RGB components of the target contrast signal JS are output. Further, the processing by the conversion signal processing unit 602 is performed independently on each of the RGB components of the target contrast signal JS, and the RGB components of the visual processing signal KS are output. Further, the processing by the actual contrast conversion unit 603 is performed independently on each of the RGB components of the visual processing signal KS, and the RGB components of the output signal OS are output.
- Here, a common value is used for the target contrast C1 and the actual contrast C2 in the processing of each of the RGB components.
- the visual processing device 600 may further include a color difference correction processing unit in order to prevent the hue of the output signal OS from differing from the hue of the input signal IS due to the influence of the processing of the luminance component by the conversion signal processing unit 602.
- FIG. 36 shows a visual processing device 600 that includes a color difference correction processing unit 608.
- the same components as those of the visual processing device 600 shown in FIG. 24 are denoted by the same reference numerals, and descriptions thereof are omitted. It is assumed that the input signal IS has a YCbCr color space and that the same processing as described in the above embodiment is performed on the Y component.
- the color difference correction processing unit 608 will be described.
- the color difference correction processing unit 608 receives the target contrast signal JS as a first input (value [Yin]), the visual processing signal KS as a second input (value [Yout]), the Cb component of the input signal IS as a third input (value [CBin]), and the Cr component of the input signal IS as a fourth input (value [CRin]), and outputs the Cb component subjected to the color difference correction processing as a first output (value [CBout]) and the Cr component subjected to the color difference correction processing as a second output (value [CRout]).
- FIG. 37 outlines the color difference correction process.
- the color difference correction processing unit 608 receives the four inputs [Yin], [Yout], [CBin], and [CRin], and obtains the two outputs [CBout] and [CRout] by performing calculations on these four inputs.
- [CBout] and [CRout] are derived based on the following expressions, which correct [CBin] and [CRin] using the difference and the ratio between [Yin] and [Yout].
- [CBout] = a1*([Yout]−[Yin])*[CBin] + a2*(1−[Yout]/[Yin])*[CBin] + a3*([Yout]−[Yin])*[CRin] + a4*(1−[Yout]/[Yin])*[CRin] + [CBin] (hereinafter referred to as Expression CB).
- [CRout] = a5*([Yout]−[Yin])*[CBin] + a6*(1−[Yout]/[Yin])*[CBin] + a7*([Yout]−[Yin])*[CRin] + a8*(1−[Yout]/[Yin])*[CRin] + [CRin] (hereinafter referred to as Expression CR).
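- For reference, Expressions CB and CR can be evaluated directly once the coefficients a1 to a8 are known; the coefficient values in the usage line below are placeholders only.

```python
def color_difference_correction(Yin, Yout, CBin, CRin, a):
    """Expressions CB and CR: correct the Cb/Cr components using the difference
    and the ratio between the luminance before (Yin) and after (Yout) the
    conversion signal processing. 'a' holds the coefficients [a1, ..., a8]."""
    d = Yout - Yin           # luminance difference term
    r = 1.0 - Yout / Yin     # luminance ratio term
    CBout = a[0] * d * CBin + a[1] * r * CBin + a[2] * d * CRin + a[3] * r * CRin + CBin
    CRout = a[4] * d * CBin + a[5] * r * CBin + a[6] * d * CRin + a[7] * r * CRin + CRin
    return CBout, CRout

# Placeholder coefficients; in practice a1..a8 come from the estimation described below.
CBout, CRout = color_difference_correction(0.4, 0.55, 0.10, -0.05, a=[0.1] * 8)
```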
- For the coefficients a1 to a8 in Expressions CB and CR, values determined in advance by the estimation operation described below, performed by a calculation device external to the visual processing device 600, are used.
- First, the four inputs [Yin], [Yout], [CBin], and [CRin] are obtained (step S630).
- the value of each input is data prepared in advance in order to determine the coefficients a1 to a8.
- For [Yin], [CBin], and [CRin], values obtained by thinning out all possible values at predetermined intervals are used.
- For [Yout], values obtained by thinning out, at predetermined intervals, the values that can be output when the value of [Yin] is input to the conversion signal processing unit 602 are used.
- the data prepared in this way are obtained as the four inputs.
- Next, the obtained [Yin], [CBin], and [CRin] are converted into the Lab color space, and the chromaticity values [Ain] and [Bin] in the converted Lab color space are calculated (step S631).
- Next, Expression CB and Expression CR are calculated using the default coefficients a1 to a8, and the values of [CBout] and [CRout] are obtained (step S632).
- Further, these values and [Yout] are converted into the Lab color space, and the chromaticity values [Aout] and [Bout] in the converted Lab color space are calculated (step S633).
- Next, an evaluation function is calculated using the calculated chromaticity values [Ain], [Bin], [Aout], and [Bout] (step S634), and it is determined whether the value of the evaluation function is equal to or less than a predetermined threshold.
- the evaluation function is a function that takes a small value when the change in hue between [Ain], [Bin] and [Aout], [Bout] is small, for example, the sum of the squares of the deviations of the respective components. More specifically, the evaluation function is, for example, ([Ain]−[Aout])^2 + ([Bin]−[Bout])^2.
- When the value of the evaluation function is larger than the predetermined threshold (step S635), the coefficients a1 to a8 are corrected (step S636), and the operations of steps S632 to S635 are repeated using the new coefficients.
- When the value of the evaluation function is equal to or smaller than the predetermined threshold (step S635), the coefficients a1 to a8 used in the calculation of the evaluation function are output as the result of the estimation calculation (step S637).
- In the above, the coefficients a1 to a8 are estimated using one of the prepared combinations of the four inputs [Yin], [Yout], [CBin], and [CRin]; however, the above-described processing may be performed using a plurality of combinations, and the coefficients a1 to a8 that minimize the evaluation function may be output as the result of the estimation calculation.
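- The sketch below mirrors this estimation loop over several prepared sample tuples, reusing the color_difference_correction function from the earlier sketch. A generic optimizer stands in for the coefficient-correction steps, and the chromaticity function is passed in by the caller because the Lab conversion itself is not reproduced here; the lambda in the usage line is a crude stand-in, not a real Lab conversion.

```python
import numpy as np
from scipy.optimize import minimize

def estimate_coefficients(samples, chroma):
    """Estimate a1..a8 (steps S630-S637) over sample tuples (Yin, Yout, CBin,
    CRin). 'chroma(Y, CB, CR)' should return Lab chromaticity values (A, B).
    The evaluation function is the summed squared chromaticity deviation."""
    def evaluation(a):
        total = 0.0
        for Yin, Yout, CBin, CRin in samples:
            CBout, CRout = color_difference_correction(Yin, Yout, CBin, CRin, a)
            Ain, Bin = chroma(Yin, CBin, CRin)
            Aout, Bout = chroma(Yout, CBout, CRout)
            total += (Ain - Aout) ** 2 + (Bin - Bout) ** 2
        return total
    return minimize(evaluation, x0=np.zeros(8), method="Nelder-Mead").x

samples = [(0.4, 0.55, 0.10, -0.05), (0.6, 0.50, -0.20, 0.15)]
a_est = estimate_coefficients(samples, chroma=lambda Y, CB, CR: (CB / Y, CR / Y))
```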
- In the above, the value of the target contrast signal JS is used as [Yin], the value of the visual processing signal KS as [Yout], the value of the Cb component of the input signal IS as [CBin], the value of the Cr component of the input signal IS as [CRin], the value of the Cb component of the output signal OS as [CBout], and the value of the Cr component of the output signal OS as [CRout]; however, [Yin], [Yout], [CBin], [CRin], [CBout], and [CRout] may represent the values of other signals.
- For example, when the input signal IS is a signal in the RGB color space, the target contrast conversion unit 601 may perform processing on each component of the input signal IS.
- In that case, the processed signal in the RGB color space is converted into a signal in the YCbCr color space, and the value of the Y component is used as [Yin], the value of the Cb component as [CBin], and the value of the Cr component as [CRin].
- Similarly, when the output signal OS is to be a signal in the RGB color space, the derived [Yout], [CBout], and [CRout] may be converted into the RGB color space, and the conversion processing by the actual contrast conversion unit 603 may be performed on each component to obtain the output signal OS.
- the color difference correction processing unit 608 may correct each of the RGB components input to the color difference correction processing unit 608 using the ratio of the signal values before and after the process of the conversion signal processing unit 602.
- the structure of a visual processing device 600 as a modification will be described with reference to FIG. Note that portions that perform substantially the same functions as those of the visual processing device 600 shown in FIG. 36 are assigned the same reference numerals, and descriptions thereof are omitted.
- the visual processing device 600 as a modification includes a luminance signal generation unit 610 as a characteristic configuration.
- Each component of the input signal IS that is a signal in the RGB color space is converted in the target contrast conversion unit 601 into a target contrast signal J S that is a signal in the RGB color space. Since detailed processing has been described above, description thereof will be omitted.
- the values of the components of the target contrast signal JS are [Rin], [Gin], and [Bin].
- the luminance signal generation unit 610 generates a luminance signal having the value [Yin] from the components of the target contrast signal JS.
- the luminance signal is obtained by adding the values of the R, G, and B components in a certain ratio.
- the conversion signal processing unit 602 processes the luminance signal having the value [Yin] and outputs the visual processing signal KS having the value [Yout]. The detailed processing is the same as the processing in the conversion signal processing unit 602 (see FIG. 36) that outputs the visual processing signal KS from the target contrast signal JS, and thus its description is omitted.
- the color difference correction processing unit 608 receives the luminance signal (value [Yin]), the visual processing signal KS (value [Yout]), and the target contrast signal JS (values [Rin], [Gin], [Bin]), and outputs a color difference correction signal (values [Rout], [Gout], [Bout]) that is a signal in the RGB color space. Specifically, the color difference correction processing unit 608 calculates the ratio (value [Yout]/[Yin]) between the value [Yout] and the value [Yin].
- the calculated ratio is then multiplied, as a color difference correction coefficient, by the respective components of the target contrast signal JS (values [Rin], [Gin], [Bin]).
- As a result, a color difference correction signal (values [Rout], [Gout], [Bout]) is output.
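- In code form, this modification's color difference correction is a single luminance-ratio scaling of the RGB components; the sample values below are illustrative.

```python
def rgb_color_difference_correction(Yin, Yout, Rin, Gin, Bin_):
    """Multiply each RGB component of the target contrast signal JS by the
    luminance ratio [Yout]/[Yin] produced by the conversion signal processing
    unit 602, yielding the color difference correction signal."""
    k = Yout / Yin                      # color difference correction coefficient
    return Rin * k, Gin * k, Bin_ * k   # (Rout, Gout, Bout)

# e.g. luminance raised from 0.40 to 0.52 by the conversion signal processing:
Rout, Gout, Bout = rgb_color_difference_correction(0.40, 0.52, 0.35, 0.42, 0.50)
```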
- the actual contrast conversion unit 603 converts each component of the color difference correction signal, which is a signal in the RGB color space, and converts it into an output signal OS, which is a signal in the RGB color space.
- the detailed processing has been described above, and a description thereof will be omitted.
- the processing in the conversion signal processing unit 602 is only processing for the luminance signal, and it is not necessary to perform processing for each of the RGB components. This reduces the visual processing load on the RGB color space input signal IS.
- the visual processing unit 623 shown in FIG. 24 may be formed by a two-dimensional LUT.
- the two-dimensional LUT stores the values of the visual processing signal KS with respect to the values of the target contrast signal JS and the values of the unsharp signal US. More specifically, the values of the visual processing signal KS are determined based on Expression M2 described in <Profile data> (2) <<Second profile data>> of the first embodiment.
- In Expression M2, the value of the target contrast signal JS is used as the value A, and the value of the unsharp signal US is used as the value B.
- the visual processing device 600 includes a plurality of such two-dimensional LUTs in a storage device (not shown).
- the storage device may be built in the visual processing device 600 or may be connected to the outside via a wire or wirelessly.
- Each two-dimensional LUT stored in the storage device is associated with a value of the target contrast C1 and a value of the actual contrast C2. That is, for each combination of the value of the target contrast C1 and the value of the actual contrast C2, the result of the same operation as described in <Conversion signal processing unit 602> <<Operation of the conversion signal processing unit 602>> of the second embodiment is stored as a two-dimensional LUT.
- When the visual processing unit 623 obtains the values of the target contrast C1 and the actual contrast C2, it reads, from among the two-dimensional LUTs stored in the storage device, the two-dimensional LUT associated with the obtained values. Further, the visual processing unit 623 performs visual processing using the read two-dimensional LUT. Specifically, the visual processing unit 623 obtains the value of the target contrast signal JS and the value of the unsharp signal US, reads the value of the visual processing signal KS corresponding to these values from the two-dimensional LUT, and outputs the visual processing signal KS.
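- A minimal sketch of this LUT-based visual processing unit 623 is given below; the dictionary keyed by (C1, C2), the LUT resolution, and the indexing that assumes signals normalized to [0, 1] are all implementation assumptions for illustration.

```python
import numpy as np

class LutVisualProcessor:
    """Two-dimensional LUT form of the visual processing unit 623: one LUT per
    (target contrast C1, actual contrast C2) pair, each LUT giving the visual
    processing signal KS for quantized (JS, US) values."""
    def __init__(self, luts):
        self.luts = luts            # {(C1, C2): 2-D numpy array}

    def process(self, C1, C2, JS, US):
        lut = self.luts[(C1, C2)]   # select the LUT for the obtained contrasts
        n = lut.shape[0] - 1
        i = int(round(JS * n))      # quantize the target contrast signal JS
        j = int(round(US * n))      # quantize the unsharp signal US
        return lut[i, j]            # read out the visual processing signal KS

# e.g. a dummy 17x17 LUT registered for (C1, C2) = (100.0, 20.0):
proc = LutVisualProcessor({(100.0, 20.0): np.linspace(0, 1, 17 * 17).reshape(17, 17)})
KS = proc.process(100.0, 20.0, JS=0.5, US=0.5)
```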
- The visual processing device described above is a device that performs visual processing of images while built in or connected to a device that handles images, such as a computer, a television, a digital camera, a mobile phone, a PDA, a printer, or a scanner, and it is realized as an integrated circuit such as an LSI.
- each functional block of the above embodiments may be individually made into one chip, or may be made into one chip so as to include some or all of the blocks.
- the term LSI is used here, but the circuit may also be called an IC, a system LSI, a super LSI, or an ultra LSI depending on the degree of integration.
- the method of circuit integration is not limited to LSI, and may be realized by a dedicated circuit or a general-purpose processor.
- An FPGA (field programmable gate array) that can be programmed after LSI manufacturing, or a reconfigurable processor in which the connection and setting of circuit cells inside the LSI can be reconfigured, may be used.
- The processing of each block of each visual processing device described in the first and second embodiments is performed by, for example, a central processing unit (CPU) provided in the visual processing device. Programs for performing the respective processes are stored in a storage device such as a hard disk or a ROM, and are read out from the ROM, or read out to the RAM, and executed.
- the two-dimensional LUT 4 is stored in a storage device such as a hard disk or a ROM, and is referred to as needed. Further, the visual processing unit 3 may receive profile data from the profile data registration device 8, which is connected to the visual processing device 1 directly or indirectly via a network, and register it as the two-dimensional LUT 4.
- the visual processing device may be a device that performs gradation processing of an image for each frame (for each field) built in or connected to a device that handles moving images.
- In this case, the visual processing method described in the first embodiment is executed for each frame (for each field).
- the visual processing program is a program that is stored in a storage device such as a hard disk or a ROM in a device built in or connected to a device that handles images, such as a computer, television, digital camera, mobile phone, PDA, printer, or scanner, and that executes the visual processing; it is provided, for example, via a recording medium such as a CD-ROM or via a network.
- the visual processing device described in the first embodiment and the second embodiment can be represented by the configurations shown in FIGS.
- FIG. 40 is a block diagram showing, for example, the configuration of a visual processing device 910 having the same function as the visual processing device 525 shown in FIG.
- the sensor 911 and the user input unit 912 have the same functions as the input device 527 (see FIG. 7). More specifically, the sensor 911 is a sensor that detects the ambient light in the environment where the visual processing device 910 is installed, or in the environment where the output signal OS output from the visual processing device 910 is displayed, and outputs the detected value as a parameter P1 representing the ambient light.
- the user input unit 912 is a device that allows the user to set the intensity of the ambient light either stepwise, for example "strong, medium, weak", or steplessly (continuously), and outputs the set value as the parameter P1 representing the ambient light.
- the output unit 914 has the same function as the profile data registration unit 526 (see FIG. 7). More specifically, the output unit 914 includes a plurality of profile data associated with the value of the parameter P1 representing the ambient light.
- the profile data is data in the form of a table that gives the value of the output signal OS with respect to the input signal IS and the signal obtained by spatially processing the input signal IS. Further, the output unit 914 outputs profile data corresponding to the value of the acquired parameter P1 representing the ambient light to the conversion unit 915 as a brightness adjustment parameter P2.
- the conversion unit 915 has the same functions as the spatial processing unit 2 and the visual processing unit 3 (see FIG. 7).
- the conversion unit 915 receives, as inputs, the luminance of the target pixel that is the target of visual processing, the luminance of peripheral pixels located around the target pixel, and the brightness adjustment parameter P2, and outputs the output signal OS.
- More specifically, the conversion unit 915 performs spatial processing on the target pixel and the surrounding pixels. Further, the conversion unit 915 reads, from the brightness adjustment parameter P2 in table format, the value of the output signal OS corresponding to the target pixel and the result of the spatial processing, and outputs it as the output signal OS.
- the brightness adjustment parameter P2 is not limited to the profile data described above.
- the brightness adjustment parameter P2 may be coefficient matrix data used when calculating the value of the output signal OS from the brightness of the target pixel and the brightness of the peripheral pixels.
- the coefficient matrix data is data storing a coefficient part of a function used when calculating the value of the output signal OS from the brightness of the target pixel and the brightness of the surrounding pixels.
- the output unit 914 does not need to include profile data and coefficient matrix data for all values of the parameter P1 representing the ambient light.
- appropriate profile data or the like may be generated by appropriately interpolating the provided profile data or the like in accordance with the acquired parameter P1 representing the ambient light.
- FIG. 41 is a block diagram showing a configuration of a visual processing device 920 that performs the same function as the visual processing device 600 shown in FIG. 24, for example.
- the output unit 921 further acquires an external parameter P3 in addition to the parameter P1 representing the ambient light, and outputs the brightness adjustment parameter P2 based on the parameter P1 representing the ambient light and the external parameter P3.
- the parameter P1 representing the ambient light is the same as described in (1) above.
- the external parameter P3 is a parameter that represents, for example, a visual effect desired by the user who views the output signal OS. More specifically, it is a value such as the contrast (target contrast) required by the user who views the image.
- the external parameter P3 is set by, for example, the target contrast setting unit 604 (see FIG. 24), or a default value stored in advance in the output unit 921 is used.
- the output unit 921 calculates the value of the real contrast according to the configuration shown in FIGS. 33 and 34 from the parameter P1 representing the ambient light, and outputs it as the brightness adjustment parameter P2. Further, the output unit 921 outputs the external parameter P 3 (target contrast) as the brightness adjustment parameter P 2.
- Alternatively, the output unit 921 stores a plurality of profile data of the two-dimensional LUT described in <Modification> (vii) of the second embodiment.
- In this case, profile data is selected based on the actual contrast calculated from the parameter P1 representing the ambient light, and the table-format data is output as the brightness adjustment parameter P2.
- the conversion unit 922 has the same functions as the target contrast conversion unit 601, the conversion signal processing unit 602, and the actual contrast conversion unit 603 (see FIG. 24 above). More specifically, the input signal IS (the luminance of the target pixel and the luminance of the peripheral pixels) and the brightness adjustment parameter P2 are input to the conversion unit 922, and the output signal OS is output. For example, the input signal IS is converted into the target contrast signal JS (see FIG. 24) using the target contrast acquired as the brightness adjustment parameter P2. Further, the target contrast signal JS is spatially processed to derive the unsharp signal US (see FIG. 24).
- the conversion unit 922 includes the visual processing unit 623 of the modification described in <Modification> (vii) of the second embodiment, and outputs a visual processing signal KS (see FIG. 24) from the profile data obtained as the brightness adjustment parameter P2, the target contrast signal JS, and the unsharp signal US. Further, the visual processing signal KS is converted into the output signal OS using the actual contrast acquired as the brightness adjustment parameter P2.
- With this visual processing device 920, it is possible to select the profile data used for the visual processing based on the external parameter P3 and the parameter P1 representing the ambient light, and to correct the influence of the ambient light; even in an environment where ambient light exists, it is possible to improve the local contrast and bring the output signal OS closer to the contrast preferred by the user who views it.
- the configuration described in (1) and the configuration described in (2) can be switched and used as necessary. Switching may be performed using an external switching signal. Further, it may be determined which configuration is used depending on whether or not the external parameter P 3 exists.
- In the above, the actual contrast is calculated by the output unit 921; however, the value of the actual contrast may instead be directly input to the output unit 921.
- the visual processing device 920′ shown in FIG. 42 differs from the visual processing device 920 shown in FIG. 41 in that it has an adjustment unit 925 that moderates the temporal change of the parameter P1 representing the ambient light.
- the adjustment unit 925 receives the parameter P 1 representing the ambient light as input and outputs the adjusted output P 4 as output.
- the output unit 921 can acquire the parameter P1 representing the ambient light without a rapid change, and as a result, the output of the output unit 921 also changes slowly with time.
- the adjustment unit 925 is realized by an IIR filter, for example.
- More specifically, the adjustment unit 925 performs, for example, the calculation [P4] = k1*[P1] + k2*[P4]′, where k1 and k2 are parameters each having a positive value, [P1] is the value of the parameter P1 representing the ambient light, and [P4]′ is the value of the delayed output of the output P4 of the adjustment unit 925 (for example, the previous output). Note that the processing in the adjustment unit 925 may be performed using a configuration other than an IIR filter.
- Alternatively, as in the visual processing device 920″ shown in FIG. 43, the adjustment unit 925 may be a means that is provided on the output side of the output unit 921 and directly moderates the temporal change of the brightness adjustment parameter P2.
- the operation of the adjusting unit 925 is the same as described above.
- In this case, the adjustment unit 925 performs, for example, the calculation [P4] = k3*[P2] + k4*[P4]′, where k3 and k4 are parameters each having a positive value, [P2] is the value of the brightness adjustment parameter P2, and [P4]′ is the value of the delayed output of the output P4 of the adjustment unit 925 (for example, the previous output). Note that the processing in the adjustment unit 925 may also be performed using a configuration other than an IIR filter.
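- A sketch of such an adjustment unit as a first-order IIR smoother is given below; the weights and the assumption k1 + k2 = 1 are illustrative choices, not values fixed by the text.

```python
class AmbientLightSmoother:
    """Adjustment unit 925 as a first-order IIR filter: each new output is a
    weighted sum of the current input and the previous output
    ([P4] = k1*[P1] + k2*[P4]'), so sudden changes in the ambient-light
    parameter P1 (or in the brightness adjustment parameter P2) are slowed."""
    def __init__(self, k1=0.1, k2=0.9):   # example weights; k1 + k2 = 1 assumed
        self.k1, self.k2 = k1, k2
        self.prev = None                  # previous output [P4]'

    def update(self, p1):
        self.prev = p1 if self.prev is None else self.k1 * p1 + self.k2 * self.prev
        return self.prev

# A step change in P1 (0 -> 100) reaches the output only gradually:
sm = AmbientLightSmoother()
outputs = [sm.update(p) for p in [0.0] * 3 + [100.0] * 5]
```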
- FIG. 44 is a block diagram showing an overall configuration of a content supply system eX100 realizing a content distribution service.
- a communication service providing area is divided into cells of a desired size, and base stations ex107 to ex110, which are fixed wireless stations, are installed in the respective cells.
- In the content supply system ex100, for example, a computer ex111, a PDA (personal digital assistant) ex112, a camera ex113, a mobile phone ex114, a camera-equipped mobile phone ex115, and the like are connected to the Internet ex101 via an Internet service provider ex102, a telephone network ex104, and the base stations ex107 to ex110.
- However, the content supply system ex100 is not limited to the combination shown in FIG. 44, and any of these may be connected in combination. Further, each device may be directly connected to the telephone network ex104 without going through the base stations ex107 to ex110, which are fixed wireless stations.
- the camera eX113 is a device capable of shooting moving images such as a digital video camera.
- the mobile phone ex114 may be a mobile phone of the PDC (Personal Digital Communications) system, the CDMA (Code Division Multiple Access) system, the W-CDMA (Wideband-Code Division Multiple Access) system, or the GSM (Global System for Mobile Communications) system, or a PHS (Personal Handyphone System), and any of these may be used.
- the streaming server ex103 is connected to the camera ex113 via the base station ex109 and the telephone network ex104, which makes possible live distribution and the like based on encoded data transmitted by the user using the camera ex113. The encoding of the shot data may be performed by the camera ex113, or by a server or the like that performs data transmission processing. Also, moving image data shot by a camera ex116 may be transmitted to the streaming server ex103 via the computer ex111.
- the camera ex116 is a device, such as a digital camera, that can shoot still images and moving images. In this case, the moving image data may be encoded by either the camera ex116 or the computer ex111; the encoding process is performed in an LSI ex117 included in the computer ex111 or the camera ex116.
- the software for image encoding and decoding may be incorporated in any storage medium (CD-ROM, flexible disk, hard disk, etc.) that is a recording medium readable by a computer eX111 or the like.
- the video data may be transmitted by a mobile phone with a camera eX115.
- the moving image data at this time is data encoded by the LSI included in the mobile phone eX115.
- In this content supply system, content shot by the user with the camera ex113, the camera ex116, or the like (for example, video of a live music performance) is encoded and transmitted to the streaming server ex103, while the streaming server ex103 stream-distributes the content data to clients that have made requests.
- Examples of the client include the computer ex111, the PDA ex112, the camera ex113, and the mobile phone ex114, which are capable of decoding the encoded data.
- In this way, the content supply system ex100 allows the client to receive and reproduce the encoded data, and further to receive, decode, and reproduce the data in real time, making it a system that can also realize personal broadcasting.
- When displaying the content, the visual processing device, the visual processing method, and the visual processing program described in the above embodiments may be used.
- For example, the computer ex111, the PDA ex112, the camera ex113, the mobile phone ex114, and the like may be provided with the visual processing device described in the above embodiments, and may implement the visual processing method and the visual processing program.
- the streaming server ex103 may provide profile data to the visual processing device via the Internet ex101.
- Further, the streaming server ex103 may create the profile data.
- When the visual processing device can acquire the profile data via the Internet ex101, the visual processing device does not need to store in advance the profile data used for the visual processing, and the storage capacity of the visual processing device can also be reduced.
- Further, since profile data can be obtained from a plurality of servers connected via the Internet ex101, different kinds of visual processing can be realized. Hereinafter, a mobile phone will be described as an example.
- FIG. 45 is a diagram illustrating a mobile phone eX115 including the visual processing device of the above embodiment.
- the mobile phone ex115 has an antenna ex201 for transmitting and receiving radio waves to and from the base station ex110, a camera unit ex203 capable of shooting moving images and still images, such as a CCD camera, a display unit ex202 such as a liquid crystal display for displaying data obtained by decoding the images shot by the camera unit ex203, the images received by the antenna ex201, and the like, a main body including a group of operation keys ex204, an audio output unit ex208 such as a speaker for outputting audio, an audio input unit ex205 such as a microphone for inputting audio, a recording medium ex207 for storing encoded or decoded data such as data of shot moving images or still images, data of received e-mail, and moving image or still image data, and a slot unit ex206 for allowing the recording medium ex207 to be attached to the mobile phone ex115.
- the recording medium ex207 stores a flash memory element, a kind of EEPROM (electrically erasable and programmable read only memory), which is a nonvolatile memory that can be electrically rewritten and erased, in a plastic case such as an SD card.
- In the mobile phone ex115, a power supply circuit unit ex310, the demultiplexing unit ex308, the recording/reproducing unit ex307, the modulation/demodulation circuit unit ex306, and the audio processing unit ex305 are connected to one another via a synchronous bus ex313 and to a main control unit ex311, which performs overall control of each part of the main body including the display unit ex202 and the operation keys ex204.
- When the call-end/power key is turned on by a user operation, the power supply circuit unit ex310 supplies power to each unit from a battery pack, thereby activating the camera-equipped digital mobile phone ex115 into an operable state.
- Based on the control of the main control unit ex311, which includes a CPU, a ROM, a RAM, and the like, the mobile phone ex115 converts the audio signal collected by the audio input unit ex205 in the voice call mode into digital audio data by the audio processing unit ex305, performs spread spectrum processing on it in the modulation/demodulation circuit unit ex306, performs digital-to-analog conversion processing and frequency conversion processing in the transmission/reception circuit unit ex301, and then transmits it via the antenna ex201.
- the mobile phone ex115 also amplifies the received signal received by the antenna ex201 in the voice call mode, performs frequency conversion processing and analog-to-digital conversion processing, performs spectrum despreading processing in the modulation/demodulation circuit unit ex306, converts the result into an analog audio signal by the audio processing unit ex305, and then outputs it via the audio output unit ex208.
- Further, the text data of an e-mail entered by operating the operation keys ex204 of the main body is sent to the main control unit ex311 via the operation input control unit ex304.
- The main control unit ex311 performs spread-spectrum processing on the text data in the modulation/demodulation circuit unit ex306, performs digital-to-analog conversion and frequency conversion in the transmission/reception circuit unit ex301, and then transmits the data to the base station ex110 via the antenna ex201.
- The image data captured by the camera unit ex203 is supplied to the image encoding unit ex312 via the camera interface unit ex303.
- The image data captured by the camera unit ex203 can be displayed directly on the display unit ex202 via the camera interface unit ex303 and the LCD control unit ex302.
- The image encoding unit ex312 compresses and encodes the image data supplied from the camera unit ex203 into encoded image data and sends it to the demultiplexing unit ex308.
- At the same time, the audio collected by the audio input unit ex205 of the mobile phone ex115 while the camera unit ex203 is capturing images is sent to the demultiplexing unit ex308 as digital audio data via the audio processing unit ex305.
- The demultiplexing unit ex308 multiplexes the encoded image data supplied from the image encoding unit ex312 and the audio data supplied from the audio processing unit ex305 in a predetermined manner; the resulting multiplexed data is spread-spectrum-processed by the modulation/demodulation circuit unit ex306, subjected to digital-to-analog conversion and frequency conversion by the transmission/reception circuit unit ex301, and then transmitted via the antenna ex201.
- The signal received from the base station ex110 via the antenna ex201 is despread by the modulation/demodulation circuit unit ex306, and the resulting multiplexed data is sent to the demultiplexing unit ex308.
- The demultiplexing unit ex308 demultiplexes the multiplexed data into an encoded bit stream of image data and an encoded bit stream of audio data, supplies the encoded image data to the image decoding unit ex309 via the synchronous bus ex313, and supplies the audio data to the audio processing unit ex305.
- The image decoding unit ex309 decodes the encoded bit stream of the image data to generate reproduced moving image data and supplies it to the display unit ex202 via the LCD control unit ex302; thus, for example, the video data included in a moving image file linked to a homepage is displayed.
- At the same time, the audio processing unit ex305 converts the audio data into an analog audio signal and supplies it to the audio output unit ex208, whereby, for example, the audio data included in a moving image file linked to a homepage is reproduced.
- the image decoding unit eX309 may include the visual processing device of the above embodiment.
- At the broadcast station ex409, an encoded bit stream of video information is transmitted via radio waves to a communication or broadcasting satellite ex410. The broadcasting satellite ex410 that receives it emits a broadcast wave, a home antenna ex406 equipped for satellite broadcast reception receives this wave, and a device such as a television (receiver) ex401 or a set-top box (STB) ex407 decodes the encoded bit stream and reproduces it.
- A device such as the television (receiver) ex401 or the set-top box (STB) ex407 may include the visual processing device described in the above embodiment, may use the visual processing method of the above embodiment, or may be provided with the visual processing program. The visual processing device, visual processing method, and visual processing program described in the above embodiment can also be implemented in a playback device ex403 that reads and decodes an encoded bit stream recorded on a storage medium ex402 such as a CD or DVD. In this case, the reproduced video signal is displayed on a monitor ex404.
- A configuration is also conceivable in which the visual processing device described in the above embodiment is provided in a set-top box ex407 connected to a cable ex405 for cable TV or to an antenna ex406 for satellite or terrestrial broadcasting, and the visual processing method and the visual processing program are implemented there, with the result reproduced on a TV monitor ex408.
- the visual processing device described in the above embodiment may be incorporated in the television instead of the set-top box.
- A car ex412 having an antenna ex411 can receive a signal from the satellite ex410, the base station ex107, or the like, and reproduce a moving image on a display device such as a car navigation system ex413 mounted in the car ex412.
- the image signal can be encoded and recorded on a recording medium.
- Examples include a recorder ex420, such as a DVD recorder that records image signals on a DVD disc ex421 or a disk recorder that records them on a hard disk. The image signals can also be recorded on an SD card ex422. If the recorder ex420 is equipped with the decoding device of the above-described embodiment, the image signals recorded on the DVD disc ex421 or the SD card ex422 can be reproduced and displayed on the monitor ex408.
- As the configuration of the car navigation system ex413, a configuration excluding, for example, the camera unit ex203, the camera interface unit ex303, and the image encoding unit ex312 from the configuration shown in FIG. is conceivable, and the same applies to the computer ex111, the television (receiver) ex401, and the like.
- For terminals such as the mobile phone ex114, three implementation formats are conceivable: a transmission/reception terminal having both an encoder and a decoder, a transmission terminal having only an encoder, and a reception terminal having only a decoder.
- the visual processing device, the visual processing method, and the visual processing program described in the above embodiment can be used in any of the above-described apparatus systems, and the effects described in the above embodiment can be obtained.
- the present invention can also be expressed as follows.
- Input signal processing means for performing spatial processing on the input image signal and outputting the processed signal
- Signal computing means for outputting an output signal based on computation for emphasizing a difference between respective values obtained by converting the image signal and the processing signal by a predetermined transformation
- a visual processing device comprising:
- the signal calculation means calculates the value C of the output signal from the value A of the image signal, the value B of the processed signal, a conversion function F1, the inverse conversion function F2 of the conversion function F1, and an enhancement function F3, based on the formula C = F2(F1(A) + F3(F1(A) - F1(B))),
- the visual processing device according to attachment 1.
- the conversion function F1 is a logarithmic function.
- the visual processing device according to attachment 2.
- the inverse transformation function F2 is a gamma correction function.
- the signal calculation means includes signal space conversion means for converting the signal space of the image signal and the processed signal,
- enhancement processing means for performing enhancement processing on a difference signal between the converted image signal and the converted processed signal, and inverse conversion means for performing an inverse conversion of the signal space on an addition signal of the converted image signal and the enhanced difference signal, and outputting the output signal,
- the visual processing device according to any one of supplementary notes 2 to 4.
- Input signal processing means for performing spatial processing on the input image signal and outputting the processed signal
- a signal calculation means for outputting an output signal based on a calculation for enhancing a ratio between the image signal and the processing signal
- a visual processing device comprising:
- the signal calculation means outputs the output signal based on the calculation that further performs dynamic range compression of the image signal.
- the signal calculation means calculates the value C of the output signal from the value A of the image signal, the value B of the processed signal, a dynamic range compression function F4, and an enhancement function F5, based on the formula C = F4(A) * F5(A / B),
- the visual processing device according to attachment 6 or 7.
- the dynamic range compression function F4 is a monotonically increasing function,
- the dynamic range compression function F4 is a power function,
- the exponent of the power function in the dynamic range compression function F4 is determined based on a target contrast value, which is a target value of contrast when performing image display, and an actual contrast value, which is a contrast value in the display environment when performing image display,
- the visual processing device according to attachment 12.
- the enhancement function F5 is a power function,
- the visual processing device according to any one of attachments 8 to 13.
- the exponent of the power function in the enhancement function F5 is determined based on a target contrast value, which is a target value of contrast when performing image display, and an actual contrast value, which is a contrast value in the display environment when performing image display,
- the exponent of the power function in the enhancement function F5 is a value that monotonically decreases with respect to the value A of the image signal when the value A of the image signal is larger than the value B of the processing signal.
- the visual processing device according to attachment 14 or 15.
- the exponent of the power function in the enhancement function F5 is a value that monotonically increases with respect to the value A of the image signal when the value A of the image signal is smaller than the value B of the processing signal.
- the exponent of the power function in the enhancement function F5 is a value that monotonically increases with respect to the value A of the image signal when the value A of the image signal is larger than the value B of the processing signal.
- the visual processing device according to attachment 14 or 15.
- the exponent of the power function in the enhancement function F5 is a value that monotonically increases with respect to the absolute value of the difference between the value A of the image signal and the value B of the processing signal.
- the visual processing device according to attachment 14 or 15.
- At least one of the maximum value and the minimum value of the enhancement function F5 is limited within a predetermined range
- the signal calculation means includes enhancement processing means for performing enhancement processing on a division processing signal obtained by dividing the image signal by the processing signal, and output processing means for outputting the output signal based on the image signal and the enhanced division processing signal,
- the visual processing device according to attachment 8.
- the output processing means performs a multiplication process of the image signal and the emphasized division processing signal.
- the visual processing device according to attachment 21.
- the output processing means includes a DR compression means for performing dynamic range (DR) compression on the image signal, and a multiplication process of the DR-compressed image signal and the emphasized division signal.
- the visual processing device according to attachment 21.
- a first conversion means for converting input image data in a first predetermined range into a second predetermined range to be an image signal, and
- a second conversion means for converting the output signal in a third predetermined range into a fourth predetermined range to obtain output image data
- the second predetermined range is determined based on a target contrast value which is a target value of contrast when performing image display,
- the third predetermined range is determined based on an actual contrast value that is a contrast value in a display environment when displaying an image.
- the visual processing device according to any one of attachments 8 to 23.
- the dynamic range compression function F4 is a function for converting the image signal in the second predetermined range to the output signal in the third predetermined range.
- the visual processing device according to attachment 24.
- the first conversion means converts each of the minimum value and the maximum value of the first predetermined range into the minimum value and the maximum value of the second predetermined range
- the second conversion means converts each of the minimum value and the maximum value of the third predetermined range into each of the minimum value and the maximum value of the fourth predetermined range;
- the visual processing device according to attachment 24 or 25.
- the conversion in the first conversion means and the second conversion means is a linear conversion, respectively.
- the visual processing device according to attachment 26.
- the visual processing device according to any one of attachments 24 to 27, further comprising setting means for setting the third predetermined range.
- the setting means includes storage means for storing a dynamic range of a display device for displaying an image, and measuring means for measuring the luminance of ambient light in a display environment when the image is displayed.
- the setting means includes a measuring means for measuring the luminance at the time of black level display and at the time of white level display in a display environment of a display device that performs image display.
- Input signal processing means for performing spatial processing on the input image signal and outputting the processed signal
- Signal calculating means for outputting an output signal based on an operation for enhancing a difference between the image signal and the processed signal according to a value of the image signal
- a visual processing device comprising:
- the signal calculation means outputs the output signal based on a calculation of adding a value obtained by dynamic range compression of the image signal to the value obtained by the emphasizing calculation.
- the signal calculation means calculates the value C of the output signal from the value A of the image signal, the value B of the processed signal, an enhancement amount adjustment function F6, an enhancement function F7, and a dynamic range compression function F8, based on the formula C = F8(A) + F6(A) * F7(A - B),
- the dynamic range compression function F8 is a monotonically increasing function,
- the dynamic range compression function F8 is an upwardly convex function,
- the visual processing device according to attachment 35.
- the dynamic range compression function F8 is a power function,
- the visual processing device according to attachment 33.
- the signal calculation means includes enhancement processing means for performing enhancement processing, according to the pixel value of the image signal, on a difference signal between the image signal and the processed signal, and
- output processing means for outputting the output signal based on the image signal and the enhanced difference signal,
- the visual processing device according to attachment 33.
- the output processing means performs an addition process of the image signal and the enhanced difference signal.
- the output processing unit includes a DR compression unit that performs dynamic range (DR) compression on the image signal, and performs an addition process of the DR-compressed image signal and the emphasized difference signal.
- Input signal processing means for performing spatial processing on the input image signal and outputting the processed signal
- a visual processing device comprising: a signal calculation unit that outputs an output signal based on a calculation of adding a value obtained by correcting the gradation of the image signal to a value that emphasizes a difference between the image signal and the processing signal.
- the signal calculation means calculates the value C of the output signal from the value A of the image signal, the value B of the processed signal, an enhancement function F11, and a gradation correction function F12,
- based on the formula C = F12(A) + F11(A - B),
- the visual processing device according to attachment 41.
- the signal calculation means includes enhancement processing means for performing enhancement processing on a difference signal between the image signal and the processed signal, and addition processing means for adding the gradation-corrected image signal and the enhanced difference signal and outputting the result as the output signal,
- the visual processing device according to attachment 42.
- the second predetermined range is determined on the basis of a target contrast value that is a target value of the contrast when performing image display,
- the third predetermined range is determined based on an actual contrast value that is a contrast value in a display environment when displaying an image.
- a first conversion means for converting input image data in a first predetermined range into a second predetermined range to be an image signal
- the second predetermined range is determined based on a target contrast value that is a target value of contrast when performing image display,
- the third predetermined range is determined based on an actual contrast value that is a contrast value in a display environment when displaying an image.
- a visual processing program for causing a computer to perform visual processing, wherein a first conversion step of converting input image data in a first predetermined range into a second predetermined range to be an image signal;
- the second predetermined range is determined on the basis of a target contrast value that is a target value of the contrast when performing image display,
- the third predetermined range is determined based on an actual contrast value that is a contrast value in a display environment when displaying an image.
- a visual processing method for a computer
- the visual processing device includes input signal processing means and signal calculation means.
- the input signal processing means performs spatial processing on the input image signal and outputs a processed signal.
- the signal calculation means outputs an output signal based on a calculation that emphasizes a difference between respective values obtained by converting the image signal and the processed signal by a predetermined conversion.
- The spatial processing is, for example, processing that applies a low-pass spatial filter to the input image signal, or processing that derives the average value, the maximum value, or the minimum value of a pixel of interest and its surrounding pixels in the input image signal.
- the emphasis operation is, for example, an operation for adjusting a gain, an operation for suppressing an excessive contrast, an operation for suppressing a noise component having a small amplitude, and the like (hereinafter, the same applies in this section).
- With the visual processing device of the present invention, it is possible to emphasize the difference between the image signal and the processed signal after converting them into a different space. As a result, for example, it is possible to realize enhancement corresponding to visual characteristics.
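- The following is a minimal sketch of the spatial processing step described above, assuming the processed signal B is obtained as the local average of a pixel of interest and its surrounding pixels (a simple box filter); the function name, the neighborhood radius, and the use of NumPy are illustrative choices, not details taken from the embodiment.

```python
# A minimal sketch of the spatial processing: the processed signal B is taken
# here as the local mean of each pixel of interest and its surrounding pixels
# (a box filter); a low-pass filter or a local max/min could be substituted.
import numpy as np

def spatial_processing(image: np.ndarray, radius: int = 2) -> np.ndarray:
    """Return the local average B of each pixel and its neighbors."""
    padded = np.pad(image.astype(np.float64), radius, mode="edge")
    out = np.zeros_like(image, dtype=np.float64)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            out += padded[radius + dy : radius + dy + image.shape[0],
                          radius + dx : radius + dx + image.shape[1]]
    return out / (2 * radius + 1) ** 2
```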
- The visual processing device according to attachment 2 is the visual processing device according to attachment 1, wherein the signal calculation means calculates the value C of the output signal from the value A of the image signal, the value B of the processed signal, the conversion function F1, the inverse conversion function F2 of the conversion function F1, and the enhancement function F3, based on the formula C = F2(F1(A) + F3(F1(A) - F1(B))).
- the enhancement function F 3 is, for example, a function that adjusts the gain, a function that suppresses excessive contrast, or a function that suppresses a noise component having a small amplitude.
- The value C of the output signal indicates the following: the value A of the image signal and the value B of the processed signal are each converted into values in another space by the conversion function F1.
- the difference between the value of the converted image signal and the value of the processed signal indicates, for example, a sharp signal in another space.
- The difference between the converted image signal and the converted processed signal, emphasized by the enhancement function F3, is added to the converted image signal.
- the value C of the output signal indicates a value in which the sharp signal component in another space is emphasized.
- processing such as edge enhancement and contrast enhancement in another space can be performed by using the value A of the image signal and the value B of the processing signal converted to another space.
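- As a concrete illustration of the formula C = F2(F1(A) + F3(F1(A) - F1(B))), the following sketch takes F1 as a logarithm (as in attachment 3), F2 as its inverse, and F3 as a simple gain; these particular choices of F1, F2, and F3, and the parameter values, are assumptions made for illustration only.

```python
# A sketch of the calculation C = F2(F1(A) + F3(F1(A) - F1(B))).
# F1: conversion to a logarithmic space, F2: its inverse, F3: enhancement gain.
import numpy as np

def visual_process(A: np.ndarray, B: np.ndarray, gain: float = 1.5) -> np.ndarray:
    """A: image signal, B: spatially processed signal (both positive arrays)."""
    eps = 1e-6                      # avoid log(0)
    F1 = lambda x: np.log(x + eps)  # conversion into another (logarithmic) space
    F2 = lambda x: np.exp(x) - eps  # inverse conversion back to the signal space
    F3 = lambda d: gain * d         # enhancement of the sharp component
    sharp = F1(A) - F1(B)           # difference (sharp signal) in the converted space
    return F2(F1(A) + F3(sharp))
```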
- the visual processing device according to attachment 3 is the visual processing device according to attachment 2, wherein the conversion function F 1 is a logarithmic function.
- human visual characteristics are generally logarithmic. Therefore, if the image signal and the processed signal are converted into a logarithmic space and processed, the processing suitable for the visual characteristics can be performed.
- With the visual processing device of the present invention, it is possible to enhance contrast with a high visual effect, or to compress the dynamic range while maintaining local contrast.
- the visual processing device according to attachment 4 is the visual processing device according to attachment 2, wherein the inverse conversion function F 2 is a gamma correction function.
- image signals are subjected to gamma correction using a gamma correction function according to the gamma characteristics of the device that inputs and outputs the image signals.
- With the visual processing device of the present invention, it is possible to remove the gamma correction of the image signal by the conversion function F1 and perform processing based on linear characteristics. As a result, optical blur correction can be performed.
- The visual processing device according to attachment 5 is the visual processing device according to any one of attachments 2 to 4, wherein the signal calculation means includes signal space conversion means, enhancement processing means, and inverse conversion means.
- the signal space conversion means converts the signal space of the image signal and the processing signal.
- the enhancement processing means performs enhancement processing on the difference signal between the converted image signal and the converted processed signal.
- the inverse conversion means performs an inverse conversion of the signal space on the addition signal of the image signal after conversion and the difference signal after enhancement processing, and outputs an output signal.
- the signal space conversion means converts the signal space between the image signal and the processed signal using the conversion function F1.
- the enhancement processing means performs an enhancement process on a difference signal between the converted image signal and the converted processed signal using the enhancement function F3.
- the inverse transform means performs an inverse transform of a signal space on an added signal of the converted image signal and the difference signal after the enhancement processing using the inverse transform function F2.
- the visual processing device includes input signal processing means and signal calculation means.
- the input signal processing means performs spatial processing on the input image signal and outputs a processing signal.
- the signal calculation means outputs an output signal based on a calculation that emphasizes the ratio between the image signal and the processing signal.
- the ratio between the image signal and the processed signal represents the sharp component of the image signal. For this reason, for example, visual processing that emphasizes the sharp component can be performed.
- The visual processing device according to attachment 7 is the visual processing device according to attachment 6, wherein the signal calculation means outputs the output signal based on a calculation that further performs dynamic range compression of the image signal.
- With the visual processing device of the present invention, it is possible, for example, to compress the dynamic range while enhancing the sharp component of the image signal represented by the ratio of the image signal to the processed signal.
- The visual processing device according to attachment 8 is the visual processing device according to attachment 6 or 7, wherein the signal calculation means calculates the value C of the output signal from the value A of the image signal, the value B of the processed signal, a dynamic range compression function F4, and an enhancement function F5,
- based on the formula C = F4(A) * F5(A / B).
- the value C of the output signal indicates the following. That is, the division amount (A / B) between the value A of the image signal and the value B of the processing signal represents, for example, a sharp signal.
- F 5 (A / B) represents, for example, the amount of enhancement of the sharp signal.
- local contrast can be enhanced while performing dynamic range compression as necessary.
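- The following sketch illustrates the calculation C = F4(A) * F5(A / B), taking both F4 and F5 as power functions (as in attachments 12 and 14); the exponent values and the clipping of B are illustrative assumptions.

```python
# A sketch of C = F4(A) * F5(A / B) with F4 and F5 taken as power functions.
import numpy as np

def compress_and_enhance(A: np.ndarray, B: np.ndarray,
                         gamma: float = 0.6, alpha: float = 0.5) -> np.ndarray:
    """A, B: luminance in (0, 1]; returns the visually processed output C."""
    F4 = lambda x: np.power(x, gamma)   # dynamic range compression
    F5 = lambda r: np.power(r, alpha)   # enhancement of the ratio A / B (sharp signal)
    return F4(A) * F5(A / np.maximum(B, 1e-6))
```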
- the visual processing device according to attachment 9 is the visual processing device according to attachment 8, wherein the dynamic range compression function F 4 is a direct proportional function with a proportionality factor of 1.
- This contrast enhancement is an enhancement process suitable for visual characteristics.
- the visual processing device according to attachment 10 is the visual processing device according to attachment 8, in which the dynamic range compression function F 4 is a monotonically increasing function.
- With the visual processing device of the present invention, it is possible to enhance local contrast while performing dynamic range compression using the dynamic range compression function F4, which is a monotonically increasing function.
- the visual processing device according to attachment 11 is the visual processing device according to attachment 10 in which the dynamic range compression function F 4 is an upwardly convex function.
- With the visual processing device of the present invention, it is possible to enhance local contrast while performing dynamic range compression using the dynamic range compression function F4, which is an upwardly convex function.
- the visual processing device described in appendix 12 is the visual processing device described in appendix 8, and the dynamic range compression function F 4 is a power function.
- With the visual processing device of the present invention, it is possible to enhance local contrast while performing dynamic range conversion using the dynamic range compression function F4, which is a power function.
- The visual processing device according to attachment 13 is the visual processing device according to attachment 12, wherein the exponent of the power function in the dynamic range compression function F4 is determined based on a target contrast value, which is the target value of contrast when performing image display, and an actual contrast value, which is the contrast value in the display environment when the image is displayed.
- the target contrast value is a target value of contrast when performing image display, and is, for example, a value determined by a dynamic range of a display device that performs image display.
- the actual contrast value is a contrast value in a display environment when displaying an image, and is, for example, a value determined by the contrast of an image displayed by a display device when ambient light is present.
- the dynamic range compression function F4 can compress the image signal having the dynamic range equal to the target contrast value into the dynamic range equal to the actual contrast value.
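- One way to choose such an exponent, assuming the image signal is normalized so that a dynamic range equal to the target contrast value corresponds to the interval [1 / C_target, 1] and F4(x) = x^gamma, is sketched below; this derivation is consistent with the description above but is an assumption, not a formula stated in the embodiment.

```python
# Choosing gamma so that F4(x) = x ** gamma maps the normalized range
# [1 / target_contrast, 1] onto [1 / actual_contrast, 1] (an assumed reading).
import math

def compression_exponent(target_contrast: float, actual_contrast: float) -> float:
    return math.log(actual_contrast) / math.log(target_contrast)

gamma = compression_exponent(target_contrast=1000.0, actual_contrast=100.0)
print(gamma)                   # ~0.667
print((1 / 1000.0) ** gamma)   # ~0.01, i.e. 1 / actual_contrast
```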
- the visual processing device according to attachment 14 is the visual processing device according to any of attachments 8 to 13, wherein the enhancement function F5 is a power function.
- With the visual processing device of the present invention, it is possible to enhance local contrast using the enhancement function F5, which is a power function, and to convert the dynamic range visually.
- The visual processing device according to attachment 15 is the visual processing device according to attachment 14, wherein the exponent of the power function in the enhancement function F5 is determined based on a target contrast value, which is the target value of contrast when displaying an image, and an actual contrast value, which is the contrast value in the display environment when displaying an image.
- With the visual processing device of the present invention, local contrast can be enhanced using the enhancement function F5, which is a power function, and the dynamic range can be converted visually.
- The visual processing device according to attachment 16 is the visual processing device according to attachment 14 or 15, wherein the exponent of the power function in the enhancement function F5 is a value that monotonically decreases with respect to the value A of the image signal when the value A of the image signal is larger than the value B of the processed signal.
- With the visual processing device of the present invention, it is possible to weaken local contrast enhancement in high-luminance portions among pixels of interest having higher luminance than their surrounding pixels in the image signal. Therefore, so-called overexposure is suppressed in the visually processed image.
- The visual processing device according to attachment 17 is the visual processing device according to attachment 14 or 15, wherein the exponent of the power function in the enhancement function F5 is a value that monotonically increases with respect to the value A of the image signal when the value A of the image signal is smaller than the value B of the processed signal.
- With the visual processing device of the present invention, it is possible to weaken local contrast enhancement in low-luminance portions among pixels of interest having lower luminance than their surrounding pixels in the image signal. Therefore, so-called black crushing is suppressed in the visually processed image.
- The visual processing device according to attachment 18 is the visual processing device according to attachment 14 or 15, wherein the exponent of the power function in the enhancement function F5 is a value that monotonically increases with respect to the value A of the image signal when the value A of the image signal is larger than the value B of the processed signal.
- With the visual processing device of the present invention, it is possible to weaken local contrast enhancement in low-luminance portions among pixels of interest having higher luminance than their surrounding pixels in the image signal. For this reason, deterioration of the SN ratio is suppressed in the visually processed image.
- the visual processing device according to attachment 19 is the visual processing device according to attachment 14 or 15, wherein the exponent of the power function in the enhancement function F5 is a value A of the image signal and a value B of the processing signal. Is a value that increases monotonically with respect to the absolute value of the difference between.
- (A value that increases monotonically with respect to the absolute value of the difference between the value A of the image signal and the value B of the processed signal may, for example, also be determined as a value that increases as the ratio of the value A of the image signal to the value B of the processed signal departs from 1.)
- With the visual processing device of the present invention, the amount of local contrast enhancement can be adjusted according to how much the brightness of the pixel of interest differs from that of its surrounding pixels in the image signal, so that local contrast is not enhanced excessively.
- The visual processing device according to attachment 20 is the visual processing device according to any one of attachments 14 to 19, wherein at least one of the maximum value and the minimum value of the enhancement function F5 is limited within a predetermined range.
- the amount of local contrast enhancement can be limited to an appropriate range.
- the visual processing device is the visual processing device according to attachment 8, wherein the signal calculation means includes an enhancement processing means and an output processing means.
- the enhancement processing means performs enhancement processing on the division processing signal obtained by dividing the image signal by the processing signal.
- the output processing means outputs an output signal based on the image signal and the enhanced division processing signal.
- the enhancement processing means performs enhancement processing on the division processing signal obtained by dividing the image signal by the processing signal, using the enhancement function F5.
- the output processing means outputs an output signal based on the image signal and the division processing signal.
- the visual processing device according to attachment 22 is the visual processing device according to attachment 21 in which the output processing means performs a multiplication process on the image signal and the emphasized division processing signal.
- the dynamic range compression function F4 is, for example, a direct proportional function with a proportionality coefficient of 1.
- The visual processing device according to attachment 23 is the visual processing device according to attachment 21, wherein the output processing means includes DR compression means for performing dynamic range (DR) compression on the image signal, and a multiplication process is performed on the DR-compressed image signal and the emphasized division processing signal.
- the DR compression means performs dynamic range compression of the image signal using the dynamic range compression function F4.
- The visual processing device according to attachment 24 is the visual processing device according to any one of attachments 8 to 23, further comprising first conversion means and second conversion means.
- The first conversion means converts input image data in a first predetermined range into a second predetermined range to generate an image signal.
- the second conversion means converts an output signal in a third predetermined range into a fourth predetermined range to obtain output image data.
- the second predetermined range is determined based on a target contrast value which is a target value of contrast when displaying an image.
- the third predetermined range is determined based on an actual contrast value which is a contrast value in a display environment when displaying an image.
- With the visual processing device of the present invention, it is possible to locally maintain the target contrast value while compressing the dynamic range of the entire image to the actual contrast value that has been reduced by the presence of ambient light. Therefore, the visual effect of the visually processed image is improved.
- The visual processing device according to attachment 25 is the visual processing device according to attachment 24, wherein the dynamic range compression function F4 is a function that converts the image signal in the second predetermined range into the output signal in the third predetermined range.
- the dynamic range of the entire image is compressed to the third predetermined range by the dynamic range compression function F4.
- The visual processing device according to attachment 26 is the visual processing device according to attachment 24 or 25, wherein the first conversion means converts the minimum value and the maximum value of the first predetermined range into the minimum value and the maximum value of the second predetermined range, respectively, and the second conversion means converts the minimum value and the maximum value of the third predetermined range into the minimum value and the maximum value of the fourth predetermined range, respectively.
- the visual processing device according to Supplementary Note 27 is the visual processing device according to Supplementary Note 26, wherein the conversions in the first conversion unit and the second conversion unit are each linear conversion.
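- The following is a minimal sketch of the linear conversions of the first and second conversion means described above; the concrete range values (8-bit codes, a target contrast of 1000, an actual contrast of 100) are placeholders used only for illustration.

```python
# Linear map sending the minimum/maximum of one predetermined range onto the
# minimum/maximum of another, as in attachments 26 and 27.
import numpy as np

def linear_range_convert(x: np.ndarray,
                         src: tuple[float, float],
                         dst: tuple[float, float]) -> np.ndarray:
    s0, s1 = src
    d0, d1 = dst
    return d0 + (x - s0) * (d1 - d0) / (s1 - s0)

# First conversion: input image data (8-bit codes) into the second predetermined
# range, here assumed to be [1 / C_target, 1].
image_signal = linear_range_convert(np.array([0.0, 128.0, 255.0]),
                                    src=(0.0, 255.0), dst=(1.0 / 1000.0, 1.0))
# Second conversion: an output signal in the third range [1 / C_actual, 1] back
# into display codes (the output signal below is a placeholder).
output_signal = image_signal
display_codes = linear_range_convert(output_signal,
                                     src=(1.0 / 100.0, 1.0), dst=(0.0, 255.0))
```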
- the visual processing device according to attachment 28 is the visual processing device according to any of attachments 24 to 27, further comprising setting means for setting a third predetermined range.
- the third predetermined range can be set according to the display environment of the display device that performs image display. For this reason, it is possible to more appropriately correct the ambient light.
- The visual processing device according to attachment 29 is the visual processing device according to attachment 28, wherein the setting means includes storage means for storing the dynamic range of the display device that performs image display, and measuring means for measuring the luminance of ambient light in the display environment when displaying images.
- With the visual processing device of the present invention, it is possible to measure the luminance of ambient light and determine the actual contrast value from the measured luminance and the dynamic range of the display device.
- The visual processing device according to attachment 30 is the visual processing device according to attachment 28, wherein the setting means includes measuring means for measuring the luminance at the time of black level display and at the time of white level display in the display environment of the display device that performs image display.
- With the visual processing device of the present invention, it is possible to determine the actual contrast value by measuring the luminance at the time of black level display and at the time of white level display in the display environment.
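- As a hedged sketch of how the setting means of attachment 30 could derive the actual contrast value, the ratio of the measured white-level luminance to the measured black-level luminance (ambient light included) is taken below; the function name and the measured values are hypothetical.

```python
# Actual contrast value derived from luminance measured in the display
# environment at white-level and black-level display (ambient light included).
def actual_contrast(white_luminance_cd_m2: float, black_luminance_cd_m2: float) -> float:
    """Actual contrast value in the current display environment."""
    return white_luminance_cd_m2 / black_luminance_cd_m2

# e.g. 500 cd/m2 measured at white level and 5 cd/m2 at black level
print(actual_contrast(500.0, 5.0))   # 100.0
```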
- the visual processing device includes an input signal processing means and a signal calculation means.
- the input signal processing means performs spatial processing on the input image signal and outputs a processed signal.
- the signal calculation means outputs an output signal based on a calculation that emphasizes the difference between the image signal and the processed signal according to the value of the image signal.
- With the visual processing device of the present invention, it is possible, for example, to enhance the sharp component of the image signal, which is the difference between the image signal and the processed signal, according to the value of the image signal. For this reason, appropriate enhancement can be performed from the dark parts to the bright parts of the image signal.
- The visual processing device according to attachment 32 is the visual processing device according to attachment 31, wherein the signal calculation means outputs the output signal based on an operation of adding a value obtained by dynamic range compression of the image signal to the value emphasized by the enhancing operation.
- The visual processing device according to attachment 33 is the visual processing device according to attachment 31 or 32, wherein the signal calculation means calculates the value C of the output signal from the value A of the image signal, the value B of the processed signal, an enhancement amount adjustment function F6, an enhancement function F7, and a dynamic range compression function F8,
- based on the formula C = F8(A) + F6(A) * F7(A - B).
- the value C of the output signal indicates the following. That is, the difference (A ⁇ B) between the value A of the image signal and the value B of the processing signal represents, for example, a sharp signal.
- F7(A - B) represents, for example, the enhancement amount of the sharp signal. This enhancement amount is adjusted by the enhancement amount adjustment function F6 according to the value A of the image signal, and is then added to the image signal that has been subjected to dynamic range compression.
- Thus, for example, by adjusting the enhancement amount, such as reducing it in dark portions, it is possible to maintain local contrast from the dark parts to the bright parts even when dynamic range compression is performed.
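- The following sketch illustrates the calculation C = F8(A) + F6(A) * F7(A - B); F8 is taken as a power-function compression (attachment 37), F7 as a fixed gain, and F6 as an adjustment that weakens the enhancement in darker areas. The concrete shapes of F6 and F7 and the parameter values are assumptions.

```python
# A sketch of C = F8(A) + F6(A) * F7(A - B).
import numpy as np

def compress_keep_local_contrast(A: np.ndarray, B: np.ndarray,
                                 gamma: float = 0.6, gain: float = 2.0) -> np.ndarray:
    """A, B: luminance in [0, 1]; returns the visually processed output C."""
    F8 = lambda x: np.power(x, gamma)   # dynamic range compression
    F7 = lambda d: gain * d             # enhancement of the sharp component A - B
    F6 = lambda x: x                    # smaller enhancement amount for darker pixels
    return F8(A) + F6(A) * F7(A - B)
```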
- the visual processing device according to attachment 34 is the visual processing device according to attachment 33, wherein the dynamic range compression function F 8 is a direct proportional function with a proportionality factor of 1.
- With the visual processing device of the present invention, it is possible to emphasize the contrast uniformly from the dark parts to the bright parts of the image signal.
- the visual processing device according to attachment 35 is the visual processing device according to attachment 33, wherein the dynamic range compression function F 8 is a monotonically increasing function.
- With the visual processing device of the present invention, it is possible to maintain local contrast while performing dynamic range compression using the dynamic range compression function F8, which is a monotonically increasing function.
- The visual processing device according to attachment 36 is the visual processing device according to attachment 35, wherein the dynamic range compression function F8 is an upwardly convex function.
- With the visual processing device of the present invention, it is possible to maintain local contrast while performing dynamic range compression using the dynamic range compression function F8, which is an upwardly convex function.
- the visual processing device according to attachment 37 is the visual processing device according to attachment 33, wherein the dynamic range compression function F8 is a power function.
- With the visual processing device of the present invention, it is possible to maintain local contrast while performing dynamic range conversion using the dynamic range compression function F8, which is a power function.
- the visual processing device is the visual processing device according to attachment 33, wherein the signal operation unit includes an enhancement processing unit and an output processing unit.
- the emphasis processing means performs emphasis processing on a difference signal between the image signal and the processed signal according to the pixel value of the image signal.
- the output processing means outputs an output signal based on the image signal and the emphasized difference signal.
- the emphasis processing means performs the emphasis processing using the emphasis function F7 whose emphasis amount has been adjusted by the emphasis amount adjustment function F6.
- the output processing means outputs an output signal based on the image signal and the difference signal.
- the visual processing device according to attachment 39 is the visual processing device according to attachment 38, wherein the output processing means performs an addition process between the image signal and the enhanced difference signal.
- the dynamic range compression function F 8 is, for example, a direct proportional function with a proportional coefficient of 1.
- The visual processing device according to attachment 40 is the visual processing device according to attachment 38, wherein the output processing means includes DR compression means for performing dynamic range (DR) compression on the image signal, and addition processing is performed on the DR-compressed image signal and the emphasized difference signal.
- the DR compression means performs dynamic range compression of the image signal using the dynamic range compression function F8.
- the visual processing device described in Appendix 41 includes input signal processing means and signal calculation means.
- the input signal processing means performs spatial processing on the input image signal and outputs a processed signal.
- the signal calculation means outputs an output signal based on a calculation of adding a value obtained by correcting the gradation of the image signal to a value that emphasizes the difference between the image signal and the processing signal.
- the difference between the image signal and the processed signal represents the sharp component of the image signal.
- Enhancement of the sharp component and gradation correction of the image signal are performed independently. For this reason, it is possible to apply a constant enhancement to the sharp component regardless of the gradation correction amount of the image signal.
- The visual processing device according to attachment 42 is the visual processing device according to attachment 41, wherein the signal calculation means calculates the value C of the output signal from the value A of the image signal, the value B of the processed signal, the enhancement function F11, and the gradation correction function F12, based on the formula C = F12(A) + F11(A - B).
- the value C of the output signal indicates the following. That is, the difference (A ⁇ B) between the value A of the image signal and the value B of the processed signal represents, for example, a sharp signal.
- F11(A - B) represents, for example, the emphasized sharp signal. The formula further indicates that the gradation-corrected image signal and the emphasized sharp signal are added.
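- The following sketch illustrates C = F12(A) + F11(A - B), where gradation correction of the image signal and enhancement of the sharp component are applied independently and then added; the gamma-style F12 and the fixed-gain F11 below are assumptions made for illustration.

```python
# A sketch of C = F12(A) + F11(A - B).
import numpy as np

def tone_correct_and_sharpen(A: np.ndarray, B: np.ndarray,
                             gamma: float = 0.45, gain: float = 1.5) -> np.ndarray:
    """A, B: luminance in [0, 1]; returns the visually processed output C."""
    F12 = lambda x: np.power(x, gamma)  # gradation (tone) correction
    F11 = lambda d: gain * d            # enhancement of the sharp component A - B
    return F12(A) + F11(A - B)
```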
- the visual processing device is the visual processing device according to attachment 42, wherein the signal calculation means includes enhancement processing means and addition processing means.
- the enhancement processing means performs enhancement processing on the difference signal between the image signal and the processing signal.
- the addition processing means adds the tone-corrected image signal and the enhanced difference signal, and outputs the result as an output signal.
- the enhancement processing means performs the enhancement processing on the difference signal using the enhancement function F11.
- the addition processing means adds the image signal that has been subjected to the gradation correction processing using the gradation correction function F12 and the difference signal that has been subjected to the enhancement processing.
- the visual processing method includes a first conversion step, a signal calculation step, and a second conversion step.
- In the first conversion step, input image data in a first predetermined range is converted into a second predetermined range to obtain an image signal.
- In the signal calculation step, an output signal in a third predetermined range is output based on a calculation including at least one of a calculation for performing dynamic range compression of the image signal and a calculation for enhancing the ratio between the image signal and a processed signal obtained by spatially processing the image signal.
- In the second conversion step, the output signal in the third predetermined range is converted into a fourth predetermined range to obtain output image data.
- the second predetermined range is determined based on a target contrast value that is a target value of contrast when displaying an image.
- the third predetermined range is determined based on an actual contrast value which is a contrast value in a display environment when displaying an image.
- With the visual processing method of the present invention, it is possible, for example, to maintain the target contrast value locally while compressing the dynamic range of the entire image to the actual contrast value reduced by the presence of ambient light. This improves the visual effect of the visually processed image.
- the visual processing device includes a first conversion unit, a signal calculation unit, and a second conversion unit.
- The first conversion means converts input image data in a first predetermined range into a second predetermined range to obtain an image signal.
- The signal calculation means outputs an output signal in a third predetermined range based on a calculation including at least one of a calculation for performing dynamic range compression of the image signal and a calculation for enhancing the ratio between the image signal and a processed signal obtained by spatially processing the image signal.
- the second conversion means converts an output signal in a third predetermined range into a fourth predetermined range to obtain output image data.
- the second predetermined range is determined based on a target contrast value that is a target value of contrast when performing image display.
- the third predetermined range is determined based on an actual contrast value which is a contrast value in a display environment when displaying an image.
- With the visual processing device of the present invention, it is possible, for example, to locally maintain the target contrast value while compressing the dynamic range of the entire image to the actual contrast value reduced by the presence of environmental light. This improves the visual effect of the visually processed image.
- The visual processing program according to attachment 46 is a visual processing program for causing a computer to perform visual processing.
- The visual processing program causes the computer to execute a first conversion step, a signal calculation step, and a second conversion step.
- the first conversion step converts input image data in a first predetermined range into a second predetermined range to obtain an image signal.
- In the signal calculation step, an output signal in a third predetermined range is output based on a calculation including at least one of a calculation for compressing the dynamic range of the image signal and a calculation for enhancing the ratio between the image signal and a processed signal obtained by spatially processing the image signal.
- In the second conversion step, the output signal in the third predetermined range is converted into a fourth predetermined range to obtain output image data.
- the second predetermined range is determined based on a target contrast value that is a target value of contrast when displaying an image.
- the third predetermined range is determined based on an actual contrast value that is a contrast value in a display environment when displaying an image.
- With the visual processing program of the present invention, it is possible, for example, to locally maintain the target contrast value while compressing the dynamic range of the entire image to the actual contrast value reduced by the presence of ambient light. Therefore, the visual effect of the visually processed image is improved. (Industrial applicability)
- With the visual processing device of the present invention, a person who views a visually processed image can obtain an image with a higher visual effect. The present invention is therefore useful as a visual processing device that performs visual processing, such as spatial processing or gradation processing, of an image signal.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
- Facsimile Image Signal Circuits (AREA)
Abstract
Description
Claims
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/571,296 US20070109447A1 (en) | 2003-09-11 | 2004-09-10 | Visual processing device, visual processing method, visual processing program, and semiconductor device |
EP04773246.6A EP1667065B1 (en) | 2003-09-11 | 2004-09-10 | Visual processing apparatus, visual processing method, visual processing program, and semiconductor device |
KR1020117000704A KR101089426B1 (ko) | 2003-09-11 | 2004-09-10 | 시각 처리 장치, 시각 처리 방법, 시각 처리 프로그램 및 반도체 장치 |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003320060 | 2003-09-11 | ||
JP2003-320060 | 2003-09-11 | ||
JP2004084118 | 2004-03-23 | ||
JP2004-084118 | 2004-03-23 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005027042A1 true WO2005027042A1 (ja) | 2005-03-24 |
Family
ID=34315662
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/013602 WO2005027042A1 (ja) | 2003-09-11 | 2004-09-10 | 視覚処理装置、視覚処理方法、視覚処理プログラムおよび半導体装置 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20070109447A1 (ja) |
EP (1) | EP1667065B1 (ja) |
JP (3) | JP4688945B2 (ja) |
KR (2) | KR101089426B1 (ja) |
WO (1) | WO2005027042A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8194999B2 (en) | 2007-01-30 | 2012-06-05 | Fujitsu Limited | Image generating apparatus, image generating method and computer product |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR100640063B1 (ko) * | 2005-02-18 | 2006-10-31 | 삼성전자주식회사 | 외부조도를 고려한 영상향상방법 및 장치 |
TW200631432A (en) * | 2005-02-21 | 2006-09-01 | Asustek Comp Inc | Display system and displaying method capable of auto-adjusting display brightness |
JP4872508B2 (ja) | 2006-07-28 | 2012-02-08 | ソニー株式会社 | 画像処理装置および画像処理方法、並びにプログラム |
KR100782505B1 (ko) * | 2006-09-19 | 2007-12-05 | 삼성전자주식회사 | 이동통신 단말기의 명암색을 이용한 영상 표시 방법 및장치 |
KR100827239B1 (ko) | 2006-10-17 | 2008-05-07 | 삼성전자주식회사 | 영상의 시인성을 향상시키는 장치 및 방법 |
US20100277452A1 (en) * | 2007-02-23 | 2010-11-04 | Sony Corporation | Mobile display control system |
US20080204599A1 (en) * | 2007-02-23 | 2008-08-28 | Sony Corporation | Mobile display control system |
JP4858610B2 (ja) * | 2007-02-28 | 2012-01-18 | 株式会社ニコン | 画像処理方法 |
JP4894595B2 (ja) * | 2007-04-13 | 2012-03-14 | ソニー株式会社 | 画像処理装置および方法、並びに、プログラム |
JP2009071621A (ja) * | 2007-09-13 | 2009-04-02 | Panasonic Corp | 画像処理装置及びデジタルカメラ |
JP4314305B1 (ja) * | 2008-02-04 | 2009-08-12 | シャープ株式会社 | 鮮鋭化画像処理装置、方法、及びソフトウェア |
JP5275122B2 (ja) * | 2008-05-30 | 2013-08-28 | パナソニック株式会社 | ダイナミックレンジ圧縮装置、ダイナミックレンジ圧縮方法、プログラム、集積回路および撮像装置 |
JP5169652B2 (ja) * | 2008-09-08 | 2013-03-27 | セイコーエプソン株式会社 | 画像処理装置、画像表示装置、画像処理方法及び画像表示方法 |
JP5487610B2 (ja) * | 2008-12-18 | 2014-05-07 | ソニー株式会社 | 画像処理装置および方法、並びにプログラム |
US8654140B2 (en) * | 2008-12-26 | 2014-02-18 | Seiko Epson Corporation | Image processor, image display device, and image processing method |
US8606009B2 (en) * | 2010-02-04 | 2013-12-10 | Microsoft Corporation | High dynamic range image generation and rendering |
KR101389932B1 (ko) | 2011-11-29 | 2014-04-29 | 연세대학교 산학협력단 | 이미지 톤 매핑 장치 및 방법 |
JP5933332B2 (ja) * | 2012-05-11 | 2016-06-08 | シャープ株式会社 | 画像処理装置、画像処理方法、画像処理プログラム、および画像処理プログラムを記憶した記録媒体 |
JP5514344B2 (ja) | 2012-05-15 | 2014-06-04 | シャープ株式会社 | 映像処理装置、映像処理方法、テレビジョン受像機、プログラム、及び記録媒体 |
KR101947125B1 (ko) * | 2012-11-27 | 2019-02-13 | 엘지디스플레이 주식회사 | 타이밍 컨트롤러 및 그 구동 방법과 이를 이용한 표시장치 |
RU2563333C2 (ru) * | 2013-07-18 | 2015-09-20 | Федеральное государственное унитарное предприятие "Научно-производственное объединение автоматики имени академика Н.А. Семихатова" | Бесплатформенная инерциальная навигационная система |
KR102111777B1 (ko) * | 2013-09-05 | 2020-05-18 | 삼성디스플레이 주식회사 | 영상 표시장치 및 그의 구동 방법 |
CN105379263B (zh) | 2013-11-13 | 2017-09-22 | 杜比实验室特许公司 | 用于指导图像的显示管理的方法和设备 |
RU2548927C1 (ru) * | 2013-12-05 | 2015-04-20 | Федеральное государственное унитарное предприятие "Научное объединение автоматики имени академика Н.А. Семихатова" | Система астронавигации |
JP6335614B2 (ja) * | 2014-04-25 | 2018-05-30 | キヤノン株式会社 | 画像処理装置、その制御方法、及びプログラム |
JP6523151B2 (ja) * | 2015-12-09 | 2019-05-29 | 富士フイルム株式会社 | 表示装置 |
US10043456B1 (en) * | 2015-12-29 | 2018-08-07 | Amazon Technologies, Inc. | Controller and methods for adjusting performance properties of an electrowetting display device |
US10664960B1 (en) * | 2019-04-15 | 2020-05-26 | Hanwha Techwin Co., Ltd. | Image processing device and method to perform local contrast enhancement |
Family Cites Families (43)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US660774A (en) * | 1900-03-23 | 1900-10-30 | Thomas H Hicks | Amalgamator. |
DE3024459A1 (de) * | 1979-07-03 | 1981-01-08 | Crosfield Electronics Ltd | Pyramid interpolation |
US4837722A (en) * | 1986-05-14 | 1989-06-06 | Massachusetts Institute Of Technology | Digital high speed 3-dimensional interpolation machine |
JPH0348980A (ja) * | 1989-07-18 | 1991-03-01 | Fujitsu Ltd | Contour enhancement processing method |
JP2663189B2 (ja) * | 1990-01-29 | 1997-10-15 | Fuji Photo Film Co., Ltd. | Dynamic range compression processing method for images |
JP2752309B2 (ja) * | 1993-01-19 | 1998-05-18 | Matsushita Electric Industrial Co., Ltd. | Display device |
JP3196864B2 (ja) * | 1993-04-19 | 2001-08-06 | Fuji Photo Film Co., Ltd. | Dynamic range compression processing method for images |
US5483360A (en) * | 1994-06-06 | 1996-01-09 | Xerox Corporation | Color printer calibration with blended look up tables |
US5479926A (en) * | 1995-03-10 | 1996-01-02 | Acuson Corporation | Imaging system display processor |
US6094185A (en) * | 1995-07-05 | 2000-07-25 | Sun Microsystems, Inc. | Apparatus and method for automatically adjusting computer display parameters in response to ambient light and user preferences |
JP3003561B2 (ja) * | 1995-09-25 | 2000-01-31 | Matsushita Electric Industrial Co., Ltd. | Gradation conversion method and circuit therefor, image display method and apparatus therefor, and image signal conversion device |
EP1156451B1 (en) * | 1995-09-29 | 2004-06-02 | Fuji Photo Film Co., Ltd. | Image processing method and apparatus |
JPH09275496A (ja) * | 1996-04-04 | 1997-10-21 | Dainippon Screen Mfg Co Ltd | Image contour enhancement processing apparatus and method |
US6351558B1 (en) * | 1996-11-13 | 2002-02-26 | Seiko Epson Corporation | Image processing system, image processing method, and medium having an image processing control program recorded thereon |
US6453069B1 (en) * | 1996-11-20 | 2002-09-17 | Canon Kabushiki Kaisha | Method of extracting image from input image using reference image |
KR100261214B1 (ko) * | 1997-02-27 | 2000-07-01 | Yun Jong-yong | Histogram equalization method and apparatus in a contrast expansion device of an image processing system |
JP2951909B2 (ja) * | 1997-03-17 | 1999-09-20 | Matsushita Electric Industrial Co., Ltd. | Gradation correction device and gradation correction method for an imaging device |
JP3585703B2 (ja) * | 1997-06-27 | 2004-11-04 | Sharp Corporation | Image processing apparatus |
US6147664A (en) * | 1997-08-29 | 2000-11-14 | Candescent Technologies Corporation | Controlling the brightness of an FED device using PWM on the row side and AM on the column side |
US6069597A (en) * | 1997-08-29 | 2000-05-30 | Candescent Technologies Corporation | Circuit and method for controlling the brightness of an FED device |
GB2335326B (en) * | 1997-10-31 | 2002-04-17 | Sony Corp | Image processing apparatus and method and providing medium. |
US6411306B1 (en) * | 1997-11-14 | 2002-06-25 | Eastman Kodak Company | Automatic luminance and contrast adjustment for display device |
JP2001527372A (ja) * | 1997-12-31 | 2001-12-25 | Gentex Corporation | Vehicle vision system |
US6323869B1 (en) * | 1998-01-09 | 2001-11-27 | Eastman Kodak Company | Method and system for modality dependent tone scale adjustment |
JP3809298B2 (ja) * | 1998-05-26 | 2006-08-16 | Canon Inc. | Image processing method, apparatus, and recording medium |
US6643398B2 (en) * | 1998-08-05 | 2003-11-04 | Minolta Co., Ltd. | Image correction device, image correction method and computer program product in memory for image correction |
US6275605B1 (en) * | 1999-01-18 | 2001-08-14 | Eastman Kodak Company | Method for adjusting the tone scale of a digital image |
US6580835B1 (en) * | 1999-06-02 | 2003-06-17 | Eastman Kodak Company | Method for enhancing the edge contrast of a digital image |
JP2001111858A (ja) * | 1999-08-03 | 2001-04-20 | Fuji Photo Film Co Ltd | Color correction definition creation method, color correction definition creation device, and storage medium storing a color correction definition creation program |
JP4076302B2 (ja) * | 1999-08-31 | 2008-04-16 | Sharp Corporation | Image brightness correction method |
US7006668B2 (en) * | 1999-12-28 | 2006-02-28 | Canon Kabushiki Kaisha | Image processing method and image processing apparatus |
JP3758452B2 (ja) * | 2000-02-28 | 2006-03-22 | Konica Minolta Business Technologies, Inc. | Recording medium, image processing apparatus, and image processing method |
US6813041B1 (en) * | 2000-03-31 | 2004-11-02 | Hewlett-Packard Development Company, L.P. | Method and apparatus for performing local color correction |
US6822762B2 (en) * | 2000-03-31 | 2004-11-23 | Hewlett-Packard Development Company, L.P. | Local color correction |
JP4081219B2 (ja) * | 2000-04-17 | 2008-04-23 | Fujifilm Corporation | Image processing method and image processing apparatus |
JP4605987B2 (ja) * | 2000-08-28 | 2011-01-05 | Seiko Epson Corporation | Projector, image processing method, and information storage medium |
US6483245B1 (en) * | 2000-09-08 | 2002-11-19 | Visteon Corporation | Automatic brightness control using a variable time constant filter |
US6856704B1 (en) * | 2000-09-13 | 2005-02-15 | Eastman Kodak Company | Method for enhancing a digital image based upon pixel color |
US6915024B1 (en) * | 2000-09-29 | 2005-07-05 | Hewlett-Packard Development Company, L.P. | Image sharpening by variable contrast mapping |
US7023580B2 (en) * | 2001-04-20 | 2006-04-04 | Agilent Technologies, Inc. | System and method for digital image tone mapping using an adaptive sigmoidal function based on perceptual preference guidelines |
US6826310B2 (en) * | 2001-07-06 | 2004-11-30 | Jasc Software, Inc. | Automatic contrast enhancement |
JP3752448B2 (ja) * | 2001-12-05 | 2006-03-08 | Olympus Corporation | Image display system |
JP2003242498A (ja) * | 2002-02-18 | 2003-08-29 | Konica Corp | Image processing method, image processing apparatus, image output method, and image output apparatus |
2004
- 2004-09-10 EP EP04773246.6A patent/EP1667065B1/en not_active Expired - Lifetime
- 2004-09-10 WO PCT/JP2004/013602 patent/WO2005027042A1/ja active Application Filing
- 2004-09-10 KR KR1020117000704A patent/KR101089426B1/ko active IP Right Grant
- 2004-09-10 KR KR1020067005004A patent/KR101027849B1/ko active IP Right Grant
- 2004-09-10 US US10/571,296 patent/US20070109447A1/en not_active Abandoned
2009
- 2009-05-12 JP JP2009115325A patent/JP4688945B2/ja not_active Expired - Fee Related
2010
- 2010-12-24 JP JP2010288267A patent/JP4745458B2/ja not_active Expired - Fee Related
2011
- 2011-04-04 JP JP2011083128A patent/JP5300906B2/ja not_active Expired - Lifetime
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0732669A1 (en) | 1995-03-14 | 1996-09-18 | Eastman Kodak Company | A method for precompensation of digital images for enhanced presentation on digital displays with limited capabilities |
JPH1065930A (ja) * | 1996-08-19 | 1998-03-06 | Fuji Xerox Co Ltd | Color image processing method and color image processing apparatus |
JPH10334218A (ja) * | 1997-06-02 | 1998-12-18 | Canon Inc | Image processing apparatus and method, and recording medium |
JP2002536677A (ja) * | 1999-02-01 | 2002-10-29 | Microsoft Corporation | Method and apparatus using a display device and display condition information |
US6618045B1 (en) | 2000-02-04 | 2003-09-09 | Microsoft Corporation | Display device with self-adjusting control parameters |
JP2002095021A (ja) * | 2000-09-13 | 2002-03-29 | Seiko Epson Corp | Correction curve generation method, image processing method, image display device, and recording medium |
JP2002204372A (ja) * | 2000-12-28 | 2002-07-19 | Canon Inc | Image processing apparatus and method |
JP2003108109A (ja) * | 2001-09-27 | 2003-04-11 | Seiko Epson Corp | Image display system, program, information storage medium, and image processing method |
Non-Patent Citations (1)
Title |
---|
See also references of EP1667065A4 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8194999B2 (en) | 2007-01-30 | 2012-06-05 | Fujitsu Limited | Image generating apparatus, image generating method and computer product |
Also Published As
Publication number | Publication date |
---|---|
EP1667065A1 (en) | 2006-06-07 |
JP2009213155A (ja) | 2009-09-17 |
JP2011090706A (ja) | 2011-05-06 |
KR101027849B1 (ko) | 2011-04-07 |
KR20060121875A (ko) | 2006-11-29 |
EP1667065A4 (en) | 2009-06-03 |
EP1667065B1 (en) | 2018-06-06 |
JP2011172260A (ja) | 2011-09-01 |
KR101089426B1 (ko) | 2011-12-07 |
JP4745458B2 (ja) | 2011-08-10 |
JP5300906B2 (ja) | 2013-09-25 |
KR20110007630A (ko) | 2011-01-24 |
JP4688945B2 (ja) | 2011-05-25 |
US20070109447A1 (en) | 2007-05-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4688945B2 (ja) | Visual processing device, visual processing method, television, portable information terminal, camera, and processor | |
JP4410304B2 (ja) | Visual processing device, visual processing method, image display device, television, portable information terminal, camera, and processor | |
JP4857360B2 (ja) | Visual processing device, visual processing method, television, portable information terminal, camera, and processor | |
JP4157592B2 (ja) | Visual processing device, display device, visual processing method, program, and integrated circuit | |
JP4440245B2 (ja) | Visual processing device, display device, and integrated circuit | |
JP2008159069A5 (ja) | ||
JP4437150B2 (ja) | Visual processing device, display device, visual processing method, program, and integrated circuit | |
WO2005027041A1 (ja) | Visual processing device, visual processing method, visual processing program, and semiconductor device | |
JP4414307B2 (ja) | Visual processing device, visual processing method, visual processing program, and semiconductor device | |
JP4126297B2 (ja) | Visual processing device, visual processing method, visual processing program, integrated circuit, display device, imaging device, and portable information terminal | |
JP2006024176A5 (ja) | ||
JP4094652B2 (ja) | Visual processing device, visual processing method, program, recording medium, display device, and integrated circuit | |
JP4437149B2 (ja) | Visual processing device, visual processing method, program, recording medium, display device, and integrated circuit | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200480026253.0 Country of ref document: CN |
|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BW BY BZ CA CH CN CO CR CU CZ DK DM DZ EC EE EG ES FI GB GD GE GM HR HU ID IL IN IS KE KG KP KR LC LK LR LS LT LU LV MA MD MG MN MW MX MZ NA NI NO NZ OM PG PL PT RO RU SC SD SE SG SK SL SY TM TN TR TT TZ UA UG US UZ VC YU ZA ZM |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SZ TZ UG ZM ZW AM AZ BY KG MD RU TJ TM AT BE BG CH CY DE DK EE ES FI FR GB GR HU IE IT MC NL PL PT RO SE SI SK TR BF CF CG CI CM GA GN GQ GW ML MR SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 1020067005004 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2004773246 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 2004773246 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2007109447 Country of ref document: US
Ref document number: 10571296 Country of ref document: US
|
WWP | Wipo information: published in national office |
Ref document number: 1020067005004 Country of ref document: KR |
|
WWP | Wipo information: published in national office |
Ref document number: 10571296 Country of ref document: US |