WO2005027043A1 - Visual processing device, visual processing method, visual processing program, integrated circuit, display device, imaging device, and portable information terminal - Google Patents
- Publication number
- WO2005027043A1 (PCT/JP2004/013605, JP2004013605W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- signal
- visual processing
- value
- processing device
- input
- Prior art date
Classifications
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
- G06T5/40—Image enhancement or restoration using histogram techniques
- G06T5/73—Deblurring; Sharpening
- G06T5/75—Unsharp masking
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T2207/10024—Color image (indexing scheme for image analysis or image enhancement; image acquisition modality)
Definitions
- Visual processing device, visual processing method, visual processing program, integrated circuit, display device, imaging device, and portable information terminal
- the present invention relates to a visual processing device, and more particularly to a visual processing device that performs visual processing such as spatial processing or gradation processing of an image signal. Further, the present invention relates to a visual processing method, a visual processing program, an integrated circuit, a display device, a photographing device, and a portable information terminal.
- Spatial processing and gradation processing are known. Spatial processing processes a pixel of interest by applying a filter that uses the pixels surrounding the pixel of interest.
- Techniques are known that use a spatially processed image signal to perform contrast enhancement of the original image and dynamic range (DR) compression.
- In contrast enhancement, the difference between the original image and the blur signal (the sharp component of the image) is added to the original image to sharpen the image.
- In DR compression, a part of the blur signal is subtracted from the original image to compress the dynamic range.
- Gradation processing is a process of converting pixel values using a look-up table (LUT) for each target pixel, regardless of the pixels surrounding the target pixel, and is sometimes called gamma correction.
- The LUT is set so as to allocate gradation to the gradation levels that appear frequently (occupy a large area) in the original image.
- Two forms of gradation processing are known: one that determines and uses a single LUT for the entire original image (histogram equalization), and one that determines a LUT for each of the image regions obtained by dividing the original image into multiple parts (local histogram equalization).
- See, for example, Japanese Patent Application Laid-Open No. 2000-57335 (page 3, FIG. 13 to FIG. 16).
- The gradation processing that determines and uses a LUT for each of the image regions obtained by dividing the original image into a plurality of regions will be described with reference to the following figures. FIG. 104 shows a visual processing device 300 that determines and uses a LUT for each of the image regions obtained by dividing the original image into a plurality of regions.
- The visual processing device 300 includes an image dividing unit 301 that divides the original image input as an input signal IS into a plurality of image regions Sm (1 ≤ m ≤ n, where n is the number of divisions of the original image), a tone conversion curve deriving unit 310 that derives a tone conversion curve Cm for each image region Sm, and a gradation processing unit 304.
- The tone conversion curve deriving unit 310 includes a histogram creation unit 302 that creates a brightness histogram Hm for each image area Sm, and a gradation curve creation unit 303 that creates a tone conversion curve Cm for each image area Sm from the created brightness histogram Hm.
- the image dividing unit 301 divides the original image input as the input signal IS into a plurality (n) of image areas (see FIG. 105 (a)).
- The histogram creation unit 302 creates a brightness histogram Hm for each image area Sm (see FIG. 106).
- Each brightness histogram Hm shows the distribution of brightness values of all pixels in the image area Sm. That is, in the brightness histograms Hm shown in FIGS. 106(a) to (d), the horizontal axis represents the brightness level of the input signal IS, and the vertical axis represents the number of pixels.
- the tone curve generation unit 303 accumulates the “pixel count” of the brightness histogram Hm in the order of brightness, and sets this accumulated curve as the tone conversion curve Cm (see FIG. 107).
- the horizontal axis indicates the brightness value of the pixel in the image area Sm in the input signal IS
- the vertical axis indicates the brightness value of the pixel in the image area Sm in the output signal OS.
- The gradation processing unit 304 loads the gradation conversion curve Cm and converts the brightness values of the pixels in the image area Sm of the input signal IS based on the gradation conversion curve Cm. In this way, frequently occurring gradation levels are given a steep gradient in each block, and the contrast of each block is improved.
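The local histogram equalization performed by units 301 to 304 can be sketched as follows; the 2×2 division scheme and the 256-level brightness range are assumptions for illustration only:

```python
import numpy as np

def tone_curve_from_histogram(block, levels=256):
    """Units 302/303: accumulate the brightness histogram Hm in order
    of brightness and normalize the cumulative curve into Cm."""
    hist = np.bincount(block.ravel(), minlength=levels).astype(float)
    cdf = np.cumsum(hist)
    return (cdf / cdf[-1] * (levels - 1)).astype(np.uint8)

def local_histogram_equalization(image, n_rows=2, n_cols=2):
    """Unit 301 divides the image into regions Sm; unit 304 converts
    each region's pixels with that region's curve Cm."""
    out = np.empty_like(image)
    for r in range(n_rows):
        for c in range(n_cols):
            rs = slice(r * image.shape[0] // n_rows,
                       (r + 1) * image.shape[0] // n_rows)
            cs = slice(c * image.shape[1] // n_cols,
                       (c + 1) * image.shape[1] // n_cols)
            curve = tone_curve_from_histogram(image[rs, cs])
            out[rs, cs] = curve[image[rs, cs]]
    return out
```

Because each block derives its own curve, frequently occurring brightness levels within a block are spread over the full output range, which is exactly the per-block contrast improvement described above.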
- FIG. 108 shows a visual processing device 400 that performs edge enhancement and contrast enhancement using unsharp masking.
- The visual processing device 400 shown in FIG. 108 includes a spatial processing unit 401 that performs spatial processing on the input signal IS and outputs an unsharp signal US, a subtraction unit 402 that subtracts the unsharp signal US from the input signal IS and outputs a difference signal DS, an enhancement processing unit 403 that performs enhancement processing on the difference signal DS and outputs an enhancement processing signal TS, and an addition unit 404 that adds the input signal IS and the enhancement processing signal TS and outputs the output signal OS.
- Fig. 109 shows enhancement functions R1 to R3.
- the horizontal axis represents the difference signal DS
- the vertical axis represents the enhancement processing signal TS.
- the enhancement function R1 is an enhancement function that is linear with respect to the difference signal DS.
- The enhancement function R2 is a non-linear enhancement function with respect to the difference signal DS, and is a function that suppresses excessive contrast.
- a greater suppression effect is exhibited for an input X having a large absolute value (X is the value of the differential signal DS).
- The enhancement function R2 is represented by a graph whose slope becomes smaller for an input X with a larger absolute value.
- The enhancement function R3 is a non-linear enhancement function with respect to the difference signal DS, and suppresses noise components with a small amplitude.
- a greater suppression effect is exhibited for an input X having a small absolute value (X is the value of the differential signal DS).
- The enhancement function R3 is represented by a graph whose slope becomes larger for an input X with a larger absolute value.
- The enhancement processing unit 403 uses any one of these enhancement functions R1 to R3.
- the difference signal DS is a sharp component of the input signal IS.
- the intensity of the difference signal DS is converted and added to the input signal IS. For this reason, the edge and contrast of the input signal IS are enhanced in the output signal OS.
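The unsharp-masking flow of device 400 can be sketched as below, with illustrative stand-ins for the enhancement functions R1 to R3 of FIG. 109; the exact function shapes and parameters are assumptions, not the patent's own definitions:

```python
import numpy as np

def unsharp_mask(input_signal, enhance, radius=1):
    """Device 400: spatial processing 401 -> unsharp signal US,
    subtraction 402 -> difference signal DS, enhancement 403 ->
    TS = R(DS), addition 404 -> OS = IS + TS."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    us = np.convolve(input_signal, kernel, mode="same")  # unsharp signal US
    ds = input_signal - us                               # difference signal DS
    return input_signal + enhance(ds)                    # OS = IS + TS

def R1(x):
    # linear with respect to DS
    return 0.5 * x

def R2(x):
    # slope decreases as |x| grows: suppresses excessive contrast
    return 10.0 * np.tanh(x / 10.0)

def R3(x):
    # zeroes small |x|: suppresses low-amplitude noise components
    return np.where(np.abs(x) < 2.0, 0.0, x)
```

Applying `unsharp_mask` with any of the three functions adds a converted sharp component back to the input, which is the edge/contrast enhancement described above.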
- FIG. 110 shows a visual processing device 406 that improves local contrast (see, for example, Japanese Patent No. 2832954 (page 2, FIG. 5)).
- The visual processing device 406 shown in FIG. 110 includes a spatial processing unit 407, a subtraction unit 408, a first conversion unit 409, a multiplication unit 410, a second conversion unit 411, and an addition unit 412.
- The spatial processing unit 407 performs spatial processing on the input signal IS and outputs an unsharp signal US.
- the subtracting unit 408 subtracts the unsharp signal US from the input signal IS and outputs a differential signal DS.
- the first conversion unit 409 outputs an amplification coefficient signal GS that locally amplifies the difference signal DS based on the intensity of the unsharp signal US.
- the multiplier 410 multiplies the difference signal DS by the amplification coefficient signal GS, and outputs a contrast enhancement signal HS obtained by locally amplifying the difference signal DS.
- The second conversion unit 411 locally corrects the intensity of the unsharp signal US and outputs a corrected unsharp signal AS.
- The addition unit 412 adds the contrast enhancement signal HS and the corrected unsharp signal AS and outputs the output signal OS.
- The amplification coefficient signal GS is a non-linear weighting coefficient that locally optimizes the contrast for the portions of the input signal IS where the contrast is not appropriate. As a result, portions of the input signal IS whose contrast is already appropriate are output as they are, while inappropriate portions are output with appropriate contrast.
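The flow through units 407 to 412 might be sketched as follows. The gain curve used for the first conversion unit and the identity used for the second conversion unit are illustrative assumptions; the description above only requires the gain to depend on the intensity of the unsharp signal:

```python
import numpy as np

def improve_local_contrast(input_signal, radius=1):
    """Sketch of device 406: GS depends on the local mean (US),
    HS = GS * DS, and OS = HS + corrected US."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    us = np.convolve(input_signal, kernel, mode="same")  # 407: unsharp signal US
    ds = input_signal - us                               # 408: difference signal DS
    gs = 1.0 + 1.0 / (1.0 + us / 64.0)  # 409: larger gain where surroundings are dark
    hs = gs * ds                        # 410: contrast enhancement signal HS
    as_ = us                            # 411: corrected unsharp signal AS (identity here)
    return hs + as_                     # 412: output signal OS
```

A flat region passes through unchanged (DS is zero there), while local deviations from the surroundings are amplified by the locally chosen gain.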
- FIG. 111 shows a visual processing device 416 that performs dynamic range compression (see, for example, Japanese Patent Laid-Open No. 2001-298619 (page 3, FIG. 9)).
- The visual processing device 416 shown in FIG. 111 includes a spatial processing unit 417 that performs spatial processing on the input signal IS and outputs an unsharp signal US, a LUT calculation unit 418 that inverts the unsharp signal US using a LUT and outputs the converted LUT processing signal LS, and an addition unit 419 that adds the input signal IS and the LUT processing signal LS and outputs the output signal OS.
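The structure of device 416 can be sketched as follows, with a simple linear attenuation standing in for the LUT calculation unit 418 (the actual LUT contents are not specified here):

```python
import numpy as np

def compress_dynamic_range(input_signal, radius=2, strength=0.5):
    """Sketch of device 416: LS is an inverted/attenuated version of
    the unsharp signal US; adding LS to IS compresses the dynamic
    range of the low-frequency components only."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    us = np.convolve(input_signal, kernel, mode="same")  # 417: unsharp signal US
    ls = -strength * us                                  # 418: LUT processing signal LS
    return input_signal + ls                             # 419: output signal OS
```

Frequency components below the cutoff of the spatial processing survive into US and are attenuated, while components above it remain in IS untouched, so detail is retained.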
- The LUT processing signal LS is added to the input signal IS to compress the dynamic range of the low-frequency components of the input signal IS (the frequency components below the cutoff frequency of the spatial processing unit 417). The high-frequency components are thus retained while the dynamic range of the input signal IS is compressed. (Disclosure of Invention)
- an object of the present invention is to provide a visual processing device having a hardware configuration that does not depend on the visual processing to be realized.
- the visual processing device includes input signal processing means and visual processing means.
- the input signal processing means performs predetermined processing on the input image signal and outputs a processed signal.
- The visual processing means converts the image signal based on conversion means that gives a conversion relationship between, on the one hand, the image signal and the processed signal and, on the other, the output signal that is the visually processed image signal, and outputs the output signal.
- the predetermined process is, for example, a direct or indirect process for the image signal, and includes a process for converting the pixel value of the image signal, such as a spatial process or a gradation process.
- In the visual processing device of the present invention, visual processing is performed using conversion means that gives a conversion relationship between the image signal and the processed signal on one side and the visually processed output signal on the other.
- The conversion means is, for example, a look-up table (LUT) that stores the value of the output signal for each combination of the values of the image signal and the processed signal, or means that outputs the output signal for the values of the image signal and the processed signal.
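A two-dimensional LUT of this kind might be built and applied as in the following sketch; the function being tabulated and the number of levels are arbitrary illustrations, not the patent's profile data:

```python
import numpy as np

def build_2d_lut(f, levels=16):
    """Tabulate an output value for every (image value, processed
    value) pair; f is whatever visual-processing function is wanted."""
    v = np.arange(levels)
    return np.array([[f(a, b) for b in v] for a in v])

def apply_2d_lut(lut, image, processed):
    """Conversion means: per pixel, look up OS from the pair
    (image-signal value, processed-signal value)."""
    return lut[image, processed]
```

Because the table is indexed by both signals, the same image-signal value can map to different outputs when the processed-signal value differs, which is the point of the two-input conversion relationship.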
- The visual processing device is the visual processing device according to claim 1, wherein the processed signal is a signal obtained by performing the predetermined processing on a pixel of interest included in the image signal and on the pixels surrounding the pixel of interest.
- The predetermined processing is, for example, spatial processing of the target pixel using its surrounding pixels, such as processing that derives the average, maximum, or minimum value of the target pixel and the surrounding pixels.
- In this visual processing device, even when target pixels have the same value, different visual processing can be realized for each owing to the influence of the surrounding pixels.
- The visual processing device is the visual processing device according to claim 1, wherein the conversion relationship given by the conversion means is non-linear between at least part of the image signal or at least part of the processed signal and at least part of the output signal.
- The non-linear relationship means that at least part of the value of the output signal is expressed as a non-linear function whose variables are at least part of the value of the image signal or at least part of the value of the processed signal, or is difficult to formulate as a function.
- the visual processing device of the present invention for example, it is possible to realize visual processing that matches the visual characteristics of the image signal or visual processing that matches the nonlinear characteristics of the device that outputs the output signal.
- The visual processing device is the visual processing device according to claim 3, wherein the conversion relationship given by the conversion means is non-linear between both the image signal and the processed signal on one side and the output signal on the other.
- both the image signal and the processed signal and the output signal are in a non-linear relationship.
- This means that the value of the output signal is expressed as a non-linear function having two variables, the value of the image signal and the value of the processed signal, or is difficult to formulate as a function.
- In the visual processing device of the present invention, for example, even if the value of the image signal is the same, different visual processing can be realized according to the value of the processed signal when the processed signal differs.
- The visual processing device is the visual processing device according to any one of claims 1 to 4, wherein the conversion relationship given by the conversion means is determined based on a computation that emphasizes a value calculated from the image signal and the processed signal.
- The value calculated from the image signal and the processed signal is, for example, a value obtained by one of the four arithmetic operations on the image signal and the processed signal, or a value obtained by converting the image signal or the processed signal with a certain function.
- Examples of computations that emphasize are computations that adjust gain, computations that suppress excessive contrast, and computations that suppress noise components with small amplitudes.
- the visual processing device of the present invention it is possible to emphasize the value calculated from the image signal and the processed signal.
- the visual processing device is the visual processing device according to claim 5,
- the emphasized operation is a non-linear function.
- In the visual processing device of the present invention, for example, it is possible to realize enhancement that matches the visual characteristics of the image signal, or enhancement that matches the non-linear characteristics of the device that outputs the output signal.
- the visual processing device is the visual processing device according to claim 5 or 6, wherein the operation to be emphasized is a conversion using a value obtained by converting an image signal or a processing signal.
- The visual processing device is the visual processing device according to any one of claims 5 to 7, wherein the emphasizing computation is an enhancement function that emphasizes the difference between the respective converted values obtained by converting the image signal and the processed signal.
- the enhancement function is, for example, a function that adjusts the gain, a function that suppresses excessive contrast, a function that suppresses small amplitude noise components, and the like.
- the visual processing device of the present invention it is possible to emphasize the difference between the image signal and the processing signal after converting them into different spaces. As a result, for example, it is possible to realize enhancement corresponding to visual characteristics.
- the visual processing device is the visual processing device according to any one of claims 5 to 8, wherein the enhancement operation is an enhancement function that enhances a ratio between the image signal and the processed signal.
- the ratio between the image signal and the processed signal represents the sharp component of the image signal. For this reason, for example, visual processing that emphasizes the sharp component can be performed.
- the visual processing device is the visual processing device according to claim 1 or 2, wherein the conversion relationship given by the conversion means is determined based on conversion that changes brightness.
- the visual processing device is the visual processing device according to claim 10, wherein the conversion for changing the brightness is a conversion for changing the level or gain of the image signal.
- Changing the level of the image signal means changing the value of the image signal by, for example, applying an offset to the image signal, changing the gain of the image signal, or performing some other calculation with the image signal as a variable.
- Changing the gain of the image signal means changing the coefficient by which the image signal is multiplied.
- the visual processing device is the visual processing device according to claim 10, wherein the conversion for changing the brightness is a conversion determined based on the processing signal.
- In the visual processing device of the present invention, for example, even if the value of the image signal is the same, different conversions can be realized according to the value of the processed signal when the processed signal differs.
- the visual processing device is the visual processing device according to claim 10, wherein the conversion for changing the brightness is a conversion for outputting an output signal that monotonously decreases with respect to the processing signal.
- When the processed signal is a spatially processed image signal,
- a dark area occupying a large part of the image is converted to be brighter, and
- a bright area occupying a large part of the image is converted to be darker. For this reason, it is possible, for example, to correct backlight or whiteout.
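Such a monotonically decreasing conversion might look like the following sketch; the particular gain curve is an assumption, chosen only so that dark surroundings are brightened and bright surroundings darkened:

```python
import numpy as np

def adjust_brightness(input_signal, unsharp, levels=256):
    """Output decreases monotonically with the processed (spatially
    averaged) signal: pixels in dark surroundings gain > 1, pixels in
    bright surroundings gain < 1."""
    gain = 1.5 - unsharp / (levels - 1)   # decreases as US grows
    return np.clip(input_signal * gain, 0, levels - 1)
```

A backlit subject (dark pixels whose surroundings are also dark) is lifted, while a washed-out sky (bright pixels in bright surroundings) is pulled down, matching the backlight/whiteout correction described above.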
- The visual processing device is the visual processing device according to any one of claims 1 to 13, wherein the conversion means stores the relationship between the image signal and the output signal as a gradation conversion curve group consisting of a plurality of gradation conversion curves.
- the gradation conversion curve group is a set of gradation conversion curves for applying gradation processing to pixel values such as luminance and brightness of an image signal.
- the visual processing device of the present invention it is possible to perform gradation processing of an image signal using a gradation conversion curve selected from a plurality of gradation conversion curves. This makes it possible to perform more appropriate gradation processing.
- The visual processing device is the visual processing device according to claim 14, wherein the processed signal is a signal for selecting the corresponding gradation conversion curve from the plurality of gradation conversion curves in the gradation conversion curve group.
- The processed signal is a signal for selecting a gradation conversion curve, for example, a spatially processed image signal or the like.
- the visual processing device of the present invention it is possible to perform gradation processing of an image signal using the gradation conversion curve selected by the processing signal.
- The visual processing device is the visual processing device according to claim 15, wherein each value of the processed signal is associated with at least one gradation conversion curve included in the plurality of gradation conversion curves.
- At least one gradation conversion curve used for gradation processing is selected according to the value of the processing signal.
- At least one gradation conversion curve is selected according to the value of the processing signal. Furthermore, the gradation processing of the image signal is performed using the selected gradation conversion curve.
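Selecting a curve from the group by the processed-signal value might be sketched as follows; the index mapping and the two example curves in the test are assumptions for illustration:

```python
import numpy as np

def apply_selected_curve(curves, pixel, processed_value, levels=256):
    """The processed-signal value picks one curve out of the gradation
    conversion curve group; the selected curve converts the pixel."""
    idx = min(processed_value * len(curves) // levels, len(curves) - 1)
    return curves[idx][pixel]
```

Two pixels with the same value can thus be converted by different curves when their (spatially processed) surroundings differ.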
- The visual processing device is the visual processing device according to any one of claims 1 to 16, wherein the conversion means includes a look-up table (hereinafter referred to as LUT), and profile data created in advance by a predetermined calculation is registered in the LUT.
- Visual processing is performed using a LUT in which profile data created in advance is registered.
- processing such as creating profile data is not necessary, and the execution speed of visual processing can be increased.
- The visual processing device according to claim 18 is the visual processing device according to claim 17, wherein the LUT can be changed by registration of profile data.
- profile data is LUT data that realizes different visual processing.
- the visual processing to be realized can be variously changed by registering profile data. That is, various visual processing can be realized without changing the hardware configuration of the visual processing device.
- the visual processing device according to claim 19 is the visual processing device according to claim 17 or 18, further comprising profile data registration means for causing the visual processing means to register profile data.
- the profile data registration means registers the pre-calculated profile data in the visual processing means according to the visual processing.
- the visual processing to be realized can be variously changed by registering profile data. That is, various visual processing can be realized without changing the hardware configuration of the visual processing device.
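The idea that registering new profile data changes the visual processing without changing the hardware can be sketched as below, using a one-dimensional LUT for brevity (the conversion means described above is indexed by two signals):

```python
class VisualProcessor:
    """Visual processing means whose behavior is determined entirely
    by the registered profile data."""

    def __init__(self, levels=256):
        self.lut = list(range(levels))      # identity profile by default

    def register_profile(self, profile_data):
        """Profile data registration means: swapping the profile
        changes the realized visual processing."""
        self.lut = list(profile_data)

    def process(self, pixel):
        return self.lut[pixel]
```

The same `process` path serves every profile, which is the sense in which various visual processing is realized on one hardware configuration.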
- the visual processing device according to claim 20 is the visual processing device according to claim 19, wherein the visual processing means obtains profile data created by an external device.
- Profile data is created in advance by an external device.
- An external device is, for example, a computer having a program and CPU that can create profile data.
- the visual processing means acquires profile data. Acquisition is performed, for example, via a network or a recording medium.
- the visual processing means executes visual processing using the acquired profile data. In the visual processing device of the present invention, visual processing can be executed using profile data created by an external device.
- the visual processing device according to claim 21 is the visual processing device according to claim 20, wherein the LUT can be changed by the acquired profile data.
- the acquired profile data is newly registered as LUT. This makes it possible to change LUT and realize different visual processing.
- the visual processing device is the visual processing device according to claim 20 or 21, wherein the visual processing means acquires profile data via a communication network.
- The communication network is, for example, connection means capable of communication, such as a dedicated line, a public line, the Internet, or a LAN, and may be wired or wireless.
- visual processing can be realized using profile data acquired via a communication network.
- the visual processing device is the visual processing device according to claim 17, further comprising profile data creating means for creating profile data.
- the profile data creation means creates profile data using characteristics such as image signals and processing signals.
- Visual processing can be realized using the profile data created by the profile data creation means.
- The visual processing device is the visual processing device according to claim 23, wherein the profile data creation means creates profile data based on a histogram of the gradation characteristics of the image signal.
- visual processing is realized using profile data created based on a histogram of gradation characteristics of an image signal. Therefore, it is possible to realize appropriate visual processing according to the characteristics of the image signal.
- the visual processing device is the visual processing device according to claim 17, wherein the profile data registered in LUT is switched according to a predetermined condition.
- visual processing is realized by using profile data switched according to a predetermined condition. This makes it possible to achieve more appropriate visual processing.
- the visual processing device is the visual processing device according to claim 25, wherein the predetermined condition is a condition relating to brightness.
- with the visual processing device of the present invention, it is possible to realize more appropriate visual processing under the conditions relating to brightness.
- the visual processing device according to claim 27 is the visual processing device according to claim 26, wherein the brightness is the brightness of the image signal.
- with the visual processing device of the present invention, it is possible to realize more appropriate visual processing under the conditions relating to the brightness of the image signal.
- the visual processing device is the visual processing device according to claim 27, further comprising lightness determination means for determining the brightness of the image signal.
- the profile data registered in the LUT is switched according to the judgment result of the lightness determination means.
- the lightness determination means determines the brightness of the image signal based on, for example, pixel values such as luminance and lightness of the image signal, and the profile data can be switched according to the judgment result.
- the visual processing device is the visual processing device according to claim 26, further comprising lightness input means for inputting a condition relating to brightness.
- the profile data registered in LUT is switched according to the input result of the brightness input means.
- the brightness input means is, for example, a switch, connected by wire or wirelessly, that allows the user to input a condition relating to brightness.
- with the visual processing device of the present invention, the user can determine the condition relating to brightness and switch the profile data via the brightness input means. For this reason, it is possible to realize visual processing appropriate for the user.
- the visual processing device is the visual processing device according to claim 29, wherein the brightness input means inputs the brightness of the output environment of the output signal or the brightness of the input environment of the input signal.
- the brightness of the output environment is, for example, the brightness of the ambient light around the medium that outputs the output signal, such as a computer, a TV, a digital camera, a mobile phone, or a PDA, or the brightness of the medium itself that outputs the output signal, such as printer paper.
- the brightness of the input environment is, for example, the brightness of the medium itself that receives the input signal, such as scanner paper.
- with the visual processing device of the present invention, for example, the user can determine conditions relating to room brightness and the like and switch the profile data via the brightness input means. This makes it possible to achieve visual processing that is more appropriate for the user.
- the visual processing device is the visual processing device according to claim 26, further comprising lightness detection means for detecting at least two types of brightness.
- the profile data registered in the LUT is switched according to the detection result of the brightness detection means.
- the brightness detection means is, for example, means for detecting the brightness of the image signal based on pixel values such as luminance and lightness of the image signal, means such as a photosensor for detecting the brightness of the output environment or the brightness of the input environment, or means for detecting a condition relating to brightness input by the user.
- the brightness of the output environment is, for example, the brightness of the ambient light around the medium that outputs the output signal, such as a computer, a TV, a digital camera, a mobile phone, or a PDA, or the medium that outputs the output signal, such as printer paper.
- the brightness of the input environment is, for example, the brightness of the medium itself that receives input signals such as scanner paper.
- with the visual processing device of the present invention, at least two types of brightness are detected, and the profile data is switched accordingly. This makes it possible to achieve more appropriate visual processing.
- the visual processing device is the visual processing device according to claim 31, wherein the brightness detected by the brightness detection means is the brightness of the image signal and the brightness of the output environment of the output signal or the brightness of the input environment of the input signal.
- more appropriate visual processing can be realized according to the brightness of the image signal and the brightness of the output environment of the output signal or the brightness of the input environment of the input signal.
- the visual processing device is the visual processing device according to claim 25, further comprising profile data selection means for selecting profile data registered in LUT.
- the profile data registered in the LUT is switched according to the selection result of the profile data selection means.
- the profile data selection means allows the user to select profile data.
- the visual processing device implements visual processing using the selected profile data.
- the user can select profile data according to his or her preference to realize visual processing.
- the visual processing device according to claim 34 is the visual processing device according to claim 33, and the profile data selection means is an input device for selecting a profile.
- the input device is, for example, a switch built in the visual processing device or connected by wire or wireless.
- the user can use the input device to select a desired profile.
- the visual processing device is the visual processing device according to claim 25, further comprising image characteristic judging means for judging the image characteristic of the image signal.
- the profile data registered in LUT is switched according to the determination result of the image characteristic determination means.
- the image characteristic determining means determines image characteristics such as luminance, brightness, or spatial frequency of the image signal.
- the visual processing device implements visual processing using the profile data switched according to the determination result of the image characteristic determination means.
- the image characteristic judging means automatically selects the profile data corresponding to the image characteristic. Therefore, visual processing can be realized using more appropriate profile data for the image signal.
- the visual processing device is the visual processing device according to claim 25, further comprising user identification means for identifying a user.
- the profile data registered in LUT is switched according to the identification result of the user identification means.
- the user identification means is, for example, an input device for identifying the user or a camera.
- visual processing suitable for the user identified by the user identifying means can be realized.
- the visual processing device is the visual processing device according to claim 17, wherein the visual processing means performs an interpolation operation on a value stored in LUT and outputs an output signal.
- the LUT stores output values for the image signal or the processing signal sampled at predetermined intervals. By interpolating the LUT values corresponding to the interval containing the input image signal value or processing signal value, the output signal value for the input image signal value or processing signal value is output.
- the visual processing device is the visual processing device according to claim 37, wherein the interpolation operation is linear interpolation based on the value of at least one lower-order bit of the image signal or the processing signal expressed in binary.
- the LUT stores values corresponding to the values of the upper-order bits of the image signal or the processing signal.
- the visual processing means linearly interpolates the LUT values corresponding to the interval containing the input image signal or processing signal value using the values of the lower-order bits of the image signal or the processing signal, and outputs the output signal.
- with the visual processing device of the present invention, it is possible to realize more accurate visual processing while storing the LUT with a smaller storage capacity.
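The upper-bit/lower-bit interpolation described above can be sketched as follows. The 6-bit/2-bit split matches the 64×64 example described later for FIG. 2; the bilinear blend over both LUT axes and the clamping at the top edge are illustrative assumptions, since the claim only requires linear interpolation on the lower-order bits.

```python
def lut_bilinear(lut, a, b, lower_bits=2):
    """Look up an output value in a reduced-size 2-D LUT.

    lut[i][j] holds the output for input value i << lower_bits and
    unsharp value j << lower_bits (the LUT is indexed by the upper
    bits only).  The lower bits of each 8-bit input select a position
    inside the interval, and the four surrounding entries are blended
    by bilinear interpolation.
    """
    step = 1 << lower_bits
    n = len(lut) - 1
    ia, fa = a >> lower_bits, (a & (step - 1)) / step
    ib, fb = b >> lower_bits, (b & (step - 1)) / step
    ia1, ib1 = min(ia + 1, n), min(ib + 1, n)   # clamp at the top edge
    top = lut[ia][ib] * (1 - fb) + lut[ia][ib1] * fb
    bot = lut[ia1][ib] * (1 - fb) + lut[ia1][ib1] * fb
    return round(top * (1 - fa) + bot * fa)

# 64x64 identity-like table: the output equals the input-signal axis value.
lut = [[i << 2 for _ in range(64)] for i in range(64)]
```

With this table, `lut_bilinear(lut, 130, 200)` reconstructs 130 exactly even though only every fourth input level is stored, which is the storage saving the claim refers to.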
- the visual processing device is the visual processing device according to any one of claims 1 to 38, wherein the input signal processing means performs spatial processing on the image signal.
- the visual processing device is the visual processing device according to claim 39, wherein the input signal processing means generates an unsharp signal from the image signal.
- the unsharp signal means a signal obtained by performing spatial processing directly or indirectly on the image signal.
- with the visual processing device of the present invention, it is possible to realize visual processing by the LUT using an image signal and an unsharp signal.
- the visual processing device is the visual processing device according to claim 39 or 40, wherein an average value, a maximum value, or a minimum value of the image signal is derived in the spatial processing.
- the average value may be, for example, a simple average of image signals or a weighted average.
- visual processing can be realized by the LUT using the image signal and the average value, maximum value, or minimum value of the image signal.
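A minimal sketch of deriving the average, maximum, and minimum values in the spatial processing might look like this; the square window and the clipping of the window at the image border are illustrative assumptions, and a simple (unweighted) average is used, although the text also allows a weighted average.

```python
def spatial_stats(image, x, y, radius=1):
    """Return (average, maximum, minimum) of the pixels in a square
    neighborhood around (x, y), clipping the window at the borders."""
    h, w = len(image), len(image[0])
    vals = [image[j][i]
            for j in range(max(0, y - radius), min(h, y + radius + 1))
            for i in range(max(0, x - radius), min(w, x + radius + 1))]
    return sum(vals) / len(vals), max(vals), min(vals)

img = [[10, 20, 30],
       [40, 50, 60],
       [70, 80, 90]]
avg, mx, mn = spatial_stats(img, 1, 1)   # full 3x3 window
```

Any of the three returned values can then serve as the processing signal fed to the LUT alongside the image signal itself.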
- the visual processing device is the visual processing device according to any one of claims 1 to 41, wherein the visual processing means performs spatial processing and gradation processing using the input image signal and the processing signal.
- the visual processing method includes an input signal processing step and a visual processing step.
- the input signal processing step performs predetermined processing on the input image signal and outputs a processed signal.
- the visual processing step converts the input image signal and outputs an output signal based on conversion means that gives a relationship between the input image signal and the processed signal and the output signal, which is the visually processed image signal.
- the predetermined process is, for example, a direct or indirect process for the image signal, and includes a process for converting the pixel value of the image signal, such as a spatial process or a gradation process.
- visual processing is performed using conversion means that provides a conversion relationship between the image signal and the processed signal and the visually processed output signal.
- the visual processing program according to claim 44 is a visual processing program for performing a visual processing method by a computer, and causes the computer to perform a visual processing method including an input signal processing step and a visual processing step.
- the input signal processing step performs predetermined processing on the input image signal and outputs a processed signal.
- the visual processing step converts the input image signal based on conversion means that gives a relationship between the input image signal and the processed signal and the output signal, which is the visually processed image signal, and outputs the output signal.
- the predetermined processing is, for example, direct or indirect processing for the image signal, and includes processing for converting the pixel value of the image signal, such as spatial processing or gradation processing.
- visual processing is performed using conversion means that provides a conversion relationship between the image signal and the processed signal and the visually processed output signal.
- the integrated circuit according to claim 45 includes the visual processing device according to any one of claims 1 to 42.
- a display device includes the visual processing device according to any one of claims 1 to 42 and display means for displaying an output signal output from the visual processing device.
- the imaging device includes imaging means for capturing an image and a visual processing device that performs visual processing using the image captured by the imaging means as an image signal.
- the portable information terminal includes data receiving means for receiving image data via communication or broadcast, a visual processing device that performs visual processing using the received image data as an image signal, and display means for displaying the image signal visually processed by the visual processing device.
- the portable information terminal includes photographing means for capturing an image, a visual processing device that performs visual processing using the image captured by the photographing means as an image signal, and data transmission means for transmitting the visually processed image signal.
- with the portable information terminal of the present invention, it is possible to obtain the same effect as the visual processing device according to any one of claims 1 to 42.
- with the visual processing device of the present invention, it is possible to provide a visual processing device having a hardware configuration that does not depend on the visual processing to be realized.
- FIG. 1 is a block diagram (first embodiment) for explaining the structure of the visual processing device 1.
- FIG. 2 is an example of profile data (first embodiment).
- FIG. 3 is a flowchart illustrating the visual processing method (first embodiment).
- FIG. 4 is a block diagram (first embodiment) illustrating the structure of the visual processing unit 500.
- FIG. 5 is an example of profile data (first embodiment).
- FIG. 6 is a block diagram (first embodiment) for explaining the structure of the visual processing device 520.
- FIG. 7 is a block diagram (first embodiment) for explaining the structure of the visual processing device 525.
- FIG. 8 is a block diagram (first embodiment) for explaining the structure of the visual processing device 530.
- FIG. 9 is a block diagram (first embodiment) for explaining the structure of the profile data registration device 701.
- FIG. 10 is a flowchart (first embodiment) for explaining the visual processing profile creation method.
- FIG. 11 is a block diagram (first embodiment) for explaining the structure of the visual processing device 91.
- FIG. 12 is a graph (first embodiment) showing the relationship between the input signal IS′ and the output signal OS′ when the degree-of-change function fk(z) is changed.
- FIG. 13 is a graph (first embodiment) showing the degree-of-change functions f1(z) and f2(z).
- FIG. 15 is a block diagram (first embodiment) for explaining the structure of the visual processing device 11.
- FIG. 16 is a block diagram (first embodiment) for explaining the structure of the visual processing device 21.
- FIG. 17 is an explanatory diagram (first embodiment) for explaining the two dynamic range compression functions F4i.
- FIG. 18 is an explanatory diagram for explaining the enhancement function F 5 (first embodiment).
- FIG. 19 is a block diagram for explaining the structure of the visual processing device 31 (first embodiment).
- FIG. 20 is a block diagram illustrating the structure of the visual processing device 41 (first embodiment).
- FIG. 21 is a block diagram for explaining the structure of the visual processing device 51 (first embodiment).
- FIG. 22 is a block diagram (first embodiment) for explaining the structure of the visual processing device 61.
- FIG. 23 is a block diagram for explaining the structure of the visual processing device 71 (first embodiment).
- FIG. 24 is a block diagram for explaining the structure of the visual processing device 600 (second embodiment).
- FIG. 25 is a graph (second embodiment) for explaining the conversion by the equation M20.
- FIG. 26 is a graph (second embodiment) for explaining the conversion by the equation M2.
- FIG. 27 is a graph (second embodiment) for explaining the conversion by the equation M21.
- FIG. 28 is a flowchart for explaining the visual processing method (second embodiment).
- FIG. 29 is a graph showing the tendency of the function QM (A) (second embodiment).
- FIG. 30 is a graph showing the tendency of the function QT 2 (A) (second embodiment).
- FIG. 31 is a graph (second embodiment) showing the tendency of the function QT 3 (A).
- FIG. 32 is a graph (second embodiment) showing the tendency of the function QT4(A, B).
- FIG. 33 is a block diagram (second embodiment) for explaining the structure of the actual contrast setting unit 605 as a modified example.
- FIG. 34 is a block diagram (second embodiment) for explaining the structure of the actual contrast setting unit 605 as a modified example.
- FIG. 35 is a flowchart (second embodiment) for explaining the operation of the control unit 605e.
- FIG. 36 is a block diagram (second embodiment) for explaining the structure of a visual processing device 600 including the color difference correction processing unit 608.
- FIG. 37 is an explanatory diagram (second embodiment) for explaining the outline of the color difference correction process.
- FIG. 38 is a flowchart (second embodiment) illustrating the estimation calculation in the color difference correction processing unit 608.
- FIG. 39 is a block diagram (second embodiment) for explaining the structure of a visual processing device 600 as a modified example.
- FIG. 40 is a block diagram (third embodiment) for explaining the structure of the visual processing device 910.
- FIG. 41 is a block diagram (third embodiment) for explaining the structure of the visual processing device 920.
- FIG. 42 is a block diagram (third embodiment) for explaining the structure of the visual processing device 920′.
- FIG. 43 is a block diagram (third embodiment) for explaining the structure of the visual processing device 920″.
- FIG. 44 is a block diagram for explaining the structure of the visual processing device 101 (fourth embodiment).
- FIG. 45 is an explanatory diagram (fourth embodiment) for explaining the image region Pm.
- FIG. 46 is an explanatory diagram (fourth embodiment) for explaining the brightness histogram Hm.
- FIG. 47 is an explanatory diagram (fourth embodiment) for explaining the gradation conversion curve Cm.
- FIG. 48 is a flowchart for explaining the visual processing method (fourth embodiment).
- FIG. 49 is a block diagram (fifth embodiment) for explaining the structure of the visual processing device 111.
- FIG. 50 is an explanatory diagram (fifth embodiment) for explaining the gradation conversion curve candidates G1 to Gp.
- FIG. 51 is an explanatory diagram (fifth embodiment) for explaining the two-dimensional LUT 141.
- FIG. 52 is an explanatory diagram (fifth embodiment) for explaining the operation of the gradation correction unit 115.
- FIG. 53 is a flowchart (fifth embodiment) for explaining the visual processing method.
- FIG. 54 is an explanatory diagram (fifth embodiment) for explaining a modification of the selection of the gradation conversion curve Cm.
- FIG. 55 is an explanatory diagram (fifth embodiment) for explaining gradation processing as a modification.
- FIG. 56 is a block diagram (fifth embodiment) for explaining the structure of the gradation processing execution unit 144.
- FIG. 57 is an explanatory diagram (fifth embodiment) for explaining the relationship between the curve parameters P1 and P2 and the tone conversion curve candidates G1 to Gp.
- FIG. 58 is an explanatory diagram (fifth embodiment) for explaining the relationship between the curve parameters P1 and P2 and the selection signal Sm.
- FIG. 59 is an explanatory diagram (fifth embodiment) for explaining the relationship between the curve parameters P1 and P2 and the selection signal Sm.
- FIG. 60 is an explanatory diagram (fifth embodiment) for explaining the relationship between the curve parameters P1 and P2 and the tone conversion curve candidates G1 to Gp.
- FIG. 61 is an explanatory diagram (fifth embodiment) for explaining the relationship between the curve parameters P1 and P2 and the selection signal Sm.
- FIG. 62 is a block diagram (sixth embodiment) for explaining the structure of the visual processing device 121.
- FIG. 63 is an explanatory diagram (sixth embodiment) for explaining the operation of the selection signal correction unit 124.
- FIG. 64 is a flowchart for explaining a visual processing method (sixth embodiment).
- FIG. 65 is a block diagram (seventh embodiment) for explaining the structure of the visual processing device 161.
- FIG. 66 is an explanatory diagram (seventh embodiment) for explaining the spatial processing of the spatial processing unit 162.
- FIG. 67 is a table (seventh embodiment) for explaining the weighting coefficient [Wij].
- FIG. 68 is an explanatory diagram (seventh embodiment) for explaining the effect of the visual processing by the visual processing device 161.
- FIG. 69 is a block diagram (seventh embodiment) for explaining the structure of the visual processing device 961.
- FIG. 70 is an explanatory diagram (seventh embodiment) for explaining the spatial processing of the spatial processing unit 962.
- FIG. 71 is a table (seventh embodiment) for explaining the weighting coefficient [Wij].
- FIG. 72 is a block diagram (ninth embodiment) for explaining the overall configuration of the content supply system.
- FIG. 73 is an example (9th embodiment) of a mobile phone equipped with the visual processing device of the present invention.
- FIG. 74 is a block diagram (9th embodiment) for explaining the configuration of a mobile phone.
- FIG. 75 is an example of a digital broadcasting system (9th embodiment).
- FIG. 76 is a block diagram (10th embodiment) illustrating the structure of the display device 720.
- FIG. 77 is a block diagram (10th embodiment) for explaining the structure of the image processing device 723.
- FIG. 78 is a block diagram (10th embodiment) for explaining the structure of the profile information output unit 747.
- FIG. 79 is a block diagram (10th embodiment) for explaining the structure of the color visual processing device 745.
- FIG. 80 is a block diagram (10th embodiment) for explaining the structure of the visual processing device 753.
- FIG. 81 is an explanatory diagram (10th embodiment) for explaining the operation of the visual processing device 753 as a modification.
- FIG. 82 is a block diagram (10th embodiment) for explaining the structure of the visual processing device 75 3 a.
- FIG. 83 is a block diagram (10th embodiment) for explaining the structure of the visual processing device 753b.
- FIG. 84 is a block diagram (10th embodiment) for explaining the structure of the visual processing device 753c.
- FIG. 85 is a block diagram (10th embodiment) for explaining the structure of the image processing device 770.
- FIG. 86 is a block diagram (10th embodiment) for explaining the structure of the user input unit 772.
- FIG. 87 is a block diagram (10th embodiment) for explaining the structure of the image processing apparatus 800.
- FIG. 88 is an example of the format of the input image signal d362 (10th embodiment).
- FIG. 89 is a block diagram (10th embodiment) illustrating the structure of the attribute determination unit 802.
- FIG. 90 is an example of the format of the input image signal d362 (10th embodiment).
- FIG. 91 is an example of the format of the input image signal d362 (10th embodiment).
- FIG. 92 is an example of the format of the input image signal d362 (10th embodiment).
- FIG. 93 is an example of the format of the input image signal d362 (10th embodiment).
- FIG. 94 is an example of the format of the input image signal d362 (10th embodiment).
- FIG. 95 is a block diagram (10th embodiment) for explaining the structure of the imaging device 820.
- FIG. 96 is a block diagram (10th embodiment) illustrating the structure of the image processing device 832.
- FIG. 97 is a block diagram (10th embodiment) for explaining the structure of the image processing device 886.
- FIG. 98 is an example of the format of the output image signal d361 (10th embodiment).
- FIG. 99 is a block diagram (10th embodiment) illustrating the structure of the image processing device 894.
- FIG. 100 is a block diagram (10th embodiment) illustrating the structure of the image processing device 896.
- FIG. 101 is a block diagram (10th embodiment) for explaining the structure of the image processing apparatus 898.
- FIG. 102 is a block diagram (10th embodiment) for explaining the structure of the image processing device 870.
- FIG. 103 is an explanatory diagram (10th embodiment) for explaining the operation of the image processing apparatus 870.
- FIG. 104 is a block diagram (background art) for explaining the structure of the visual processing device 300.
- FIG. 105 is a block diagram (background art) for explaining the structure of the visual processing device 300.
- FIG. 106 is an explanatory diagram (background art) explaining the image area Sm.
- FIG. 107 is an explanatory diagram (background art) explaining the brightness histogram Hm.
- FIG. 108 is an explanatory diagram (background art) explaining the gradation conversion curve Cm.
- FIG. 109 is a block diagram (background art) for explaining the structure of a visual processing device 400 using unsharp masking.
- FIG. 110 is an explanatory diagram (background art) for explaining the enhancement functions R1 to R3.
- FIG. 111 is a block diagram (background art) for explaining the structure of the visual processing device 406 that improves the local contrast.
- FIG. 112 is a block diagram (background art) for explaining the structure of a visual processing device 416 that performs dynamic range compression.
- the first to tenth embodiments as the best mode of the present invention will be described.
- a visual processing device using two-dimensional LUT will be described.
- a visual processing device that corrects ambient light when ambient light is present in the environment for displaying an image will be described.
- a visual processing device 1 using a two-dimensional LUT as a first embodiment of the present invention will be described with reference to FIGS.
- a modification of the visual processing device will be described with reference to FIGS.
- a visual processing device that realizes visual processing equivalent to the visual processing device 1 will be described with reference to FIGS.
- the visual processing device 1 is a device that performs visual processing such as spatial processing and gradation processing of image signals.
- the visual processing device 1 constitutes an image processing device together with a device that performs color processing of an image signal in a device that handles images such as a computer, a television, a digital camera, a mobile phone, a PDA, a printer, and a scanner.
- Figure 1 shows the basic configuration of the visual processing device 1 that performs visual processing on the image signal (input signal IS) and outputs a visually processed image (output signal OS).
- the visual processing device 1 includes a spatial processing unit 2 that performs spatial processing on the luminance value of each pixel of the original image acquired as the input signal IS and outputs an unsharp signal US, and a visual processing unit 3 that performs visual processing of the original image using the input signal IS and the unsharp signal US for the same pixel and outputs an output signal OS.
- the spatial processing unit 2 obtains the unsharp signal US using, for example, a low-pass spatial filter that passes only the low-frequency components of the input signal IS.
- as the low-pass spatial filter, an FIR (Finite Impulse Response) type or IIR (Infinite Impulse Response) type low-pass spatial filter, which are commonly used to generate unsharp signals, may be used.
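As a minimal sketch of such a low-pass spatial filter, the following uses a separable moving-average FIR filter; the uniform taps and the clipping of the window at the image border are illustrative choices, since the text only requires some low-pass FIR or IIR filter.

```python
def unsharp_signal(image, radius=2):
    """Derive an unsharp signal US from the input signal IS with a
    moving-average FIR low-pass filter, applied row-wise and then
    column-wise (a separable implementation)."""
    def smooth_line(line):
        out = []
        for i in range(len(line)):
            lo, hi = max(0, i - radius), min(len(line), i + radius + 1)
            out.append(sum(line[lo:hi]) / (hi - lo))  # local mean
        return out
    rows = [smooth_line(r) for r in image]
    cols = [smooth_line(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

On a flat region the unsharp signal equals the input; across an edge it takes intermediate values, which is what lets the 2D LUT treat a pixel differently depending on its surroundings.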
- the visual processing unit 3 has a two-dimensional LUT 4 that gives the relationship between the input signal IS and the unsharp signal US and the output signal OS, and refers to the two-dimensional LUT 4 with the input signal IS and the unsharp signal US to output the output signal OS.
- matrix data called profile data is registered in the two-dimensional LUT 4.
- the profile data has rows (or columns) corresponding to the pixel values of the input signal IS and columns (or rows) corresponding to the pixel values of the unsharp signal US, and stores, as matrix elements, the pixel values of the output signal OS corresponding to each combination of the input signal IS and the unsharp signal US.
- the profile data is registered in the two-dimensional LUT 4 by the profile data registration device 8 built in or connected to the visual processing device 1.
- the profile data registration device 8 stores a plurality of profile data created in advance by a personal computer (PC) or the like.
- a plurality of profile data that realize contrast enhancement, dynamic range compression processing, or gradation correction (for details, see <Profile Data> below) are stored.
- the visual processing device 1 can implement various visual processing by using the profile data registration device 8 to change the profile data registered in the two-dimensional LUT 4.
- An example of profile data is shown in FIG. 2.
- the profile data shown in FIG. 2 is profile data for causing the visual processing device 1 to realize processing equivalent to the visual processing device 400 shown in FIG. In FIG. 2, the profile data is expressed in a 64×64 matrix format.
- in the row direction (vertical direction), the value of the upper 6 bits of the 8-bit luminance value of the input signal IS is shown; in the column direction (horizontal direction), the value of the upper 6 bits of the 8-bit luminance value of the unsharp signal US is shown.
- the value of the output signal OS is indicated in 8 bits as the matrix element for each pair of luminance values.
- the value C obtained by Equation M11 may be a negative value. In this case, the value of the profile data element corresponding to the value A of the input signal IS and the value B of the unsharp signal US is set to 0.
- the value C obtained by Equation M11 may also be saturated, that is, it may exceed the maximum value of 255 that can be expressed in 8 bits. In this case, the profile data element corresponding to the value A of the input signal IS and the value B of the unsharp signal US is set to the value 255.
- in FIG. 2, each element of the profile data obtained in this way is displayed as contour lines.
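The clamping of the computed element value C to the 8-bit range can be sketched as follows. Equation M11 itself is not reproduced in this passage, so the element function `f` below (a simple contrast-gain form that stretches A away from B) is a hypothetical placeholder, not the patent's equation.

```python
def clamp8(c):
    """Clip a computed output value C to the 8-bit range [0, 255],
    as described for the profile data elements."""
    return max(0, min(255, round(c)))

def build_profile(f, size=64, lower_bits=2):
    """Fill a size x size profile matrix, evaluating f(A, B) at the
    pixel values represented by each upper-bit index pair.
    f stands in for Equation M11 (not reproduced here)."""
    step = 1 << lower_bits
    return [[clamp8(f(i * step, j * step)) for j in range(size)]
            for i in range(size)]

# Hypothetical element function: stretch IS (A) away from the local
# mean US (B) with gain 1.5, then clamp into [0, 255].
profile = build_profile(lambda a, b: b + 1.5 * (a - b))
```

Negative results (dark pixel on a bright background) land at 0 and saturated results at 255, exactly the two boundary cases the text describes.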
- the function R5 is a function for outputting the amplification factor signal GS from the unsharp signal US in the first conversion unit 409, and the function R6 is a function for outputting the corrected unsharp signal AS from the unsharp signal US in the second conversion unit 411.
- it is also possible to realize processing equivalent to the visual processing device 416 shown in FIG. 112.
- the function R8 is a function for outputting the LUT processing signal LS from the unsharp signal US.
- FIG. 3 shows a flowchart for explaining the visual processing method in the visual processing device 1.
- the visual processing method shown in FIG. 3 is realized by hardware in the visual processing device 1 and performs visual processing of the input signal I S (see FIG. 1).
- the input signal IS is spatially processed by a low-pass spatial filter (step S11) to acquire the unsharp signal US. Then the two-dimensional LUT 4 is referred to with the input signal IS and the unsharp signal US, and the output signal OS is output (step S12). The above processing is performed for each pixel input as the input signal IS.
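- the two steps above (step S11: spatial processing; step S12: table lookup) can be sketched as follows; the box low-pass filter and the 64×64 table indexed by the upper 6 bits are simplifying assumptions for illustration.

```python
def box_unsharp(img, w, h, r=1):
    # step S11: low-pass spatial filter (simple box average around each pixel);
    # img is a flat list of 8-bit luminance values in row-major order
    us = [0] * (w * h)
    for y in range(h):
        for x in range(w):
            acc = n = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    xx, yy = x + dx, y + dy
                    if 0 <= xx < w and 0 <= yy < h:
                        acc += img[yy * w + xx]
                        n += 1
            us[y * w + x] = acc // n
    return us

def visual_process(img, w, h, table):
    # step S12: per pixel, refer to the 64x64 table with the upper 6 bits
    # of the input signal IS and the unsharp signal US
    us = box_unsharp(img, w, h)
    return [table[p >> 2][u >> 2] for p, u in zip(img, us)]
```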
- Each step of the visual processing method shown in FIG. 3 may be realized as a visual processing program by a computer or the like.
- with conventional processing, pixels of the same density that exist at different locations in the image undergo the same brightness conversion. More specifically, when the dark background of a person in the image is brightened, the hair of the person, which has the same density, also becomes brighter.
- in contrast, the visual processing device 1 performs visual processing using profile data created based on a two-dimensional function of the value A of the input signal IS and the value B of the unsharp signal US. For this reason, pixels of the same density at different locations in the image can be brightened or darkened according to their surrounding information, instead of being converted uniformly, so that the brightness can be adjusted appropriately for each region in the image. More specifically, the background can be brightened without changing the density of the human hair of the same density in the image.
- the visual processing device 1 uses 2D LUT 4 to perform visual processing of the input signal IS.
- the hardware configuration of the visual processing device 1 does not depend on the visual processing effect to be realized.
- the visual processing device 1 can be configured with general-purpose hardware, which is effective in reducing hardware costs.
- the profile data registered in the two-dimensional LUT 4 can be changed by the profile data registration device 8. For this reason, various visual processing can be realized by changing the profile data, without changing the hardware configuration of the visual processing device 1. In addition, the visual processing device 1 can realize spatial processing and gradation processing at the same time.
- the profile data registered in the two-dimensional LUT 4 can be calculated in advance. Once the profile data is created, visual processing using it does not take a long time even if it realizes complicated processing. For this reason, whether configured in hardware or software, the visual processing device 1 can speed up visual processing without the processing time depending on the complexity of the visual processing.
- the 64×64 matrix profile data has been described.
- the effect of the present invention does not depend on the size of the profile data.
- the two-dimensional LUT 4 may have profile data for all combinations of values that the input signal IS and the unsharp signal US can take.
- that is, the profile data may be in a 256×256 matrix format. In this case, the memory capacity required for the two-dimensional LUT 4 increases, but more accurate visual processing can be realized.
- it has been explained that the profile data stores the value of the output signal OS for the upper 6-bit value of the luminance value of the input signal IS expressed in 8 bits and the upper 6-bit value of the luminance value of the unsharp signal US expressed in 8 bits.
- the visual processing device 1 may further include an interpolation unit that linearly interpolates the value of the output signal OS based on the adjacent profile data elements and the values of the lower 2 bits of the input signal IS and the unsharp signal US.
- the interpolation unit may be provided in the visual processing unit 3 and output a value obtained by linear interpolation of the value stored in the two-dimensional LUT 4 as the output signal OS.
- FIG. 4 shows a visual processing unit 500 including an interpolation unit 501 as a modification of the visual processing unit 3.
- the visual processing unit 500 includes a two-dimensional LUT 4 that gives the relationship between the input signal IS and the unsharp signal US and the pre-interpolation output signal NS, and
- an interpolation unit 501 that receives the pre-interpolation output signal NS, the input signal IS, and the unsharp signal US and outputs the output signal OS.
- the two-dimensional LUT 4 stores the value of the pre-interpolation output signal NS for the upper 6 bits of the luminance value of the input signal IS expressed in 8 bits and the upper 6 bits of the luminance value of the unsharp signal US expressed in 8 bits. The value of the pre-interpolation output signal NS is stored, for example, as an 8-bit value.
- given these values, the two-dimensional LUT 4 outputs the values of the four pre-interpolation output signals NS corresponding to the section that includes them.
- the section including each value is the section surrounded by the four pre-interpolation output signals NS stored for the four combinations: (the upper 6-bit value of the input signal IS, the upper 6-bit value of the unsharp signal US), (the smallest 6-bit value exceeding the upper 6-bit value of the input signal IS, the upper 6-bit value of the unsharp signal US), (the upper 6-bit value of the input signal IS, the smallest 6-bit value exceeding the upper 6-bit value of the unsharp signal US), and (the smallest 6-bit value exceeding the upper 6-bit value of the input signal IS, the smallest 6-bit value exceeding the upper 6-bit value of the unsharp signal US).
- the interpolation unit 501 receives the lower 2 bits of the input signal IS and the lower 2 bits of the unsharp signal US, and uses these values to linearly interpolate the values of the four pre-interpolation output signals NS output by the two-dimensional LUT 4. More specifically, the weighted average of the values of the four pre-interpolation output signals NS is calculated using the lower 2 bits of the input signal IS and the lower 2 bits of the unsharp signal US, and the output signal OS is output.
- the interpolation unit 501 may perform linear interpolation only for either the input signal IS or the unsharp signal US.
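- the interpolation described above can be sketched as follows, assuming a 64×64 table of pre-interpolation output signals NS and 8-bit signal values; the upper 6 bits select the section and the lower 2 bits weight the four stored values.

```python
def interp_lookup(table, is_val, us_val):
    # table: 64x64 pre-interpolation output signals NS (8-bit values);
    # the upper 6 bits of IS and US select the section, the lower 2 bits
    # give the weights for the four surrounding stored values
    i, j = is_val >> 2, us_val >> 2
    fi, fj = is_val & 3, us_val & 3           # lower 2 bits
    i1, j1 = min(i + 1, 63), min(j + 1, 63)   # clamp at the table edge
    n00, n10 = table[i][j], table[i1][j]
    n01, n11 = table[i][j1], table[i1][j1]
    top = n00 * (4 - fi) + n10 * fi           # weighted average along IS
    bot = n01 * (4 - fi) + n11 * fi
    return (top * (4 - fj) + bot * fj) // 16  # weighted average along US
```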
- as the unsharp signal US for the input signal IS of the target pixel, the average value (simple average or weighted average), the maximum value, the minimum value, or the median value of the input signal IS of the target pixel and its peripheral pixels may be output.
- the average value, maximum value, minimum value, or median value of only the peripheral pixels of the target pixel may be output as the unsharp signal US.
- it has been described that the value C of each element of the profile data is generated based on the linear function M11 of the value A of the input signal IS and the value B of the unsharp signal US.
- the value C of each element of the profile data may be created based on a nonlinear function with respect to the value A of the input signal IS.
- this makes it possible to realize visual processing according to visual characteristics, or visual processing suitable for the nonlinear characteristics of devices that handle images, such as computers, televisions, digital cameras, mobile phones, PDAs, printers, and scanners.
- the value C of each element of the profile data may also be created based on a nonlinear function, that is, a two-dimensional nonlinear function, with respect to the value A of the input signal IS and the value B of the unsharp signal US.
- the profile data is represented in a 64×64 matrix format, and the upper 6-bit value of the luminance value of the input signal IS expressed in 8 bits is shown in the column direction (vertical direction).
- the upper 6-bit value of the luminance value of the unsharp signal US expressed in 8 bits is shown in the row direction (horizontal direction).
- the value of the output signal OS is stored in 8 bits as the matrix element corresponding to the two luminance values.
- the conversion function F 1 is a common logarithmic function.
- the inverse transformation function F 2 is an exponential function (antilog) as an inverse function of the common logarithmic function.
- the enhancement function F3 is one of the enhancement functions R1 to R3 described with reference to FIG.
- the value C obtained by Equation M14 may be a negative value.
- the value of the profile data element corresponding to the value A of the input signal IS and the value B of the unsharp signal US may be 0.
- the value C obtained by Equation M14 may be saturated. In other words, the maximum value of 255 that can be expressed in 8 bits may be exceeded.
- the element of the profile data corresponding to the value A of the input signal IS and the value B of the unsharp signal US may be the value 255.
- the elements of the profile data obtained in this way are displayed as contour lines. A more detailed explanation of the nonlinear profile data is given in <Profile Data> below.
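- as an illustration only, assume Equation M14 has the form C = F2(F1(A) + F3(F1(A) − F1(B))), with F1 the common logarithm (conversion function), F2 its inverse (antilog), and F3 a simple linear enhancement F3(x) = α·x; the concrete form of F3 and the value of α are assumptions made for this sketch.

```python
import math

def clip8(c):
    # saturated or negative values are clipped to the 8-bit range
    return max(0, min(255, int(round(c))))

def profile_element(a, b, alpha=0.5):
    # assumed form: C = F2(F1(A) + F3(F1(A) - F1(B))) with
    # F1 = log10, F2 = 10**x (antilog), F3(x) = alpha * x
    an = max(a, 1) / 255.0       # avoid the logarithm of zero
    bn = max(b, 1) / 255.0
    c = 10.0 ** (math.log10(an) + alpha * (math.log10(an) - math.log10(bn)))
    return clip8(c * 255.0)
```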
- the profile data provided in the two-dimensional LUT 4 may include a plurality of gradation conversion curves (gamma curves) that realize gradation correction of the input signal IS.
- each gradation conversion curve is a monotonically increasing function, such as a gamma function having a different gamma coefficient, and is associated with the value of the unsharp signal US. The association is performed, for example, so that a gamma function having a large gamma coefficient is selected for a small unsharp signal US.
- the unsharp signal US thus plays the role of a selection signal for selecting at least one gradation conversion curve from the gradation conversion curve group included in the profile data.
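- the selection of a gradation conversion curve by the unsharp signal US can be sketched as follows; the gamma range 1.0 to 2.2 and the curve form y = x^(1/γ) are illustrative assumptions, chosen so that a larger gamma coefficient (a brighter curve) is selected for a smaller US.

```python
def select_gamma(us):
    # a larger gamma coefficient is associated with a smaller (darker)
    # unsharp signal US; the range 1.0..2.2 is an illustrative assumption
    return 2.2 - 1.2 * (us / 255.0)

def tone_convert(is_val, us_val):
    # apply the gradation conversion curve selected by US to IS
    g = select_gamma(us_val)
    return int(round(255.0 * (is_val / 255.0) ** (1.0 / g)))
```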
- it has been explained that the profile data registration device 8 is built in or connected to the visual processing device 1, stores a plurality of profile data created in advance by a PC or the like, and changes the registered contents of the two-dimensional LUT 4.
- the profile data stored in the profile data registration device 8 is created by a PC installed outside the visual processing device 1.
- the profile data registration device 8 acquires profile data from a PC via a network or a recording medium.
- the profile data registration device 8 registers one of a plurality of stored profile data in the two-dimensional LUT 4 according to predetermined conditions. This will be described in detail with reference to FIGS. Note that portions having substantially the same functions as those of the visual processing device 1 described with reference to FIG. 1 are given the same reference numerals, and detailed description thereof is omitted.
- FIG. 6 shows a block diagram of a visual processing device 520 that determines an image of the input signal IS and switches profile data to be registered in the two-dimensional LUT 4 based on the determination result.
- the visual processing device 520 includes, in addition to a structure similar to that of the visual processing device 1 shown in FIG. 1, a profile data registration unit 521 having the same function as the profile data registration device 8. Further, the visual processing device 520 includes an image determination unit 522.
- the image determination unit 522 receives the input signal IS and outputs the determination result SA of the input signal IS.
- the profile data registration unit 521 receives the determination result SA and outputs the profile data PD selected based on the determination result SA.
- the image determination unit 522 determines the image of the input signal IS. In the image determination, the brightness of the input signal IS is determined by acquiring pixel values such as the luminance and brightness of the input signal IS.
- the profile data registration unit 521 acquires the determination result SA and switches the profile data PD to be output based on the determination result SA. More specifically, for example, when the input signal IS is determined to be bright, a profile that compresses the dynamic range is selected. As a result, contrast can be maintained even for an overall bright image. In addition, a profile is selected in consideration of the characteristics of the device that displays the output signal OS, so that an output signal OS with an appropriate dynamic range is output.
- with the above, the visual processing device 520 can realize appropriate visual processing according to the input signal IS.
- the image determination unit 522 may determine not only pixel values such as luminance and brightness of the input signal IS but also image characteristics such as spatial frequency.
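- the switching performed by the image determination unit 522 and the profile data registration unit 521 can be sketched as follows; the average-luminance threshold and the profile names are hypothetical placeholders, not values given in the specification.

```python
def determine_image(pixels):
    # image determination unit 522: judge the brightness of the input
    # signal IS from its average luminance (threshold 128 is illustrative)
    avg = sum(pixels) / len(pixels)
    return "bright" if avg >= 128 else "dark"

# hypothetical registered profile data, keyed by the determination result SA
PROFILES = {"bright": "compress_dynamic_range", "dark": "expand_dark_region"}

def select_profile(pixels):
    # profile data registration unit 521: switch the profile data PD
    # to be registered in the 2D LUT based on the determination result SA
    return PROFILES[determine_image(pixels)]
```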
- FIG. 7 shows a block diagram of the visual processing device 525, which switches the profile data to be registered in the two-dimensional LUT 4 based on the input result of an input device for inputting conditions relating to brightness.
- the visual processing device 525 includes, in addition to the same structure as the visual processing device 1 shown in FIG. 1, a profile data registration unit 526 having the same function as the profile data registration device 8. Furthermore, the visual processing device 525 includes an input device 527 connected by wire or wirelessly. More specifically, the input device 527 is realized as an input button provided on an image handling device itself, such as a computer, television, digital camera, mobile phone, PDA, printer, or scanner that outputs the output signal OS, or as a remote control for such a device.
- the input device 527 is an input device for inputting conditions related to brightness. For example, it has switches such as “bright” and “dark”.
- the input device 527 outputs the input result SB by the operation of the user.
- the profile data registration unit 526 acquires the input result SB and switches the profile data PD to be output based on the input result SB. More specifically, for example, when the user inputs “bright”, a profile that compresses the dynamic range of the input signal IS is selected and output as the profile data PD. This makes it possible to maintain contrast even when the environment in which the device displaying the output signal OS is placed is in a “bright” state.
- with the above, the visual processing device 525 can realize appropriate visual processing in accordance with the input from the input device 527.
- the conditions related to brightness may include not only conditions related to the brightness of the ambient light around the medium that outputs the output signal, such as a computer, television, digital camera, mobile phone, or PDA, but also conditions related to the brightness of the output medium itself, such as printer paper. Further, they may be conditions related to the brightness of the medium itself from which the input signal is input, such as scanner paper.
- these may be input not only by a switch but also automatically by a photosensor.
- the input device 527 may be a device for directly operating the profile switching of the profile data registration unit 526 instead of inputting brightness conditions.
- in this case, the input device 527 may display a list of profile data in addition to the conditions regarding brightness and allow the user to make a selection.
- the input device 527 may be a device for identifying the user, for example, a camera for identifying the user or a device for inputting a user name.
- in this case, profile data suited to the identified user, for example, profile data that suppresses an excessive change in luminance, is selected.
- FIG. 8 shows a block diagram of a visual processing device 530 that switches the profile data to be registered in the two-dimensional LUT 4 based on the detection results of a brightness detection unit that detects two types of brightness.
- the visual processing device 530 includes, in addition to the same structure as the visual processing device 1 shown in FIG. 1, a profile data registration unit 531 having the same function as the profile data registration device 8. Further, the visual processing device 530 includes a brightness detection unit 532.
- the brightness detection unit 532 includes an image determination unit 522 and an input device 527.
- the image determination unit 522 and the input device 527 are the same as described with reference to FIGS. 6 and 7.
- the brightness detection unit 532 receives the input signal IS and outputs, as detection results, the determination result SA from the image determination unit 522 and the input result SB from the input device 527.
- the profile data registration unit 531 receives the determination result SA and the input result SB as inputs, and switches the profile data PD to be output based on the determination result SA and the input result SB. More specifically, for example, when the ambient light is in a “bright” state and the input signal IS is also determined to be bright, a profile that compresses the dynamic range of the input signal IS is selected and output as the profile data PD. This makes it possible to maintain contrast when displaying the output signal OS.
- with the above, the visual processing device 530 can realize appropriate visual processing.
- each profile data registration unit need not be provided integrally with the visual processing device.
- the profile data registration unit may be connected to the visual processing device via a network, as a server having a plurality of profile data or as a plurality of servers each having respective profile data.
- the network is, for example, connection means capable of communication, such as a dedicated line, a public line, the Internet, or a LAN, and may be wired or wireless.
- the judgment result SA and the input result SB are also transmitted from the visual processing device side to the profile data registration unit side through the same network.
- the profile data registration device 8 includes a plurality of profile data and realizes different visual processing by switching registration to the two-dimensional LUT 4.
- the visual processing device 1 may include a plurality of two-dimensional LUTs in which profile data for realizing different visual processing are registered. In this case, the visual processing device 1 may realize different visual processing by switching the input to each two-dimensional LUT or by switching the output from each two-dimensional LUT.
- the profile data registration device 8 may be a device that generates new profile data based on a plurality of profile data and registers the generated profile data in the two-dimensional LUT 4.
- FIG. 9 is a block diagram mainly illustrating a profile data registration device 701 as a modified example of the profile data registration device 8.
- the profile data registration device 701 is a device for switching the profile data registered in the two-dimensional LUT 4 of the visual processing device 1.
- the profile data registration device 701 includes a profile data registration unit 702 in which a plurality of profile data are registered, a profile creation execution unit 703 that generates new profile data based on the plurality of profile data,
- a parameter input unit 706 for inputting parameters for generating the new profile data, and a control unit 705 for controlling each unit.
- in the profile data registration unit 702, a plurality of profile data are registered, as in the profile data registration device 8 or the profile data registration units shown in FIGS. 6 to 8, and the selected profile data specified by the control signal c10 from the control unit 705 is read out.
- here, it is assumed that two selected profile data are read out from the profile data registration unit 702, and they are referred to as the first selected profile data d10 and the second selected profile data d11, respectively.
- the profile data read from the profile data registration unit 702 is determined by the input of the parameter input unit 706.
- in the parameter input unit 706, information on the desired visual processing effect, the degree of processing, the visual environment of the processed image, and the like is input as parameters, either manually or automatically by a sensor or the like.
- the control unit 705 designates, by the control signal c10, the profile data to be read out according to the parameters input by the parameter input unit 706, and designates the degree of synthesis of each profile data by the control signal c12.
- the profile creation execution unit 703 has a profile generation unit 704 that creates new profile data d6 from the first selected profile data d10 and the second selected profile data d11.
- the profile generation unit 704 acquires the first selected profile data d10 and the second selected profile data d11 from the profile data registration unit 702. Further, it acquires from the control unit 705 the control signal c12 specifying the degree of synthesis of each selected profile data.
- the profile generation unit 704 creates the generated profile data d6 of value [l] from the value [m] of the first selected profile data d10 and the value [n] of the second selected profile data d11, using the degree-of-synthesis value [k] specified by the control signal c12.
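- the synthesis of the two selected profile data can be sketched as follows, assuming the generated value [l] is the interpolation l = (1 − k)·m + k·n; this concrete formula is an assumption made for illustration, chosen so that k = 0 reproduces d10 and k = 1 reproduces d11.

```python
def synthesize(m, n, k):
    # one element [l] of the generated profile data d6 from element [m]
    # of d10 and element [n] of d11; l = (1-k)*m + k*n is an assumed form
    return int(round((1.0 - k) * m + k * n))

def generate_profile(d10, d11, k):
    # repeat over all elements of the selected profile data
    return [synthesize(m, n, k) for m, n in zip(d10, d11)]
```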
- the two-dimensional LUT 4 acquires the generated profile data d6 generated by the profile generation unit 704, and stores the acquired value at the address specified by the count signal c11 of the control unit 705.
- here, the generated profile data d6 is associated with the same image signal values with which each selected profile data used to create the generated profile data d6 is associated.
- next, a visual processing profile creation method executed in a visual processing device including the profile data registration device 701 will be described with reference to FIG.
- the count signal c10 from the control unit 705 designates the addresses of the profile data registration unit 702 at a constant count cycle, and the image signal values stored at the designated addresses are read out (step S701). Specifically, the control unit 705 outputs the count signal c10 according to the parameters input by the parameter input unit 706.
- the count signal c10 specifies the addresses of two profile data that realize different visual processing in the profile data registration unit 702.
- the first selected profile data d10 and the second selected profile data d11 are read from the profile data registration unit 702.
- the profile generation unit 704 acquires the control signal c12 specifying the degree of synthesis from the control unit 705 (step S702).
- the profile generation unit 704 creates the generated profile data d6 of value [l] from the value [m] of the first selected profile data d10 and the value [n] of the second selected profile data d11, using the degree-of-synthesis value [k] specified by the control signal c12 (step S703).
- the created generated profile data d6 is written to the two-dimensional LUT 4 (step S704).
- the address of the write destination is designated by the count signal c11 from the control unit 705 given to the two-dimensional LUT 4.
- the control unit 705 determines whether or not the processing has been completed for all the data of the selected profile data (step S705), and the processing from step S701 to step S705 is repeated until the processing is completed.
- the new profile data stored in 2D LUT 4 in this way is used to perform visual processing.
- as a result, the profile data registration unit 702 needs to hold only a small number of profile data while realizing visual processing with an arbitrary degree of processing, so the storage capacity of the profile data registration unit 702 can be reduced.
- the profile data registration device 701 may be provided not only in the visual processing device 1 shown in FIG. 1 but also in the visual processing devices shown in FIGS. 6 to 8.
- in this case, the profile data registration unit 702 and the profile creation execution unit 703 are used in place of the respective profile data registration units 521, 526, and 531 shown in FIGS. 6 to 8.
- likewise, the parameter input unit 706 and the control unit 705 may be used in place of the image determination unit 522 in FIG. 6, the input device 527 in FIG. 7, and the brightness detection unit 532 in FIG. 8.
- the visual processing device may be a device that converts the brightness of the input signal IS.
- a visual processing device 901 that converts brightness will be described with reference to FIG.
- the visual processing device 901 is a device for converting the brightness of the input signal IS', and includes a processing unit 902 that performs predetermined processing on the input signal IS' and outputs a processing signal US', and a conversion unit 903 that converts the input signal IS' using the input signal IS' and the processing signal US'.
- the processing unit 902 operates in the same manner as the spatial processing unit 2 (see FIG. 1) and performs spatial processing on the input signal IS'. Note that spatial processing as described in <Modification> (3) above may also be used.
- the conversion unit 903 includes a two-dimensional LUT, and outputs the output signal OS' (value [y]) based on the input signal IS' (value [x]) and the processing signal US' (value [z]).
- the value of each element of the two-dimensional LUT included in the conversion unit 903 is determined by applying the value [x] of the input signal IS' to the gain or offset determined according to the value of the function fk(z) related to the degree of brightness change.
- in the following, the function fk(z) related to the degree of brightness change is called the “degree-of-change function”, and the function that determines the value of each element of the two-dimensional LUT is called the “conversion function”; conversion functions (a) to (d) are shown below as examples.
- FIGS. 12(a) to (d) show the relationship between the input signal IS' and the output signal OS' when the degree-of-change function fk(z) is changed.
- the degree-of-change function f1(z) acts as the gain of the input signal IS'. For this reason, the value of the degree-of-change function f1(z) changes the gain of the input signal IS' and changes the value [y] of the output signal OS'.
- FIG. 12(a) shows the change in the relationship between the input signal IS' and the output signal OS' when the value of the degree-of-change function f1(z) changes.
- as the degree-of-change function f1(z) increases (f1(z) > 1), the value [y] of the output signal increases. That is, the converted image becomes brighter.
- as the degree-of-change function f1(z) decreases (f1(z) < 1), the value [y] of the output signal decreases. In other words, the converted image becomes darker.
- here, the degree-of-change function f1(z) is a function whose minimum value over the domain of the value [z] is not less than the value [0].
- if the value [y] of the output signal calculated by conversion function (a) exceeds the range of possible values, it may be clipped to that range. For example, if the value [y] of the output signal exceeds the value [1], it may be clipped to the value [1], and if it is less than the value [0], it may be clipped to the value [0].
- the same applies to the following conversion functions (b) to (d).
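- conversion function (a) with clipping can be sketched as follows; the concrete degree-of-change function f1(z) = 1.5 − z is an illustrative choice that is monotonically decreasing, crosses the value [1], and never falls below the value [0].

```python
def f1(z):
    # illustrative degree-of-change function acting as gain: monotonically
    # decreasing, crosses the value [1], minimum not less than the value [0]
    return 1.5 - z               # z in [0, 1]

def convert_a(x, z):
    # conversion function (a): y = f1(z) * x, clipped to the range [0, 1]
    return min(1.0, max(0.0, f1(z) * x))
```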
- the degree-of-change function f2(z) acts as an offset of the input signal IS'. For this reason, the offset of the input signal IS' changes and the value [y] of the output signal OS' changes depending on the value of the degree-of-change function f2(z).
- FIG. 12(b) shows the change in the relationship between the input signal IS' and the output signal OS' when the value of the degree-of-change function f2(z) changes.
- as the degree-of-change function f2(z) increases (f2(z) > 0), the value [y] of the output signal increases. That is, the converted image becomes brighter.
- as the degree-of-change function f2(z) decreases (f2(z) < 0), the value [y] of the output signal decreases. In other words, the converted image becomes darker.
- the degree-of-change function f1(z) acts as the gain of the input signal IS'.
- the degree-of-change function f2(z) acts as an offset of the input signal IS'. For this reason, the gain of the input signal IS' changes depending on the value of the degree-of-change function f1(z), and the offset of the input signal IS' changes depending on the value of the degree-of-change function f2(z).
- FIG. 12(c) shows the change in the relationship between the input signal IS' and the output signal OS' when the values of the degree-of-change functions f1(z) and f2(z) change.
- as the degree-of-change function f1(z) and the degree-of-change function f2(z) increase, the value [y] of the output signal increases. That is, the converted image becomes brighter.
- as the degree-of-change function f1(z) and the degree-of-change function f2(z) decrease, the value [y] of the output signal decreases. In other words, the converted image becomes darker.
- the degree-of-change function f2(z) determines the exponent of the power function. For this reason, the input signal IS' is converted according to the value of the degree-of-change function f2(z), and the value [y] of the output signal OS' changes.
- FIG. 12(d) shows the change in the relationship between the input signal IS' and the output signal OS' when the value of the degree-of-change function f2(z) changes.
- as the degree-of-change function f2(z) increases, the value [y] of the output signal increases. That is, the converted image becomes brighter.
- as the degree-of-change function f2(z) decreases, the value [y] of the output signal decreases. In other words, the converted image becomes darker.
- when the degree-of-change function f2(z) is the value [0], no conversion is performed on the input signal IS'.
- here, the value [x] of the input signal IS' is normalized to the range of value [0] to value [1].
- visual processing of the input signal IS' is performed by a two-dimensional LUT having elements determined using any one of the conversion functions (a) to (d) described above.
- each element of the two-dimensional LUT stores the value [y] for a value [x] and a value [z]. Therefore, visual processing for converting the brightness of the input signal IS' is realized based on the input signal IS' and the processing signal US'.
- Figure "! 3 (a) to (b) shows examples of monotonically decreasing change functions f 1 (z) and f 2).
- Each of the three graphs (a 1 to a 3 and b 1 to b 3) Are both examples of monotonically decreasing functions.
- the degree-of-change function f1(z) is a function having a range that crosses the value [1], and its minimum value over the domain of the value [z] is not less than the value [0].
- the degree-of-change function f2(z) is a function having a range that crosses the value [0].
- in a dark and large area of the image, the value [z] of the processing signal US' is small.
- since the degree-of-change functions are monotonically decreasing, the value of the degree-of-change function for a small value [z] is large.
- therefore, a dark and large area in the image is converted brightly. Thus, for example, in an image photographed with backlight, blocked-up shadows in a dark and large area are improved, and the visual effect is improved.
- in a bright and large area of the image, the value [z] of the processing signal US' is large.
- since the degree-of-change functions are monotonically decreasing, the value of the degree-of-change function for a large value [z] is small.
- therefore, when a two-dimensional LUT created based on the conversion functions (a) to (d) is used, a bright and large area in the image is converted darkly. Thus, for example, in an image having a bright portion such as the sky, blown-out highlights in a bright and large area are improved, and the visual effect is improved.
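- the behavior described above can be checked numerically with a small two-dimensional LUT built from conversion function (a); the degree-of-change function f1(z) = 1.5 − z and the 9-level quantization are illustrative assumptions.

```python
def f1(z):
    return 1.5 - z                    # illustrative monotonically decreasing gain

def build_lut(levels=9):
    # quantize value [x] and value [z] to 'levels' steps and store the
    # clipped value [y], as the 2D LUT of the conversion unit 903 would
    return [[min(1.0, max(0.0, f1(j / (levels - 1)) * (i / (levels - 1))))
             for j in range(levels)] for i in range(levels)]
```

In this sketch, a dark pixel in a dark area (small [z]) gets y > x, while a bright pixel in a bright area (large [z]) gets y < x.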
- the above-described conversion functions are examples, and any function may be used as long as the conversion has the same properties.
- the two-dimensional LUT may store values that are clipped to the range of values that can be handled as the output signal OS'.
- the conversion unit 903 may output the output signal OS' by calculating the conversion functions (a) to (d) for the input signal IS' and the processing signal US'.
- the visual processing device may include a plurality of spatial processing units and perform visual processing using a plurality of unsharp signals having different degrees of spatial processing.
- the visual processing device 905 is a device that performs visual processing of the input signal IS"; it performs first predetermined processing on the input signal IS" and outputs a first processing signal U1, and performs second predetermined processing and outputs a second processing signal U2.
- the first processing unit 906a and the second processing unit 906b operate in the same manner as the spatial processing unit 2 (see FIG. 1) and perform spatial processing of the input signal IS". It is also possible to perform the spatial processing described in <Modification> (3) above.
- the first processing unit 906a and the second processing unit 906b are different in the size of the area of the peripheral pixels used in the spatial processing.
- the first processing unit 906a uses peripheral pixels included in a region of 30 pixels vertically and 30 pixels horizontally around the target pixel (small unsharp signal).
- the second processing unit 906b uses peripheral pixels included in a region of 90 pixels vertically and 90 pixels horizontally around the target pixel (large unsharp signal). Note that the regions of peripheral pixels described here are only examples; to fully exhibit the visual processing effect, it is preferable to generate the unsharp signals from a considerably wide region.
- the conversion unit 908 includes a LUT, and outputs the output signal OS" (value [y]) based on the input signal IS" (value [x]), the first processing signal U1 (value [z1]), and the second processing signal U2 (value [z2]).
- the LUT provided in the conversion unit 908 is a 3D LUT that stores the value [y] of the output signal OS" for the value [x] of the input signal IS", the value [z1] of the first processing signal U1, and the value [z2] of the second processing signal U2.
- This 3D LUT can realize the processing described in the above embodiment and the following embodiment.
- the 3D LUT can realize a conversion in which «the brightness of the input signal IS" is converted» and a conversion in which «the input signal IS" is emphasized», as well as a combination of the two.
- for example, when the value [z1] of the first processing signal U1 is small, the conversion unit 908 performs conversion so as to brighten the input signal IS"; however, when the value [z2] of the second processing signal U2 is also small, the degree of brightening is suppressed.
- the value of each element of the three-dimensional LUT provided in the conversion unit 908 is determined based on the following conversion function (e) or (f).
- the degree-of-change functions f11(z1) and f12(z2) are the same kind of functions as the degree-of-change function f1(z) described in <Modification> (8) above.
- the degree-of-change function f11(z1) and the degree-of-change function f12(z2) are different functions.
- [f11(z1) * f12(z2)] acts as a gain on the input signal IS"; as the value of the first processing signal U1 and the value of the second processing signal U2 change, the gain of the input signal IS" changes, and the value [y] of the output signal OS" changes.
- the degree-of-change functions f21(z1) and f22(z2) are the same kind of functions as the degree-of-change function f2(z) described in <Modification> (8) above.
- the degree-of-change function f21(z1) and the degree-of-change function f22(z2) are different functions.
- [f21(z1) - f22(z2)] acts as an offset on the input signal IS"; as the values of the first processing signal U1 and the second processing signal U2 change, the offset of the input signal IS" changes, and the value [y] of the output signal OS" changes.
- the processing in the conversion unit 908 is not limited to processing using the three-dimensional LUT; the same calculation as the conversion functions (e) and (f) may be performed.
- each element of the three-dimensional LUT need not be strictly determined based on the conversion functions (e) and (f).
- when the conversion in the conversion unit 908 is a conversion that emphasizes the input signal IS", it is possible to independently emphasize a plurality of frequency components.
- if the conversion further emphasizes the first processing signal U1, shading components having a relatively high frequency can be emphasized; if the conversion further emphasizes the second processing signal U2, shading components having a low frequency can be emphasized.
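A minimal sketch of how a conversion unit like 908 could be backed by a 3D LUT. The grid size and the concrete degree-of-change functions below are assumptions chosen only to satisfy the stated conditions (monotonically decreasing; the gain-type function crosses [1] with a floor of at least [0], the offset-type function crosses [0]); they are not the patent's f11, f12, f21, f22:

```python
def build_3d_lut(f, n=17):
    """Sample y = f(x, z1, z2) on an n x n x n grid of 8-bit levels,
    clipping each stored value to the output range [0, 255]."""
    step = 255.0 / (n - 1)
    grid = [i * step for i in range(n)]
    return [[[min(255.0, max(0.0, f(x, z1, z2)))
              for z2 in grid]
             for z1 in grid]
            for x in grid]

# Illustrative degree-of-change functions (not the patent's):
def gain(z):                     # monotonically decreasing, crosses 1, floor >= 0
    return max(0.2, 1.6 - 1.2 * (z / 255.0))

def offset(z):                   # monotonically decreasing, crosses 0
    return 40.0 - 80.0 * (z / 255.0)

# Gain-type conversion (e): y = f11(z1) * f12(z2) * x
lut_gain = build_3d_lut(lambda x, z1, z2: gain(z1) * gain(z2) * x)
# Offset-type conversion (f): y = x + f21(z1) - f22(z2)
lut_offset = build_3d_lut(lambda x, z1, z2: x + offset(z1) - offset(z2))
```

For brevity the same offset function is reused for both f21 and f22, so the two terms cancel when z1 = z2; in the document the paired functions are different, so the two unsharp scales contribute distinct offsets.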
- the visual processing device 1 can include profile data that realizes various visual processing other than that described above. Below, for each of the first to seventh profile data realizing various visual processing, the formula characterizing the profile data and the configuration of a visual processing device that realizes visual processing equivalent to the visual processing device 1 having that profile data are shown.
- each profile data is determined based on a mathematical formula including an operation that emphasizes a value calculated from the input signal IS and the unsharp signal US.
- the emphasizing operation is, for example, an operation using a nonlinear enhancement function.
- the first profile data is determined based on an operation including a function that emphasizes a difference between respective conversion values obtained by performing predetermined conversion on the input signal IS and the unsharp signal US.
- thereby, the input signal IS and the unsharp signal US can each be converted into a separate space and their difference emphasized. This makes it possible, for example, to realize enhancement that matches the visual characteristics.
- the conversion function F1 is a common logarithmic function.
- the inverse conversion function F2 is an exponential function (antilog), the inverse function of the common logarithmic function.
- the enhancement function F3 is one of the enhancement functions R1 to R3 described with reference to FIG.
- FIG. 15 shows a visual processing device 11 equivalent to the visual processing device 1 in which the first profile data is registered in the two-dimensional LUT 4.
- the visual processing device 11 is a device that outputs the output signal OS based on an operation that emphasizes the difference between conversion values obtained by performing predetermined conversion on the input signal IS and the unsharp signal US. This makes it possible to convert the input signal IS and the unsharp signal US into a separate space and then enhance their difference; for example, enhancement suited to the visual characteristics can be realized.
- the visual processing device 11 shown in FIG. 15 includes a spatial processing unit 12 that performs spatial processing on the luminance value of each pixel of the original image acquired as the input signal IS and outputs an unsharp signal US, and a visual processing unit 13 that performs visual processing of the original image using the input signal IS and the unsharp signal US and outputs an output signal OS.
- since the spatial processing unit 12 performs the same operation as the spatial processing unit 2 included in the visual processing device 1, description thereof is omitted.
- the visual processing unit 13 includes a signal space conversion unit 14 that converts the signal space of the input signal IS and the unsharp signal US and outputs a converted input signal TIS and a converted unsharp signal TUS; a subtraction unit 17 that receives the converted input signal TIS as a first input and the converted unsharp signal TUS as a second input and outputs a difference signal DS that is the difference between them; an enhancement processing unit 18 that receives the difference signal DS as an input and outputs an enhancement processing signal TS that has been subjected to enhancement processing; an addition unit 19 that receives the converted input signal TIS as a first input and the enhancement processing signal TS as a second input and outputs an addition signal PS obtained by adding the two; and an inverse conversion unit 20 that receives the addition signal PS as an input and outputs the output signal OS.
- the signal space conversion unit 14 includes a first conversion unit 15 that receives the input signal IS as an input and outputs the converted input signal TIS, and a second conversion unit 16 that receives the unsharp signal US as an input and outputs the converted unsharp signal TUS.
- the first conversion unit 15 converts the input signal IS having the value A into the converted input signal TIS having the value F1(A) using the conversion function F1.
- the second conversion unit 16 converts the unsharp signal US having the value B into the converted unsharp signal TUS having the value F1(B) using the conversion function F1.
- the subtraction unit 17 calculates the difference between the converted input signal TIS having the value F1(A) and the converted unsharp signal TUS having the value F1(B), and outputs the difference signal DS having the value F1(A) - F1(B).
- the enhancement processing unit 18 uses the enhancement function F3 to output, from the difference signal DS having the value F1(A) - F1(B), the enhancement processing signal TS having the value F3(F1(A) - F1(B)).
- the addition unit 19 adds the converted input signal TIS having the value F1(A) and the enhancement processing signal TS having the value F3(F1(A) - F1(B)), and outputs the addition signal PS having the value F1(A) + F3(F1(A) - F1(B)).
- the inverse conversion unit 20 inversely converts the addition signal PS having the value F1(A) + F3(F1(A) - F1(B)) using the inverse conversion function F2, and outputs the output signal OS having the value F2(F1(A) + F3(F1(A) - F1(B))).
- the calculation using the conversion function F1, the inverse conversion function F2, and the enhancement function F3 may be performed using a one-dimensional LUT for each function, or may be performed without using a LUT.
- the visual processing device 1 and the visual processing device 11 having the first profile data have the same visual processing effect.
- Visual processing using the converted input signal TIS and the converted unsharp signal TUS, converted to logarithmic space by the conversion function F1, is realized.
- Since human visual characteristics are logarithmic, visual processing suited to the visual characteristics can be realized by converting to logarithmic space before processing.
- Each visual processing device realizes contrast enhancement in logarithmic space.
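The chain above (F1 = logarithm, subtract, F3, add, F2 = antilog) can be sketched per pixel as follows; the linear-gain stand-in for the enhancement function F3 is an assumption (the document uses one of the enhancement functions R1 to R3):

```python
import math

def visual_process_m1(a, b, f3=lambda d: 0.5 * d):
    """Sketch of Equation M1: C = F2(F1(A) + F3(F1(A) - F1(B))),
    with F1 = common logarithm and F2 = its inverse (antilog).
    a: input signal IS value, b: unsharp signal US value (both > 0)."""
    ta = math.log10(a)           # converted input signal TIS
    tb = math.log10(b)           # converted unsharp signal TUS
    ps = ta + f3(ta - tb)        # emphasized log-space difference added back
    return 10.0 ** ps            # inverse conversion F2
```

As stated for Equation M1, a result falling outside the representable output range would then be clipped to 0 or 255.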
- the conventional visual processing device 400 shown in FIG. 10 is generally used to perform edge enhancement using an unsharp signal US with a small degree of blur.
- the visual processing device 400 is under-enhanced in the bright part of the original image and over-enhanced in the dark part, and such visual processing is not suited to the visual characteristics. In other words, correction in the brightening direction tends to be insufficient, while correction in the darkening direction tends to be excessive.
- when visual processing is performed using the visual processing device 1 or the visual processing device 11, visual processing suited to the visual characteristics can be performed from the dark part to the bright part; enhancement in the brightening direction and enhancement in the darkening direction are well balanced.
- in Equation M1, if the value C of an element of the profile data obtained by Equation M1 exceeds the range 0 ≤ C ≤ 255, the value of that element may be set to 0 or 255, respectively. This prevents the corrected pixel signal from failing by becoming negative or by saturating, and applies regardless of the bit length used to represent the elements of the profile data.
- the conversion function F1 is not limited to a logarithmic function.
- for example, the conversion function F1 may be a conversion that removes the gamma correction (for example, a gamma coefficient of [0.45]) applied to the input signal IS, and the inverse conversion function F2 may be a conversion that applies that gamma correction.
- in this case, the gamma correction applied to the input signal IS can be removed and processing can be performed with linear characteristics; for this reason, correction of optical blur can be performed.
- the visual processing unit 13 may calculate the above Equation M1 based on the input signal IS and the unsharp signal US without using the two-dimensional LUT 4.
- in this case, a one-dimensional LUT may be used in the calculation of each of the functions F1 to F3.
- the second profile data is determined based on an operation including a function that emphasizes the ratio between the input signal IS and the unsharp signal US. This makes it possible, for example, to realize visual processing that emphasizes the sharp component.
- further, the second profile data is determined based on an operation that performs dynamic range compression of the input signal IS while emphasizing the ratio between the input signal IS and the unsharp signal US. This makes it possible, for example, to realize visual processing that compresses the dynamic range while enhancing the sharp component.
- the dynamic range compression function F4 is a monotonically increasing function such as an upward convex power function.
- for example, F4(x) = x^γ (0 < γ < 1).
- the enhancement function F5 is a power function.
- for example, F5(x) = x^α (0 < α ≤ 1).
- FIG. 16 shows a visual processing device 21 equivalent to the visual processing device 1 in which the second profile data is registered in the two-dimensional LUT 4.
- the visual processing device 21 is a device that outputs the output signal OS based on an operation that emphasizes the ratio between the input signal IS and the unsharp signal US. Thereby, for example, visual processing that emphasizes the sharp component can be realized.
- further, the visual processing device 21 outputs the output signal OS based on an operation that performs dynamic range compression of the input signal IS while emphasizing the ratio between the input signal IS and the unsharp signal US. This makes it possible, for example, to realize visual processing that compresses the dynamic range while enhancing the sharp component.
- the visual processing device 21 shown in FIG. 16 includes a spatial processing unit 22 that performs spatial processing on the luminance value of each pixel of the original image acquired as the input signal IS and outputs an unsharp signal US, and a visual processing unit 23 that performs visual processing of the original image using the input signal IS and the unsharp signal US and outputs an output signal OS.
- since the spatial processing unit 22 performs the same operation as the spatial processing unit 2 included in the visual processing device 1, description thereof is omitted.
- the visual processing unit 23 includes a division unit 25 that receives the input signal IS as a first input and the unsharp signal US as a second input and outputs a division signal RS obtained by dividing the input signal IS by the unsharp signal US; an enhancement processing unit 26 that receives the division signal RS as an input and outputs an enhancement processing signal TS; and an output processing unit 27 that receives the input signal IS as a first input and the enhancement processing signal TS as a second input and outputs the output signal OS.
- the output processing unit 27 includes a DR compression unit 28 that receives the input signal IS as an input and outputs a DR compression signal DRS whose dynamic range (DR) has been compressed, and a multiplication unit 29 that receives the DR compression signal DRS as a first input and the enhancement processing signal TS as a second input and outputs the output signal OS.
- the division unit 25 divides the input signal IS having the value A by the unsharp signal US having the value B, and outputs the division signal RS having the value A/B.
- the enhancement processing unit 26 uses the enhancement function F5 to output, from the division signal RS having the value A/B, the enhancement processing signal TS having the value F5(A/B).
- the DR compression unit 28 uses the dynamic range compression function F4 to output, from the input signal IS having the value A, the DR compression signal DRS having the value F4(A).
- the multiplication unit 29 multiplies the DR compression signal DRS having the value F4(A) by the enhancement processing signal TS having the value F5(A/B), and outputs the output signal OS having the value F4(A) * F5(A/B).
- the calculation using the dynamic range compression function F4 and the enhancement function F5 may be performed using a one-dimensional LUT for each function, or may be performed without using the LUT.
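The walkthrough above amounts to the per-pixel formula C = F4(A) * F5(A/B); the concrete exponents below are assumptions consistent with the stated conditions (and with the later A / B^0.4 example when γ = 0.6, α = 0.4):

```python
def visual_process_m2(a, b, gamma=0.6, alpha=0.4):
    """Sketch of Equation M2: C = F4(A) * F5(A/B) with power functions
    F4(x) = x**gamma (0 < gamma < 1, upward-convex DR compression) and
    F5(x) = x**alpha (enhancement of the sharp signal A/B)."""
    return (a ** gamma) * ((a / b) ** alpha)
```

With gamma + alpha = 1 this reduces to A / B**alpha, so for locally constant B the output is proportional to A: the local contrast is preserved while the overall range is compressed.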
- the visual processing device 1 and the visual processing device 21 having the second profile data have the same visual processing effect.
- the dynamic range compression function F4 shown in FIG. 17 compresses the dynamic range of the input signal IS without saturation from the dark part to the highlight.
- when the target black level of the image signal before compression is L0 and the maximum white level is L1, the dynamic range L1:L0 before compression is compressed to the dynamic range Q1:Q0 after compression.
- the contrast, which is the ratio of image signal levels, decreases to (Q1/Q0) * (L0/L1) times due to the compression of the dynamic range.
- here, the dynamic range compression function F4 is an upward convex power function or the like.
- meanwhile, the division signal RS having the value A/B, that is, the sharp signal, is enhanced by the enhancement function F5 and multiplied by the DR compression signal DRS.
- human vision has the property that if the local contrast is maintained, the same contrast can be seen even if the overall contrast is reduced.
- the visual processing device 1 and the visual processing device 21 having the second profile data can realize visual processing that does not visually reduce contrast while compressing the dynamic range.
- for example, with F4(x) = x^0.6 and F5(x) = x^0.4, the value C of the output signal OS is C = F4(A) * F5(A/B) = A / (B^0.4).
- in a local region, the value B of the unsharp signal US can be considered constant, so C is proportional to A. That is, the ratio of the amount of change of the value C to the amount of change of the value A is 1, and the local contrast does not change between the input signal IS and the output signal OS.
- therefore, the visual processing device 1 and the visual processing device 21 including the second profile data can realize visual processing that does not visually reduce contrast while compressing the dynamic range.
- this is particularly effective in the following situations: on a display with a narrow physical dynamic range, a high-contrast image can be reproduced without crushing the dark portion or blowing out the bright portion; likewise, for example, when a high-contrast image is displayed on a projector or TV in a bright environment, or when a high-contrast print is to be obtained with low-density ink (a printer having only light colors).
- the visual processing unit 23 may compute the above Equation M2 based on the input signal IS and the unsharp signal US without using the two-dimensional LUT 4. In this case, a one-dimensional LUT may be used in the calculation of each of the functions F4 and F5.
- in Equation M2, if the value C of an element of the profile data obtained by Equation M2 satisfies C > 255, the value C of that element may be set to 255.
- the third profile data is determined based on an operation including a function that emphasizes the ratio between the input signal IS and the unsharp signal US. This makes it possible, for example, to realize visual processing that emphasizes the sharp component.
- the dynamic range compression function F4 is a direct proportional function with a proportionality factor of 1.
- FIG. 19 shows a visual processing device 31 equivalent to the visual processing device 1 in which the third profile data is registered in the two-dimensional LUT 4.
- the visual processing device 31 is a device that outputs the output signal OS based on a calculation that emphasizes the ratio between the input signal IS and the unsharp signal US. Thereby, for example, it is possible to realize visual processing for emphasizing the sharp component.
- the visual processing device 31 shown in FIG. 19 is different from the visual processing device 21 shown in FIG. 16 in that the DR compression unit 28 is not provided.
- the visual processing device 31 includes a spatial processing unit 22 that performs spatial processing on the luminance value of each pixel of the original image acquired as the input signal IS and outputs an unsharp signal US, and a visual processing unit 32 that performs visual processing of the original image using the input signal IS and the unsharp signal US and outputs an output signal OS.
- since the spatial processing unit 22 performs the same operation as the spatial processing unit 2 included in the visual processing device 1, description thereof is omitted.
- the visual processing unit 32 includes a division unit 25 that receives the input signal IS as a first input and the unsharp signal US as a second input and outputs a division signal RS obtained by dividing the input signal IS by the unsharp signal US; an enhancement processing unit 26 that receives the division signal RS as an input and outputs an enhancement processing signal TS; and a multiplication unit 33 that receives the input signal IS as a first input and the enhancement processing signal TS as a second input and outputs the output signal OS.
- the division unit 25 and the enhancement processing unit 26 perform the same operations as those described for the visual processing device 21 shown in FIG. 16.
- the multiplication unit 33 multiplies the input signal IS having the value A by the enhancement processing signal TS having the value F5(A/B), and outputs the output signal OS having the value A * F5(A/B).
- the enhancement function F5 is the same as that shown in FIG.
- the calculation using the enhancement function F5 may be performed using a one-dimensional LUT, or may be performed without using a LUT, as described for the visual processing device 21 shown in FIG. 16.
- the visual processing device 1 and the visual processing device 31 including the third profile data have the same visual processing effect.
- the enhancement processing unit 26 performs enhancement processing of the sharp signal (division signal RS) expressed as the ratio of the input signal IS to the unsharp signal US, and the enhanced sharp signal is multiplied by the input signal IS.
- Enhancing the sharp signal expressed as the ratio of the input signal IS and the unsharp signal US is equivalent to calculating the difference between the input signal IS and the unsharp signal US in logarithmic space. That is, visual processing suitable for logarithmic human visual characteristics is realized.
- further, the amount of enhancement by the enhancement function F5 increases when the input signal IS is large (bright) and decreases when it is small (dark); the amount of enhancement in the brightening direction is larger than the amount of enhancement in the darkening direction. For this reason, visual processing suited to the visual characteristics, natural and well balanced, is realized.
- in Equation M3, if the value C of an element of the profile data obtained by Equation M3 satisfies C > 255, the value C of that element may be set to 255.
- here, the dynamic range of the input signal IS is not compressed, but enhancing the local contrast makes it possible to visually compress or expand the dynamic range.
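Equation M3 drops the DR compression (F4 becomes the identity, a direct proportion with coefficient 1), leaving C = A * F5(A/B); the exponent below is an illustrative assumption:

```python
def visual_process_m3(a, b, alpha=0.4):
    """Sketch of Equation M3: C = A * F5(A/B), with an illustrative
    power-function enhancement F5(x) = x**alpha of the sharp signal A/B."""
    return a * ((a / b) ** alpha)
```

Pixels brighter than their surroundings (A > B) are pushed up, darker ones are pushed down, and the absolute amount of enhancement grows with A, matching the balance described above.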
- the fourth profile data is determined based on an operation including a function that emphasizes the difference between the input signal IS and the unsharp signal US according to the value of the input signal IS.
- the sharp component of the input signal IS can be enhanced according to the value of the input signal IS. For this reason, it is possible to appropriately enhance the input signal IS from the dark part to the bright part.
- further, the fourth profile data is determined based on an operation that adds a value obtained by compressing the dynamic range of the input signal IS to the emphasized value. This makes it possible to compress the dynamic range while enhancing the sharp component of the input signal IS according to the value of the input signal IS.
- the enhancement amount adjustment function F6 is a function that monotonically increases with respect to the value of the input signal IS. That is, when the value A of the input signal IS is small, the value of the enhancement amount adjustment function F6 is small, and when the value A of the input signal IS is large, the value of the enhancement amount adjustment function F6 is also large.
- the enhancement function F7 is one of the enhancement functions R1 to R3 described with reference to FIG.
- FIG. 20 shows a visual processing device 41 equivalent to the visual processing device 1 in which the fourth profile data is registered in the two-dimensional LUT 4.
- the visual processing device 41 is a device that outputs the output signal OS based on a calculation that emphasizes the difference between the input signal IS and the unsharp signal US according to the value of the input signal IS.
- thereby, the sharp component of the input signal IS can be emphasized according to the value of the input signal IS. For this reason, the input signal IS can be appropriately enhanced from the dark part to the bright part.
- further, the visual processing device 41 outputs the output signal OS based on an operation that adds a value obtained by compressing the dynamic range of the input signal IS to the emphasized value. This makes it possible to compress the dynamic range while enhancing the sharp component of the input signal IS according to the value of the input signal IS.
- the visual processing device 41 shown in FIG. 20 includes a spatial processing unit 42 that performs spatial processing on the luminance value of each pixel of the original image acquired as the input signal IS and outputs an unsharp signal US, and a visual processing unit 43 that performs visual processing of the original image using the input signal IS and the unsharp signal US and outputs an output signal OS.
- since the spatial processing unit 42 performs the same operation as the spatial processing unit 2 included in the visual processing device 1, description thereof is omitted.
- the visual processing unit 43 includes a subtraction unit 44 that receives the input signal IS as a first input and the unsharp signal US as a second input and outputs a difference signal DS that is the difference between them; an enhancement processing unit 45 that receives the difference signal DS as an input and outputs an enhancement processing signal TS; an enhancement amount adjustment unit 46 that receives the input signal IS as an input and outputs an enhancement amount adjustment signal IC; a multiplication unit 47 that receives the enhancement amount adjustment signal IC as a first input and the enhancement processing signal TS as a second input and outputs a multiplication signal MS obtained by multiplying the two; and an output processing unit 48 that receives the input signal IS as a first input and the multiplication signal MS as a second input and outputs the output signal OS.
- the output processing unit 48 includes a DR compression unit 49 that receives the input signal IS as an input and outputs a DR compression signal DRS whose dynamic range (DR) has been compressed, and an addition unit 50 that receives the DR compression signal DRS as a first input and the multiplication signal MS as a second input and outputs the output signal OS.
- the subtraction unit 44 calculates the difference between the input signal IS having the value A and the unsharp signal US having the value B, and outputs the difference signal DS having the value A - B.
- the enhancement processing unit 45 uses the enhancement function F7 to output, from the difference signal DS having the value A - B, the enhancement processing signal TS having the value F7(A - B).
- the enhancement amount adjustment unit 46 uses the enhancement amount adjustment function F6 to output, from the input signal IS having the value A, the enhancement amount adjustment signal IC having the value F6(A).
- the multiplication unit 47 multiplies the enhancement amount adjustment signal IC having the value F6(A) by the enhancement processing signal TS having the value F7(A - B), and outputs the multiplication signal MS having the value F6(A) * F7(A - B).
- the DR compression unit 49 uses the dynamic range compression function F8 to output, from the input signal IS having the value A, the DR compression signal DRS having the value F8(A).
- the addition unit 50 adds the DR compression signal DRS and the multiplication signal MS having the value F6(A) * F7(A - B), and outputs the output signal OS having the value F8(A) + F6(A) * F7(A - B).
- the calculation using the enhancement amount adjustment function F6, the enhancement function F7, and the dynamic range compression function F8 may be performed using a one-dimensional LUT for each function, or may be performed without using a LUT.
- the visual processing device 1 and the visual processing device 41 including the fourth profile data have the same visual processing effect.
- the enhancement amount adjustment function F6 is a monotonically increasing function, but it can be a function in which the amount of increase in the function value decreases as the value A of the input signal IS increases. In this case, the value of the output signal OS is prevented from saturating.
- if the enhancement function F7 is the enhancement function R2 described with reference to FIG. 10, the enhancement amount when the absolute value of the difference signal DS is large can be suppressed. For this reason, the enhancement amount in a portion with high sharpness is prevented from saturating, and visually natural processing can be executed.
- the visual processing unit 43 may calculate the above Equation M4 based on the input signal IS and the unsharp signal US without using the two-dimensional LUT 4.
- in this case, a one-dimensional LUT may be used in the calculation of each of the functions F6 to F8.
- the enhancement processing unit 45 is not particularly required.
- in Equation M4, if the value C of an element of the profile data obtained by Equation M4 exceeds the range 0 ≤ C ≤ 255, the value C of that element may be set to 0 or 255.
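The fourth-profile walkthrough corresponds to C = F8(A) + F6(A) * F7(A - B); the three concrete functions below are stand-ins chosen only to satisfy the stated conditions (F6 monotonically increasing, F7 a simple linear enhancement in place of R1 to R3, F8 an upward-convex power function):

```python
def visual_process_m4(a, b,
                      f6=lambda x: x / 255.0,                     # enhancement amount adjustment
                      f7=lambda d: 0.8 * d,                       # enhancement of the difference
                      f8=lambda x: 255.0 * (x / 255.0) ** 0.6):   # DR compression
    """Sketch of Equation M4: C = F8(A) + F6(A) * F7(A - B)."""
    return f8(a) + f6(a) * f7(a - b)
```

Because F6 grows with A, the same local difference A - B is enhanced more in bright regions than in dark ones, which counteracts the over-enhancement of dark parts noted for the conventional device 400.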
- the fifth profile data is determined based on an operation including a function that emphasizes the difference between the input signal IS and the unsharp signal US according to the value of the input signal IS.
- thereby, the sharp component of the input signal IS can be emphasized according to the value of the input signal IS. For this reason, the input signal IS can be appropriately enhanced from the dark part to the bright part.
- the dynamic range compression function F 8 may be a direct proportional function with a proportional coefficient of 1.
- FIG. 21 shows a visual processing device 51 equivalent to the visual processing device 1 in which the fifth profile data is registered in the two-dimensional LUT 4.
- the visual processing device 51 is a device that outputs the output signal OS based on a calculation that emphasizes the difference between the input signal IS and the unsharp signal US according to the value of the input signal IS.
- the sharp component of the input signal IS can be emphasized according to the value of the input signal IS. For this reason, it is possible to appropriately enhance the input signal IS from the dark part to the bright part.
- the visual processing device 51 shown in FIG. 21 is different from the visual processing device 41 shown in FIG. 20 in that the DR compression unit 49 is not provided.
- portions that perform the same operations as those of the visual processing device 41 shown in FIG. 20 are denoted by the same reference numerals, and detailed description thereof is omitted.
- the visual processing device 51 includes a spatial processing unit 42 that performs spatial processing on the luminance value of each pixel of the original image acquired as the input signal IS and outputs an unsharp signal US, and a visual processing unit 52 that performs visual processing of the original image using the input signal IS and the unsharp signal US and outputs an output signal OS.
- since the spatial processing unit 42 performs the same operation as the spatial processing unit 2 included in the visual processing device 1, description thereof is omitted.
- the visual processing unit 52 includes a subtraction unit 44 that receives the input signal IS as a first input and the unsharp signal US as a second input and outputs a difference signal DS that is the difference between them; an enhancement processing unit 45 that receives the difference signal DS as an input and outputs an enhancement processing signal TS; an enhancement amount adjustment unit 46 that receives the input signal IS as an input and outputs an enhancement amount adjustment signal IC; a multiplication unit 47 that receives the enhancement amount adjustment signal IC as a first input and the enhancement processing signal TS as a second input and outputs a multiplication signal MS obtained by multiplying the two; and an addition unit 53 that receives the input signal IS as a first input and the multiplication signal MS as a second input and outputs the output signal OS.
- the subtraction unit 44, the enhancement processing unit 45, the enhancement amount adjustment unit 46, and the multiplication unit 47 perform the same operations as those described for the visual processing device 41 shown in FIG.
- the addition unit 53 adds the input signal IS having the value A and the multiplication signal MS having the value F6(A) * F7(A − B), and outputs the output signal OS having the value A + F6(A) * F7(A − B).
- the visual processing device 1 and the visual processing device 51 having the fifth profile data have the same visual processing effect.
- further, in the visual processing device 51, as in the visual processing device 41 having the fourth profile data, the amount of enhancement of the difference signal DS is adjusted by the value A of the input signal IS. For this reason, it becomes possible to make the contrast enhancement amount uniform from the dark part to the bright part.
- the enhancement processing unit 45 need not be provided.
- if the value C of an element in the profile data obtained by Equation M5 exceeds the range 0 ≤ C ≤ 255, the value C of that element may be set to 0 or 255.
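- The device-51 pipeline, i.e. Equation M5 with the element value C = A + F6(A) * F7(A − B), can be sketched per pixel as follows. The concrete choices of F6 and F7 below are assumptions for illustration only; the text merely requires F7 to be an enhancement function such as R1 to R3 and F6 to be an enhancement-amount adjustment function of the value A.

```python
def F6(a):
    # Enhancement-amount adjustment function (assumed: grows with brightness
    # so the enhancement amount is adjusted by the value A).
    return a / 255.0

def F7(d):
    # Enhancement function (assumed: identity, i.e. the plain difference).
    return d

def device_51(a, b):
    """One pixel of visual processing device 51: OS = A + F6(A) * F7(A - B)."""
    ds = a - b            # subtraction unit 44: difference signal DS
    ts = F7(ds)           # enhancement processing unit 45: signal TS
    ic = F6(a)            # enhancement amount adjustment unit 46: signal IC
    ms = ic * ts          # multiplication unit 47: signal MS
    os_value = a + ms     # addition unit 53: output signal OS
    return min(max(os_value, 0), 255)  # limit to 0..255 as stated for Equation M5
```

For example, device_51(200, 150) raises an edge pixel by about 39 levels, while device_51(50, 0) adds only about 10, reflecting the smaller adjustment signal IC in dark parts.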
- the sixth profile data is determined on the basis of an operation for gradation correction of a value obtained by adding the value of the input signal IS to the value obtained by enhancing the difference between the input signal IS and the unsharp signal US.
- the value C of each element of the sixth profile data (the value of the output signal OS) is calculated using the value A of the input signal IS, the value B of the unsharp signal US, the enhancement function F9, and the gradation correction function F10 as
- C = F10(A + F9(A − B)) (hereinafter referred to as Equation M6).
- the enhancement function F9 is any one of the enhancement functions R1 to R3 described with reference to FIG.
- the gradation correction function F10 is a function used in normal gradation correction, such as a gamma correction function, an S-shaped gradation correction function, or an inverse S-shaped gradation correction function.
- FIG. 22 shows a visual processing device 61 equivalent to the visual processing device 1 in which the sixth profile data is registered in the two-dimensional LUT 4.
- the visual processing device 61 is a device that outputs the output signal OS based on a calculation that performs gradation correction on the value obtained by adding the value of the input signal IS to the value obtained by enhancing the difference between the input signal IS and the unsharp signal US. As a result, for example, it is possible to realize visual processing that performs gradation correction on the input signal IS in which the sharp component is emphasized.
- the visual processing device 61 shown in FIG. 22 includes a spatial processing unit 62 that performs spatial processing on the luminance value of each pixel of the original image acquired as the input signal IS and outputs an unsharp signal US, and an input signal IS and an unsharp A visual processing unit 63 that performs visual processing of the original image using the signal US and outputs an output signal OS is provided.
- the spatial processing unit 62 performs the same operation as the spatial processing unit 2 included in the visual processing device 1, description thereof is omitted.
- the visual processing unit 63 includes a subtraction unit 64 that receives the input signal IS as a first input and the unsharp signal US as a second input and outputs a difference signal DS that is the difference between them;
- an enhancement processing unit 65 that receives the difference signal DS and outputs an enhancement processing signal TS subjected to enhancement processing;
- an addition unit 66 that receives the input signal IS as a first input and the enhancement processing signal TS as a second input and outputs an addition signal PS; and a gradation correction unit 67 that receives the addition signal PS and outputs the output signal OS.
- the subtractor 64 calculates the difference between the input signal IS with the value A and the unsharp signal US with the value B, and outputs the difference signal DS with the value A ⁇ B.
- the enhancement processing unit 65 uses the enhancement function F9 to output the enhancement processing signal TS having the value F9(A − B) from the difference signal DS having the value A − B.
- the addition unit 66 adds the input signal IS having the value A and the enhancement processing signal TS having the value F9(A − B), and outputs the addition signal PS having the value A + F9(A − B).
- the gradation correction unit 67 uses the gradation correction function F10 to output the output signal OS having the value F10(A + F9(A − B)) from the addition signal PS having the value A + F9(A − B).
- the calculations using the enhancement function F9 and the gradation correction function F10 may each be performed using a one-dimensional LUT, or may be performed without using a LUT.
- the visual processing device 1 and the visual processing device 61 having the sixth profile data have the same visual processing effect.
- the difference signal DS is enhanced by the enhancement function F9 and added to the input signal IS. For this reason, it is possible to enhance the contrast of the input signal IS. Furthermore, the gradation correction unit 67 executes gradation correction processing for the addition signal PS. For this reason, for example, it is possible to further enhance the contrast in a halftone having a high appearance frequency in the original image. In addition, for example, the entire addition signal PS can be brightened. As described above, it is possible to realize a combination of spatial processing and gradation processing at the same time.
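- The device-61 pipeline, i.e. Equation M6 with C = F10(A + F9(A − B)), can be sketched per pixel as follows. The linear-gain F9 and the gamma-type F10 are illustrative assumptions; the text allows any of R1 to R3 for F9 and any normal gradation correction curve for F10.

```python
def F9(d, gain=2.0):
    # Enhancement function (assumed: simple linear gain on the difference).
    return gain * d

def F10(x, gamma=2.2):
    # Gradation correction function (assumed: gamma correction on the 0..255 scale).
    x = min(max(x, 0.0), 255.0)
    return 255.0 * (x / 255.0) ** (1.0 / gamma)

def device_61(a, b):
    """One pixel of visual processing device 61: OS = F10(A + F9(A - B))."""
    ds = a - b        # subtraction unit 64: difference signal DS
    ts = F9(ds)       # enhancement processing unit 65: signal TS
    ps = a + ts       # addition unit 66: addition signal PS
    return F10(ps)    # gradation correction unit 67: output signal OS
```

Because the gradation correction is applied after the contrast-enhanced addition signal PS, spatial processing and gradation processing are combined in a single pass, as stated above.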
- the visual processing unit 63 may calculate the above expression M6 without using the two-dimensional LUT 4 based on the input signal IS and the unsharp signal US.
- in this case, one-dimensional LUTs may be used in the calculation of the functions F9 and F10.
- if the value C of an element in the profile data obtained by Equation M6 exceeds the range 0 ≤ C ≤ 255, the value C of that element may be set to 0 or 255.
- the seventh profile data is determined on the basis of an operation that adds a value obtained by performing gradation correction on the input signal IS to a value obtained by enhancing the difference between the input signal IS and the unsharp signal US. That is, the value C of each element of the seventh profile data (the value of the output signal OS) is calculated using the enhancement function F11 and the gradation correction function F12 as C = F12(A) + F11(A − B) (hereinafter referred to as Equation M7).
- enhancement of the sharp component and gradation correction of the input signal IS are performed independently. For this reason, it is possible to achieve a constant amount of sharp component enhancement regardless of the gradation correction amount of the input signal IS.
- the enhancement function F11 is any one of the enhancement functions R1 to R3 described with reference to FIG.
- the gradation correction function F12 is, for example, a gamma correction function, an S-shaped gradation correction function, or an inverse S-shaped gradation correction function.
- FIG. 23 shows a visual processing device 71 equivalent to the visual processing device 1 in which the seventh profile data is registered in the two-dimensional LUT 4.
- the visual processing device 71 is a device that outputs the output signal OS based on an operation of adding a value obtained by correcting the gradation of the input signal IS to a value that emphasizes the difference between the input signal IS and the unsharp signal US.
- sharp component enhancement and gradation correction of the input signal IS are performed independently of each other. Therefore, it is possible to achieve a constant amount of sharp component enhancement regardless of the gradation correction amount of the input signal IS.
- the visual processing device 71 shown in FIG. 23 includes a spatial processing unit 72 that performs spatial processing on the luminance value of each pixel of the original image acquired as the input signal IS and outputs the unsharp signal US, and the input signal IS and the unsharp A visual processing unit 73 that performs visual processing of the original image using the signal US and outputs an output signal OS is provided.
- the spatial processing unit 72 performs the same operation as the spatial processing unit 2 included in the visual processing device 1, description thereof is omitted.
- the visual processing unit 73 includes a subtraction unit 74 that receives the input signal IS as a first input and the unsharp signal US as a second input and outputs a difference signal DS that is the difference between them;
- an enhancement processing unit 75 that receives the difference signal DS and outputs an enhancement processing signal TS subjected to enhancement processing; a gradation correction unit 76 that receives the input signal IS and outputs a gradation correction signal GC; and an addition unit 77 that receives the gradation correction signal GC as a first input and the enhancement processing signal TS as a second input and outputs the output signal OS.
- the subtracting unit 74 calculates a difference between the input signal IS having the value A and the unsharp signal US having the value B, and outputs a difference signal DS having the value A ⁇ B.
- the enhancement processing unit 75 uses the enhancement function F11 to output the enhancement processing signal TS having the value F11(A − B) from the difference signal DS having the value A − B.
- the gradation correction unit 76 uses the gradation correction function F12 to output the gradation correction signal GC having the value F12(A) from the input signal IS having the value A.
- the addition unit 77 adds the gradation correction signal GC having the value F12(A) and the enhancement processing signal TS having the value F11(A − B), and outputs the output signal OS having the value F12(A) + F11(A − B).
- the calculations using the enhancement function F11 and the gradation correction function F12 may each be performed using a one-dimensional LUT, or may be performed without using a LUT.
- the visual processing device 1 and the visual processing device 71 having the seventh profile data have the same visual processing effect.
- the input signal IS is subjected to gradation correction by the gradation correction unit 76 and then added to the enhancement processing signal TS. For this reason, even in a region where the gradation change of the gradation correction function F12 is small, that is, a region where the contrast is reduced, it is possible to enhance the local contrast by adding the enhancement processing signal TS afterward.
- the visual processing unit 73 may calculate the above expression M7 without using the two-dimensional LUT 4 based on the input signal IS and the unsharp signal US.
- a one-dimensional LUT may be used in the calculation of each of the functions F11 and F12.
- if the value C of an element in the profile data obtained by Equation M7 exceeds the range 0 ≤ C ≤ 255, the value C of that element may be set to 0 or 255.
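- The device-71 pipeline, i.e. Equation M7 with C = F12(A) + F11(A − B), can be sketched per pixel as follows. The linear-gain F11 and the gamma-type F12 are illustrative assumptions. Because F11 acts on the difference signal independently of F12, the enhancement amount stays constant even where the gradation curve flattens, which is the property noted above.

```python
def F11(d, gain=2.0):
    # Enhancement function (assumed: simple linear gain on the difference).
    return gain * d

def F12(x, gamma=2.2):
    # Gradation correction function (assumed: gamma curve on the 0..255 scale).
    return 255.0 * (x / 255.0) ** (1.0 / gamma)

def device_71(a, b):
    """One pixel of visual processing device 71: OS = F12(A) + F11(A - B)."""
    ts = F11(a - b)    # enhancement processing unit 75: signal TS
    gc = F12(a)        # gradation correction unit 76: signal GC
    c = gc + ts        # addition unit 77
    return min(max(c, 0.0), 255.0)  # limit to 0..255 as stated for Equation M7
```

Note that the local enhancement device_71(128, 120) − device_71(128, 128) equals F11(8) exactly, independent of where the value 128 falls on the gradation curve F12.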
- each element of the first to seventh profile data stores values calculated based on the equations M1 to M7.
- if a value calculated by Equations M1 to M7 exceeds the range of values that can be stored in the profile data, the element value may be limited to that range.
- further, some of the values stored in the profile data may be approximate or arbitrary. For example, in a portion where the value of the input signal IS is large but the value of the unsharp signal US is small, such as a small bright area within a dark night scene (for example, a neon sign in a night view), the value of the visually processed input signal IS has little effect on the image quality. In such portions, where the value after visual processing has little influence on the image quality, the value stored in the profile data may be an approximate value of the value calculated by Equations M1 to M7, or an arbitrary value.
- even when the stored value is such an approximate or arbitrary value, it is desirable that the values stored for given values of the input signal IS and the unsharp signal US maintain a monotonically increasing or monotonically decreasing relationship with respect to the values of the input signal IS and the unsharp signal US.
- this monotonically increasing or monotonically decreasing relationship of the stored values with respect to the values of the input signal IS and the unsharp signal US outlines the characteristics of the profile data. Therefore, in order to maintain the characteristics of the two-dimensional LUT, it is desirable to tune the profile data while preserving this relationship.
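- Registering profile data in the two-dimensional LUT, including limiting element values to the storable range, can be sketched as follows. The formula plugged in here is Equation M6 with an assumed gain-2 enhancement and an assumed gamma-2.2 gradation correction; any of Equations M1 to M7 could be substituted.

```python
def build_profile_data(formula):
    """Build 256x256 profile data: lut[a][b] = element value C for IS = a, US = b."""
    lut = []
    for a in range(256):
        row = []
        for b in range(256):
            c = formula(a, b)
            # Limit values that fall outside the storable range, as described above.
            row.append(min(max(int(round(c)), 0), 255))
        lut.append(row)
    return lut

# Example: Equation M6, C = F10(A + F9(A - B)), with an assumed F9 (gain 2)
# and an assumed F10 (gamma 2.2); these concrete functions are illustrative only.
lut = build_profile_data(
    lambda a, b: 255.0 * (min(max(a + 2.0 * (a - b), 0.0), 255.0) / 255.0) ** (1 / 2.2)
)
```

Tuning the stored values is then a matter of editing lut entries while keeping each row and column monotonic, preserving the outline of the profile characteristics.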
- a visual processing device 600 as a second embodiment of the present invention will be described with reference to FIGS. 24 to 39.
- the visual processing device 600 is a visual processing device that performs visual processing on an image signal (input signal IS) and outputs a visually processed image (output signal OS), and performs visual processing suited to the environment in which a display device (not shown) that displays the output signal OS is installed (hereinafter referred to as the display environment).
- more specifically, the visual processing device 600 is a device that uses visual processing based on human visual characteristics to compensate for the reduction in the “visual contrast” of the display image caused by the influence of ambient light in the display environment.
- the visual processing device 600 constitutes, together with a device that performs color processing of an image signal, an image processing device in equipment that handles images, such as a computer, a television, a digital camera, a mobile phone, a PDA, a printer, or a scanner.
- Fig. 24 shows the basic configuration of the visual processing device 600.
- the visual processing device 600 includes a target contrast conversion unit 601, a converted signal processing unit 602, an actual contrast conversion unit 603, a target contrast setting unit 604, and an actual contrast setting unit 605.
- the target contrast conversion unit 601 receives the input signal IS as a first input and the target contrast C1 set in the target contrast setting unit 604 as a second input, and outputs the target contrast signal JS.
- the definition of target contrast C1 will be described later.
- the converted signal processing unit 602 receives the target contrast signal JS as a first input, the target contrast C1 as a second input, and the actual contrast C2 set in the actual contrast setting unit 605 as a third input, and outputs the visual processing signal KS, which is the visually processed target contrast signal JS.
- the definition of the actual contrast C2 will be described later.
- the actual contrast conversion unit 603 uses the visual processing signal KS as the first input, the actual contrast C2 as the second input, and the output signal OS as the output.
- the target contrast setting unit 604 and the actual contrast setting unit 605 allow the user to set the values of the target contrast C1 and the actual contrast C2 via an input interface or the like.
- the target contrast conversion unit 601 converts the input signal I S input to the visual processing device 600 into a target contrast signal J S suitable for contrast expression.
- the luminance value of the image input to the visual processing device 600 is represented by the gradation of the value [0.0 to 1.0].
- the target contrast conversion unit 601 uses the target contrast C1 (value [m]) to convert the input signal IS (value [P]) into the target contrast signal JS (value [A]) according to Equation M20.
- the value [m] of the target contrast C1 is set as a contrast value so that the display image displayed by the display device can be seen with the best contrast.
- FIG. 25 is a graph showing the relationship between the value of the input signal IS (horizontal axis) and the value of the target contrast signal JS (vertical axis).
- as shown in FIG. 25, the target contrast conversion unit 601 converts the input signal IS in the range of values [0.0 to 1.0] into the target contrast signal JS in the range of values [1/m to 1.0].
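- The range conversion of FIG. 25 can be sketched as follows. Equation M20 itself is not reproduced in this excerpt, so a linear mapping of [0.0 to 1.0] onto [1/m to 1.0] is assumed here purely for illustration.

```python
def target_contrast_conversion(p, m):
    """Map the input signal IS (value [P] in 0.0..1.0) to the target contrast
    signal JS (value [A] in 1/m..1.0). The linear form is an assumption."""
    return (1.0 - 1.0 / m) * p + 1.0 / m
```

With the target contrast C1 set to the value [m], the darkest input maps to [1/m], so the ratio of the brightest to the darkest value of JS is exactly m.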
- the converted signal processing unit 602 compresses the dynamic range while maintaining the local contrast of the input target contrast signal JS, and outputs the visual processing signal KS. Specifically, the converted signal processing unit 602 has the same configuration and effect as the visual processing device 21 shown in the first embodiment (see FIG. 16), with the input signal IS regarded as the target contrast signal JS and the output signal OS regarded as the visual processing signal KS.
- the converted signal processing unit 602 outputs the visual processing signal KS based on a calculation that emphasizes the ratio between the target contrast signal JS and the unsharp signal US. This makes it possible to realize, for example, visual processing that emphasizes the sharp component.
- furthermore, the converted signal processing unit 602 outputs the visual processing signal KS based on a calculation that performs dynamic range compression together with the enhancement of the ratio between the target contrast signal JS and the unsharp signal US. This makes it possible, for example, to realize visual processing that compresses the dynamic range while enhancing the sharp component.
- the converted signal processing unit 602 includes a spatial processing unit 622 that performs spatial processing on the luminance value of each pixel of the target contrast signal JS and outputs an unsharp signal US, and a visual processing unit 623 that uses the target contrast signal JS and the unsharp signal US to perform visual processing on the target contrast signal JS and outputs the visual processing signal KS.
- the visual processing unit 623 includes a division unit 625, an enhancement processing unit 626, and an output processing unit 627 having a DR compression unit 628 and a multiplication unit 629.
- the division unit 625 outputs a division signal RS obtained by dividing the target contrast signal J S by the unsharp signal US, with the target contrast signal J S as the first input, the unsharp signal US as the second input.
- the enhancement processing unit 626 outputs the enhancement processing signal TS with the division signal RS as the first input, the target contrast C1 as the second input, and the actual contrast C2 as the third input.
- the output processing unit 627 receives the target contrast signal JS as a first input, the enhancement processing signal TS as a second input, the target contrast C1 as a third input, and the actual contrast C2 as a fourth input, and outputs the visual processing signal KS.
- the DR compression unit 628 receives the target contrast signal JS as a first input, the target contrast C1 as a second input, and the actual contrast C2 as a third input, and outputs a DR compression signal DRS that has undergone dynamic range (DR) compression.
- Multiplier 629 receives DR compressed signal DRS as a first input and enhancement processing signal T S as a second input, and outputs visual processing signal K S.
- more specifically, the converted signal processing unit 602 converts the target contrast signal JS (value [A]) according to Equation M2 and outputs the visual processing signal KS (value [C]). Here, Equation M2 gives the value [C] as C = F4(A) * F5(A/B), using the dynamic range compression function F4 and the enhancement function F5.
- the value [B] is the value of the unsharp signal U S obtained by spatially processing the target contrast signal J S.
- the spatial processing unit 622 performs spatial processing on the target contrast signal JS having a value [A] and outputs an unsharp signal US having a value [B].
- the division unit 625 divides the target contrast signal JS having the value [A] by the unsharp signal US having the value [B], and outputs the division signal RS having the value [A/B].
- the enhancement processing unit 626 uses the enhancement function F5 to output the enhancement processing signal TS having the value [F5(A/B)] from the division signal RS having the value [A/B].
- the DR compression unit 628 outputs the DR compression signal DRS having the value [F4 (A)] from the target contrast signal JS having the value [A] using the dynamic range compression function F4.
- the multiplication unit 629 multiplies the DR compression signal DRS having the value [F4(A)] by the enhancement processing signal TS having the value [F5(A/B)], and outputs the visual processing signal KS having the value [F4(A) * F5(A/B)].
- the calculation using the dynamic range compression function F 4 and the enhancement function F 5 may be performed using a one-dimensional LUT for each function, or may be performed without using a LUT.
- the visual dynamic range of the visual processing signal K S is determined by the value of the dynamic range compression function F 4.
- FIG. 26 is a graph showing the relationship between the value of the target contrast signal J S (horizontal axis) and the value obtained by applying the dynamic range compression function F 4 to the target contrast signal J S (vertical axis).
- the dynamic range of the target contrast signal J S is compressed by the dynamic range compression function F4. More specifically, the target contrast signal J S in the range of the value [1 / m to 1.0] is converted into the range of the value [1 / n to 1.0] by the dynamic range compression function F4.
- the value [n] of the actual contrast C2 is set as the visual contrast value of the display image under the ambient light of the display environment. That is, the value [n] of the actual contrast C2 can be determined as the value obtained by reducing the value [m] of the target contrast C1 by the influence of the luminance of the ambient light in the display environment.
- dynamic range means the ratio between the minimum and maximum signal values.
- on the other hand, the change in local contrast in the visual processing signal KS is expressed as the ratio of the amount of change of the value [C] of the visual processing signal KS to the amount of change of the value [A] of the target contrast signal JS before and after the conversion.
- here, the value [B] of the unsharp signal US can be regarded as constant locally, that is, within a narrow range.
- in that case, the ratio of the amount of change of the value C to the amount of change of the value A in Equation M2 is 1, and the local contrast does not change between the target contrast signal JS and the visual processing signal KS.
- thus, the converted signal processing unit 602 can realize visual processing that does not reduce the visual contrast while compressing the dynamic range of the target contrast signal JS.
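- The preservation of local contrast under Equation M2 can be checked numerically. The power-function forms of F4 and F5 below are assumptions suggested by steps S605 and S606 of FIG. 28 described later, with exponents determined by the target contrast C1 (value [m]) and the actual contrast C2 (value [n]); under them, a locally constant unsharp value [B] makes the value [C] proportional to the value [A].

```python
import math

def equation_m2(a, b, m, n):
    """C = F4(A) * F5(A/B), assuming F4(x) = x**g and F5(x) = x**(1 - g),
    where g is chosen so that F4 maps [1/m, 1.0] onto [1/n, 1.0]."""
    g = math.log(n) / math.log(m)   # so that (1/m)**g == 1/n
    return (a ** g) * ((a / b) ** (1.0 - g))
```

With b held fixed, equation_m2 reduces to a * b**(g − 1), so ratios between nearby pixel values survive while the overall range shrinks from [1/m, 1.0] to [1/n, 1.0].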
- the details of the actual contrast conversion unit 603 will be described with reference to FIG.
- the actual contrast conversion unit 603 converts the visual processing signal KS into image data in a range that can be input to a display device (not shown).
- the range of image data that can be input to the display device is, for example, image data in which the luminance value of an image is represented by gradations of values [0.0 to 1.0].
- more specifically, the actual contrast conversion unit 603 uses the actual contrast C2 (value [n]) to convert the visual processing signal KS (value [C]) according to Equation M21, and outputs the output signal OS (value [Q]).
- FIG. 27 is a graph showing the relationship between the value of the visual processing signal KS (horizontal axis) and the value of the output signal OS (vertical axis).
- as shown in FIG. 27, the actual contrast conversion unit 603 converts the visual processing signal KS in the range of values [1/n to 1.0] into the output signal OS in the range of values [0.0 to 1.0].
- here, the value of the output signal OS is smaller than the value of the corresponding visual processing signal KS. This decrease corresponds to the influence that the luminance of the display image receives from the ambient light.
- in the actual contrast conversion unit 603, when a visual processing signal KS having a value of [1/n] or less is input, the output signal OS is converted to the value [0]. Further, when a visual processing signal KS having a value of [1] or more is input, the output signal OS is converted to the value [1].
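- The range conversion of FIG. 27, including the clamping described above, can be sketched as follows. Equation M21 itself is not reproduced in this excerpt, so a linear mapping of [1/n to 1.0] onto [0.0 to 1.0] is assumed here purely for illustration.

```python
def actual_contrast_conversion(c, n):
    """Map the visual processing signal KS (value [C] in 1/n..1.0) to the
    output signal OS (value [Q] in 0.0..1.0). The linear form is an assumption;
    inputs at or below 1/n clamp to 0 and inputs at or above 1 clamp to 1."""
    q = (c - 1.0 / n) / (1.0 - 1.0 / n)
    return min(max(q, 0.0), 1.0)
```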
- the visual processing device 600 has the same effect as the visual processing device 21 described in the first embodiment. Hereinafter, effects characteristic of the visual processing device 600 will be described.
- when ambient light is present in the display environment in which the output signal OS of the visual processing device 600 is displayed, the output signal OS is viewed under the influence of the ambient light.
- the output signal OS is a signal that has been subjected to processing for correcting the influence of ambient light by the actual contrast converter 603.
- the output signal OS displayed on the display device is viewed as a display image having the characteristics of the visual processing signal KS.
- like the output signal OS of the visual processing device 21 described in the first embodiment (see FIG. 16), the visual processing signal KS has its dynamic range compressed over the entire image while the local contrast is maintained.
- that is, the visual processing signal KS is a signal compressed to a dynamic range (corresponding to the actual contrast C2) that can be displayed under the influence of ambient light, while locally maintaining the target contrast C1 at which the display image is displayed optimally.
- in this way, the visual processing device 600 can correct the contrast that would otherwise decrease due to the presence of ambient light, while maintaining the visual contrast through processing that uses visual characteristics.
- a visual processing method that produces the same effect as the visual processing device 600 will be described with reference to FIG. Note that the specific processing of each step is the same as the processing in the visual processing device 600, and the description thereof is omitted.
- first, the set target contrast C1 and actual contrast C2 are acquired (step S601).
- the acquired target contrast C 1 is used to convert the input signal IS (step S 602), and the target contrast signal JS is output.
- spatial processing is performed on the target contrast signal JS (step S603), and an unsharp signal US is output.
- the target contrast signal JS is divided by the unsharp signal US (step S604), and the division signal RS is output.
- the division signal RS is emphasized by the enhancement function F5, which is a “power function” having an exponent determined by the target contrast C1 and the actual contrast C2 (step S605), and the enhancement processing signal TS is output.
- the target contrast signal JS is dynamic-range compressed by the dynamic range compression function F4, which is a “power function” having an exponent determined by the target contrast C1 and the actual contrast C2 (step S606), and the DR compression signal DRS is output.
- the emphasis processing signal TS output in step S605 and the DR compressed signal DRS output in step S606 are multiplied (step S607), and the visual processing signal KS is output.
- next, the acquired actual contrast C2 is used to convert the visual processing signal KS (step S608), and the output signal OS is output.
- the processing from step S602 to step S608 is repeated for all the pixels of the input signal IS (step S609).
- Each step of the visual processing method shown in FIG. 28 may be realized as a visual processing program in the visual processing device 600 or another computer. Further, the processing from step S604 to step S607 may be performed at once by calculating Equation M2.
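- The method of steps S601 to S609 can be sketched end to end for a one-dimensional list of pixel values in [0.0, 1.0]. The linear contrast conversions and the power-function forms of F4 and F5 are illustrative assumptions (the actual Equations M20, M2, and M21 govern the device), and a simple moving average stands in for the spatial processing of step S603.

```python
import math

def visual_processing_600(is_pixels, m, n, radius=1):
    """Sketch of steps S601-S609 for a 1-D pixel list. m = target contrast C1,
    n = actual contrast C2 (step S601); all numeric forms are assumptions."""
    g = math.log(n) / math.log(m)
    js = [(1.0 - 1.0 / m) * p + 1.0 / m for p in is_pixels]  # step S602 (assumed linear)
    os_pixels = []
    for i, a in enumerate(js):                               # step S609: every pixel
        lo, hi = max(0, i - radius), min(len(js), i + radius + 1)
        b = sum(js[lo:hi]) / (hi - lo)                       # step S603: unsharp signal US
        rs = a / b                                           # step S604: division signal RS
        ts = rs ** (1.0 - g)                                 # step S605: enhancement F5
        drs = a ** g                                         # step S606: DR compression F4
        ks = drs * ts                                        # step S607: visual processing KS
        q = (ks - 1.0 / n) / (1.0 - 1.0 / n)                 # step S608 (assumed linear)
        os_pixels.append(min(max(q, 0.0), 1.0))
    return os_pixels
```

A uniformly black or uniformly white input passes through unchanged, while textured regions keep their local ratios under the compressed dynamic range.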
- in the above description, the converted signal processing unit 602 outputs the visual processing signal KS based on Equation M2.
- however, the converted signal processing unit 602 may output the visual processing signal KS based only on the dynamic range compression function F4.
- the conversion signal processing unit 602 as a modification need not include the spatial processing unit 622, the division unit 625, the enhancement processing unit 626, and the multiplication unit 629, and need only include the DR compression unit 628.
- the conversion signal processing unit 602 as a modified example can output a visual processing signal KS compressed to a dynamic range that can be displayed under the influence of ambient light.
- the exponent of the enhancement function F 5 may be a function of the value [A] of the target contrast signal J S or the value [B] of the unsharp signal US.
- <<1>> the exponent of the enhancement function F5 may be a function of the value [A] of the target contrast signal JS that decreases monotonically when the value [A] of the target contrast signal JS is larger than the value [B] of the unsharp signal US. More specifically, the exponent of the enhancement function F5 is expressed as α1(A) * (1 − γ), where the function α1(A) decreases monotonically with respect to the value [A] of the target contrast signal JS. The maximum value of the function α1(A) is [1.0].
- the enhancement function F5 reduces the amount of enhancement of local contrast in the high luminance part. For this reason, when the luminance of the pixel of interest is higher than the luminance of surrounding pixels, excessive enhancement of local contrast in the high luminance portion is suppressed. In other words, the luminance value of the pixel of interest is suppressed from being saturated to a high luminance and so-called whiteout.
- <<2>> the exponent of the enhancement function F5 may be a function of the value [A] of the target contrast signal JS that increases monotonically when the value [A] of the target contrast signal JS is smaller than the value [B] of the unsharp signal US. More specifically, the exponent of the enhancement function F5 is expressed as α2(A) * (1 − γ), where, as shown in FIG. 30, the function α2(A) increases monotonically with respect to the value [A] of the target contrast signal JS. The maximum value of the function α2(A) is [1.0].
- as a result, the enhancement function F5 reduces the amount of enhancement of the local contrast in the low luminance part. For this reason, when the luminance of the pixel of interest is lower than the luminance of surrounding pixels, excessive enhancement of local contrast in the low luminance portion is suppressed. In other words, the luminance value of the pixel of interest is prevented from saturating to a low luminance, that is, from so-called black crush.
- <<3>> the exponent of the enhancement function F5 may be a function of the value [A] of the target contrast signal JS that increases monotonically when the value [A] of the target contrast signal JS is larger than the value [B] of the unsharp signal US. More specifically, the exponent of the enhancement function F5 is expressed as α3(A) * (1 − γ), where the function α3(A) increases monotonically with respect to the value [A] of the target contrast signal JS. The maximum value of the function α3(A) is [1.0].
- the enhancement function F5 reduces the amount of enhancement of the local contrast in the low luminance part. For this reason, when the luminance of the pixel of interest is higher than the luminance of surrounding pixels, excessive enhancement of local contrast in the low luminance part is suppressed.
- the low luminance part of an image has a relatively poor SN ratio because of its low signal level. By performing such processing, however, it is possible to suppress deterioration of the SN ratio.
- the exponent of the enhancement function F5 is a function of the value [A] of the target contrast signal JS and the value [B] of the unsharp signal US, and is relative to the absolute value of the difference between the value [A] and the value [B]. And a monotonically decreasing function.
- the exponent of the enhancement function F5 can also be said to be a function that increases as the ratio of the value [A] to the value [B] approaches 1. More specifically, the exponent of the enhancement function F5 is expressed as α4(A, B) * (1 - r), and the function α4(A, B) is a monotonically decreasing function with respect to the absolute value of [A - B].
- An upper limit or a lower limit may be provided for the calculation result of the enhancement function F5 in the above <<1>> to <<4>>. Specifically, when the value [F5(A/B)] exceeds a predetermined upper limit value, the predetermined upper limit value is adopted as the calculation result of the enhancement function F5. Likewise, when the value [F5(A/B)] falls below a predetermined lower limit value, the predetermined lower limit value is adopted as the calculation result of the enhancement function F5.
- the amount of local contrast enhancement by the enhancement function F 5 can be limited to an appropriate range, and excessive or too little contrast enhancement is suppressed.
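As an illustration of the points above, a "power function" enhancement F5(A/B) whose exponent depends on the signal values and whose result is clamped to a range can be sketched as follows. The function name alpha2, the cap of [1.0] on the exponent function, and the numeric clamp bounds are illustrative assumptions, not values fixed by this description:

```python
def alpha2(a, max_exp=1.0):
    # A stand-in for the monotonically increasing exponent function of
    # the target-contrast value A; its maximum is capped at [1.0] as
    # stated for the exponent functions in the text.
    return min(a, max_exp)

def enhance_f5(a, b, r=0.5, lower=0.2, upper=5.0):
    """Sketch of the "power function" enhancement F5(A/B).

    a: value [A] of the target contrast signal JS
    b: value [B] of the unsharp signal US
    r: ratio used so that the exponent is alpha2(A) * (1 - r)
    The result is clamped to [lower, upper] as described in <<5>>.
    """
    exponent = alpha2(a) * (1.0 - r)
    result = (a / b) ** exponent
    return max(lower, min(upper, result))
```

For a pixel darker than its surroundings (A < B) the enhancement stays below 1, so local contrast in low-luminance areas is boosted less, which matches the stated suppression of blackening and of SN-ratio degradation.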
- <<1>> to <<5>> above can be applied in the same way to the calculation using the enhancement function F5 in the first embodiment (for example, (2) or (3) of <Profile data> in the first embodiment).
- the value [A] is the value of the input signal IS
- the value [B] is the value of the unsharp signal US obtained by spatially processing the input signal IS.
- the converted signal processing unit 602 has the same configuration as the visual processing device 21 shown in the first embodiment.
- the conversion signal processing unit 602 as a modification may have the same configuration as the visual processing device 31 (see FIG. 19) shown in the first embodiment.
- the conversion signal processing unit 602 as a modified example is realized by regarding the input signal IS in the visual processing device 31 as the target contrast signal JS and the output signal OS as the visual processing signal KS.
- the conversion signal processing unit 602 as a modified example performs visual processing based on Equation M3 on the target contrast signal JS (value [A]) and the unsharp signal US (value [B]).
- Equation M3 does not compress the dynamic range for the input signal IS, but can emphasize local contrast.
- This local contrast enhancement effect makes it possible to give the impression that the “visual” dynamic range is compressed or stretched.
- the enhancement function F5 is a "power function", and its exponent may be a function having the same tendency as the functions α1(A), α2(A), α3(A), and α4(A, B) described in <Modification> (ii) <<1>> to <<4>> above.
- the calculation result of the enhancement function F 5 may have an upper limit or a lower limit.
- the target contrast setting unit 604 and the actual contrast setting unit 605 allow the user to set the values of the target contrast C1 and the actual contrast C2 through an input interface or the like.
- the target contrast setting unit 604 and the actual contrast setting unit 605 may be capable of automatically setting the values of the target contrast C 1 and the actual contrast C 2.
- the actual contrast setting unit 605 for automatically setting the value of the actual contrast C2 will be described.
- Figure 33 shows the actual contrast setting unit 605 that automatically sets the value of actual contrast C2.
- the actual contrast setting unit 605 includes a luminance measurement unit 605a, a storage unit 605b, and a calculation unit 605c.
- the luminance measuring unit 605a is a luminance sensor that measures the luminance value of ambient light in the display environment of the display that displays the output signal OS.
- the storage unit 605b stores the white luminance (white level) and the black luminance (black level) that the display displaying the output signal OS can display in the absence of ambient light.
- the calculation unit 605c acquires values from the luminance measurement unit 605a and the storage unit 605b, and calculates the value of the actual contrast C2.
- the calculation unit 605c adds the luminance value of the ambient light acquired from the luminance measurement unit 605a to each of the black-level and white-level luminance values stored in the storage unit 605b. The calculation unit 605c then divides the sum for the white level by the sum for the black level, and outputs the quotient as the value [n] of the actual contrast C2. As a result, the value [n] of the actual contrast C2 indicates the contrast displayed on the display in a display environment where ambient light is present.
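The calculation performed by the calculation unit 605c can be sketched as below; the luminance figures in the usage note are hypothetical measurements, not values from this description:

```python
def actual_contrast(white_level, black_level, ambient):
    """Value [n] of the actual contrast C2.

    The ambient-light luminance measured by the luminance measurement
    unit 605a is added to both the white-level and black-level
    luminances stored in the storage unit 605b, and the ratio of the
    two sums is the contrast achievable in that display environment.
    """
    return (white_level + ambient) / (black_level + ambient)
```

For example, a display with a 500:0.5 native white/black luminance ratio drops from a contrast of 1000 to about 101 under 4.5 units of ambient light, which is why C2 is measured rather than taken from a specification sheet.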
- the storage unit 605b shown in Fig. 33 may store, as the value [m] of the target contrast C1, the ratio of the white luminance (white level) to the black luminance (black level) that can be displayed in the absence of ambient light.
- In this case, the actual contrast setting unit 605 simultaneously performs the function of the target contrast setting unit 604 that automatically sets the target contrast C1.
- Alternatively, the storage unit 605b need not store the ratio; the ratio may instead be calculated by the calculation unit 605c.
- When the display device that displays the output signal OS is a projector or the like,
- the white luminance (white level) and black luminance (black level) that can be displayed in the absence of ambient light depend on the distance to the screen.
- For such a case, another actual contrast setting unit 605 that automatically sets the value of the actual contrast C2 will be described.
- Figure 34 shows the actual contrast setting unit 605 that automatically sets the value of the actual contrast C2.
- the actual contrast setting unit 605 includes a luminance measurement unit 605d and a control unit 605e.
- the luminance measurement unit 605d is a luminance sensor that measures the luminance value in the display environment of the output signal OS displayed by the projector.
- the control unit 605e causes the projector to display a white level and a black level. Furthermore, it acquires from the luminance measurement unit 605d the luminance value measured while each level is displayed, and calculates the value of the actual contrast C2.
- the control unit 605e operates the projector in a display environment in which ambient light is present to display a white level (step S620).
- the control unit 605e acquires the measured white-level luminance from the luminance measurement unit 605d (step S621).
- the control unit 605e operates the projector in the display environment in which ambient light is present to display a black level (step S622).
- the control unit 605e acquires the measured black-level luminance from the luminance measurement unit 605d (step S623).
- the control unit 605e calculates the ratio of the acquired white-level luminance value to the black-level luminance value and outputs it as the value of the actual contrast C2.
- the value [n] of the actual contrast C2 indicates the contrast value displayed by the projector in the display environment where ambient light exists.
- the value [m] of the target contrast C 1 can be derived by calculating the ratio between the white level and the black level in a display environment in which no ambient light exists.
- the actual contrast setting unit 605 simultaneously performs the function of the target contrast setting unit 604 that automatically sets the target contrast C1.
- the processing in the visual processing device 600 is performed on the luminance of the input signal IS.
- the present invention is effective not only when the input signal IS is expressed in the YCbCr color space.
- the input signal IS may be expressed in a YUV color space, a Lab color space, a Luv color space, a YIQ color space, an XYZ color space, a YPbPr color space, or the like.
- the processing described in the above embodiment can be executed for the luminance and brightness of each color space.
- the processing in the visual processing device 600 may be performed independently for each of the RGB components.
- the processing by the target contrast conversion unit 601 is independently performed on the RGB components of the input signal IS, and the RGB components of the target contrast signal JS are output. Furthermore, the RGB components of the target contrast signal JS are independently processed by the conversion signal processing unit 602, and the RGB components of the visual processing signal KS are output. Furthermore, the RGB components of the visual processing signal KS are independently processed by the actual contrast conversion unit 603, and the RGB components of the output signal OS are output.
- common values are used for the target contrast C 1 and the actual contrast C 2 in the processing of each of the RGB components.
- (vi) Color difference correction processing: The visual processing device 600 may further include a color difference correction processing unit that prevents the hue of the output signal OS from differing from the hue of the input signal IS due to the influence of the luminance-component processing in the conversion signal processing unit 602.
- FIG. 36 shows a visual processing device 600 that includes a color difference correction processing unit 608.
- Components that perform substantially the same functions as those of the visual processing device 600 described above are given the same reference numerals. It is assumed that the input signal IS is in the YCbCr color space, and that the Y component is processed in the same way as described in the above embodiment.
- the color difference correction processing unit 608 will be described.
- the color difference correction processing unit 608 takes the target contrast signal JS as the first input (value [Yin]), the visual processing signal KS as the second input (value [Yout]), the Cb component of the input signal IS as the third input (value [CBin]), and the Cr component of the input signal IS as the fourth input (value [CRin]); the color-difference-corrected Cb component is the first output (value [CBout])
- and the color-difference-corrected Cr component is the second output (value [CRout]).
- FIG. 37 outlines the color difference correction process.
- the color difference correction processing unit 608 has four inputs, [Yin], [Yout], [CBin], and [CRin]. By calculating with these four inputs, it obtains the two outputs [CBout] and [CRout].
- [CBout] and [CRout] are derived based on the following equations, which correct [CBin] and [CRin] by the difference and the ratio between [Yin] and [Yout].
- [CBout] is derived based on a1 * ([Yout] - [Yin]) * [CBin] + a2 * (1 - [Yout]/[Yin]) * [CBin] + a3 * ([Yout] - [Yin]) * [CRin] + a4 * (1 - [Yout]/[Yin]) * [CRin] + [CBin] (hereinafter referred to as Equation CB).
- [CRout] is derived based on a5 * ([Yout] - [Yin]) * [CBin] + a6 * (1 - [Yout]/[Yin]) * [CBin] + a7 * ([Yout] - [Yin]) * [CRin] + a8 * (1 - [Yout]/[Yin]) * [CRin] + [CRin] (hereinafter referred to as Equation CR).
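Equation CB, together with an Equation CR assumed to follow the same pattern with coefficients a5 to a8 (its tail is truncated in the text), can be sketched as one function. The coefficient values passed in are placeholders, since the actual a1 to a8 come out of the estimation calculation described next:

```python
def correct_color_difference(y_in, y_out, cb_in, cr_in, a):
    """Equations CB and CR: correct [CBin] and [CRin] using the
    difference and the ratio between [Yin] and [Yout].

    a: sequence of the eight coefficients (a1, ..., a8).
    Returns ([CBout], [CRout]).
    """
    d = y_out - y_in           # luminance difference [Yout] - [Yin]
    s = 1.0 - y_out / y_in     # 1 - [Yout]/[Yin]
    cb_out = a[0]*d*cb_in + a[1]*s*cb_in + a[2]*d*cr_in + a[3]*s*cr_in + cb_in
    cr_out = a[4]*d*cb_in + a[5]*s*cb_in + a[6]*d*cr_in + a[7]*s*cr_in + cr_in
    return cb_out, cr_out
```

When [Yin] equals [Yout], both the difference and the ratio terms vanish and the color difference components pass through unchanged, whatever the coefficients.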
- First, the four inputs [Yin], [Yout], [CBin], and [CRin] are acquired (step S630).
- the value of each input is data prepared in advance for determining the coefficients a1 to a8. For example, as [Yin], [CBin], and [CRin], values obtained by thinning out all possible values at predetermined intervals are used. As [Yout], values obtained by thinning out, at predetermined intervals, the values that can be output when the value of [Yin] is input to the conversion signal processing unit 602 are used.
- the data prepared in this way is acquired as 4 inputs.
- the acquired [Yin], [CBin], and [CRin] are converted to the Lab color space, and the chromaticity values [Ain] and [Bin] in the converted Lab color space are calculated (step S631).
- "Equation CB" and "Equation CR" are calculated using default coefficients a1 to a8, and the values of [CBout] and [CRout] are obtained (step S632).
- the obtained values and [Yout] are converted to the Lab color space, and the chromaticity values [Aout] and [Bout] in the converted Lab color space are calculated (step S633).
- an evaluation function is calculated using the calculated chromaticity values [Ain], [Bin], [Aout], and [Bout] (step S634), and it is judged whether the value of the evaluation function is below a predetermined threshold.
- the evaluation function is a function that takes a small value when the hue change between ([Ain], [Bin]) and ([Aout], [Bout]) is small, for example the sum of the squared deviations of the components. More specifically, the evaluation function is ([Ain] - [Aout])^2 + ([Bin] - [Bout])^2, etc.
- If the value of the evaluation function is larger than the predetermined threshold (step S635), the coefficients a1 to a8 are modified (step S636), and steps S632 to S635 are repeated using the new coefficients.
- When the value of the evaluation function is smaller than the predetermined threshold (step S635), the coefficients a1 to a8 used in the calculation of the evaluation function are output as the result of the estimation calculation (step S637).
- In the above, the coefficients a1 to a8 are calculated using one combination of [Yin], [Yout], [CBin], and [CRin] prepared in advance.
- Alternatively, the estimation calculation may be performed using a plurality of combinations, and the coefficients a1 to a8 that minimize the evaluation function over all of them may be output as the result of the estimation calculation.
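The estimation loop of steps S632 to S637 can be sketched as a simple search. The Lab conversion is abstracted behind a caller-supplied `chroma` function, and the coefficient modification of step S636 is done here by random perturbation; both are illustrative simplifications, since the text leaves the modification rule open:

```python
import random

def evaluate(a_in, b_in, a_out, b_out):
    # Step S634: ([Ain] - [Aout])^2 + ([Bin] - [Bout])^2.
    return (a_in - a_out) ** 2 + (b_in - b_out) ** 2

def estimate_coefficients(samples, correct, chroma,
                          threshold=1e-6, iters=2000, seed=0):
    """Estimate the coefficients a1 to a8.

    samples: prepared (y_in, y_out, cb_in, cr_in) combinations
    correct: (y_in, y_out, cb_in, cr_in, a) -> (cb_out, cr_out),
             i.e. Equations CB and CR
    chroma:  (y, cb, cr) -> chromaticity (A, B); Lab in the text
    """
    rng = random.Random(seed)

    def total_error(a):
        s = 0.0
        for y_in, y_out, cb_in, cr_in in samples:
            a_in, b_in = chroma(y_in, cb_in, cr_in)
            cb_out, cr_out = correct(y_in, y_out, cb_in, cr_in, a)
            a_out, b_out = chroma(y_out, cb_out, cr_out)
            s += evaluate(a_in, b_in, a_out, b_out)
        return s

    best = [0.0] * 8
    best_err = total_error(best)
    for _ in range(iters):
        if best_err <= threshold:            # step S635: below threshold
            break
        cand = [c + rng.uniform(-0.1, 0.1) for c in best]   # step S636
        err = total_error(cand)              # steps S632 to S634
        if err < best_err:
            best, best_err = cand, err
    return best, best_err                    # step S637
```

Summing the evaluation function over several prepared combinations corresponds to the variant in which the coefficients minimizing the evaluation function across a plurality of combinations are output.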
- the value of the target contrast signal JS is [Yin]
- the value of the visual processing signal KS is [Yout]
- the value of the Cb component of the input signal IS is [CBin]
- the value of the Cr component of the input signal IS is [CRin]
- the value of the Cb component of the output signal OS is [CBout]
- the value of the Cr component of the output signal OS is [CRout].
- [Yin], [Yout], [CBin], [CRin], [CBout], and [CRout] may represent the values of other signals.
- the target contrast conversion unit 601 performs processing on each component of the input signal IS.
- the processed RGB color space signal is converted to a YCbCr color space signal, and the Y component value may be used as [Yin], the Cb component value as [CBin], and the Cr component value as [CRin].
- When the output signal OS is a signal in the RGB color space, the derived [Yout], [CBout], and [CRout] are converted to the RGB color space, and the conversion processing by the actual contrast conversion unit 603 may be performed for each component to obtain the output signal OS.
- the color difference correction processing unit 608 may instead process each of the RGB components input to it using the ratio of the signal values before and after the processing of the conversion signal processing unit 602.
- the structure of a visual processing device 600 as a modification will be described with reference to FIG. Note that portions that perform substantially the same functions as those of the visual processing device 600 shown in FIG. 36 are assigned the same reference numerals, and descriptions thereof are omitted.
- the visual processing device 600 as a modified example includes a luminance signal generation unit 610 as a characteristic configuration.
- Each component of the input signal IS, which is a signal in the RGB color space, is converted into the target contrast signal JS, which is a signal in the RGB color space, by the target contrast conversion unit 601. Since the detailed processing has been described above, description thereof is omitted.
- the values of the respective components of the target contrast signal JS are [Rin], [Gin], and [Bin].
- the luminance signal generation unit 610 generates a luminance signal having the value [Yin] from the components of the target contrast signal JS.
- the luminance signal is obtained by adding the values of the RGB components in a certain ratio.
- the conversion signal processing unit 602 processes the luminance signal having the value [Yin] and outputs the visual processing signal KS having the value [Yout]. The detailed processing is the same as the processing in the conversion signal processing unit 602 (see FIG. 36) that outputs the visual processing signal KS from the target contrast signal JS, and thus the description thereof is omitted.
- the color difference correction processing unit 608 uses the luminance signal (value [Yin]), the visual processing signal KS (value [Yout]), and the target contrast signal JS (values [Rin], [Gin], [Bin]) to output color difference correction signals (values [Rout], [Gout], [Bout]), which are signals in the RGB color space.
- the color difference correction processing unit 608 calculates the ratio (value [Yout]/[Yin]) between the value [Yin] and the value [Yout].
- the calculated ratio is multiplied, as a color difference correction coefficient, by each component of the target contrast signal JS (values [Rin], [Gin], [Bin]). Accordingly, the color difference correction signals (values [Rout], [Gout], [Bout]) are output.
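The modified correction path — one luminance signal for the conversion signal processing unit 602, then a single ratio applied to all three RGB components — can be sketched as follows. The Rec. 601 luma weights stand in for the unspecified "certain ratio":

```python
def luminance(r, g, b, w=(0.299, 0.587, 0.114)):
    # The text only says the RGB components are added "in a certain
    # ratio"; Rec. 601 luma weights are a typical assumption.
    return w[0] * r + w[1] * g + w[2] * b

def correct_rgb(y_in, y_out, r_in, g_in, b_in):
    """Color difference correction of the modified example: multiply
    each component of the target contrast signal JS by the ratio
    [Yout]/[Yin] of the luminance after and before visual processing."""
    k = y_out / y_in
    return k * r_in, k * g_in, k * b_in
```

Scaling all three components by the same factor preserves their mutual ratios, which is how the hue of the input signal IS is kept while only the luminance is visually processed.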
- the actual contrast conversion unit 603 converts each component of the color difference correction signal, which is a signal in the RGB color space, into the output signal OS, which is also a signal in the RGB color space. Since the detailed processing has been described above, the description thereof is omitted.
- the processing in the conversion signal processing unit 602 is performed only on the luminance signal, and need not be performed for each of the RGB components. This reduces the load of visual processing on an input signal IS in the RGB color space.
- the visual processing unit 623 shown in FIG. 24 may be formed by a two-dimensional LUT.
- the two-dimensional LUT stores the values of the visual processing signal KS with respect to the values of the target contrast signal JS and the values of the unsharp signal US. More specifically, the values of the visual processing signal KS are determined based on "Equation M2" described in (2) <<Second profile data>> of <Profile data> in [First Embodiment]. Note that, in Equation M2, the value of the target contrast signal JS is used as the value A, and the value of the unsharp signal US is used as the value B.
- the visual processing device 600 includes a plurality of such two-dimensional LUTs in a storage device (not shown).
- the storage device may be incorporated in the visual processing device 600, or may be connected to the outside via a wired or wireless connection.
- Each two-dimensional LUT stored in the storage device is associated with a value of the target contrast C1 and a value of the actual contrast C2. That is, for each combination of the value of the target contrast C1 and the value of the actual contrast C2, the same calculation as that described in <<Operation of the conversion signal processing unit 602>> of <Conversion signal processing unit 602> in [Second Embodiment] is performed, and the result is stored as a two-dimensional LUT.
- When the visual processing unit 623 obtains the values of the target contrast C1 and the actual contrast C2, it reads, from among the two-dimensional LUTs stored in the storage device, the two-dimensional LUT associated with the obtained values. Furthermore, the visual processing unit 623 performs visual processing using the read two-dimensional LUT. Specifically, the visual processing unit 623 acquires the value of the target contrast signal JS and the value of the unsharp signal US, reads the value of the visual processing signal KS corresponding to the acquired values from the two-dimensional LUT, and outputs the visual processing signal KS.
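The selection-then-lookup behavior of the visual processing unit 623 can be sketched as below; the LUT contents and the quantization of the JS and US values to small integer indices are purely illustrative:

```python
# One 2D LUT per (target contrast C1, actual contrast C2) combination;
# each LUT maps (JS index, US index) -> KS value.  Values are made up.
LUTS = {
    (4.0, 2.0): [[0, 10], [20, 40]],
    (4.0, 1.0): [[0, 5], [10, 20]],
}

def visual_process(c1, c2, js_index, us_index, luts=LUTS):
    # Read the 2D LUT associated with the acquired C1 and C2 values,
    # then look up the value of the visual processing signal KS.
    lut = luts[(c1, c2)]
    return lut[js_index][us_index]
```

Because every (C1, C2) pair has its own precomputed table, changing the viewing environment only swaps which table is read; the per-pixel work stays a single lookup.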
- The visual processing device is a device that performs visual processing of images, built into or connected to a device that handles images, such as a computer, television, digital camera, mobile phone, PDA, printer, or scanner, and is realized as an integrated circuit such as an LSI.
- each functional block of the above embodiment may be individually made into one chip, or may be made into one chip so as to include a part or all of them.
- Although the term LSI is used here, depending on the degree of integration, it may also be called an IC, a system LSI, a super LSI, or an ultra LSI.
- the method of circuit integration is not limited to LSI, but may be realized by a dedicated circuit or a general-purpose processor.
- An FPGA (Field Programmable Gate Array) may be used.
- a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured may be used.
- the processing of each block of each visual processing device described in the first embodiment and the second embodiment is performed by, for example, a central processing unit (CPU) included in the visual processing device.
- a program for performing each processing is stored in a storage device such as a hard disk or ROM, and is read into the ROM or RAM and executed.
- the two-dimensional LUT 4 is stored in a storage device such as a hard disk or ROM, and is referred to as necessary. Furthermore, the visual processing unit 3 receives profile data from the profile data registration device 8, which is connected to the visual processing device 1 directly or indirectly via a network, and registers it as the two-dimensional LUT 4.
- the visual processing device may be a device that performs gradation processing of an image for each frame (for each field) built in or connected to a device that handles moving images.
- the visual processing method described in the first embodiment is executed.
- the visual processing program is a program that causes a device such as a computer, television, digital camera, mobile phone, PDA, printer, or scanner to execute the visual processing described above; it is stored in a storage device such as a hard disk or ROM, and is provided, for example, via a recording medium such as a CD-ROM or via a network.
- the visual processing devices described in the first embodiment and the second embodiment can also be expressed by the configurations shown in FIGS. 40 to 43.
- FIG. 40 is a block diagram showing the configuration of a visual processing device 910 having the same function as, for example, the visual processing device 525 shown in FIG. 7.
- the sensor 911 and the user input unit 912 have the same functions as the input device 527 (see FIG. 7). More specifically, the sensor 911 is a sensor that detects the ambient light in the environment where the visual processing device 910 is installed or in the environment where the output signal OS of the visual processing device 910 is displayed, and outputs the detected value as the parameter P1 representing ambient light.
- the user input unit 912 is a device that allows the user to set the intensity of the ambient light stepwise, for example "strong, medium, weak", or steplessly (continuously), and outputs the set value as the parameter P1 representing ambient light.
- the output unit 914 has the same function as the profile data registration unit 526 (see FIG. 7).
- the output unit 914 holds a plurality of profile data associated with values of the parameter P1 representing ambient light.
- the profile data is data in a table format that gives the value of the output signal OS for the input signal IS and a signal obtained by spatially processing the input signal IS.
- the output unit 914 outputs the profile data corresponding to the acquired value of the parameter P1 representing ambient light to the conversion unit 915 as the brightness adjustment parameter P2.
- the conversion unit 915 has the same functions as the spatial processing unit 2 and the visual processing unit 3 (see FIG. 7).
- the conversion unit 915 receives as inputs the luminance of the target pixel to be visually processed, the luminance of the peripheral pixels located around the target pixel, and the brightness adjustment parameter P2, and outputs the output signal OS.
- the conversion unit 915 performs spatial processing on the target pixel and the peripheral pixels. Further, the conversion unit 915 reads the value of the output signal OS corresponding to the target pixel and the result of the spatial processing from the table-format brightness adjustment parameter P2, and outputs it as the output signal OS.
- the brightness adjustment parameter P2 is not limited to the profile data described above.
- the brightness adjustment parameter P2 may be coefficient matrix data used when calculating the value of the output signal OS from the luminance of the target pixel and the luminance of the peripheral pixels.
- the coefficient matrix data is data storing the coefficient part of the function used when calculating the value of the output signal OS from the luminance of the target pixel and the luminance of the peripheral pixels.
- the output unit 914 does not need to hold profile data or coefficient matrix data for all values of the parameter P1 representing ambient light.
- appropriate profile data or the like may be generated by interpolating or extrapolating the held profile data or the like according to the acquired parameter P1 representing ambient light.
- FIG. 41 is a block diagram showing a configuration of a visual processing device 920 that performs the same function as the visual processing device 600 shown in FIG. 24, for example.
- the output unit 921 further acquires an external parameter P3 in addition to the parameter P1 representing ambient light, and outputs the brightness adjustment parameter P2 based on the parameter P1 representing ambient light and the external parameter P3.
- the parameter P 1 representing ambient light is the same as that described in (1) above.
- the external parameter P3 is a parameter representing, for example, the visual effect desired by the user who views the output signal OS. More specifically, it is a value such as the contrast required by the user viewing the image (target contrast).
- the external parameter P3 is set, for example, by the target contrast setting unit 604 (see FIG. 24).
- When no external parameter P3 is given, a default value stored in advance in the output unit 921 is used.
- the output unit 921 calculates the actual contrast value from the parameter P1 representing ambient light according to the configurations shown in FIGS. 33 and 34, and outputs it as the brightness adjustment parameter P2.
- the output unit 921 also outputs the external parameter P3 (target contrast) as the brightness adjustment parameter P2.
- Alternatively, the output unit 921 stores a plurality of profile data in the two-dimensional LUT format described in [Second Embodiment] <Modification> (vii), selects profile data based on the external parameter P3 and the actual contrast calculated from the parameter P1 representing ambient light, and outputs the table data as the brightness adjustment parameter P2.
- the conversion unit 922 has the same functions as the target contrast conversion unit 601, the conversion signal processing unit 602, and the actual contrast conversion unit 603 (see FIG. 24 above). More specifically, the input signal IS (the luminance of the target pixel and the luminance of the peripheral pixels) and the brightness adjustment parameter P2 are input to the conversion unit 922, and the output signal OS is output. For example, the input signal IS is converted into the target contrast signal JS (see FIG. 24) using the target contrast acquired as the brightness adjustment parameter P2. Furthermore, the target contrast signal JS is spatially processed to derive the unsharp signal US (see FIG. 24).
- the conversion unit 922 includes the visual processing unit 623 described as the modification in [Second Embodiment] <Modification> (vii), and outputs the visual processing signal KS (see FIG. 24) from the profile data obtained as the brightness adjustment parameter P2, the target contrast signal JS, and the unsharp signal US. Further, the visual processing signal KS is converted into the output signal OS using the actual contrast acquired as the brightness adjustment parameter P2.
- With this visual processing device 920, it becomes possible to select the profile data used for visual processing based on the external parameter P3 and the parameter P1 representing ambient light, and to correct the influence of the ambient light.
- the local contrast can be improved even in an environment where ambient light is present, and the output signal OS can be brought closer to the contrast preferred by the user.
- the configuration described in (1) and the configuration described in (2) can be switched and used as necessary. The switching may be performed using an external switching signal, or it may be determined which configuration is used depending on whether or not the external parameter P3 exists.
- Instead of the actual contrast being calculated by the output unit 921, the actual contrast value may be directly input to the output unit 921.
- the visual processing device 920' shown in FIG. 42 differs from the visual processing device 920 shown in FIG. 41 in that it includes an adjustment unit 925 that moderates the temporal change of the parameter P1 representing ambient light.
- the adjustment unit 925 takes the parameter P1 representing ambient light as its input and produces the adjusted output P4.
- As a result, the output unit 921 can acquire the parameter P1 representing ambient light without abrupt changes, and consequently the temporal change of the output of the output unit 921 also becomes gentle.
- the adjustment unit 925 is realized by an IIR filter, for example.
- k1 and k2 are parameters that take positive values, [P1] is the value of the parameter P1 representing ambient light, and [P4]' is the delayed output of the output P4 of the adjustment unit 925 (for example, the previous output).
- the processing in adjustment section 925 may be performed using a configuration other than the IIR filter.
- the adjustment unit 925 may instead be provided on the output side of the output unit 921 and directly moderate the temporal change of the brightness adjustment parameter P2, as in the visual processing device 920'' shown in FIG. 43.
- the histogram creation unit 302 creates a gradation conversion curve Cm from the brightness histogram Hm of the pixels in the image region Sm. In order to create a gradation conversion curve Cm that is more appropriate for the image region Sm, it is necessary to capture the shading tendency of the whole image, from dark portions to bright portions (highlights), and therefore to refer to more pixels. For this reason, each image region Sm cannot be made very small; that is, the division number n of the original image cannot be made too large. Although the division number n varies with the content of the image, empirically a division number of 4 to 16 is used.
- Since each image region Sm cannot be made too small, the following problems may occur in the output signal OS after gradation processing.
- When gradation processing is performed using one gradation conversion curve Cm for each image region Sm, the boundary of each image region Sm may stand out unnaturally, or a pseudo-contour may occur within an image region Sm.
- Since the number of divisions is at most 4 to 16, each image region Sm is large; when the picture differs greatly between image regions, the change in shading between the regions is large, and it is difficult to prevent the occurrence of pseudo-contours.
- the shade changes extremely depending on the positional relationship between the image (for example, an object in the image) and the image area Sm.
- the visual processing device 101 is a device that performs gradation processing of an image by being incorporated in or connected to a device that handles images, such as a computer, a television, a digital camera, a mobile phone, and a PDA.
- the visual processing device 101 is characterized in that gradation processing is performed on each of the image regions finely divided as compared with the conventional art.
- FIG. 44 is a block diagram illustrating the structure of the visual processing device 101.
- the visual processing device 101 includes an image dividing unit 102 that divides an original image input as an input signal IS into a plurality of image regions Pm (1 ≤ m ≤ n, where n is the number of divisions of the original image), a gradation conversion curve deriving unit 110 that derives a gradation conversion curve Cm for each image region Pm, and a gradation processing unit 105 that loads the gradation conversion curve Cm, performs gradation processing on each image area Pm, and outputs the result as an output signal OS.
- the gradation conversion curve deriving unit 110 includes a histogram creation unit 103 that creates a brightness histogram Hm of the pixels in a wide-area image region Em composed of each image region Pm and the image regions around it, and a gradation curve creation unit 104 that creates a gradation conversion curve Cm for each image area Pm from the created brightness histogram Hm.
- the image dividing unit 102 divides the original image input as the input signal IS into a plurality (n) of image areas Pm (see FIG. 45).
- the number of divisions of the original image is larger than that of the conventional visual processing device 300 shown in FIG. 104 (for example, 4 to 16 divisions); for example, 80 divisions in the horizontal direction and 60 divisions in the vertical direction, i.e. 4800 divisions in total.
- the histogram creation unit 103 creates a brightness histogram Hm of the wide-area image region Em for each image region Pm.
- the wide-area image area Em is a set of a plurality of image areas including the image area Pm; for example, a set of 25 image areas of 5 blocks in the vertical direction and 5 blocks in the horizontal direction centered on the image area Pm. Depending on the position of the image area Pm, it may not be possible to take a wide-area image area Em of 5 × 5 blocks around the image area Pm.
- for example, for an image area Pl located near the edge of the original image, a wide-area image area El of 5 blocks in the vertical direction and 5 blocks in the horizontal direction cannot be taken around the image area Pl. In this case, the region where the 5 × 5 block region centered on the image region Pl overlaps the original image is adopted as the wide-area image region El.
- the brightness histogram Hm created by the histogram creation unit 103 shows the distribution of brightness values of all pixels in the wide-area image area Em. That is, in the brightness histogram Hm shown in FIGS. 46 (a) to 46 (c), the horizontal axis indicates the brightness level of the input signal IS, and the vertical axis indicates the number of pixels.
- the gradation curve creation unit 104 accumulates the “number of pixels” in the brightness histogram Hm of the wide area image area Em in the order of brightness, and uses this accumulated curve as the gradation conversion curve Cm of the image area Pm (see FIG. 47).
- the horizontal axis indicates the brightness value of the pixels in the image area Pm in the input signal IS, and the vertical axis indicates the brightness value of the pixels in the image area Pm in the output signal OS.
- the gradation processing unit 105 loads the gradation conversion curve Cm, and converts the brightness value of the pixel in the image area Pm in the input signal IS based on the gradation conversion curve Cm.
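The per-block flow described above (brightness histogram of the wide-area region Em, accumulated into a tone conversion curve Cm, then applied to the pixels of Pm) can be sketched as follows; the bin count and the normalization are illustrative assumptions, not values taken from the patent:

```python
def tone_curve(region_pixels, bins=64):
    # Brightness histogram Hm of the pixels in the wide-area region Em
    hist = [0] * bins
    for v in region_pixels:          # v: brightness in [0.0, 1.0]
        hist[min(int(v * bins), bins - 1)] += 1
    # Accumulate the pixel counts in order of brightness; the normalized
    # cumulative curve serves as the gradation conversion curve Cm
    total, cum = 0, []
    for count in hist:
        total += count
        cum.append(total)
    return [c / total for c in cum]

def convert(curve, v):
    # Convert one pixel of the image area Pm with the curve Cm
    return curve[min(int(v * len(curve)), len(curve) - 1)]
```

Per the text, each pixel of an image area Pm is converted with the curve derived from its own wide-area region Em, not from Pm alone.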
- FIG. 48 shows a flowchart for explaining the visual processing method in the visual processing device 101.
- the visual processing method shown in FIG. 48 is realized by hardware in the visual processing device 101, and performs gradation processing of the input signal IS (see FIG. 44).
- the input signal IS is processed in units of images (steps S110 to S116).
- the original image input as the input signal IS is divided into a plurality of image areas Pm (1 ≤ m ≤ n, where n is the number of divisions of the original image) (step S111), and each image area Pm is processed (steps S112 to S115).
- a brightness histogram Hm of the pixels in the wide-area image area Em composed of each image area Pm and the image areas around it is created (step S112). Further, a gradation conversion curve Cm for each image region Pm is created based on the brightness histogram Hm (step S113). Here, the description of the brightness histogram Hm and the gradation conversion curve Cm is omitted (see the <Action> section above). Using the created gradation conversion curve Cm, gradation processing is performed on the pixels in the image area Pm (step S114).
- in step S115, it is determined whether or not the processing for all the image areas Pm has been completed, and the processing of steps S112 to S115 is repeated until it is determined that the processing has been completed. The processing for the image is then complete (step S116).
- each step of the visual processing method shown in FIG. 48 may be realized as a visual processing program by a computer or the like.
- a tone conversion curve Cm is created for each image area Pm. Therefore, it is possible to perform appropriate gradation processing as compared with the case where the same gradation conversion is performed on the entire original image.
- the gradation conversion curve Cm created for each image area Pm is created based on the brightness histogram Hm of the wide-area image area Em. For this reason, even if the size of each image area Pm is small, it is possible to sample sufficient brightness values. As a result, an appropriate gradation conversion curve Cm can be created even for a small image area Pm.
- each image area Pm is smaller than before. For this reason, it is possible to suppress the occurrence of pseudo contours in the image area Pm.
- the number of divisions of the original image is 4800 in the above description, but the effect of the present invention is not limited to this case, and the same effect can be obtained with other division numbers.
- the processing amount of gradation processing and the visual effect are in a trade-off relationship with respect to the number of divisions. In other words, if the number of divisions is increased, the amount of gradation processing increases, but a better visual effect (for example, suppression of pseudo contours) can be obtained.
- the number of image areas constituting the wide-area image area is 25 in the above description, but the effect of the present invention is not limited to this case, and the same effect can be obtained with other numbers.
- a visual processing device 111 as a fifth embodiment of the present invention will be described with reference to FIGS.
- the visual processing device 111 is a device that performs gradation processing of an image by being incorporated in or connected to a device that handles images, such as a computer, a television, a digital camera, a mobile phone, and a PDA.
- the visual processing device 111 is characterized in that it uses a plurality of gradation conversion curves stored in advance as a LUT.
- FIG. 49 is a block diagram illustrating the structure of the visual processing device 111.
- the visual processing device 111 includes an image dividing unit 112, a selection signal deriving unit 113, and a gradation processing unit 120.
- the image dividing unit 112 receives the input signal IS and outputs the image areas Pm (1 ≤ m ≤ n, where n is the number of divisions of the original image) obtained by dividing the original image input as the input signal IS.
- the selection signal deriving unit 113 outputs a selection signal Sm for selecting the gradation conversion curve Cm applied to the gradation processing of each image region Pm.
- the gradation processing unit 120 includes a gradation processing execution unit 114 and a gradation correction unit 115.
- the gradation processing execution unit 114 has a plurality of gradation conversion curve candidates G1 to Gp (p is the number of candidates) as a two-dimensional LUT; it receives the input signal IS and the selection signal Sm as input, and outputs a gradation processing signal CS obtained by performing gradation processing on the pixels in each image region Pm.
- the gradation correction unit 115 receives the gradation processing signal CS and outputs an output signal OS obtained by correcting the gradation of the gradation processing signal CS.
- the gradation conversion curve candidates G1 to Gp will be described with reference to FIG. 50.
- the gradation conversion curve candidates G1 to Gp are curves that give the relationship between the brightness value of the pixels of the input signal IS and the brightness value of the pixels of the gradation processing signal CS.
- the horizontal axis represents the brightness value of the pixel in the input signal IS
- the vertical axis represents the brightness value of the pixel in the gradation processing signal CS.
- the tone conversion curve candidates G1 to Gp are in a monotonically decreasing relationship with respect to the subscript, and satisfy G1 ≥ G2 ≥ … ≥ Gp for the brightness values of the pixels of all input signals IS.
- the tone conversion curve candidates G1 to Gp are power functions with the pixel brightness value of the input signal IS as a variable; expressed as Gm(x) = x^(δm) (1 ≤ m ≤ p, where x is a variable and δm is a constant), the relationship δ1 ≤ δ2 ≤ … ≤ δp is satisfied.
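As a sketch of such a family: with the brightness normalized to [0.0, 1.0], exponents δ1 ≤ δ2 ≤ … ≤ δp give G1 ≥ G2 ≥ … ≥ Gp pointwise. The concrete exponent range below is an illustrative assumption, not taken from the patent:

```python
def make_candidates(p, delta_min=0.4, delta_max=2.5):
    # Gm(x) = x ** delta_m with delta_1 <= delta_2 <= ... <= delta_p;
    # since x lies in [0.0, 1.0], a larger exponent gives a smaller
    # output, so G1 >= G2 >= ... >= Gp holds for every brightness x
    step = (delta_max - delta_min) / (p - 1)
    return [lambda x, d=delta_min + m * step: x ** d for m in range(p)]
```

All curves pass through (0, 0) and (1, 1), which is what makes them usable as gradation conversion curves over the normalized range.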
- the lightness value of the input signal IS is in the range of values [0.0 to 1.0].
- the gradation processing execution unit 114 has the gradation conversion curve candidates G1 to Gp as a two-dimensional LUT. That is, the two-dimensional LUT is a lookup table (LUT) that gives the pixel brightness value of the gradation processing signal CS with respect to the pixel brightness value of the input signal IS and the selection signal Sm for selecting the gradation conversion curve candidates G1 to Gp.
- Figure 51 shows an example of this two-dimensional LUT.
- the two-dimensional LUT 141 shown in FIG. 51 is a 64 × 64 matrix in which the tone conversion curve candidates G1 to G64 are arranged in the row direction (horizontal direction).
- in the column direction (vertical direction) of the matrix, the pixel values of the gradation processing signal CS are lined up against, for example, the upper 6-bit value of the 10-bit pixel value of the input signal IS, that is, against the value of the input signal IS divided into 64 levels.
- the pixel value of the gradation processing signal CS has a value in the range [0.0 to 1.0] when, for example, the gradation conversion curve candidates G1 to Gp are "power functions".
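A table with the layout of FIG. 51 (candidates along one axis, a 6-bit brightness index along the other) could be precomputed as below; the candidate functions and the mid-bin sampling are assumptions for illustration:

```python
def build_2d_lut(candidates, levels=64):
    # lut[row][col]: row = upper-6-bit brightness level of the input
    # signal IS (64 steps), col = subscript of the candidate chosen by
    # the selection signal Sm; each entry samples Gm at the bin centre
    return [[g((row + 0.5) / levels) for g in candidates]
            for row in range(levels)]

def lookup(lut, pixel_10bit, sm):
    # The upper 6 bits of the 10-bit input pixel value select the row
    return lut[pixel_10bit >> 4][sm]
```

Precomputing the table moves the cost of evaluating the curves out of the per-pixel loop, which is the point of holding the candidates as a LUT rather than as formulas.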
- the image dividing unit 112 operates in substantially the same manner as the image dividing unit 102 in FIG. 44, and divides the original image input as the input signal IS into a plurality (n) of image areas Pm (see FIG. 45).
- the number of divisions of the original image is larger than that of the conventional visual processing device 300 shown in FIG. 104 (for example, 4 to 16 divisions); for example, 80 divisions in the horizontal direction and 60 divisions in the vertical direction, i.e. 4800 divisions in total.
- the selection signal deriving unit 113 selects the gradation conversion curve Cm to be applied to each image region Pm from the gradation conversion curve candidates G1 to Gp. Specifically, the selection signal deriving unit 113 calculates the average brightness value of the wide-area image area Em of the image area Pm, and selects one of the tone conversion curve candidates G1 to Gp according to the calculated average brightness value. In other words, the tone conversion curve candidates G1 to Gp are associated with the average brightness value of the wide-area image region Em, and the larger the average brightness value, the larger the subscript of the selected candidate.
- the wide-area image region Em is the same as that described with reference to FIG. 45 in the [Fourth Embodiment]. That is, the wide-area image area Em is a set of a plurality of image areas including each image area Pm; for example, a set of 25 image areas of 5 blocks in the vertical direction and 5 blocks in the horizontal direction centered on the image area Pm. Depending on the position of the image area Pm, it may not be possible to take such a 5 × 5 wide-area image area Em around the image area Pm. For example, for an image area Pl located near the edge of the original image, a wide-area image area El of 5 × 5 blocks cannot be taken around the image area Pl. In this case, the region where the 5 × 5 block region centered on the image region Pl overlaps the original image is adopted as the wide-area image region El.
- the selection result of the selection signal deriving unit 113 is output as a selection signal Sm indicating one of the gradation conversion curve candidates G1 to Gp. More specifically, the selection signal Sm is output as the value of the subscript (1 to p) of the gradation conversion curve candidates G1 to Gp.
- the gradation processing execution unit 114 receives as input the brightness value of the pixels in the image area Pm included in the input signal IS and the selection signal Sm, and, using for example the two-dimensional LUT 141 shown in FIG. 51, outputs the brightness value of the gradation processing signal CS.
- the gradation correction unit 115 corrects the brightness value of the pixels in the image area Pm included in the gradation processing signal CS, based on the pixel position and the gradation conversion curves selected for the image area Pm and the image areas around it. For example, the gradation conversion curve Cm applied to the pixels in the image area Pm and the gradation conversion curves selected for the surrounding image areas are interpolated by the internal division ratio of the pixel position to obtain the corrected brightness value.
- the operation of the gradation correction unit 115 will be described in more detail with reference to FIG. 52, which shows the image area Po containing the pixel subject to gradation correction and the surrounding image areas.
- the position of the pixel x (brightness value [x]) subject to gradation correction internally divides the segment between the center of the image area Po and the center of the image area Pp in the ratio [i : 1-i],
- and the segment between the center of the image area Po and the center of the image area Pq in the ratio [j : 1-j].
- [Gs], [Gt], [Gu], and [Gv] are the brightness values obtained when the gradation conversion curve candidates Gs, Gt, Gu, and Gv are applied to the brightness value [x].
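Reading FIG. 52 as a bilinear blend of the four converted values with the division ratios i and j (one plausible interpretation; the exact weighting of the figure is not reproduced here), the corrected brightness value would be:

```python
def corrected_value(gs, gt, gu, gv, i, j):
    # Blend of [Gs], [Gt], [Gu], [Gv] (the value [x] passed through the
    # curves selected for the four surrounding image areas) using the
    # internal division ratios i and j of the pixel position.  This is an
    # assumed reading of FIG. 52, not the patent's verbatim formula.
    return (1 - j) * ((1 - i) * gs + i * gt) + j * ((1 - i) * gu + i * gv)
```

At i = j = 0 the pixel sits at the centre of Po and only its own curve contributes; as the pixel approaches a neighbouring region's centre, that region's curve takes over smoothly, which is what suppresses visible joints at block boundaries.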
- FIG. 53 shows a flowchart explaining the visual processing method in the visual processing device 111.
- the visual processing method shown in FIG. 53 is realized by hardware in the visual processing device 111, and performs gradation processing of the input signal IS (see FIG. 49).
- the input signal IS is processed in units of images (steps S120 to S126).
- the original image input as the input signal IS is divided into a plurality of image areas Pm (1 ≤ m ≤ n, where n is the number of divisions of the original image) (step S121), and gradation processing is performed for each image area Pm (steps S122 to S124).
- the gradation conversion curve Cm applied to each image region Pm is selected from the gradation conversion curve candidates G1 to Gp (step S122). Specifically, the average brightness value of the wide area image area Em of the image area Pm is calculated, and any one of the gradation conversion curve candidates G1 to Gp is selected according to the calculated average brightness value.
- the tone conversion curve candidates G1 to Gp are associated with the average brightness value of the wide-area image area Em; the larger the average brightness value, the larger the subscript of the selected tone conversion curve candidate G1 to Gp.
- the description of the wide-area image region Em is omitted (see the <Action> section above).
- the brightness value of the gradation processing signal CS is output using the two-dimensional LUT 141 shown in FIG. 51 (step S123). Further, it is determined whether the processing has been completed for all image regions Pm (step S124), and the processing of steps S122 to S124 is repeated until it is determined that the processing has been completed. This completes the processing for each image area.
- the brightness value of the pixels in the image area Pm included in the gradation processing signal CS is then corrected based on the pixel position and the gradation conversion curves selected for the image area Pm and the image areas around it (step S125). For example, the gradation conversion curve Cm applied to the pixels in the image area Pm and the gradation conversion curves selected for the surrounding image areas are interpolated by the internal division ratio of the pixel position to obtain the corrected brightness value. Details of the correction are omitted (see the <Action> section above and FIG. 52).
- in step S126, the processing for each image is completed.
- each step of the visual processing method shown in FIG. 53 may be realized as a visual processing program by a computer or the like.
- the gradation conversion curve Cm for each image area Pm is selected based on the average brightness value of the wide-area image area Em. For this reason, even if the size of the image area Pm is small, it is possible to sample sufficient brightness values. As a result, an appropriate gradation conversion curve Cm can be selected and applied to a small image area Pm.
- the gradation processing execution unit 114 has a two-dimensional LUT created in advance. For this reason, it is possible to reduce the processing load required for gradation processing, more specifically, the processing load required for creating the gradation conversion curve Cm. As a result, it is possible to speed up the processing required for gradation processing of the image area Pm.
- the gradation processing execution unit 114 executes gradation processing using a two-dimensional LUT.
- the two-dimensional LUT is read from a storage device such as a hard disk or ROM provided in the visual processing device 111 and used for gradation processing.
- the tone correction unit 115 corrects the tone of the pixels in the image region Pm that has been tone-processed using one tone conversion curve Cm. For this reason, it is possible to obtain an output signal OS subjected to more appropriate gradation processing. For example, it becomes possible to suppress the generation of pseudo contours. In addition, in the output signal OS, it is possible to further prevent the joints between the boundaries of the respective image areas Pm from being unnaturally conspicuous.
- the number of divisions of the original image is 4800 in the above description, but the effect of the present invention is not limited to this case, and the same effect can be obtained with other division numbers.
- the processing amount of gradation processing and the visual effect are in a trade-off relationship with respect to the number of divisions. In other words, increasing the number of divisions increases the amount of gradation processing, but makes it possible to obtain better visual effects (for example, images with suppressed pseudo contours).
- the number of image areas constituting the wide area image area is 25, but the effect of the present invention is not limited to this case, and the same effect can be obtained with other numbers. It is possible to obtain.
- in the above description, a two-dimensional LUT 141 composed of a 64 × 64 matrix was used as an example.
- however, the effect of the present invention is not limited to a two-dimensional LUT of this size.
- it may be a matrix in which more gradation conversion curve candidates are arranged in the row direction.
- the pixel values of the gradation processing signal CS corresponding to the values obtained by dividing the pixel values of the input signal IS into finer steps may be arranged in the matrix column direction.
- the pixel value of the gradation processing signal CS may be arranged for each pixel value of the input signal IS represented by 10 bits.
- if the size of the 2D LUT is increased, more appropriate gradation processing can be performed, and if the size is decreased, the memory for storing the 2D LUT can be reduced.
- the gradation processing signal CS may also be output as a matrix component linearly interpolated by the gradation processing execution unit 114 using the lower 4 bits of the pixel value of the input signal IS. That is, in the column direction of the matrix, the components for, for example, the upper 6 bits of the 10-bit pixel value of the input signal IS are arranged; the component for the upper 6-bit value of the pixel value of the input signal IS and the component for that value plus [1] (for example, the component one row below in FIG. 51) are linearly interpolated using the lower 4 bits of the pixel value of the input signal IS, and the result is output as the gradation processing signal CS.
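A sketch of this interpolation; the clamp at the final row is an assumption, since the text does not say how the last 6-bit level is handled:

```python
def interp_lookup(lut, pixel_10bit, sm):
    # Upper 6 bits pick the matrix row; the lower 4 bits linearly
    # interpolate between that row and the row "one below" (FIG. 51)
    hi, lo = pixel_10bit >> 4, pixel_10bit & 0xF
    a = lut[hi][sm]
    b = lut[min(hi + 1, len(lut) - 1)][sm]  # clamp at the last row
    return a + (b - a) * lo / 16.0
```

This recovers effectively 10-bit output precision from a 64-row table, trading a small amount of arithmetic per pixel for a much smaller LUT.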
- the gradation conversion curve Cm to be applied to the image area Pm is selected based on the average brightness value of the wide area image area Em.
- the method of selecting the gradation conversion curve Cm is not limited to this method.
- the value [Sm] of the selection signal Sm may be the average brightness value, the maximum brightness value, or the minimum brightness value of the wide area image area Em.
- tone conversion curve candidates G1 to G64 are associated with the respective values obtained by dividing the possible values of the selection signal Sm into 64 levels.
- the gradation conversion curve Cm to be applied to the image region Pm may also be selected as follows: an average brightness value is obtained for each image area Pm, and a provisional selection signal Sm′ for each image area Pm is derived from that average brightness value.
- the provisional selection signal Sm′ takes the value of a subscript of the gradation conversion curve candidates G1 to Gp. The provisional selection signals Sm′ of the image areas included in the wide-area image area Em are then averaged to obtain the value [Sm] of the selection signal Sm for the image area Pm, and the candidate G1 to Gp whose subscript is the integer closest to the value [Sm] is selected as the gradation conversion curve Cm.
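This alternative selection rule can be sketched as follows; how each block's average brightness maps to a provisional subscript is left abstract, since the text does not specify it:

```python
def select_candidate(provisional, neighbor_ids, p):
    # provisional[k]: provisional selection signal Sk' of block k, i.e.
    # the candidate subscript derived from that block's average
    # brightness.  The selection signal value [Sm] is their mean over
    # the blocks of the wide-area region Em; the candidate whose
    # subscript is the integer nearest [Sm] is chosen (clamped to 1..p).
    sm = sum(provisional[k] for k in neighbor_ids) / len(neighbor_ids)
    return max(1, min(p, round(sm)))
```

Note that Python's `round` uses round-half-to-even, which only matters when [Sm] falls exactly halfway between two subscripts.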
- the gradation conversion curve Cm to be applied to the image area Pm is selected based on the average brightness value of the wide area image area Em.
- the gradation conversion curve Cm to be applied to the image area Pm may be selected based on a weighted average instead of the simple average over the wide-area image area Em.
- for example, as shown in FIG. 54, the average brightness value of each image area constituting the wide-area image area Em is obtained, and image areas Ps1, Ps2, … whose average brightness values differ greatly from that of the image area Pm are given reduced weight, or are excluded, when the average brightness value of the wide-area image area Em is obtained.
- in this way, even when the wide-area image area Em contains an area with a peculiar brightness (for example, when the wide-area image area Em contains a boundary between two objects with different brightness values), the tone conversion curve Cm applied to the image area Pm is less affected by the brightness value of that peculiar area, and more appropriate tone processing is performed.
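A minimal sketch of this outlier-suppressing average; the hard threshold stands in for the text's "reduce or exclude weights" and its value is purely an assumption:

```python
def robust_region_mean(block_means, center_mean, threshold=0.3):
    # block_means: average brightness of each image area in Em;
    # center_mean: average brightness of the image area Pm itself.
    # Blocks such as Ps1, Ps2, ... whose averages differ strongly from
    # Pm's (e.g. across an object boundary) are excluded from the mean.
    kept = [m for m in block_means if abs(m - center_mean) <= threshold]
    return sum(kept) / len(kept) if kept else center_mean
```

A soft variant would instead weight each block by a decreasing function of its brightness difference from Pm, which matches the "reduced weight" wording more literally.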
- the presence of the gradation correction unit 1 15 may be arbitrary.
- even when the gradation processing signal CS is output as is, the same effects as those described in <Effects> of the [Fourth Embodiment] and <Effects> of the [Fifth Embodiment] can be obtained in comparison with the conventional visual processing device 300 (see FIG. 104).
- in the above description, the gradation conversion curve candidates G1 to Gp were explained as being in a monotonically decreasing relationship with respect to the subscript, satisfying G1 ≥ G2 ≥ … ≥ Gp for the brightness values of the pixels of all input signals IS.
- however, the gradation conversion curve candidates G1 to Gp included in the two-dimensional LUT need not satisfy the relationship G1 ≥ G2 ≥ … ≥ Gp for part of the brightness values of the pixels of the input signal IS; that is, some of the gradation conversion curve candidates G1 to Gp may cross each other.
- the value stored in the two-dimensional LUT may be arbitrary in a part where the value after gradation processing has little influence on the image quality.
- nevertheless, it is desirable that the stored values maintain a monotonically increasing or monotonically decreasing relationship with respect to the values of the input signal IS and the selection signal Sm.
- the gradation conversion curve candidates G1 to Gp included in the two-dimensional LUT have been described as "power functions".
- the tone conversion curve candidates G1 to Gp do not have to be strictly formulated as "power functions"; they may be functions having any shape, such as an S-shape or an inverse S-shape.
- the visual processing device 111 may further include a profile data creation unit that creates the profile data, that is, the values stored in the two-dimensional LUT.
- the profile data creation unit is composed of the image dividing unit 102 and the gradation conversion curve deriving unit 110 of the visual processing device 101 (see FIG. 44), and stores the plurality of gradation conversion curves it creates in the two-dimensional LUT as profile data. Each gradation conversion curve stored in the two-dimensional LUT may be associated with the spatially processed input signal IS.
- the image dividing unit 112 and the selection signal deriving unit 113 may be replaced with a spatial processing unit that spatially processes the input signal IS.
- the lightness value of the pixels of the input signal IS need not be a value in the range [0.0 to 1.0]; a value in another range may be normalized to [0.0 to 1.0].
- each of the gradation conversion curve candidates G1 to Gp may be a curve that performs gradation processing on an input signal IS having a dynamic range wider than the normal dynamic range and outputs a gradation processing signal CS of the normal dynamic range.
- in this case, the input signal IS has a dynamic range wider than the normal dynamic range (for example, a signal whose values exceed the range [0.0 to 1.0]).
- therefore, gradation conversion curves are used that output a gradation processing signal CS with values in [0.0 to 1.0] even for an input signal IS in a range exceeding [0.0 to 1.0].
- the pixel value of the gradation processing signal CS has a range of values [0.0 to 1.0] when, for example, the tone conversion curve candidates G1 to Gp are "power functions".
- however, the pixel value of the gradation processing signal CS is not limited to this range; for example, for an input signal IS with values in [0.0 to 1.0], the tone conversion curve candidates G1 to Gp may perform dynamic range compression.
- in the above description, the gradation processing execution unit 114 has the gradation conversion curve candidates G1 to Gp as a two-dimensional LUT.
- the gradation processing execution unit 114 may instead have a one-dimensional LUT that stores the relationship between the curve parameters for specifying the gradation conversion curve candidates G1 to Gp and the selection signal Sm.
- FIG. 56 is a block diagram for explaining the structure of a gradation processing execution unit 144 as a modification of the gradation processing execution unit 114.
- the gradation processing execution unit 144 receives the input signal IS and the selection signal Sm as inputs, and outputs a gradation processing signal CS, which is the gradation-processed input signal IS.
- the gradation processing execution unit 144 includes a curve parameter output unit 145 and a calculation unit 148.
- the curve parameter output unit 145 includes a first LUT 146 and a second LUT 147.
- the first LUT 146 and the second LUT 147 receive the selection signal Sm and output the curve parameters P1 and P2 of the gradation conversion curve candidate Gm specified by the selection signal Sm, respectively.
- the calculation unit 148 receives the curve parameters P1 and P2 and the input signal IS, and outputs the gradation processing signal CS.
- the first LUT 146 and the second LUT 147 are one-dimensional LUTs that store the values of the curve parameters P1 and P2 for the selection signal Sm, respectively.
- before describing the details of the first LUT 146 and the second LUT 147, the contents of the curve parameters P1 and P2 will be described.
- FIG. 57 shows tone conversion curve candidates G1 to Gp.
- the tone conversion curve candidates G1 to Gp are monotonically decreasing with respect to the subscript, and satisfy G1 ≥ G2 ≥ … ≥ Gp for the brightness values of the pixels of all input signals IS.
- however, the above relationship between the tone conversion curve candidates G1 to Gp need not hold where the input signal IS is small for a candidate with a large subscript, or where the input signal IS is large for a candidate with a small subscript.
- the curve parameters P1 and P2 are output as values of the gradation processing signal CS for predetermined values of the input signal IS. That is, when the tone conversion curve candidate Gm is specified by the selection signal Sm, the value of the curve parameter P1 is output as the value [R1m] of the candidate Gm for the predetermined value [X1] of the input signal IS,
- and the value of the curve parameter P2 is output as the value [R2m] of the candidate Gm for the predetermined value [X2] of the input signal IS.
- the value [X2] is larger than the value [X1].
- the first LUT 146 and the second LUT 147 store the values of the curve parameters P1 and P2 for the selection signal Sm, respectively. More specifically, for example, for each selection signal Sm given as a 6-bit signal, the values of the curve parameters P1 and P2 are each given in 6 bits.
- the number of bits secured for the selection signal S m and the curve parameters P 1 and P 2 is not limited to this.
- FIG. 58 shows changes in the values of the curve parameters P1 and P2 with respect to the selection signal Sm.
- the first LUT 146 and the second LUT 147 store the values of the curve parameters P1 and P2 for each selection signal Sm.
- for example, the value [R1m] is stored as the value of the curve parameter P1 for the selection signal Sm, and the value [R2m] is stored as the value of the curve parameter P2.
- the calculation unit 148 derives the gradation processing signal CS for the input signal IS based on the acquired curve parameters P1 and P2 (the values [R1m] and [R2m]). The specific procedure is described below. Here, it is assumed that the value of the input signal IS is given in the range [0.0, 1.0], and that the gradation conversion curve candidates G1 to Gp convert an input signal IS given in the range [0.0, 1.0] into the range [0.0, 1.0]. The present invention can also be applied when the input signal IS is not limited to this range.
- the calculation unit 148 compares the value of the input signal IS with the predetermined values [X1] and [X2].
- when the value of the input signal IS (the value [X]) is not less than [0.0] and less than [X1], the gradation processing signal CS is derived using the line segment connecting the origin and the coordinates ([X1], [R1m]); the other ranges of the input signal IS are handled similarly.
- in this manner, the calculation unit 148 derives the gradation processing signal CS for the input signal IS.
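The derivation above can be sketched as a piecewise-linear evaluation. Approximating the candidate Gm by three line segments through the origin, ([X1], [R1m]), ([X2], [R2m]) and (1.0, 1.0) is an assumption for illustration; the function name is hypothetical.

```python
def derive_cs(x, x1, x2, r1, r2):
    """Sketch of the calculation unit 148: derive the gradation processing
    signal CS for input value x from the curve parameters P1 = r1 = [R1m]
    and P2 = r2 = [R2m], with Gm approximated by line segments."""
    if x < x1:                      # segment from the origin to (x1, r1)
        return r1 * x / x1
    if x < x2:                      # segment from (x1, r1) to (x2, r2)
        return r1 + (r2 - r1) * (x - x1) / (x2 - x1)
    # segment from (x2, r2) to (1.0, 1.0) -- endpoint assumed
    return r2 + (1.0 - r2) * (x - x2) / (1.0 - x2)
```

For example, with x1 = 0.25, x2 = 0.75, r1 = 0.4 and r2 = 0.8, the derived curve passes exactly through both stored parameter points.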
- the above processing may be executed by a computer or the like as a gradation processing program.
- the gradation processing program is a program for causing a computer to execute the gradation processing method described below.
- the gradation processing method is a method of obtaining the input signal IS and the selection signal Sm and outputting the gradation processing signal CS, and is characterized in that the input signal IS is gradation processed using one-dimensional LUTs.
- the curve parameters P1 and P2 are output from the first LUT 146 and the second LUT 147. Detailed descriptions of the first LUT 146, the second LUT 147, and the curve parameters P1 and P2 are omitted.
- gradation processing of the input signal IS is performed based on the curve parameters P1 and P2. The details of the gradation processing are omitted because they are described in the explanation of the calculation unit 148.
- the gradation processing signal CS with respect to the input signal IS is derived.
- the gradation processing execution unit 144, as a modification of the gradation processing execution unit 114, includes two one-dimensional LUTs instead of a two-dimensional LUT. For this reason, it is possible to reduce the storage capacity required for storing the lookup tables.
- in the above description, the values of the curve parameters P1 and P2 are the values of the tone conversion curve candidate Gm at predetermined values of the input signal IS.
- the curve parameters P1 and P2 may instead be other curve parameters of the tone conversion curve candidate Gm.
- the curve parameter may be the slope of the tone conversion curve candidate Gm. This will be specifically described with reference to FIG.
- here, the value of the curve parameter P1 is the slope value [K1m] of the tone conversion curve candidate Gm in the predetermined range [0.0 to X1] of the input signal IS, and the value of the curve parameter P2 is the slope value [K2m] of the tone conversion curve candidate Gm in the predetermined range [X1 to X2] of the input signal IS.
- FIG. 59 shows changes in the values of the curve parameters P1 and P2 with respect to the selection signal Sm.
- the first LUT 146 and the second LUT 147 store the values of the curve parameters P1 and P2 for the respective selection signals Sm.
- the value [K1m] is stored as the value of the curve parameter P1 for the selection signal Sm,
- and the value [K2m] is stored as the value of the curve parameter P2.
- the curve parameters P1 and P2 are output for the input selection signal Sm.
- the calculation unit 148 derives the gradation processing signal CS for the input signal IS based on the acquired curve parameters P1 and P2. The specific procedure is described below. First, the calculation unit 148 compares the value of the input signal IS with the predetermined values [X1] and [X2].
- when the value of the input signal IS (the value [X]) is not less than [0.0] and less than [X1], the gradation processing signal CS is derived using the line segment connecting the origin and the coordinates ([X1], [K1m] × [X1] (hereinafter referred to as [Y1])); the other ranges of the input signal IS are handled similarly.
- in this manner, the calculation unit 148 derives the gradation processing signal CS for the input signal IS.
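The slope-parameter variant can be sketched the same way. Here P1 = [K1m] is the slope on [0.0, X1] and P2 = [K2m] the slope on [X1, X2]; the final segment running to (1.0, 1.0) beyond [X2] is an assumption, and the function name is hypothetical.

```python
def derive_cs_from_slopes(x, x1, x2, k1, k2):
    """Sketch: derive the gradation processing signal CS for input value x
    from the slope parameters P1 = k1 = [K1m] and P2 = k2 = [K2m]."""
    y1 = k1 * x1                    # Gm passes through (x1, y1)
    y2 = y1 + k2 * (x2 - x1)        # and through (x2, y2)
    if x < x1:                      # slope k1 from the origin
        return k1 * x
    if x < x2:                      # slope k2 from (x1, y1)
        return y1 + k2 * (x - x1)
    # segment from (x2, y2) to (1.0, 1.0) -- endpoint assumed
    return y2 + (1.0 - y2) * (x - x2) / (1.0 - x2)
```

With k1 = 1.6 and x1 = 0.25, the derived curve passes through ([X1], [Y1]) = (0.25, 0.4), matching the definition of [Y1] above.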
- the curve parameter may be a coordinate on the tone conversion curve candidate Gm. This will be specifically described with reference to FIG.
- the value of the curve parameter P1 is the value [Mm] of one component of the coordinates on the tone conversion curve candidate Gm,
- and the value of the curve parameter P2 is the value [Nm] of the other component of the coordinates on the tone conversion curve candidate Gm.
- in this case, the tone conversion curve candidates G1 to Gp are all curves that pass through the coordinates (X1, Y1).
- FIG. 61 shows changes in the values of the curve parameters P1 and P2 with respect to the selection signal Sm.
- the first LUT 146 and the second LUT 147 store the values of the curve parameters P1 and P2 for the respective selection signals Sm.
- the value [Mm] is stored as the value of the curve parameter P1 for the selection signal Sm,
- and the value [Nm] is stored as the value of the curve parameter P2.
- the first LUT 146 and the second LUT 147 described above output the curve parameters P1 and P2 for the input selection signal Sm.
- the gradation processing signal CS is derived from the input signal IS by the same processing as in the modification described above; detailed explanation is omitted.
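A hedged sketch of the coordinate-parameter variant follows. It assumes Gm is approximated by line segments running from the origin through the fixed point (X1, Y1) and the stored point ([Mm], [Nm]) to (1.0, 1.0); both the segment endpoints beyond the stored point and the function name are illustrative assumptions.

```python
def derive_cs_from_point(x, x1, y1, mm, nm):
    """Sketch: every candidate Gm passes through the fixed point (x1, y1);
    the stored curve parameters P1 = mm = [Mm] and P2 = nm = [Nm] give a
    second point on Gm. Derive CS by piecewise-linear interpolation."""
    if x < x1:                      # origin to the shared point (x1, y1)
        return y1 * x / x1
    if x < mm:                      # (x1, y1) to the stored point (mm, nm)
        return y1 + (nm - y1) * (x - x1) / (mm - x1)
    # (mm, nm) to (1.0, 1.0) -- endpoint assumed
    return nm + (1.0 - nm) * (x - mm) / (1.0 - mm)
```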
- the curve parameters P1 and P2 may be other curve parameters of the tone conversion curve candidate Gm.
- the number of curve parameters is not limited to the above; there may be fewer or more. For example, fewer parameters suffice when the tone conversion curve candidates G1 to Gp are straight line segments.
- in the above description, the curve parameter output unit 145 is composed of the first LUT 146 and the second LUT 147.
- however, the curve parameter output unit 145 need not have LUTs that store the values of the curve parameters P1 and P2 for the value of the selection signal Sm.
- in this case, the curve parameter output unit 145 calculates the values of the curve parameters P1 and P2. More specifically, the curve parameter output unit 145 stores parameters representing the graphs of the curve parameters P1 and P2 shown in FIG. 58, FIG. 59, FIG. 61, and so on.
- the curve parameter output unit 145 specifies the graphs of the curve parameters P1 and P2 from the stored parameters. Further, using the graphs of the curve parameters P1 and P2, it outputs the values of the curve parameters P1 and P2 for the selection signal Sm.
- the parameters for specifying the graphs of the curve parameters P1 and P2 are, for example, coordinates on the graph, the slope of the graph, the curvature, and the like.
- for example, the curve parameter output unit 145 stores the coordinates of two points on each graph of the curve parameters P1 and P2 shown in FIG. 58, and the straight line connecting these two points is used as the graph of the curve parameters P1 and P2.
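The two-point scheme above can be sketched as follows; the particular stored points and the helper name are illustrative assumptions.

```python
def make_param_graph(s_a, v_a, s_b, v_b):
    """Sketch: instead of a full LUT, the curve parameter output unit 145
    may store only two points (s_a, v_a) and (s_b, v_b) on the graph of a
    curve parameter, and evaluate the straight line connecting them for
    any value of the selection signal Sm."""
    def graph(sm):
        return v_a + (v_b - v_a) * (sm - s_a) / (s_b - s_a)
    return graph

# hypothetical P1 graph: stored endpoints at Sm = 0 and Sm = 63
p1_graph = make_param_graph(0, 0.9, 63, 0.2)
```

This trades the memory of a 64-entry table for a small amount of arithmetic per lookup.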
- the visual processing device 121 according to the sixth embodiment of the present invention will be described with reference to FIGS. 62 to 64.
- the visual processing device 121 is a device that performs gradation processing of an image by being incorporated in or connected to a device that handles images, such as a computer, a television, a digital camera, a mobile phone, and a PDA.
- the visual processing device 121 is characterized in that a plurality of gradation conversion curves stored in advance as a LUT are switched for each pixel subjected to gradation processing.
- FIG. 62 is a block diagram illustrating the structure of the visual processing device 121.
- the visual processing device 121 includes an image dividing unit 122, a selection signal deriving unit 123, and a gradation processing unit 130.
- the image dividing unit 122 receives the input signal IS and outputs image regions Pm (1 ≤ m ≤ n; n is the number of divisions of the original image) obtained by dividing the original image input as the input signal IS into a plurality of parts.
- the selection signal deriving unit 123 outputs a selection signal Sm for selecting the gradation conversion curve Cm for each image region Pm.
- the gradation processing unit 130 includes a selection signal correction unit 124 and a gradation processing execution unit 125.
- the selection signal correction unit 124 receives the selection signal Sm and outputs a selection signal SS for each pixel, which is a signal obtained by correcting the selection signal Sm for each image region Pm.
- the gradation processing execution unit 125 includes a plurality of gradation conversion curve candidates G1 to Gp (p is the number of candidates) as a two-dimensional LUT; it receives the input signal IS and the selection signal SS for each pixel as inputs,
- and outputs the output signal OS obtained by gradation processing for each pixel.
- the gradation conversion curve candidates G1 to Gp are substantially the same as those described with reference to FIG. 50 in the [Fifth Embodiment], and thus description thereof is omitted here.
- the tone conversion curve candidates G1 to Gp are curves that give the relationship between the brightness value of the pixel of the input signal IS and the brightness value of the pixel of the output signal OS.
- the gradation processing execution unit 125 includes the gradation conversion curve candidates G1 to Gp as a two-dimensional LUT.
- the two-dimensional LUT is a lookup table (LUT) that gives the brightness value of a pixel of the output signal OS for the brightness value of a pixel of the input signal IS and the selection signal SS that selects among the tone conversion curve candidates G1 to Gp.
- a specific example is substantially the same as that described with reference to FIG. 51 in [Fifth Embodiment], and thus the description thereof is omitted here.
- in the present embodiment, in the column direction of the matrix, for example, the pixel values of the output signal OS corresponding to the upper 6 bits of the pixel value of the input signal IS, which is represented by 10 bits, are arranged.
- the image dividing unit 122 operates in substantially the same manner as the image dividing unit 102 in FIG. 44, and divides the original image input as the input signal IS into a plurality (n) of image regions Pm (see FIG. 45).
- the number of divisions of the original image is larger than that of the conventional visual processing device 300 shown in FIG. 104 (for example, 4 to 16 divisions); for example, 80 divisions in the horizontal direction and 60 divisions in the vertical direction, i.e. 4800 divisions in total.
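The 80 × 60 division can be sketched as a simple index computation; the raster ordering of the regions Pm and the function name are illustrative assumptions.

```python
def region_index(x, y, width, height, cols=80, rows=60):
    """Sketch of the image dividing unit 122: map a pixel (x, y) of the
    original image to the index m of its image region Pm when the image
    is divided into cols x rows (e.g. 4800) regions."""
    col = min(x * cols // width, cols - 1)   # clamp guards the right edge
    row = min(y * rows // height, rows - 1)  # clamp guards the bottom edge
    return row * cols + col
```

For an 800 × 600 input this yields region indices 0 through 4799.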
- the selection signal deriving unit 123 selects a gradation conversion curve Cm from the gradation conversion curve candidates G1 to Gp for each image region Pm. Specifically, the selection signal deriving unit 123 calculates the average brightness value of the wide area image region Em of the image region Pm, and selects one of the gradation conversion curve candidates G1 to Gp according to the calculated average brightness value. In other words, the tone conversion curve candidates G1 to Gp are related to the average brightness value of the wide area image region Em, and the larger the average brightness value, the larger the subscript of the selected tone conversion curve candidate G1 to Gp.
- the wide area image region Em is the same as that described with reference to FIG. 45 in the [Fourth Embodiment]. That is, the wide area image region Em is a set of a plurality of image regions including the respective image region Pm; for example, it is a set of 25 image regions of 5 blocks in the vertical direction and 5 blocks in the horizontal direction centered on the image region Pm. Depending on the position of the image region Pm, it may not be possible to take a wide area image region Em of 5 vertical and 5 horizontal blocks around it. For example, for an image region Pl located at the periphery of the original image, it is not possible to take a wide area image region El of 5 blocks in the vertical direction and 5 blocks in the horizontal direction around Pl. In this case, the region in which the 5 × 5 block region centered on the image region Pl overlaps the original image is adopted as the wide area image region El.
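The border clipping described above can be sketched as follows, assuming the raster ordering of region indices used earlier; at a corner of the image only a 3 × 3 overlap survives.

```python
def wide_area_blocks(m, cols=80, rows=60, span=5):
    """Sketch: the set of image-region indices forming the wide area image
    region Em -- a span x span neighbourhood of regions centered on Pm,
    clipped to the original image at the borders."""
    row, col = divmod(m, cols)
    half = span // 2
    return [r * cols + c
            for r in range(max(0, row - half), min(rows, row + half + 1))
            for c in range(max(0, col - half), min(cols, col + half + 1))]
```

An interior region yields 25 blocks; the top-left region (m = 0) yields only the 9 overlapping blocks.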
- the selection result of the selection signal deriving unit 123 is output as a selection signal Sm indicating one of the gradation conversion curve candidates G1 to Gp. More specifically, the selection signal Sm is output as the value of the subscript (1 to p) of the gradation conversion curve candidates G1 to Gp.
- the selection signal correction unit 124 selects a gradation conversion curve for each pixel constituting the input signal IS by correction using the selection signals Sm output for the respective image regions Pm, and outputs the selection signal SS for each pixel. For example, the selection signal SS for a pixel included in the image region Pm is obtained by correcting the values of the selection signals output for the image region Pm and the image regions around it with the internal division ratios of the pixel position.
- FIG. 63 shows the selection signals So, Sp, Sq and Sr output for the image regions Po, Pp, Pq and Pr (o, p, q and r are division numbers not greater than n (see FIG. 45)).
- the position of the pixel x subject to gradation correction internally divides the segment between the center of the image region Po and the center of the image region Pp at [i : 1-i], and internally divides the segment between the center of the image region Po and the center of the image region Pq at [j : 1-j].
- here, [So], [Sp], [Sq] and [Sr] denote the values of the selection signals So, Sp, Sq and Sr.
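The internal-division correction can be sketched as a bilinear weighting of the four selection signal values; the exact weighting used by the selection signal correction unit 124 is not reproduced here, so this combination is an assumption.

```python
def selection_signal_ss(i, j, so, sp, sq, sr):
    """Sketch of the selection signal correction unit 124: compute the
    per-pixel selection signal SS from the values [So], [Sp], [Sq], [Sr]
    of the four surrounding image regions, weighted by the internal
    division ratios [i : 1-i] (horizontal) and [j : 1-j] (vertical)."""
    top = (1.0 - i) * so + i * sp       # interpolate between Po and Pp
    bottom = (1.0 - i) * sq + i * sr    # interpolate between Pq and Pr
    return (1.0 - j) * top + j * bottom
```

A pixel at a region center (i = j = 0) reproduces that region's own selection signal, while a pixel midway between all four centers receives their average.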
- the gradation processing execution unit 125 receives the brightness value of the pixel included in the input signal IS and the selection signal SS, and outputs the brightness value of the output signal OS using, for example, the two-dimensional LUT 141 shown in FIG. 51.
- when the value [SS] of the selection signal SS does not equal any of the subscripts (1 to p) of the tone conversion curve candidates G1 to Gp provided in the two-dimensional LUT 141, the tone conversion curve candidate G1 to Gp whose subscript is the integer nearest to the value [SS] is used for gradation processing of the input signal IS.
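The nearest-subscript selection can be sketched as follows; the row-per-candidate layout of the two-dimensional LUT and the function name are illustrative assumptions.

```python
def lookup_output(lut2d, luminance, ss):
    """Sketch of the gradation processing execution unit 125: when the value
    [SS] is not an integer subscript of the candidates G1..Gp held in the
    2-D LUT, use the candidate whose subscript is the nearest integer.
    lut2d is assumed to hold one row of output values per candidate."""
    p = len(lut2d)
    m = min(max(round(ss), 1), p)   # nearest valid subscript in 1..p
    return lut2d[m - 1][luminance]
```

Note that Python's built-in `round` resolves exact .5 ties to the even integer; a hardware implementation might round half up instead.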
- FIG. 64 shows a flowchart for explaining the visual processing method in the visual processing device 121.
- the visual processing method shown in FIG. 64 is realized by hardware in the visual processing device 121 and is a method of performing gradation processing of the input signal IS (see FIG. 62).
- the input signal IS is processed in units of images (steps S130 to S137).
- the original image input as the input signal IS is divided into a plurality of image regions Pm (1 ≤ m ≤ n, where n is the number of divisions of the original image) (step S131), and a gradation conversion curve Cm is selected for each image region Pm (steps S132 to S133).
- based on the selection signal Sm for selecting the tone conversion curve Cm for each image region Pm, a tone conversion curve is selected for each pixel of the original image, and gradation processing in units of pixels is performed (steps S134 to S136).
- the gradation conversion curve Cm is selected from the gradation conversion curve candidates G1 to Gp for each image region Pm (step S132). Specifically, the average brightness value of the wide area image region Em of the image region Pm is calculated, and one of the tone conversion curve candidates G1 to Gp is selected according to the calculated average brightness value. The tone conversion curve candidates G1 to Gp are associated with the average brightness value of the wide area image region Em, and the larger the average brightness value, the larger the subscript of the selected tone conversion curve candidate G1 to Gp. The description of the wide area image region Em is omitted here (see the section <Action> above). The selection result is output as a selection signal Sm indicating one of the gradation conversion curve candidates G1 to Gp.
- more specifically, the selection signal Sm is output as the value of the subscript (1 to p) of the gradation conversion curve candidates G1 to Gp. Further, it is determined whether or not the processing for all the image regions Pm has been completed (step S133), and the processing of steps S132 to S133 is repeated for the number of divisions of the original image until it is determined that the processing has been completed. Thus, the processing for each image region is completed.
- next, a selection signal SS for each pixel, for selecting a gradation conversion curve for each pixel constituting the input signal IS, is obtained.
- the selection signal SS for a pixel included in the image region Pm is obtained by correcting the values of the selection signals output for the image region Pm and the image regions around it with the internal division ratios of the pixel position. Detailed explanation of the correction is omitted (see the column <Action> above and FIG. 63).
- each step of the visual processing method shown in FIG. 64 may be realized as a visual processing program by a computer or the like.
- the gradation conversion curve Cm selected for each image area Pm is created based on the average brightness value of the wide area image area Em. For this reason, even if the size of the image area Pm is small, it is possible to sample sufficient brightness values. As a result, an appropriate gradation conversion curve Cm is selected even for a small image area Pm.
- the selection signal correction unit 124 outputs a selection signal SS for each pixel by correction based on the selection signal Sm output in units of image areas.
- the pixels of the original image constituting the input signal IS are subjected to gradation processing using the gradation conversion curve candidates G1 to Gp designated by the selection signal SS for each pixel. For this reason, it is possible to obtain an output signal OS subjected to more appropriate gradation processing. For example, it becomes possible to suppress the generation of pseudo contours.
- the output signal OS it is possible to further prevent the joint at the boundary of each image region Pm from being unnaturally conspicuous.
- the gradation processing execution unit 125 has a two-dimensional LUT created in advance. For this reason, it is possible to reduce the processing load required for the gradation processing, more specifically, the processing load required to create the gradation conversion curve Cm. As a result, the gradation processing can be speeded up.
- the gradation processing execution unit 125 executes gradation processing using a two-dimensional LUT.
- the contents of the two-dimensional LUT are read from a storage device such as a hard disk or ROM provided in the visual processing device 121 and used for gradation processing.
- by changing the contents of the two-dimensional LUT that is read, various kinds of gradation processing can be realized without changing the hardware configuration. In other words, gradation processing more suited to the characteristics of the original image can be realized.
- the present invention is not limited to the embodiment described above, and various modifications can be made without departing from the scope of the invention.
- a modification substantially similar to the <Modification> section of the [Fifth Embodiment] above can be applied to the sixth embodiment. In this case, the selection signal Sm is replaced with the selection signal SS, and the gradation processing signal CS is replaced with the output signal OS.
- modifications specific to the sixth embodiment will be described.
- in the above description, the two-dimensional LUT 141 composed of a 64 × 64 matrix is taken as an example of the two-dimensional LUT.
- the effect of the present invention is not limited to the two-dimensional LUT of this size.
- it may be a matrix in which more gradation conversion curve candidates are arranged in the row direction.
- the pixel value of the output signal OS corresponding to the value obtained by dividing the pixel value of the input signal IS into finer steps may be arranged in the column direction of the matrix.
- the pixel value of the output signal OS may be arranged for each pixel value of the input signal IS represented by 10 bits.
- if the size of the two-dimensional LUT is increased, more appropriate gradation processing becomes possible; if it is reduced, the memory for storing the two-dimensional LUT can be reduced.
- in the above description, when the value [SS] of the selection signal SS is not equal to any of the subscripts (1 to p) of the tone conversion curve candidates G1 to Gp included in the two-dimensional LUT 141 (see FIG. 51), the tone conversion curve candidate G1 to Gp whose subscript is the integer closest to the value [SS] is used for the tone processing of the input signal IS.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/571,124 US7860339B2 (en) | 2003-09-11 | 2004-09-10 | Visual processing device, visual processing method, visual processing program, intergrated circuit, display device, image-capturing device, and portable information terminal |
EP04773249.0A EP1667066B1 (en) | 2003-09-11 | 2004-09-10 | Visual processing device, visual processing method, visual processing program, integrated circuit, display device, imaging device, and mobile information terminal |
KR1020067005003A KR101030839B1 (ko) | 2003-09-11 | 2004-09-10 | 시각 처리 장치, 시각 처리 방법, 시각 처리 프로그램,집적 회로, 표시 장치, 촬영 장치 및 휴대 정보 단말 |
US11/980,581 US8165417B2 (en) | 2003-09-11 | 2007-10-31 | Visual processing device, visual processing method, visual processing program, integrated circuit, display device, image-capturing device, and portable information terminal |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003-320061 | 2003-09-11 | ||
JP2003320061 | 2003-09-11 | ||
JP2003-433324 | 2003-12-26 | ||
JP2003433324 | 2003-12-26 | ||
JP2004-169693 | 2004-06-08 | ||
JP2004169693 | 2004-06-08 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US10/571,124 A-371-Of-International US7860339B2 (en) | 2003-09-11 | 2004-09-10 | Visual processing device, visual processing method, visual processing program, intergrated circuit, display device, image-capturing device, and portable information terminal |
US11/980,581 Continuation US8165417B2 (en) | 2003-09-11 | 2007-10-31 | Visual processing device, visual processing method, visual processing program, integrated circuit, display device, image-capturing device, and portable information terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005027043A1 true WO2005027043A1 (ja) | 2005-03-24 |
Family
ID=34317225
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/013605 WO2005027043A1 (ja) | 2003-09-11 | 2004-09-10 | 視覚処理装置、視覚処理方法、視覚処理プログラム、集積回路、表示装置、撮影装置および携帯情報端末 |
Country Status (6)
Country | Link |
---|---|
US (2) | US7860339B2 (ja) |
EP (1) | EP1667066B1 (ja) |
JP (3) | JP4481333B2 (ja) |
KR (3) | KR101030872B1 (ja) |
CN (1) | CN101686306A (ja) |
WO (1) | WO2005027043A1 (ja) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007043460A1 (ja) * | 2005-10-12 | 2007-04-19 | Matsushita Electric Industrial Co., Ltd. | 視覚処理装置、表示装置、視覚処理方法、プログラムおよび集積回路 |
US7593017B2 (en) | 2006-08-15 | 2009-09-22 | 3M Innovative Properties Company | Display simulator |
US7773158B2 (en) | 2005-10-12 | 2010-08-10 | Panasonic Corporation | Visual processing device, display device, and integrated circuit |
US7881550B2 (en) | 2006-04-28 | 2011-02-01 | Panasonic Corporation | Visual processing apparatus, visual processing method, program, recording medium, display device, and integrated circuit |
US7894684B2 (en) | 2006-04-19 | 2011-02-22 | Panasonic Corporation | Visual processing device, visual processing method, program, display device, and integrated circuit |
US7990465B2 (en) | 2007-09-13 | 2011-08-02 | Panasonic Corporation | Imaging apparatus, imaging method, storage medium, and integrated circuit |
US20110310271A1 (en) * | 2007-04-18 | 2011-12-22 | Haruo Yamashita | Imaging apparatus, imaging method, integrated circuit, and storage medium |
US9240072B2 (en) | 2010-09-08 | 2016-01-19 | Panasonic Intellectual Property Management Co., Ltd. | Three-dimensional image processing apparatus, three-dimensional image-pickup apparatus, three-dimensional image-pickup method, and program |
Families Citing this family (100)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101030872B1 (ko) * | 2003-09-11 | 2011-04-22 | 파나소닉 주식회사 | 시각 처리 장치, 시각 처리 방법, 화상 표시 장치, 텔레비젼, 정보 단말 장치, 카메라, 집적회로 및 기록 매체 |
TWI297875B (en) * | 2004-09-16 | 2008-06-11 | Novatek Microelectronics Corp | Image processing method and device using thereof |
KR100640063B1 (ko) * | 2005-02-18 | 2006-10-31 | 삼성전자주식회사 | 외부조도를 고려한 영상향상방법 및 장치 |
US8312489B2 (en) * | 2005-02-23 | 2012-11-13 | Sony Corporation | CM searching method and apparatus, and CM-appendant information supplying method and apparatus |
JP4752431B2 (ja) * | 2005-10-03 | 2011-08-17 | セイコーエプソン株式会社 | カラー画像複写装置、カラー画像複写方法、およびコンピュータプログラム |
JP4675418B2 (ja) * | 2006-04-11 | 2011-04-20 | パナソニック株式会社 | 映像信号処理装置及び映像信号処理方法 |
US7905606B2 (en) * | 2006-07-11 | 2011-03-15 | Xerox Corporation | System and method for automatically modifying an image prior to projection |
US7692644B2 (en) * | 2006-10-13 | 2010-04-06 | Hitachi Displays, Ltd. | Display apparatus |
JP4761566B2 (ja) * | 2006-12-13 | 2011-08-31 | キヤノン株式会社 | 画像処理装置及びその方法とプログラム及び媒体 |
WO2008073109A1 (en) * | 2006-12-15 | 2008-06-19 | Thomson Licensing | System and method for interactive visual effects compositing |
JP4780413B2 (ja) * | 2007-01-12 | 2011-09-28 | 横河電機株式会社 | 不正アクセス情報収集システム |
KR100830333B1 (ko) * | 2007-02-23 | 2008-05-16 | 매그나칩 반도체 유한회사 | 적응형 구간 선형 처리 장치 |
KR100844781B1 (ko) * | 2007-02-23 | 2008-07-07 | 삼성에스디아이 주식회사 | 유기 전계 발광표시장치 및 그 구동방법 |
KR100844775B1 (ko) * | 2007-02-23 | 2008-07-07 | 삼성에스디아이 주식회사 | 유기 전계발광 표시장치 |
KR100844780B1 (ko) | 2007-02-23 | 2008-07-07 | 삼성에스디아이 주식회사 | 유기 전계 발광표시장치 및 그 구동방법 |
KR100944595B1 (ko) * | 2007-04-24 | 2010-02-25 | 가부시끼가이샤 르네사스 테크놀로지 | 표시 장치, 표시 장치 구동 회로, 화상 표시 방법, 전자기기 및 화상 표시 장치 구동 회로 |
US8207931B2 (en) * | 2007-05-31 | 2012-06-26 | Hong Kong Applied Science and Technology Research Institute Company Limited | Method of displaying a low dynamic range image in a high dynamic range |
JP5032911B2 (ja) * | 2007-07-31 | 2012-09-26 | キヤノン株式会社 | 画像処理装置及び画像処理方法 |
US7995857B2 (en) * | 2007-09-03 | 2011-08-09 | Himax Technologies Limited | Method and apparatus utilizing step-wise gain control for image processing |
JP5374773B2 (ja) * | 2007-10-26 | 2013-12-25 | 太陽誘電株式会社 | 映像表示装置および方法およびこれに組み込まれる信号処理回路および液晶バックライト駆動装置 |
EP2068569B1 (en) * | 2007-12-05 | 2017-01-25 | Vestel Elektronik Sanayi ve Ticaret A.S. | Method of and apparatus for detecting and adjusting colour values of skin tone pixels |
JP4453754B2 (ja) * | 2007-12-20 | 2010-04-21 | ソニー株式会社 | 表示装置、映像信号補正装置、映像信号補正方法 |
JP4314305B1 (ja) | 2008-02-04 | 2009-08-12 | シャープ株式会社 | 鮮鋭化画像処理装置、方法、及びソフトウェア |
JP5275122B2 (ja) * | 2008-05-30 | 2013-08-28 | パナソニック株式会社 | ダイナミックレンジ圧縮装置、ダイナミックレンジ圧縮方法、プログラム、集積回路および撮像装置 |
US8311360B2 (en) | 2008-11-13 | 2012-11-13 | Seiko Epson Corporation | Shadow remover |
JP5134508B2 (ja) * | 2008-11-19 | 2013-01-30 | 株式会社日立製作所 | テレビジョン装置 |
JP2010127994A (ja) * | 2008-11-25 | 2010-06-10 | Sony Corp | 補正値算出方法、表示装置 |
US8970707B2 (en) * | 2008-12-17 | 2015-03-03 | Sony Computer Entertainment Inc. | Compensating for blooming of a shape in an image |
JP4912418B2 (ja) * | 2009-01-14 | 2012-04-11 | 三菱電機株式会社 | 画像処理装置及び方法並びに画像表示装置 |
JP4994354B2 (ja) * | 2008-12-22 | 2012-08-08 | 三菱電機株式会社 | 画像処理装置及び方法並びに画像表示装置 |
JP4994407B2 (ja) * | 2009-03-18 | 2012-08-08 | 三菱電機株式会社 | 画像処理装置及び方法並びに画像表示装置 |
WO2010073485A1 (ja) * | 2008-12-22 | 2010-07-01 | 三菱電機株式会社 | 画像処理装置及び方法並びに画像表示装置 |
US8363131B2 (en) * | 2009-01-15 | 2013-01-29 | Aptina Imaging Corporation | Apparatus and method for local contrast enhanced tone mapping |
CN101527779B (zh) * | 2009-03-12 | 2012-06-27 | 北京大学 | 一种校色的方法和装置 |
JP2010278724A (ja) * | 2009-05-28 | 2010-12-09 | Olympus Corp | 画像処理装置、画像処理方法及び画像処理プログラム |
JP5299867B2 (ja) * | 2009-06-30 | 2013-09-25 | 日立コンシューマエレクトロニクス株式会社 | 画像信号処理装置 |
KR20110021107A (ko) * | 2009-08-25 | 2011-03-04 | 삼성전자주식회사 | 선명도 보정을 위한 영상처리장치 및 영상처리방법 |
JP5500964B2 (ja) * | 2009-12-11 | 2014-05-21 | キヤノン株式会社 | 動画像処理装置及びその制御方法、プログラム |
CN102110403B (zh) * | 2009-12-23 | 2013-04-17 | 群康科技(深圳)有限公司 | 改善显示器拖影现象的方法及相关的显示器 |
WO2011119178A1 (en) * | 2010-03-22 | 2011-09-29 | Nikon Corporation | Tone mapping with adaptive slope for image sharpening |
WO2011118837A1 (ja) * | 2010-03-26 | 2011-09-29 | シャープ株式会社 | 表示装置及びその制御方法、テレビジョン受像機、プログラム、並びに記録媒体 |
KR101113483B1 (ko) * | 2010-04-09 | 2012-03-06 | 동아대학교 산학협력단 | 컬러 영상의 가시성 향상 장치 및 방법 |
JP5672776B2 (ja) * | 2010-06-02 | 2015-02-18 | ソニー株式会社 | 画像処理装置、および画像処理方法、並びにプログラム |
KR101681776B1 (ko) * | 2010-06-10 | 2016-12-01 | 엘지디스플레이 주식회사 | 화질 제어 방법과 이를 이용한 표시장치 |
US9100640B2 (en) * | 2010-08-27 | 2015-08-04 | Broadcom Corporation | Method and system for utilizing image sensor pipeline (ISP) for enhancing color of the 3D image utilizing z-depth information |
US8952978B2 (en) * | 2010-09-28 | 2015-02-10 | Sony Corporation | Display device, viewing angle control method, computer program storage device with viewing angle control program, and mobile terminal |
US9055305B2 (en) * | 2011-01-09 | 2015-06-09 | Mediatek Inc. | Apparatus and method of sample adaptive offset for video coding |
KR101773419B1 (ko) * | 2010-11-22 | 2017-09-01 | 삼성디스플레이 주식회사 | 데이터 보상 방법 및 이를 수행하는 표시 장치 |
TWI471678B (zh) * | 2010-12-17 | 2015-02-01 | Hon Hai Prec Ind Co Ltd | 投影儀及其投影畫面自動調整方法 |
TW201248604A (en) * | 2011-05-16 | 2012-12-01 | Novatek Microelectronics Corp | Display apparatus and image compensating method thereof |
KR101803571B1 (ko) * | 2011-06-17 | 2017-11-30 | 엘지디스플레이 주식회사 | 입체영상표시장치와 이의 구동방법 |
JP2013137418A (ja) * | 2011-12-28 | 2013-07-11 | Panasonic Liquid Crystal Display Co Ltd | 液晶表示装置 |
KR20130106642A (ko) * | 2012-03-20 | 2013-09-30 | 삼성디스플레이 주식회사 | 휘도 보정 시스템 및 그 방법 |
TWI523500B (zh) * | 2012-06-29 | 2016-02-21 | 私立淡江大學 | 影像的動態範圍壓縮方法與影像處理裝置 |
US9646366B2 (en) * | 2012-11-30 | 2017-05-09 | Change Healthcare Llc | Method and apparatus for enhancing medical images |
US9554102B2 (en) * | 2012-12-19 | 2017-01-24 | Stmicroelectronics S.R.L. | Processing digital images to be projected on a screen |
KR102103277B1 (ko) * | 2013-04-12 | 2020-04-22 | 삼성전자주식회사 | 이미지를 관리하는 방법 및 그 전자 장치 |
CA2909493C (en) | 2013-04-16 | 2018-01-23 | Christian Weissig | Alignment of a camera system, camera system and alignment aid |
CN104166835A (zh) * | 2013-05-17 | 2014-11-26 | 诺基亚公司 | 用于识别活体用户的方法和装置 |
JP5842940B2 (ja) * | 2014-01-10 | 2016-01-13 | 株式会社ニコン | 画像処理装置及び電子カメラ |
LU92406B1 (en) * | 2014-03-19 | 2015-09-21 | Iee Sarl | Camera head with integrated calibration tables |
US9489881B2 (en) * | 2014-07-01 | 2016-11-08 | Canon Kabushiki Kaisha | Shading correction calculation apparatus and shading correction value calculation method |
WO2016022374A1 (en) * | 2014-08-05 | 2016-02-11 | Seek Thermal, Inc. | Local contrast adjustment for digital images |
US9930324B2 (en) | 2014-08-05 | 2018-03-27 | Seek Thermal, Inc. | Time based offset correction for imaging systems |
US9924116B2 (en) | 2014-08-05 | 2018-03-20 | Seek Thermal, Inc. | Time based offset correction for imaging systems and adaptive calibration control |
WO2016022008A1 (en) * | 2014-08-08 | 2016-02-11 | Samsung Electronics Co., Ltd. | Method and apparatus for environmental profile generation |
US9584750B2 (en) | 2014-08-20 | 2017-02-28 | Seek Thermal, Inc. | Adaptive adjustment of the operating bias of an imaging system |
US9595934B2 (en) | 2014-08-20 | 2017-03-14 | Seek Thermal, Inc. | Gain calibration for an imaging system |
US10163408B1 (en) * | 2014-09-05 | 2018-12-25 | Pixelworks, Inc. | LCD image compensation for LED backlighting |
KR102190233B1 (ko) | 2014-10-06 | 2020-12-11 | Samsung Electronics Co., Ltd. | Image processing apparatus and image processing method thereof |
US9947086B2 (en) | 2014-12-02 | 2018-04-17 | Seek Thermal, Inc. | Image adjustment based on locally flat scenes |
US10467736B2 (en) | 2014-12-02 | 2019-11-05 | Seek Thermal, Inc. | Image adjustment based on locally flat scenes |
US10600164B2 (en) | 2014-12-02 | 2020-03-24 | Seek Thermal, Inc. | Image adjustment based on locally flat scenes |
WO2016163314A1 (ja) * | 2015-04-10 | 2016-10-13 | Sharp Corporation | Liquid crystal display device and driving method therefor |
US9549130B2 (en) | 2015-05-01 | 2017-01-17 | Seek Thermal, Inc. | Compact row column noise filter for an imaging system |
US10623608B2 (en) * | 2015-06-26 | 2020-04-14 | Hewlett-Packard Development Company, L.P. | Enhance highlighter color in scanned images |
EP3142355B1 (en) * | 2015-09-08 | 2017-10-25 | Axis AB | Method and apparatus for enhancing local contrast in a thermal image |
US9651534B1 (en) * | 2015-12-02 | 2017-05-16 | Sani-Hawk Optical Solutions LLC | Optical chemical test systems and methods |
EP3391331A1 (fr) * | 2015-12-16 | 2018-10-24 | B<>Com | Method for processing a digital image, and associated device, terminal equipment, and computer program |
WO2017114473A1 (en) * | 2015-12-31 | 2017-07-06 | Shanghai United Imaging Healthcare Co., Ltd. | Methods and systems for image processing |
US9848235B1 (en) * | 2016-02-22 | 2017-12-19 | Sorenson Media, Inc. | Video fingerprinting based on fourier transform of histogram |
JP6703800B2 (ja) * | 2016-04-01 | 2020-06-03 | Sharp Corporation | Display device, display device control method, and control program |
US10867371B2 (en) | 2016-06-28 | 2020-12-15 | Seek Thermal, Inc. | Fixed pattern noise mitigation for a thermal imaging system |
US10861420B2 (en) * | 2016-06-30 | 2020-12-08 | I-Cubed Research Center Inc. | Image output apparatus, image output method, for simultaneous output of multiple images |
WO2018003937A1 (ja) | 2016-06-30 | 2018-01-04 | I-Cubed Research Center Inc. | Video signal processing device, video signal processing method, and program |
US10764632B2 (en) | 2016-06-30 | 2020-09-01 | I-Cubed Research Center Inc. | Video signal processing apparatus, video signal processing method, and program |
JP6852411B2 (ja) * | 2017-01-19 | 2021-03-31 | Sony Corporation | Video signal processing device, video signal processing method, and program |
US10454832B2 (en) * | 2017-01-31 | 2019-10-22 | Google Llc | Balancing data requests over a network |
US10084997B1 (en) * | 2017-05-23 | 2018-09-25 | Sony Corporation | Adaptive optics for a video projector |
KR102368229B1 (ko) * | 2018-02-06 | 2022-03-03 | Hanwha Techwin Co., Ltd. | Image processing apparatus and method |
US10664960B1 (en) * | 2019-04-15 | 2020-05-26 | Hanwha Techwin Co., Ltd. | Image processing device and method to perform local contrast enhancement |
US11276152B2 (en) | 2019-05-28 | 2022-03-15 | Seek Thermal, Inc. | Adaptive gain adjustment for histogram equalization in an imaging system |
US11430093B2 (en) | 2019-10-01 | 2022-08-30 | Microsoft Technology Licensing, Llc | Face-based tone curve adjustment |
JP7418690B2 (ja) * | 2019-10-25 | 2024-01-22 | JVCKenwood Corporation | Display control device, display system, display control method, and program |
US11113818B1 (en) * | 2020-02-25 | 2021-09-07 | Himax Technologies Limited | Timing controller and operating method thereof |
US11809996B2 (en) * | 2020-09-21 | 2023-11-07 | University Of Central Florida Research Foundation, Inc. | Adjusting parameters in an adaptive system |
TWI819411B (zh) * | 2021-11-23 | 2023-10-21 | 瑞昱半導體股份有限公司 | 用以對比提升的影像處理裝置及影像處理方法 |
WO2023094873A1 (en) * | 2021-11-29 | 2023-06-01 | Weta Digital Limited | Increasing dynamic range of a virtual production display |
US11438520B1 (en) | 2021-11-29 | 2022-09-06 | Unity Technologies Sf | Increasing dynamic range of a virtual production display |
WO2023094875A1 (en) * | 2021-11-29 | 2023-06-01 | Weta Digital Limited | Increasing dynamic range of a virtual production display |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09168097A (ja) * | 1995-12-18 | 1997-06-24 | Fuji Xerox Co Ltd | Image processing device |
JPH1013667A (ja) * | 1996-06-18 | 1998-01-16 | Fuji Photo Film Co Ltd | Image reproduction method and device |
JPH1075395A (ja) | 1995-09-29 | 1998-03-17 | Fuji Photo Film Co Ltd | Image processing method and apparatus |
JP2832954B2 (ja) | 1988-09-09 | 1998-12-09 | NEC Corporation | Image enhancement circuit |
JPH1127517A (ja) * | 1997-06-27 | 1999-01-29 | Sharp Corp | Image processing device |
JPH11501841A (ja) * | 1995-03-10 | 1999-02-16 | Acuson Corporation | Display processor for imaging system |
JP2000057335A (ja) | 1998-08-05 | 2000-02-25 | Minolta Co Ltd | Image correction device for an image processing apparatus, image correction method, and machine-readable recording medium storing an image correction program |
JP2001298619A (ja) | 2000-04-17 | 2001-10-26 | Fuji Photo Film Co Ltd | Image processing method and image processing apparatus |
JP2002133409A (ja) * | 2000-09-13 | 2002-05-10 | Eastman Kodak Co | Digital image enhancement method based on pixel color |
Family Cites Families (74)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE3024459A1 (de) * | 1979-07-03 | 1981-01-08 | Crosfield Electronics Ltd | Pyramid interpolation |
JPS5827145 (ja) | 1981-08-10 | 1983-02-17 | Fuji Photo Film Co Ltd | Contour enhancement processing method for color image signals and color image scanning device |
US4837722A (en) | 1986-05-14 | 1989-06-06 | Massachusetts Institute Of Technology | Digital high speed 3-dimensional interpolation machine |
JP2718448B2 (ja) | 1987-10-19 | 1998-02-25 | Fuji Photo Film Co., Ltd. | Method for setting image processing data |
JP2655602B2 (ja) | 1987-11-04 | 1997-09-24 | NEC Corporation | Image enhancement circuit |
JPH0348980 (ja) | 1989-07-18 | 1991-03-01 | Fujitsu Ltd | Contour enhancement processing method |
JP2663189B2 (ja) | 1990-01-29 | 1997-10-15 | Fuji Photo Film Co., Ltd. | Dynamic range compression processing method for images |
JP2961953B2 (ja) | 1991-06-14 | 1999-10-12 | Matsushita Electric Industrial Co., Ltd. | Color correction method and device |
JP3303427B2 (ja) | 1992-05-19 | 2002-07-22 | Minolta Co., Ltd. | Digital image forming device |
JP3222577B2 (ja) | 1992-09-16 | 2001-10-29 | Furukawa Electric Co., Ltd. | Aluminum alloy fin material for heat exchangers |
US5524070A (en) | 1992-10-07 | 1996-06-04 | The Research Foundation Of State University Of New York | Local adaptive contrast enhancement |
JPH06259543A (ja) | 1993-03-05 | 1994-09-16 | Hitachi Medical Corp | 画像処理装置 |
JP3196864B2 (ja) | 1993-04-19 | 2001-08-06 | 富士写真フイルム株式会社 | 画像のダイナミックレンジ圧縮処理方法 |
JPH07220066A (ja) | 1994-01-28 | 1995-08-18 | Matsushita Electric Ind Co Ltd | 画像処理装置 |
US5483360A (en) | 1994-06-06 | 1996-01-09 | Xerox Corporation | Color printer calibration with blended look up tables |
JP3518913B2 (ja) | 1994-12-22 | 2004-04-12 | Ricoh Co., Ltd. | Gradation conversion curve generation device |
US5774599A (en) | 1995-03-14 | 1998-06-30 | Eastman Kodak Company | Method for precompensation of digital images for enhanced presentation on digital displays with limited capabilities |
US6094185A (en) | 1995-07-05 | 2000-07-25 | Sun Microsystems, Inc. | Apparatus and method for automatically adjusting computer display parameters in response to ambient light and user preferences |
JP3192407B2 (ja) | 1995-09-25 | 2001-07-30 | Matsushita Electric Industrial Co., Ltd. | Image display method and device therefor |
JP3003561B2 (ja) | 1995-09-25 | 2000-01-31 | Matsushita Electric Industrial Co., Ltd. | Gradation conversion method and circuit therefor, image display method and device therefor, and image signal conversion device |
EP1156451B1 (en) | 1995-09-29 | 2004-06-02 | Fuji Photo Film Co., Ltd. | Image processing method and apparatus |
JP4014671B2 (ja) | 1995-09-29 | 2007-11-28 | Fujifilm Corporation | Multi-resolution conversion method and device |
JPH09231353 (ja) | 1996-02-23 | 1997-09-05 | Toshiba Corp | Color image processing system |
JPH09275496 (ja) | 1996-04-04 | 1997-10-21 | Dainippon Screen Mfg Co Ltd | Image contour enhancement processing device and method |
JPH1065930 (ja) | 1996-08-19 | 1998-03-06 | Fuji Xerox Co Ltd | Color image processing method and color image processing device |
US6351558B1 (en) | 1996-11-13 | 2002-02-26 | Seiko Epson Corporation | Image processing system, image processing method, and medium having an image processing control program recorded thereon |
US6453069B1 (en) | 1996-11-20 | 2002-09-17 | Canon Kabushiki Kaisha | Method of extracting image from input image using reference image |
JPH10154223A (ja) | 1996-11-25 | 1998-06-09 | Ricoh Co Ltd | データ変換装置 |
KR100261214B1 (ko) | 1997-02-27 | 2000-07-01 | 윤종용 | 영상처리 시스템의 콘트라스트 확장장치에서 히스토그램 등화방법 및 장치 |
JP2951909B2 (ja) | 1997-03-17 | 1999-09-20 | 松下電器産業株式会社 | 撮像装置の階調補正装置及び階調補正方法 |
JPH10334218A (ja) | 1997-06-02 | 1998-12-18 | Canon Inc | 画像処理装置およびその方法、並びに、記録媒体 |
JP3671616B2 (ja) | 1997-08-21 | 2005-07-13 | 富士ゼロックス株式会社 | 画像処理装置 |
US6069597A (en) | 1997-08-29 | 2000-05-30 | Candescent Technologies Corporation | Circuit and method for controlling the brightness of an FED device |
US6147664A (en) | 1997-08-29 | 2000-11-14 | Candescent Technologies Corporation | Controlling the brightness of an FED device using PWM on the row side and AM on the column side |
JP3880156B2 (ja) | 1997-10-17 | 2007-02-14 | Sharp Corporation | Image processing device |
JP4834896B2 (ja) | 1997-10-31 | 2011-12-14 | Sony Corporation | Image processing device and method, image transmission/reception system and method, and recording medium |
US6411306B1 (en) | 1997-11-14 | 2002-06-25 | Eastman Kodak Company | Automatic luminance and contrast adjustment for display device |
JP3907810B2 (ja) | 1998-01-07 | 2007-04-18 | Fujifilm Corporation | Correction method for a three-dimensional lookup table, image processing device performing the same, and digital color printer equipped therewith |
US6323869B1 (en) | 1998-01-09 | 2001-11-27 | Eastman Kodak Company | Method and system for modality dependent tone scale adjustment |
JP3809298B2 (ja) | 1998-05-26 | 2006-08-16 | Canon Inc. | Image processing method, device, and recording medium |
JP2000032281 (ja) | 1998-07-07 | 2000-01-28 | Ricoh Co Ltd | Image processing method, device, and recording medium |
US6643398B2 (en) | 1998-08-05 | 2003-11-04 | Minolta Co., Ltd. | Image correction device, image correction method and computer program product in memory for image correction |
US6275605B1 (en) | 1999-01-18 | 2001-08-14 | Eastman Kodak Company | Method for adjusting the tone scale of a digital image |
US6624828B1 (en) | 1999-02-01 | 2003-09-23 | Microsoft Corporation | Method and apparatus for improving the quality of displayed images through the use of user reference information |
JP2000278522A (ja) | 1999-03-23 | 2000-10-06 | Minolta Co Ltd | 画像処理装置 |
US6580835B1 (en) | 1999-06-02 | 2003-06-17 | Eastman Kodak Company | Method for enhancing the edge contrast of a digital image |
JP2001111858A (ja) | 1999-08-03 | 2001-04-20 | Fuji Photo Film Co Ltd | 色修正定義作成方法、色修正定義作成装置、および色修正定義作成プログラム記憶媒体 |
JP2001069352A (ja) | 1999-08-27 | 2001-03-16 | Canon Inc | 画像処理装置およびその方法 |
JP4076302B2 (ja) * | 1999-08-31 | 2008-04-16 | シャープ株式会社 | 画像の輝度補正方法 |
JP2001078047A (ja) | 1999-09-02 | 2001-03-23 | Seiko Epson Corp | プロファイル合成方法、プロファイル合成装置、並びにプロファイル合成プログラムを記憶した媒体、及びデータ変換装置 |
US6650774B1 (en) | 1999-10-01 | 2003-11-18 | Microsoft Corporation | Locally adapted histogram equalization |
US7006668B2 (en) | 1999-12-28 | 2006-02-28 | Canon Kabushiki Kaisha | Image processing method and image processing apparatus |
US6618045B1 (en) | 2000-02-04 | 2003-09-09 | Microsoft Corporation | Display device with self-adjusting control parameters |
JP3758452B2 (ja) | 2000-02-28 | 2006-03-22 | Konica Minolta Business Technologies, Inc. | Recording medium, image processing device, and image processing method |
JP4556276B2 (ja) | 2000-03-23 | 2010-10-06 | Sony Corporation | Image processing circuit and image processing method |
US6822762B2 (en) | 2000-03-31 | 2004-11-23 | Hewlett-Packard Development Company, L.P. | Local color correction |
US6813041B1 (en) | 2000-03-31 | 2004-11-02 | Hewlett-Packard Development Company, L.P. | Method and apparatus for performing local color correction |
JP2002044451A (ja) | 2000-07-19 | 2002-02-08 | Canon Inc | 画像処理装置およびその方法 |
US20020154138A1 (en) * | 2000-08-28 | 2002-10-24 | Osamu Wada | Environment adaptive image display system, image processing method and information storing medium |
US6483245B1 (en) | 2000-09-08 | 2002-11-19 | Visteon Corporation | Automatic brightness control using a variable time constant filter |
JP3793987B2 (ja) | 2000-09-13 | 2006-07-05 | Seiko Epson Corporation | Correction curve generation method, image processing method, image display device, and recording medium |
US6915024B1 (en) | 2000-09-29 | 2005-07-05 | Hewlett-Packard Development Company, L.P. | Image sharpening by variable contrast mapping |
JP2002204372A (ja) | 2000-12-28 | 2002-07-19 | Canon Inc | 画像処理装置およびその方法 |
JP2002281333A (ja) | 2001-03-16 | 2002-09-27 | Canon Inc | 画像処理装置およびその方法 |
US7023580B2 (en) | 2001-04-20 | 2006-04-04 | Agilent Technologies, Inc. | System and method for digital image tone mapping using an adaptive sigmoidal function based on perceptual preference guidelines |
US6586704B1 (en) * | 2001-05-15 | 2003-07-01 | The United States Of America As Represented By The United States Department Of Energy | Joining of materials using laser heating |
US6826310B2 (en) | 2001-07-06 | 2004-11-30 | Jasc Software, Inc. | Automatic contrast enhancement |
JP3705180B2 (ja) | 2001-09-27 | 2005-10-12 | Seiko Epson Corporation | Image display system, program, information storage medium, and image processing method |
JP3752448B2 (ja) | 2001-12-05 | 2006-03-08 | Olympus Corporation | Image display system |
JP2003242498 (ja) | 2002-02-18 | 2003-08-29 | Konica Corp | Image processing method and device, and image output method and device |
JP4367162B2 (ja) | 2003-02-19 | 2009-11-18 | Panasonic Corporation | Plasma display panel and aging method therefor |
JP4089483B2 (ja) | 2003-03-27 | 2008-05-28 | Seiko Epson Corporation | Gradation characteristic control of an image signal representing an image in which images with different characteristics are mixed |
KR101030872B1 (ko) * | 2003-09-11 | 2011-04-22 | Panasonic Corporation | Visual processing device, visual processing method, image display device, television, information terminal device, camera, integrated circuit, and recording medium |
JP4126297B2 (ja) * | 2003-09-11 | 2008-07-30 | Matsushita Electric Industrial Co., Ltd. | Visual processing device, visual processing method, visual processing program, integrated circuit, display device, imaging device, and portable information terminal |
- 2004
  - 2004-09-10 KR KR1020107028912A patent/KR101030872B1/ko active IP Right Grant
  - 2004-09-10 US US10/571,124 patent/US7860339B2/en active Active
  - 2004-09-10 WO PCT/JP2004/013605 patent/WO2005027043A1/ja active Application Filing
  - 2004-09-10 KR KR1020097017390A patent/KR101030864B1/ko active IP Right Grant
  - 2004-09-10 EP EP04773249.0A patent/EP1667066B1/en not_active Expired - Lifetime
  - 2004-09-10 CN CN200910003890A patent/CN101686306A/zh active Pending
  - 2004-09-10 KR KR1020067005003A patent/KR101030839B1/ko active IP Right Grant
- 2007
  - 2007-10-31 US US11/980,581 patent/US8165417B2/en active Active
- 2008
  - 2008-01-23 JP JP2008012160A patent/JP4481333B2/ja not_active Expired - Lifetime
- 2009
  - 2009-05-08 JP JP2009113175A patent/JP4410304B2/ja not_active Expired - Lifetime
  - 2009-05-11 JP JP2009114574A patent/JP2009211709A/ja active Pending
Non-Patent Citations (1)
Title |
---|
See also references of EP1667066A4 |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2323100A3 (en) * | 2005-10-12 | 2012-03-21 | Panasonic Corporation | Visual processing apparatus, display apparatus, visual processing method, program and integrated circuit |
EP1959390A1 (en) * | 2005-10-12 | 2008-08-20 | Matsushita Electric Industrial Co., Ltd. | Visual processing apparatus, display apparatus, visual processing method, program and integrated circuit |
JP2009277222A (ja) * | 2005-10-12 | 2009-11-26 | Panasonic Corp | Visual processing device, television, portable information terminal, camera, visual processing method, and processor |
EP1959390A4 (en) * | 2005-10-12 | 2010-05-05 | Panasonic Corp | VISUAL PROCESSING APPARATUS, DISPLAY APPARATUS, VISUAL PROCESSING METHOD, PROGRAM, AND INTEGRATED CIRCUIT |
US7773158B2 (en) | 2005-10-12 | 2010-08-10 | Panasonic Corporation | Visual processing device, display device, and integrated circuit |
WO2007043460A1 (ja) * | 2005-10-12 | 2007-04-19 | Matsushita Electric Industrial Co., Ltd. | Visual processing device, display device, visual processing method, program, and integrated circuit |
US7881549B2 (en) | 2005-10-12 | 2011-02-01 | Panasonic Corporation | Visual processing device, display device, visual processing method, program, and integrated circuit |
US7880814B2 (en) | 2005-10-12 | 2011-02-01 | Panasonic Corporation | Visual processing device, display device, and integrated circuit |
US8368814B2 (en) | 2005-10-12 | 2013-02-05 | Panasonic Corporation | Visual processing device, display device, and integrated circuit for adjusting the contrast of an image |
US8311357B2 (en) | 2005-10-12 | 2012-11-13 | Panasonic Corporation | Visual processing device, display device, visual processing method, program, and integrated circuit |
US8406547B2 (en) | 2006-04-19 | 2013-03-26 | Panasonic Corporation | Visual processing device, visual processing method, program, display device, and integrated circuit |
US7894684B2 (en) | 2006-04-19 | 2011-02-22 | Panasonic Corporation | Visual processing device, visual processing method, program, display device, and integrated circuit |
US8401324B2 (en) | 2006-04-28 | 2013-03-19 | Panasonic Corporation | Visual processing apparatus, visual processing method, program, recording medium, display device, and integrated circuit |
US7903898B2 (en) | 2006-04-28 | 2011-03-08 | Panasonic Corporation | Visual processing apparatus, visual processing method, program, recording medium, display device, and integrated circuit |
US7881550B2 (en) | 2006-04-28 | 2011-02-01 | Panasonic Corporation | Visual processing apparatus, visual processing method, program, recording medium, display device, and integrated circuit |
US7593017B2 (en) | 2006-08-15 | 2009-09-22 | 3M Innovative Properties Company | Display simulator |
US20110317032A1 (en) * | 2007-04-18 | 2011-12-29 | Haruo Yamashita | Imaging apparatus, imaging method, integrated circuit, and storage medium |
US20110310271A1 (en) * | 2007-04-18 | 2011-12-22 | Haruo Yamashita | Imaging apparatus, imaging method, integrated circuit, and storage medium |
US8144214B2 (en) * | 2007-04-18 | 2012-03-27 | Panasonic Corporation | Imaging apparatus, imaging method, integrated circuit, and storage medium |
US8488029B2 (en) | 2007-04-18 | 2013-07-16 | Panasonic Corporation | Imaging apparatus, imaging method, integrated circuit, and storage medium |
US8711255B2 (en) | 2007-04-18 | 2014-04-29 | Panasonic Corporation | Visual processing apparatus and visual processing method |
US7990465B2 (en) | 2007-09-13 | 2011-08-02 | Panasonic Corporation | Imaging apparatus, imaging method, storage medium, and integrated circuit |
US8786764B2 (en) | 2007-09-13 | 2014-07-22 | Panasonic Intellectual Property Corporation Of America | Imaging apparatus, imaging method, and non-transitory storage medium which perform backlight correction |
US9240072B2 (en) | 2010-09-08 | 2016-01-19 | Panasonic Intellectual Property Management Co., Ltd. | Three-dimensional image processing apparatus, three-dimensional image-pickup apparatus, three-dimensional image-pickup method, and program |
Also Published As
Publication number | Publication date |
---|---|
JP4481333B2 (ja) | 2010-06-16 |
KR101030864B1 (ko) | 2011-04-22 |
KR20090099013A (ko) | 2009-09-18 |
JP2009211709A (ja) | 2009-09-17 |
CN101686306A (zh) | 2010-03-31 |
KR20060121874A (ko) | 2006-11-29 |
KR101030839B1 (ko) | 2011-04-22 |
JP2009271925A (ja) | 2009-11-19 |
JP2008159069A (ja) | 2008-07-10 |
KR20110003597A (ko) | 2011-01-12 |
EP1667066A4 (en) | 2008-11-19 |
US8165417B2 (en) | 2012-04-24 |
JP4410304B2 (ja) | 2010-02-03 |
KR101030872B1 (ko) | 2011-04-22 |
EP1667066A1 (en) | 2006-06-07 |
EP1667066B1 (en) | 2020-01-08 |
US7860339B2 (en) | 2010-12-28 |
US20080107360A1 (en) | 2008-05-08 |
US20070188623A1 (en) | 2007-08-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2005027043A1 (ja) | Visual processing device, visual processing method, visual processing program, integrated circuit, display device, imaging device, and portable information terminal | |
US7773158B2 (en) | Visual processing device, display device, and integrated circuit | |
JP2008159069A5 (ja) | ||
JP4688945B2 (ja) | Visual processing device, visual processing method, television, portable information terminal, camera, and processor | |
US9832378B2 (en) | Exposure mapping and dynamic thresholding for blending of multiple images using floating exposure | |
US20070165048A1 (en) | Image processing device, image processing method, and image processing program | |
US7783126B2 (en) | Visual processing device, visual processing method, visual processing program, and semiconductor device | |
JPWO2007043460A1 (ja) | Visual processing device, display device, visual processing method, program, and integrated circuit | |
WO2018231968A1 (en) | Efficient end-to-end single layer inverse display management coding | |
JP4126297B2 (ja) | Visual processing device, visual processing method, visual processing program, integrated circuit, display device, imaging device, and portable information terminal | |
EP3639238A1 (en) | Efficient end-to-end single layer inverse display management coding | |
JP2006024176A5 (ja) | ||
JP5410378B2 (ja) | Video signal correction device and video signal correction program | |
JP4414307B2 (ja) | Visual processing device, visual processing method, visual processing program, and semiconductor device | |
WO2010137387A1 (ja) | Image processing device and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase | Ref document number: 200480026191.3; Country of ref document: CN |
AK | Designated states | Kind code of ref document: A1; Designated state(s): AE AG AL AM AT AU AZ BA BB BG BW BY BZ CA CH CN CO CR CU CZ DK DM DZ EC EE EG ES FI GB GD GE GM HR HU ID IL IN IS KE KG KP KR LC LK LR LS LT LU LV MA MD MG MN MW MX MZ NA NI NO NZ OM PG PL PT RO RU SC SD SE SG SK SL SY TM TN TR TT TZ UA UG US UZ VC YU ZA ZM |
AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): BW GH GM KE LS MW MZ NA SD SZ TZ UG ZM ZW AM AZ BY KG MD RU TJ TM AT BE BG CH CY DE DK EE ES FI FR GB GR HU IE IT MC NL PL PT RO SE SI SK TR BF CF CG CI CM GA GN GQ GW ML MR SN TD TG |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
WWE | Wipo information: entry into national phase | Ref document number: 1020067005003; Country of ref document: KR |
WWE | Wipo information: entry into national phase | Ref document number: 2004773249; Country of ref document: EP |
WWP | Wipo information: published in national office | Ref document number: 2004773249; Country of ref document: EP |
WWP | Wipo information: published in national office | Ref document number: 1020067005003; Country of ref document: KR |
WWE | Wipo information: entry into national phase | Ref document number: 10571124; Country of ref document: US; Ref document number: 2007188623; Country of ref document: US |
WWP | Wipo information: published in national office | Ref document number: 10571124; Country of ref document: US |