WO2012127904A1 - Image processing apparatus and method - Google Patents
Image processing apparatus and method
- Publication number
- WO2012127904A1 (PCT/JP2012/052083)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- noise reduction
- contrast
- coefficient
- target pixel
- correction target
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/21—Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/94—Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20182—Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering
Definitions
- the present invention relates to an image processing apparatus and method for correcting image contrast, and in particular to processing that improves the contrast and sharpness of low-contrast images taken under bad weather conditions such as fog and haze in order to improve their visibility, and that reduces the noise emphasized along with that processing.
- in the technique of Patent Document 1, a luminance signal is generated from the primary color signals, distribution information of the generated luminance signal is detected over the entire screen, and a gradation correction conversion table is created so that the frequency distribution is smoothed over the entire range of luminance levels.
- a high-contrast image is then created by calculating a correction coefficient based on the input/output ratio of the gradation correction conversion table and multiplying each primary color signal by that correction coefficient.
- the image processing apparatus described in Patent Document 2 generates smoothed image data in which the edge components of the input image data are preserved, amplifies the difference between the input image data and the smoothed image data, and adds it to the smoothed image data, thereby enhancing the contrast of the high-frequency components of the image.
- the contrast improvement processing described above is intended to widen the range of the luminance distribution in the entire image or in a local region.
- however, when the range of the luminance distribution is expanded, there is a problem that at the expanded luminance levels not only the signal but also the noise is amplified, and the image quality is impaired.
- in Patent Document 3, the image after gradation correction is separated into a plurality of frequency components by wavelet transform, noise reduction is performed by coring the separated frequency components, and the coring threshold is set based on the gradation correction curve, thereby suppressing noise components in accordance with the degree of noise amplification that occurs when gradation correction is performed.
- Patent Document 1: JP 2004-342030 A (paragraphs 0036 to 0072)
- Patent Document 2: JP 2001-298621 A (paragraphs 0008 to 0041)
- Patent Document 3: JP 2008-199448 A (paragraphs 0025 to 0083)
- the present invention has been made to solve the above-described problems of the prior art, and its object is to provide an image processing apparatus and method capable of obtaining a high-quality image from a low-contrast image taken under bad weather conditions such as fog and haze, by appropriately improving the visibility of the portions where the contrast is lowered and by reducing the noise emphasized as the contrast is improved.
- An image processing apparatus according to the present invention includes: low-contrast part detection means for detecting, with each pixel of the input image as a correction target pixel, a contrast correlation value of a peripheral region of the correction target pixel; enhancement coefficient determination means for determining a contrast enhancement coefficient for the correction target pixel in accordance with the contrast correlation value detected by the low-contrast part detection means; local contrast enhancement means for enhancing the contrast of the local region of the correction target pixel of the input image according to the enhancement coefficient determined by the enhancement coefficient determination means and outputting a local contrast-enhanced image; noise reduction coefficient generation means for generating, from the enhancement coefficient determined by the enhancement coefficient determination means, a noise reduction coefficient that increases as the enhancement coefficient increases; and three-dimensional noise reduction means for reducing the noise component of the correction target pixel by smoothing the local contrast-enhanced image in the time direction over a plurality of frames; wherein the three-dimensional noise reduction means controls the degree of noise reduction for the correction target pixel in accordance with the noise reduction coefficient generated by the noise reduction coefficient generation means.
- Another image processing apparatus according to the present invention includes: three-dimensional noise reduction means for smoothing the input image in the time direction over a plurality of frames, with each pixel of the input image as a correction target pixel, to reduce the noise component of the correction target pixel and output a three-dimensional noise-reduced image; low-contrast part detection means for detecting a contrast correlation value of a peripheral region of the correction target pixel of the three-dimensional noise-reduced image; enhancement coefficient determination means for determining a contrast enhancement coefficient for the correction target pixel in accordance with the contrast correlation value detected by the low-contrast part detection means; local contrast enhancement means for enhancing the contrast of the local region of the correction target pixel of the three-dimensional noise-reduced image according to the enhancement coefficient determined by the enhancement coefficient determination means and outputting a local contrast-enhanced image signal; noise reduction coefficient generation means for generating, from the enhancement coefficient determined by the enhancement coefficient determination means, a noise reduction coefficient that increases as the enhancement coefficient increases; and a first frame memory for storing one frame of the noise reduction coefficient generated for each pixel by the noise reduction coefficient generation means.
- since the noise reduction coefficient used for noise reduction can be set according to the enhancement coefficient used for local contrast enhancement, noise can be reduced appropriately according to the degree of noise amplification accompanying local contrast enhancement.
- in addition, since three-dimensional noise reduction means is provided that smooths the noise component in the time direction over a plurality of frames of the image, only the random noise component can be reduced without attenuating the amplitude of the emphasized subject signal.
- as a result, noise reduction can be performed without blurring edge portions in a still image. Therefore, for a low-contrast image taken under bad weather conditions such as fog and haze, a high-quality image can be obtained by appropriately improving the contrast of the portions where the contrast is reduced and by reducing the noise that is emphasized along with the contrast improvement.
- FIG. 1 is a block diagram showing the configuration of an image processing apparatus according to Embodiment 1 of the present invention.
- FIG. 2 is a diagram showing an example of the relationship of the enhancement coefficient Ken to the contrast correlation value CT in the enhancement coefficient determination means 2 of FIG. 1.
- FIG. 3 is a diagram showing the nonlinear function for converting the value Ds of a peripheral pixel into the corrected pixel value Dst in the nonlinear LPF means 32 of FIG. 1.
- FIG. 4 is a diagram showing the relationship between the input signal Sin and the output signal Sout when the output of the nonlinear LPF means 32 is D32, in the local contrast enhancement means 3 of FIG. 1.
- FIG. 5 is a diagram showing the relationship of the noise reduction coefficient Knr to the enhancement coefficient Ken in the noise reduction coefficient generation means 4 of FIG. 1.
- FIG. 6 is a flowchart illustrating the operation of the image processing apparatus according to the first embodiment.
- Diagrams (a) to (d) show an example of the signals appearing in each part of the image processing apparatus according to Embodiment 1.
- A block diagram shows the configuration of an image processing apparatus according to Embodiment 2 of the present invention.
- A flowchart illustrates the operation of the image processing apparatus according to the second embodiment.
- Diagrams (a) to (e) show an example of the signals appearing in each part of the image processing apparatus according to Embodiment 2.
- A block diagram shows the configuration of an image processing apparatus according to Embodiment 3 of the present invention.
- FIG. 1 is a block diagram showing a configuration of an image processing apparatus according to Embodiment 1 of the present invention.
- the image processing apparatus according to the first embodiment includes a low-contrast portion detection unit 1, an enhancement coefficient determination unit 2, a local contrast enhancement unit 3, a noise reduction coefficient generation unit 4, and a three-dimensional noise reduction unit 5.
- the low-contrast part detection means 1 receives an input image signal representing the input image Din, and detects a contrast correlation value CT of a peripheral region centered on the correction target pixel for each pixel of the input image Din.
- the input image and the input image signal representing the input image are represented by the same symbol Din.
- the enhancement coefficient determination means 2 determines the contrast enhancement coefficient Ken for each pixel according to the contrast correlation value CT detected by the low contrast portion detection means 1.
- the local contrast enhancing unit 3 generates an intermediate image D3 in which the local contrast is enhanced for each pixel of the input image Din according to the enhancement coefficient Ken determined by the enhancement coefficient determination unit 2.
- the local contrast enhancement unit 3 includes, for example, a delay unit 31, a nonlinear LPF unit 32, a gain determination unit 33, and a multiplier 34 as shown in the figure.
- the delay means 31 delays the input image signal Din by a predetermined amount and outputs a delayed image signal D31.
- the nonlinear LPF means 32 smoothes the peripheral area for each pixel using a value obtained by nonlinearly converting the value of the peripheral pixel in accordance with the difference between the value of the correction target pixel of the input image Din and the value of the peripheral pixel. A nonlinear smoothing signal D32 is generated.
- the gain determination means 33 determines, using the nonlinear smoothed signal D32, the delayed image signal D31 (obtained by delaying the input image signal Din), and the enhancement coefficient Ken, the gain G for enhancing the local contrast for each pixel of the input image Din.
- the multiplier 34 multiplies each pixel of the delayed image signal D31 by the gain G to generate the intermediate image D3.
- the gain determining unit 33 and the multiplier 34 constitute a mixing unit 35 that mixes the delayed image signal D31 and the nonlinear smoothed signal D32 at a mixing ratio corresponding to the enhancement coefficient Ken.
- the delay unit 31, the nonlinear LPF unit 32, and the gain determination unit 33 constitute a gain generation unit 36 that generates the gain G according to the delayed image signal D31, the nonlinear smoothed signal D32, and the enhancement coefficient Ken.
- the noise reduction coefficient generation means 4 sets a noise reduction coefficient (NR coefficient) Knr for the three-dimensional noise reduction means 5 to perform noise reduction according to the enhancement coefficient Ken determined by the enhancement coefficient determination means 2.
- the three-dimensional noise reduction means 5 generates an output image Dout with reduced noise by smoothing noise components in the time direction over a plurality of frames of the intermediate image D3.
- the three-dimensional noise reduction means 5 includes, for example, subtractors 51 and 53, a multiplier 52, and a frame memory 54 as shown in the figure, and constitutes a frame cyclic noise reduction device.
- the input image signal Din is a component signal such as a luminance signal Y, color difference signals Cb, Cr, or three primary color signals R, G, B.
- the low-contrast portion detection unit 1 calculates a contrast correlation value CT of a peripheral region centered on the correction target pixel for each pixel of the input image Din in order to detect a portion having a low contrast in the image.
- the contrast correlation value CT is a quantity that correlates with the contrast between the pixels included in the peripheral region centered on the correction target pixel, that is, with the range of their luminance distribution: it takes a small value in a locally low-contrast region and a large value in a locally high-contrast region.
- in other words, for each correction target pixel, a value is obtained that is small when the contrast of the peripheral region centered on the correction target pixel is low and large when that contrast is high.
- the contrast correlation value CT can be obtained, for example, as follows: a first window of a predetermined size, for example 11 × 11 pixels, is taken centered on the correction target pixel (whose position is represented by coordinates (h, v)), and a second window of a predetermined size, for example 5 × 5 pixels, is taken centered on each pixel in the first window.
- the standard deviation of the values of all pixels in each second window is obtained, and the obtained standard deviations are then averaged over the first window, that is, the 11 × 11 pixel window, to yield the contrast correlation value CT.
- alternatively, the difference (MAX − MIN) between the maximum value MAX and the minimum value MIN of the pixel values in each second window may be obtained, and the contrast correlation value CT obtained by averaging these differences over the first window. The sizes of the windows are not limited to 5 × 5 pixels or 11 × 11 pixels, and the "first predetermined size" and the "second predetermined size" may be the same size.
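As a concrete sketch, the standard-deviation variant of the CT computation described above might look as follows. The 5 × 5 and 11 × 11 window sizes come from the text; the edge-clamping of the windows and the function names are illustrative assumptions.

```python
import math

def window(img, y, x, r):
    """Values in a (2r+1) x (2r+1) window around (y, x), clamped at the image edges."""
    h, w = len(img), len(img[0])
    return [img[yy][xx]
            for yy in range(max(0, y - r), min(h, y + r + 1))
            for xx in range(max(0, x - r), min(w, x + r + 1))]

def local_std(img, size=5):
    """Standard deviation of the pixel values in a size x size window around each pixel."""
    r = size // 2
    out = []
    for y in range(len(img)):
        row = []
        for x in range(len(img[0])):
            vals = window(img, y, x, r)
            m = sum(vals) / len(vals)
            row.append(math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals)))
        out.append(row)
    return out

def contrast_correlation(img, inner=5, outer=11):
    """CT: local standard deviation over the inner (5x5) window,
    averaged over the outer (11x11) window centered on each pixel."""
    std = local_std(img, inner)
    r = outer // 2
    ct = []
    for y in range(len(img)):
        row = []
        for x in range(len(img[0])):
            vals = window(std, y, x, r)
            row.append(sum(vals) / len(vals))
        ct.append(row)
    return ct
```

A flat region yields CT = 0 everywhere, while a region containing an edge yields a large CT, matching the behavior the text requires.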
- the role of the low-contrast part detection means 1 is to detect a low-contrast part in the image and increase the degree of contrast enhancement in the low-contrast part by a process described later. Therefore, as long as it is possible to detect a portion where contrast is to be emphasized using an index other than those described above, the contrast correlation value CT may be obtained using such an index.
- in an image shot under bad weather conditions such as fog and haze, a fogged part not only has a narrow range of luminance distribution, but also tends to have a small amount of high-frequency components, a bright luminance level, and low saturation. Utilizing this, the amount of high-frequency components, the luminance level, and the saturation of the peripheral region centered on the correction target pixel may be detected for each pixel by means not shown and reflected in the contrast correlation value CT.
- the amount of high-frequency components can be calculated by applying a 3 × 3 pixel or 5 × 5 pixel Laplacian filter centered on each pixel in an 11 × 11 pixel window centered on the correction target pixel, taking the absolute value of the filter output, and integrating the absolute values over the 11 × 11 pixel window.
- the contrast correlation value CT is then set to a large value when many high-frequency components are locally included, and to a small value for a flat image.
- the luminance level of the peripheral region can be calculated as the average of the luminances of all pixels in a 5 × 5 pixel window centered on the correction target pixel.
- a region whose luminance level is about 1/2 to 3/4 of the maximum value of the luminance signal is judged to be a fogged part, and the contrast correlation value CT is set to a small value.
- as for the saturation of the peripheral region: when the input image signal Din is composed of the luminance signal Y and the color difference signals Cb and Cr of a color image, the saturation Sr is expressed, using the two color difference signals Cb and Cr, by the following equation (1):
Sr = √(Cb² + Cr²) … (1)
and the mean value Srm of Sr over all pixels in a 5 × 5 pixel window centered on the correction target pixel is obtained.
- the contrast correlation value CT is set to a large value when the saturation of the peripheral region is large, and is set to a small value when the saturation of the peripheral region is small.
- the enhancement coefficient determination unit 2 determines the contrast enhancement coefficient Ken for each pixel in accordance with the contrast correlation value CT. In other words, the enhancement coefficient determination unit 2 obtains the enhancement coefficient Ken for the pixel based on the contrast correlation value CT for each pixel.
- the enhancement coefficient Ken is calculated by the following equation (2) as a function of the contrast correlation value CT, for example.
- here, Kmin, Kmax, and CTtp are preset values:
- Kmin is the minimum value of the enhancement coefficient (Kmin ≥ 1);
- Kmax is the maximum value of the enhancement coefficient (Kmax ≥ Kmin);
- CTtp is the value of the contrast correlation value CT at which Ken begins to change: at or above CTtp, Ken is fixed at Kmin, and below CTtp, Ken increases as CT decreases.
- in this way, the enhancement coefficient Ken has a characteristic that decreases monotonically with respect to the contrast correlation value CT. That is, the enhancement coefficient determination means 2 determines Ken so that Ken is large when the contrast correlation value CT is small, and small when CT is large.
- the characteristic also has the property that the increment of the enhancement coefficient Ken grows as the contrast correlation value CT decreases. Note that the calculation of Ken need not follow the relationship of equation (2) or FIG. 2 exactly. The enhancement coefficient Ken may be obtained by executing the calculation of equation (2), but the values of Ken corresponding to the contrast correlation value CT can also be held in advance in the form of a lookup table (LUT); in that case the calculation of equation (2) need not be performed, so the processing in the enhancement coefficient determination means 2 can be simplified.
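Since equation (2) itself is not reproduced in the text, the following sketch only illustrates the described shape of the characteristic: Ken fixed at Kmin at or above CTtp, and rising toward Kmax with a growing increment as CT falls below CTtp. The quadratic ramp and the default parameter values are assumptions.

```python
def enhancement_coefficient(ct, kmin=1.0, kmax=4.0, cttp=64.0):
    """Contrast enhancement coefficient Ken as a function of CT.

    Above CTtp, Ken stays at Kmin; below CTtp, Ken rises toward Kmax as CT
    falls, with the increment growing as CT decreases. The quadratic form is
    an assumption; equation (2) is not reproduced in the text.
    """
    if ct >= cttp:
        return kmin
    t = (cttp - max(ct, 0.0)) / cttp  # 0 at CT = CTtp, 1 at CT = 0
    return kmin + (kmax - kmin) * t * t
```

As with the text's LUT remark, such a function could equally be tabulated once and looked up per pixel.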
- the local contrast enhancement means 3 generates an intermediate image D3 in which the local contrast is enhanced for each pixel of the input image Din according to the enhancement coefficient Ken.
- the local contrast enhancement unit 3 performs contrast enhancement on the value of the pixel (correction target pixel) of the input image Din based on the enhancement coefficient Ken determined for each pixel (correction target pixel).
- An intermediate image D3 is generated.
- the delay unit 31 delays the input image Din by a predetermined delay amount corresponding to the reference to the peripheral region of the correction target pixel in the low-contrast portion detection unit 1 and the nonlinear LPF unit 32.
- the nonlinear LPF unit 32 calculates, for example, an average over an 11 × 11 pixel window centered on the correction target pixel, after correcting the values of the peripheral pixels as follows.
- when the value Ds of a peripheral pixel in the window is within the threshold TH1 of the value Dc of the correction target pixel, that is, when the absolute value of the difference between Ds and Dc is less than or equal to TH1, the value Ds of that peripheral pixel is corrected to a pixel value (corrected pixel value) Dst whose difference from Dc is 0.
- when the absolute value of the difference exceeds TH2, the value Ds is corrected to a pixel value Dst whose difference from Dc is limited to (TH2 − TH1), and that corrected value is used to calculate the average of the peripheral pixels.
- when the absolute value of the difference lies between TH1 and TH2, the value Ds is corrected to a pixel value Dst whose difference from Dc is smaller than the actual difference by TH1, and that corrected value is used to calculate the average of the peripheral pixels.
- here, TH1 and TH2 are preset values:
- TH1 is a parameter corresponding to a threshold for clipping noise;
- TH2 is a parameter for adjusting the effect of contrast enhancement (TH1 ≤ TH2).
- the processing represented by equation (3) can be realized by performing coring with Ds − Dc as the input and TH1 as the threshold, and applying clipping with a limit value determined by TH2 to the output of the coring. Accordingly, the nonlinear LPF means 32 can be constituted by arithmetic means that performs the calculation of equation (3) for each of the 11 × 11 pixels to obtain the corrected pixel values Dst and obtains the average of the corrected pixel values Dst over all 11 × 11 pixels, and adding means that adds the obtained average to the correction target pixel value Dc.
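The coring-plus-clipping correction of equation (3) and the subsequent averaging might be sketched as follows; the function name, the per-window scalar formulation, and the default thresholds are illustrative assumptions.

```python
def nonlinear_lpf(win, dc, th1=4.0, th2=32.0):
    """Nonlinear smoothed value D32 for one correction target pixel.

    win: peripheral pixel values Ds in the window; dc: value Dc of the
    correction target pixel. Each difference Ds - Dc is cored with threshold
    TH1 and clipped at TH2 - TH1 (coring with TH1 followed by clipping keyed
    to TH2, as described); the average of the corrected differences is then
    added back to Dc.
    """
    total = 0.0
    for ds in win:
        d = ds - dc
        # Coring: differences within +/-TH1 are treated as noise (set to 0).
        if abs(d) <= th1:
            cored = 0.0
        else:
            cored = (abs(d) - th1) * (1.0 if d > 0 else -1.0)
        # Clipping: limit the corrected difference to TH2 - TH1.
        lim = th2 - th1
        cored = max(-lim, min(lim, cored))
        total += cored
    return dc + total / len(win)
```

Small differences around Dc are suppressed entirely, so edges stronger than TH2 contribute only a bounded amount to the smoothed value, which is what preserves them.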
- the gain determination unit 33 determines, using the nonlinear smoothed signal D32, the delayed image signal D31, and the enhancement coefficient Ken, the gain G for enhancing the local contrast for each pixel of the delayed image, specifically the gain G to be multiplied with the delayed image signal D31 by the multiplier 34.
- FIG. 4 is a diagram showing how the relationship between the input value Sin (the pixel value D31 input to the multiplier 34) and the output value Sout (the pixel value D3 output from the multiplier 34) is determined depending on the value of the nonlinear smoothed signal D32.
- in FIG. 4, the horizontal axis indicates the input value Sin, and the vertical axis indicates the output value Sout.
- the characteristic Cv, drawn with a thick broken line and a solid line, shows the relationship of the output value Sout to the input value Sin when the value of the nonlinear smoothed signal D32 is at the illustrated position on the horizontal axis.
- with the input/output characteristic shown in FIG. 4, the contrast can be amplified Ken times in a region close to the luminance level of the peripheral region of the correction target pixel.
- in the range from D32 − WL to D32 + WR, Sout can be expressed by the following equation (4).
- in the range Sin < D32 − WL and in the range Sin > D32 + WR, different relationships hold between Sout and Sin.
- from the relationship of equation (4), the gain G to be multiplied with the input value Sin (the gain G applied when Sin is in the range from D32 − WL to D32 + WR) is calculated as shown in the following equation (7).
- the gain G when the input value Sin is smaller than D32 − WL and the gain G when it is larger than D32 + WR are expressed by equations (8) and (9), respectively.
- the gain determination means 33 calculates the gain G according to equations (7), (8), and (9), using the nonlinear smoothed signal D32 and the delayed image signal D31.
- the multiplier 34 multiplies each pixel of the delayed image signal D31 by the gain G to generate the intermediate image D3. That is, when D32 − WL ≤ D31 ≤ D32 + WR, D3 is given by equation (10); when D31 < D32 − WL, by equation (11); and when D31 > D32 + WR, by equation (12).
- it can be seen that, when D31 is in the range from D32 − WL to D32 + WR, the combination of the gain determination means 33 and the multiplier 34 constitutes mixing means 35 that mixes the delayed image signal D31 and the nonlinear smoothed signal D32 at a mixing ratio according to the enhancement coefficient Ken, generating the signal D3 having the value represented by equation (10).
- the gain G output from the gain determination means 33 takes a small value when the delayed image signal D31 is smaller than the nonlinear smoothed signal D32, and a large value when D31 is larger than D32. That is, the gain is calculated so as to be small for a pixel that is darker than its peripheral region and large for a pixel that is brighter than its peripheral region; multiplying by this gain enhances, for each pixel, the brightness contrast (local contrast) with its peripheral region.
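For the central range, the enhancement amounts to amplifying the difference from the local mean D32 by Ken. The sketch below assumes straight segments toward (0, 0) and (Smax, Smax) outside that range, since equations (11) and (12) are not reproduced in the text; the parameter defaults are also illustrative.

```python
def enhance_pixel(d31, d32, ken, wl=48.0, wr=48.0, smax=255.0):
    """Local contrast enhancement for one pixel, in the spirit of (10)-(12).

    Within D32 - WL .. D32 + WR the difference from the local mean D32 is
    amplified Ken times; outside that range the output is assumed to follow
    straight segments to (0, 0) and (smax, smax) so the curve stays
    continuous (the exact out-of-range equations are not reproduced).
    """
    if d32 - wl <= d31 <= d32 + wr:
        return ken * (d31 - d32) + d32          # equation (10)
    lo_knee, lo_out = d32 - wl, d32 - ken * wl  # lower end of central segment
    hi_knee, hi_out = d32 + wr, d32 + ken * wr  # upper end of central segment
    if d31 < lo_knee:
        # Assumed line from (0, 0) to (lo_knee, lo_out).
        return d31 * lo_out / lo_knee if lo_knee > 0 else d31
    # Assumed line from (hi_knee, hi_out) to (smax, smax).
    if smax > hi_knee:
        t = (d31 - hi_knee) / (smax - hi_knee)
        return hi_out + t * (smax - hi_out)
    return d31
```

A pixel at the local mean is left unchanged; pixels near it have their deviation from the mean multiplied by Ken, which is exactly the "Ken times" amplification the figure describes.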
- the size of the window used when the nonlinear LPF means 32 obtains the average value is not necessarily limited to 11 ⁇ 11 pixels.
- the window is made smaller, the contrast of each pixel with respect to the brightness in a small peripheral range is improved, and a sense of contrast with high frequency characteristics (a sense of high contrast for high-frequency components) is obtained.
- the window is enlarged, the contrast with respect to the brightness of a large peripheral range is improved for each pixel, and a sense of contrast with low frequency characteristics (a sense of high contrast for low frequency components) is obtained.
- an emphasis effect on a specific frequency component can be obtained by setting the window size in the nonlinear LPF unit 32.
- the noise reduction coefficient generation means 4 sets the noise reduction coefficient Knr for the three-dimensional noise reduction means 5 to reduce noise for each pixel according to the enhancement coefficient Ken.
- the noise reduction coefficient generation unit 4 sets the noise reduction coefficient Knr for the pixel (correction target pixel) based on the enhancement coefficient Ken obtained for each pixel (correction target pixel).
- the noise reduction coefficient Knr is calculated by the following equation (13) as a function of the enhancement coefficient Ken, for example.
- here, k is a preset value (proportionality constant) in the range 0 ≤ k ≤ 1, and is a parameter for adjusting the degree of noise reduction of the entire image. This parameter may be changed from the outside in accordance with the user's image quality settings or the result of discriminating the image scene.
- FIG. 5 shows the relationship between the noise reduction coefficient Knr and the enhancement coefficient Ken in Expression (13).
- the noise reduction coefficient generation means 4 generates a noise reduction coefficient Knr with a characteristic that increases monotonically with respect to the enhancement coefficient Ken: Knr is small when Ken is small and large when Ken is large. Furthermore, the increment of Knr becomes smaller as the enhancement coefficient Ken becomes larger. As long as these characteristics are satisfied, the calculation of the noise reduction coefficient Knr need not follow the relationship of equation (13) or FIG. 5 exactly.
- the noise reduction coefficient Knr may be obtained by executing the calculation of equation (13), but the values of Knr corresponding to the enhancement coefficient Ken can also be held in advance in the form of a lookup table (LUT); when such an LUT is used, the calculation of equation (13) need not be performed, so the processing in the noise reduction coefficient generation means 4 can be simplified.
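Equation (13) is likewise not reproduced, so the sketch below assumes a simple saturating form that matches the described characteristic: Knr is zero when Ken = 1, increases monotonically with Ken, and its increments shrink as Ken grows, with k scaling the overall degree of noise reduction.

```python
def noise_reduction_coefficient(ken, k=0.75):
    """Noise reduction coefficient Knr as a function of Ken.

    Assumed saturating form Knr = k * (Ken - 1) / Ken: zero when no
    enhancement is applied (Ken = 1), monotonically increasing in Ken, and
    with increments that shrink as Ken grows. k (0 <= k <= 1) adjusts the
    degree of noise reduction of the entire image.
    """
    ken = max(ken, 1.0)
    return k * (ken - 1.0) / ken
```

The bound Knr < k ≤ 1 keeps the frame-recursive filter stable regardless of how strong the enhancement is.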
- the three-dimensional noise reduction means 5 generates an output image Dout with reduced noise by smoothing noise components in the time direction over a plurality of frames of the intermediate image D3. This process is also performed using the noise reduction coefficient Knr set for each pixel. That is, the three-dimensional noise reduction unit 5 uses the noise reduction coefficient Knr set for the pixel (correction target pixel) for each pixel (correction target pixel) of the intermediate image D3 to convert the noise component in the time direction. To generate an output image Dout with reduced noise.
- The three-dimensional noise reduction means 5 includes a subtractor 51 that subtracts the output image Dout of the previous frame stored in the frame memory 54 from the intermediate image D3, a multiplier 52 that multiplies the output of the subtractor 51 by the noise reduction coefficient Knr generated by the noise reduction coefficient generation unit 4, and a subtractor 53 that subtracts the output of the multiplier 52 from the intermediate image D3.
- This is the configuration of a known frame-recursive noise removal device, which performs a weighted addition of the intermediate image D3 (the input image of the three-dimensional noise reduction means 5) and the output image Dout one frame before (the output image of the three-dimensional noise reduction means 5).
- FIG. 6 shows a flowchart of the operation of the image processing apparatus according to Embodiment 1 of the present invention.
- the low contrast portion detection means 1 calculates a contrast correlation value CT for each pixel of the input image Din (S1).
- the enhancement coefficient determination means 2 determines the enhancement coefficient Ken for each pixel from the contrast correlation value CT according to the relationship shown in the equation (2) and FIG. 2 (S2).
- the local contrast enhancement means 3 generates an intermediate image D3 in which the local contrast is enhanced for each pixel of the input image Din (S3).
- the non-linear LPF unit 32 performs non-linear smoothing of the peripheral region for each pixel of the input image Din using the value of the peripheral pixel subjected to non-linear conversion according to the relationship shown in FIG.
- the gain determination unit 33 calculates the gain G based on the enhancement coefficient Ken, using the nonlinearly smoothed signal D32 and the delayed image signal D31, according to the relationship shown in Expression (5).
- the multiplier 34 multiplies the delayed image signal D31 by a gain G for each pixel to generate an intermediate image D3.
- the noise reduction coefficient generation means 4 generates the noise reduction coefficient Knr based on the enhancement coefficient Ken according to the relationship shown in the equation (13) and FIG. 5 (S4).
- the three-dimensional noise reduction means 5 generates an output image Dout with reduced noise by smoothing the noise component with a cyclic coefficient Knr over a plurality of frames with respect to the intermediate image D3 (S5).
- the subtractor 51 subtracts the output image Dout of the previous frame stored in the frame memory 54 from the intermediate image D3. The subtraction is performed on the same pixel in both images.
- the multiplier 52 multiplies the output of the subtractor 51 by the noise reduction coefficient Knr for each pixel.
- the subtractor 53 subtracts the output of the multiplier 52 from the intermediate image D3 to obtain an output image Dout.
- the output image Dout is stored in the frame memory 54, and the process proceeds to the next frame.
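The per-pixel recursion carried out by the subtractor 51, multiplier 52, subtractor 53 and frame memory 54 can be sketched as below. Storing images as flat Python lists is purely illustrative; the recursion Dout = D3 − Knr·(D3 − Dout_prev), equivalent to (1 − Knr)·D3 + Knr·Dout_prev, follows directly from the circuit described above.

```python
# One frame of the frame-recursive (three-dimensional) noise reduction:
#   Dout = D3 - Knr * (D3 - Dout_prev)
# knr is per-pixel, since the coefficient is set for each correction target pixel.

def recursive_nr_frame(d3, dout_prev, knr):
    out = []
    for p, prev, k in zip(d3, dout_prev, knr):
        diff = p - prev            # subtractor 51
        scaled = k * diff          # multiplier 52
        out.append(p - scaled)     # subtractor 53
    return out

frame_memory = [0.0, 0.0, 0.0]     # Dout of the previous frame (frame memory 54)
knr = [0.5, 0.5, 0.0]              # Knr = 0 leaves the pixel unchanged
d3 = [10.0, 20.0, 30.0]
frame_memory = recursive_nr_frame(d3, frame_memory, knr)
```

Note that a pixel with Knr = 0 passes through unchanged, while larger Knr pulls the output toward the previous frame, which is exactly the behavior that trades noise reduction against afterimages later in the text.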
- FIGS. 7A to 7D are one-dimensional representations of an edge of a subject in an image and the image signal corresponding to the edge, with the horizontal axis representing the pixel position and the vertical axis representing the signal level.
- FIG. 7A shows the edge of the subject in a noise-free state, FIG. 7B shows the edge of the input image Din (in the presence of noise), FIG. 7C shows the edge of the intermediate image D3 (in the state where local contrast is enhanced), and FIG. 7D shows the edge of the output image Dout (in the state where noise is reduced).
- FIG. 7A shows the edge of a subject in a noise-free state. It illustrates a case where the amplitude Δd of the edge signal is extremely small, such as the ridgeline of a distant mountain in a foggy image.
- In the actual input image Din, random noise is superimposed on the signal level change, as shown in FIG. 7B. Here, the amplitude of the random noise is approximately the same as the amplitude Δd of the edge signal.
- FIG. 7C shows the edge of the result (intermediate image D3) obtained by performing contrast enhancement on the peripheral region of the edge with the enhancement coefficient Ken by the local contrast enhancement means 3.
- Since the local contrast enhancement means 3 has the input/output characteristics shown in FIG. 4, when the nonlinearity in the nonlinear LPF means 32 is ignored, it has the effect of amplifying the contrast component of the input image by the enhancement coefficient Ken times, centered on the average level (D32) of the peripheral region. Accordingly, when the amplitude of the edge of the input image Din is Δd, the amplitude of the edge with the local contrast enhanced is Ken × Δd.
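The amplification just described can be sketched numerically. Expression (5) is not reproduced in this excerpt, so the gain form below, G = (D32 + Ken·(D31 − D32)) / D31, is an assumed reading consistent with the stated effect; the function names and the `eps` guard against division by zero are illustrative assumptions.

```python
# Contrast amplification around the local average, ignoring the nonlinear
# LPF's nonlinearity: G * D31 == D32 + Ken * (D31 - D32).
# The gain form is an ASSUMED reading of Expression (5).

def local_contrast_gain(d31, d32, ken, eps=1e-6):
    return (d32 + ken * (d31 - d32)) / (d31 + eps)

def enhance_pixel(d31, d32, ken):
    return local_contrast_gain(d31, d32, ken) * d31

# An edge of amplitude 2 around local average 100 becomes amplitude Ken * 2
# around the same average.
lo, hi, avg, ken = 99.0, 101.0, 100.0, 3.0
new_lo = enhance_pixel(lo, avg, ken)
new_hi = enhance_pixel(hi, avg, ken)
```

The edge amplitude grows by the factor Ken while the local average level stays put, matching the Δd → Ken × Δd behavior in the text.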
- When the noise clipping threshold (TH1) in the nonlinear LPF means 32 is 0, the noise amplitude is simultaneously amplified Ken times, so a noise component that was below the detection limit in the input image (FIG. 7B) exceeds the detection limit in the state where the local contrast is enhanced (FIG. 7C), and is visually recognized as annoying noise.
- By setting the noise clipping threshold (TH1) in the nonlinear LPF means 32 to a value larger than 0, it is possible to clip noise components having an amplitude equal to or less than TH1. However, assuming the input image shown in FIG. 7B, TH1 would have to be set to a value substantially equal to Δd in order to suppress the noise completely; in that case the edge signal is also clipped, and contrast enhancement of the signal component cannot be performed.
- In contrast, the first embodiment includes the three-dimensional noise reduction means 5, which smooths the noise component in the time direction over a plurality of frames of the intermediate image D3 with local contrast enhanced. As shown in FIG. 7D, the random noise component can therefore be reduced without attenuating the amplitude of the signal representing the emphasized subject (edge).
- the local contrast enhancing means 3 has an effect of highlighting the edge of the portion where the contrast is lowered by enhancing the contrast of a specific frequency component (especially high frequency).
- If noise reduction that smooths in the spatial direction within a frame were used instead, the high frequency component emphasized by the local contrast enhancement would be affected, the enhancement effect would be canceled, the edge portion would blur, and in some cases the visibility would be reduced.
- In other words, the effect of removing fog and haze is obtained by highlighting the edges of the subject in the fog or haze through local contrast enhancement, whereas if the edge portion is blurred by such noise reduction, the image appears as if the fog had returned, which was a problem.
- In the first embodiment, the three-dimensional noise reduction means 5, which smooths the noise component in the time direction over a plurality of frames, is provided in order to reduce the noise amplified with local contrast enhancement. Only the noise can thus be effectively reduced without affecting the frequency components of the image within a frame, that is, without blurring the edge portions emphasized by local contrast enhancement. This makes it possible, in an image with fog or haze, to effectively remove the fog or haze by local contrast enhancement and to reduce the noise associated with it.
- the low-contrast portion detection unit 1 detects the contrast correlation value CT of the peripheral region centered on the correction target pixel for each pixel of the input image Din.
- The contrast correlation value CT is obtained, for example, by computing the standard deviation of the pixel values in each small area of the image and averaging these standard deviations over the peripheral region. Accordingly, a region with a small contrast correlation value CT is one in which the luminance distribution range in the peripheral region is narrow, and a region with a large contrast correlation value CT is one in which the luminance distribution range in the peripheral region is wide.
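The CT computation just described can be sketched as below. A one-dimensional signal and the window sizes are purely illustrative assumptions, since the excerpt does not specify the shapes of the small areas or the peripheral region; only the structure (standard deviation per small area, averaged over the peripheral region of the correction target pixel) comes from the text.

```python
# CT for one correction target pixel: mean of the standard deviations of
# small areas within its peripheral region. 1-D windows are illustrative.

from statistics import pstdev

def contrast_correlation(signal, center, small=3, peripheral=9):
    half = peripheral // 2
    lo = max(0, center - half)
    hi = min(len(signal), center + half + 1)
    region = signal[lo:hi]                       # peripheral region
    stds = [pstdev(region[i:i + small])          # std of each small area
            for i in range(len(region) - small + 1)]
    return sum(stds) / len(stds)

flat = [100.0] * 9        # narrow luminance distribution -> small CT
edgy = [0.0, 100.0] * 5   # wide luminance distribution  -> large CT
```

A flat region yields CT = 0 while a high-contrast region yields a large CT, matching the interpretation given above.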
- the enhancement coefficient Ken increases when the contrast correlation value CT is small, and the enhancement coefficient Ken decreases when the contrast correlation value CT is large.
- Moreover, the smaller the contrast correlation value CT, the larger the increment of the enhancement coefficient Ken, so that, compared with a region that originally has a visually recognizable level of contrast, the enhancement coefficient Ken for a region having only contrast below the visible level (a region where the contrast correlation value CT is almost 0) can be made even larger, and the visibility of a portion where the contrast is lowered can be appropriately improved.
- The enhancement coefficient Ken may be determined based on any characteristic that monotonically decreases with respect to the contrast correlation value CT. That is, the enhancement coefficient Ken may be determined so that it increases when the contrast correlation value CT is small and decreases when the contrast correlation value CT is large. Further, it is desirable that the smaller the contrast correlation value CT, the larger the increment of the enhancement coefficient Ken. If these characteristics are satisfied, the visibility of the portion where the contrast is lowered can be appropriately improved, as in the case of using the relationship shown in Expression (2) or FIG. 2.
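One function satisfying the stated properties can be sketched as follows. Expression (2) is not reproduced in this excerpt, so the exponential form and the constants `KEN_MAX` and `CT0` below are illustrative assumptions; what matters is that Ken decreases monotonically with CT and falls fastest near CT = 0.

```python
# An ASSUMED Ken(CT) curve with the required shape: monotonically
# decreasing, with the largest increments near CT = 0.

import math

KEN_MAX = 4.0   # assumed enhancement ceiling for near-zero-contrast regions
CT0 = 16.0      # assumed decay constant of the curve

def enhancement_coefficient(ct):
    return 1.0 + (KEN_MAX - 1.0) * math.exp(-ct / CT0)
```

At CT = 0 the coefficient reaches its maximum, and its slope flattens as CT grows, so regions that already have visible contrast are left nearly untouched (Ken close to 1).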
- As described above, the local contrast emphasizing means 3 has the effect of amplifying the contrast component of the input image by the enhancement coefficient Ken times, centered on the average level (D32) of the peripheral region. Therefore, considering a case where an image consisting of noise superimposed on a flat image is input, the signal level does not change and only the noise component is amplified Ken times. That is, when the enhancement coefficient is Ken, the SNR (signal-to-noise ratio) is degraded by 10 log (Ken) [dB]. On the other hand, in the three-dimensional noise reduction means 5, when the noise reduction coefficient (cyclic coefficient) is Knr, the SNR improvement amount [dB] is expressed by the following equation (14).
- In this way, the noise can be reduced by the amount by which it is amplified in the local contrast enhancement means 3. Setting the amount of SNR degradation caused by the local contrast enhancement means 3 equal to the amount of SNR improvement provided by the three-dimensional noise reduction means 5 yields the relationship of the following equation (15).
- By determining the noise reduction coefficient Knr so as to satisfy Equation (15), the three-dimensional noise reduction means 5 can improve the SNR by the corresponding amount. As a result, only the contrast of the signal component is amplified by the enhancement coefficient Ken, and the random noise component can be kept at a level substantially equal to that of the input image. Furthermore, since the degree of enhancement in the local contrast emphasizing means 3 changes for each pixel according to the enhancement coefficient Ken, the noise level could become uneven within one image; because the three-dimensional noise reduction means 5 can adjust the SNR improvement amount for each pixel, however, the noise level can be made uniform within one image. As described above, by generating the noise reduction coefficient Knr according to the relationship shown in Equation (13) and FIG. 5, the noise reduction coefficient generation means 4 makes it possible to appropriately reduce noise in accordance with the degree of noise amplification accompanying local contrast enhancement.
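Equations (14) and (15) themselves are not reproduced in this excerpt, but the temporal behavior they describe can be checked empirically: for the first-order recursion Dout = (1 − Knr)·D3 + Knr·Dout_prev, the steady-state noise power of unit-variance white input noise is (1 − Knr)/(1 + Knr), a standard result for this recursion. The simulation below (parameter values and sample counts are illustrative) measures that variance for two Knr settings.

```python
# Empirical check: the frame-recursive filter's output noise variance falls
# as Knr grows (theory for this recursion: (1 - Knr) / (1 + Knr)).

import random

def steady_noise_variance(knr, frames=20000, seed=1):
    rng = random.Random(seed)
    out, samples = 0.0, []
    for t in range(frames):
        x = rng.gauss(0.0, 1.0)             # unit-variance input noise
        out = (1.0 - knr) * x + knr * out   # recursive filter, one pixel
        if t > 1000:                        # skip the start-up transient
            samples.append(out)
    mean = sum(samples) / len(samples)
    return sum((s - mean) ** 2 for s in samples) / len(samples)

var_low = steady_noise_variance(0.5)   # theory: 1/3
var_high = steady_noise_variance(0.9)  # theory: 1/19
```

The drop from Knr = 0.5 to Knr = 0.9 is much larger than proportional, consistent with the later remark that the SNR improvement grows rapidly as Knr increases.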
- By the way, the three-dimensional noise reduction unit 5 reduces noise by smoothing the noise component in the time direction over a plurality of frames of the input image. Therefore, for a moving subject, the larger the noise reduction coefficient Knr, the more the motion component is also smoothed in the time direction, with the drawback that an afterimage is generated.
- In the present embodiment, the noise reduction coefficient Knr is set to a small value for a portion where the enhancement coefficient Ken is small and the noise is therefore not amplified by the local contrast enhancement means 3, so an afterimage can be prevented from being generated unnecessarily. Further, by setting the value of k in Equation (13) to less than 1, the afterimage can be reduced by adjusting the noise reduction effect of the entire image.
- Alternatively, a motion-adaptive three-dimensional noise reduction means may be configured by newly providing a motion detection unit (not shown) that detects the motion of the subject for each pixel of the input image, and a coefficient generation unit (not shown) that generates a coefficient k having a small value for a moving pixel and a large value for a still pixel. In this case, by making k in Equation (13) a variable value for each pixel and determining k according to the motion detection result for each pixel, the afterimage in portions with motion can be reduced.
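The motion-adaptive variant can be sketched as below. The excerpt does not specify the motion detector, so the frame-difference measure, the threshold, and the linear ramp between `K_MOVING` and `K_STATIC` are all illustrative assumptions; only the intent (small k for moving pixels, large k for still ones) comes from the text.

```python
# Per-pixel k for Equation (13), driven by an ASSUMED frame-difference
# motion detector: still pixels get strong recursion, moving pixels weak.

K_STATIC, K_MOVING, MOTION_TH = 0.5, 0.05, 20.0

def motion_adaptive_k(cur, prev):
    ks = []
    for c, p in zip(cur, prev):
        motion = min(abs(c - p), MOTION_TH) / MOTION_TH  # 0 = still, 1 = moving
        ks.append(K_STATIC + (K_MOVING - K_STATIC) * motion)
    return ks

cur = [100.0, 100.0]
prev = [100.0, 60.0]      # the second pixel moved between frames
k_per_pixel = motion_adaptive_k(cur, prev)
```

The still pixel keeps the full coefficient while the moving pixel's coefficient collapses, which is what suppresses the afterimage in portions with motion.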
- The noise reduction coefficient Knr may be generated with any characteristic that monotonically increases with respect to the enhancement coefficient Ken. That is, the noise reduction coefficient Knr may be determined so that it is small when the enhancement coefficient Ken is small and large when the enhancement coefficient Ken is large. Further, it is desirable that the larger the enhancement coefficient Ken, the smaller the increment of the noise reduction coefficient Knr. This is because the SNR improvement amount expressed by Equation (14) increases rapidly as Knr increases. If these characteristics are satisfied, noise can be appropriately reduced according to the degree of noise amplification accompanying local contrast enhancement, as in the case of using the relationship shown in Expression (13) and FIG. 5.
- the configuration of the three-dimensional noise reduction means 5 does not necessarily have to be a frame cyclic type. It is only necessary to have a configuration in which noise components are smoothed in the time direction over a plurality of frames of the input image, and the degree of smoothing can be controlled by the noise reduction coefficient Knr.
- For example, three-dimensional noise reduction means may be provided that has a frame memory for a plurality of frames and reduces noise by averaging in the time direction, together with frame addition means that controls the number of frames added for each pixel in accordance with the noise reduction coefficient Knr. With such a configuration, the same effect can be obtained.
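The non-recursive alternative above can be sketched as follows. The mapping from Knr to a frame count and the memory depth `MAX_FRAMES` are illustrative assumptions; the structure (a frame memory for a plurality of frames, per-pixel averaging over a count controlled by Knr) comes from the text.

```python
# Frame-addition noise reduction: each pixel is averaged over a number of
# recent frames chosen from its Knr. The Knr -> frame-count mapping is an
# ASSUMPTION for illustration.

from collections import deque

MAX_FRAMES = 8
history = deque(maxlen=MAX_FRAMES)   # frame memory for a plurality of frames

def frames_to_add(knr):
    """More frames averaged (stronger smoothing) for larger Knr."""
    return max(1, min(MAX_FRAMES, 1 + int(knr * (MAX_FRAMES - 1))))

def frame_addition_nr(frame, knr_map):
    history.append(frame)
    out = []
    for i, knr in enumerate(knr_map):
        n = min(frames_to_add(knr), len(history))
        recent = list(history)[-n:]
        out.append(sum(f[i] for f in recent) / n)
    return out

out1 = frame_addition_nr([0.0, 0.0], [0.0, 1.0])
out2 = frame_addition_nr([10.0, 10.0], [0.0, 1.0])
```

A pixel with Knr = 0 always takes only the current frame, while a pixel with large Knr averages over as many stored frames as are available, smoothing its temporal noise.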
- FIG. 8 is a block diagram showing the configuration of the image processing apparatus according to Embodiment 2 of the present invention. Unlike the first embodiment, the image processing apparatus according to the second embodiment has a configuration in which a three-dimensional noise reduction unit is provided before the low contrast portion detection unit and the local contrast enhancement unit.
- the image processing apparatus includes a three-dimensional noise reduction unit 105, a low contrast portion detection unit 101, an enhancement coefficient determination unit 102, a local contrast enhancement unit 103, a noise reduction coefficient generation unit 104, and a frame memory 106.
- The three-dimensional noise reduction unit 105 has the same configuration as the three-dimensional noise reduction unit 5 of the first embodiment, except that the input image Din is input instead of the intermediate image D3; it generates an intermediate image D105 with reduced noise by smoothing noise components in the time direction over a plurality of frames of the input image Din. The intermediate image D105 needs to have sufficient bit accuracy so that gradation skips are not caused by the local contrast enhancement unit 103 at the subsequent stage.
- The low-contrast part detection unit 101 has the same configuration as the low-contrast part detection unit 1 of the first embodiment, except that the intermediate image D105 is used as the input instead of the input image Din; for each pixel of the intermediate image D105, it detects the contrast correlation value CT in the peripheral area centered on the correction target pixel.
- The enhancement coefficient determination means 102 has the same configuration as the enhancement coefficient determination means 2 of the first embodiment, and determines the contrast enhancement coefficient Ken for each pixel according to the contrast correlation value CT detected by the low contrast portion detection means 101.
- the local contrast enhancement unit 103 has the same configuration as the local contrast enhancement unit 3 of the first embodiment, but receives the intermediate image D105 instead of the input image Din, and the enhancement coefficient Ken determined by the enhancement coefficient determination unit 102. Accordingly, an output image Dout in which local contrast is enhanced for each pixel of the intermediate image D105 is generated.
- The noise reduction coefficient generation unit 104 has the same configuration as the noise reduction coefficient generation unit 4 of the first embodiment, and sets the noise reduction coefficient Knr with which the three-dimensional noise reduction unit 105 performs noise reduction, according to the enhancement coefficient Ken determined by the enhancement coefficient determination unit 102. Since the internal configuration of each of the means described above is the same as in the first embodiment, detailed description thereof is omitted.
- the frame memory 106 stores the noise reduction coefficient Knr for each pixel generated by the noise reduction coefficient generation unit 104 for one frame.
- the noise reduction coefficient Knr stored in the frame memory 106 is used by the three-dimensional noise reduction unit 105 when processing each pixel of the next frame.
- FIG. 9 shows a flowchart of the operation of the image processing apparatus according to the second embodiment of the present invention.
- the operation of the image processing apparatus according to the second embodiment is different from the first embodiment in that the three-dimensional noise reduction unit 105 first performs noise reduction on the input image Din.
- the three-dimensional noise reduction means 105 generates an intermediate image D105 in which noise is reduced by smoothing a noise component with a cyclic coefficient Knr over a plurality of frames with respect to the input image Din (S105).
- the low-contrast portion detection unit 101 calculates a contrast correlation value CT for each pixel of the intermediate image D105 (S101).
- the enhancement coefficient determination unit 102 determines the enhancement coefficient Ken for each pixel from the contrast correlation value CT according to the relationship shown in Expression (2) and FIG. 2 (S102).
- the local contrast enhancement unit 103 generates an output image Dout in which the local contrast is enhanced for each pixel of the intermediate image D105 (S103).
- the noise reduction coefficient generation unit 104 generates a noise reduction coefficient Knr based on the enhancement coefficient Ken according to the relationship shown in the equation (13) and FIG. 5 (S104).
- the noise reduction coefficient Knr is stored in the frame memory 106, and the process proceeds to the next frame. Details of the operation in each step are the same as those in the first embodiment, and thus detailed description thereof is omitted.
- Since the noise reduction coefficient is generated in step S104 after the enhancement coefficient Ken is determined in step S102, the three-dimensional noise reduction in step S105 is performed using the noise reduction coefficient Knr of one frame before, read from the frame memory 106. Therefore, after the noise reduction coefficient Knr is generated in step S104, the noise reduction coefficient Knr for each pixel of the current frame is stored in the frame memory 106 in step S106.
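The per-frame control flow of the second embodiment (steps S105, S101 to S104, S106) can be sketched as below. The processing functions are trivial stand-ins (a constant Ken and an assumed monotone Knr formula standing in for Expression (13)); only the ordering, and the fact that noise reduction consumes the Knr map generated one frame earlier, are taken from the text.

```python
# Frame loop of Embodiment 2: NR first (with the PREVIOUS frame's Knr),
# then CT/Ken/Knr generation, then storing Knr for the next frame.
# All per-step computations are ASSUMED stand-ins.

def process_frame(din, knr_prev, dout_prev):
    # S105: temporal NR using the previous frame's coefficients
    d105 = [p - k * (p - q) for p, k, q in zip(din, knr_prev, dout_prev)]
    # S101, S102: stand-ins for CT detection and Ken determination
    ken = [2.0 for _ in d105]
    # S103: stand-in for local contrast enhancement
    dout = d105
    # S104: generate this frame's Knr (assumed monotone stand-in for Eq. (13))
    knr_next = [0.5 * (k - 1.0) / (k + 1.0) for k in ken]
    # S106: knr_next is what gets stored in frame memory 106
    return dout, knr_next

frame_memory_106 = [0.0, 0.0]    # Knr of the previous frame (initially zero)
dout, frame_memory_106 = process_frame([10.0, 20.0], frame_memory_106, [0.0, 0.0])
```

On the first frame the stored Knr map is zero, so the input passes through the noise reduction unchanged; the coefficients generated from that frame then govern the next frame, realizing the one-frame delay described above.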
- FIGS. 10A to 10E are one-dimensional representations of an edge of a subject in an image and the image signal corresponding to the edge. The horizontal axis represents the pixel position, and the vertical axis represents the signal level.
- FIG. 10A shows the edge of the subject in a noise-free state, FIG. 10B shows the edge of the input image Din (in the presence of noise), FIG. 10C shows the edge of the intermediate image D105 (in the noise-reduced state), and FIGS. 10D and 10E show the edge of the output image Dout (in the state where local contrast is enhanced).
- FIG. 10C shows an edge of the intermediate image D105 in which the noise is reduced by the three-dimensional noise reduction unit 105 with respect to the input image Din. Since the noise component is smoothed in the time direction over a plurality of frames of the input image Din, only the random noise component can be reduced without attenuating the amplitude of the signal of the subject (edge).
- FIGS. 10D and 10E show edges of the output image Dout obtained as a result of the local contrast enhancement unit 103 performing contrast enhancement on the intermediate image D105 with the enhancement coefficient Ken.
- FIG. 10D shows the case where the noise clipping threshold (TH1) in the nonlinear LPF means 132 is zero, and FIG. 10E shows the case where the noise clipping threshold (TH1) in the nonlinear LPF means 132 is larger than zero but smaller than Δd.
- In either case, the local contrast enhancement unit 103 can be approximated as a process of amplifying the contrast component of the input image by the enhancement coefficient Ken times, centered on the average level (D32) of the peripheral region, so the amplitude of the edge becomes Ken × Δd and the visibility is improved.
- At this time, the noise component is similarly amplified Ken times, but since the image that is enhanced is one (FIG. 10C) in which the noise has been reduced in advance from the input image Din (FIG. 10B) by an amount corresponding to the amplification coefficient Ken, the noise of the output image Dout (FIG. 10D) can be suppressed to the level of the input image Din.
- As described above, the image processing apparatus according to the second embodiment not only has the same effects as the first embodiment, but also has a secondary effect: by performing noise reduction first, the signal of the subject can be separated from the noise, so that a subject signal that was buried in noise and previously could not be emphasized now emerges through contrast enhancement. Further, since the intermediate image D105 with reduced noise is also input to the low-contrast part detection unit 101, the enhancement coefficient Ken and the noise reduction coefficient Knr are less susceptible to the noise of the input image Din.
- FIG. 11 is a block diagram showing a configuration of an image processing apparatus according to Embodiment 3 of the present invention.
- The image processing apparatus according to the third embodiment performs the detection of the low-contrast portion and the determination of the enhancement coefficient Ken, the gain G, and the noise reduction coefficient Knr based on a luminance signal Yin (first image signal) indicating the luminance component of the image, and performs the local contrast enhancement using the determined gain G and the noise reduction using the determined noise reduction coefficient Knr on the red (R), green (G), and blue (B) color image signals (second image signals) Rin, Gin, and Bin.
- The image processing apparatus of FIG. 11 includes low contrast portion detection means 201, enhancement coefficient determination means 202, local contrast enhancement means 203, noise reduction coefficient generation means 204, and three-dimensional noise reduction means 5R, 5G, and 5B.
- The low-contrast portion detection unit 201 has the same configuration as the low-contrast portion detection unit 1 of FIG. 1, but receives the luminance image Yin as the input image Din and, for each pixel of the luminance image Yin, detects the contrast correlation value CT of the peripheral area centered on the correction target pixel.
- the enhancement coefficient determination means 202 determines the contrast enhancement coefficient Ken for each pixel according to the contrast correlation value CT detected by the low contrast portion detection means 201.
- The local contrast enhancement unit 203 generates intermediate images D3R, D3G, and D3B in which the local contrast is enhanced for each pixel of the red, green, and blue input images Rin, Gin, and Bin, according to the enhancement coefficient Ken determined by the enhancement coefficient determination unit 202.
- The local contrast enhancement unit 203 includes a delay unit 231, a nonlinear LPF unit 232, and a gain determination unit 233. These have the same configurations as the delay unit 31, the nonlinear LPF unit 32, and the gain determination unit 33 shown in FIG. 1, and perform the same processing as described in the first embodiment on the luminance image Yin, which is input in place of the input image Din.
- the delay unit 231, the nonlinear LPF unit 232, and the gain determination unit 233 constitute a gain generation unit 236 that generates a gain based on the luminance image Yin.
- the local contrast enhancing unit 203 further includes delay units 31R, 31G, and 31B.
- The delay units 31R, 31G, and 31B have the same configuration as the delay unit 231, but receive the red, green, and blue input images Rin, Gin, and Bin, respectively, and output signals (delayed image signals) D31R, D31G, and D31B delayed by the same delay time as in the delay unit 231.
- Multipliers 34R, 34G, 34B multiply the delayed image signals D31R, D31G, D31B from the delay units 31R, 31G, 31B by a gain G for each pixel to generate intermediate images D3R, D3G, D3B.
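The key point of this configuration, that one gain derived from the luminance drives all three color multipliers (34R, 34G, 34B), can be sketched as below. The pixel values and gains are arbitrary illustrative numbers; what the sketch shows is that multiplying all planes by the same per-pixel gain leaves each pixel's R:G:B ratios, and hence its hue, unchanged.

```python
# The same per-pixel gain G (generated from the luminance image Yin) is
# applied to the delayed R, G and B planes, preserving color balance.

def apply_luma_gain(r, g, b, gain):
    scale = lambda plane: [p * gq for p, gq in zip(plane, gain)]
    return scale(r), scale(g), scale(b)

r, g, b = [40.0, 80.0], [20.0, 40.0], [10.0, 20.0]   # two example pixels
gain = [1.5, 0.8]                                     # per-pixel gain from Yin
r2, g2, b2 = apply_luma_gain(r, g, b, gain)
```

Because every channel of a pixel is scaled identically, R/G and G/B are invariant, which is why the text can claim the image is improved without impairing the color balance.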
- The noise reduction coefficient generation means 204 sets the noise reduction coefficient (NR coefficient) Knr with which the three-dimensional noise reduction means 5R, 5G, and 5B perform noise reduction, according to the enhancement coefficient Ken determined by the enhancement coefficient determination means 202.
- The three-dimensional noise reduction means 5R, 5G, and 5B have the same configuration as the three-dimensional noise reduction means 5 in FIG. 1, but receive the red, green, and blue intermediate images D3R, D3G, and D3B, respectively, instead of the intermediate image D3, and process them.
- the three-dimensional noise reduction unit 5R generates a red output image Rout in which noise is reduced by smoothing the noise component in the time direction over a plurality of frames of the red intermediate image D3R.
- the three-dimensional noise reduction unit 5G generates a green output image Gout in which noise is reduced by smoothing noise components in a time direction over a plurality of frames of the green intermediate image D3G.
- the three-dimensional noise reduction unit 5B generates a blue output image Bout in which noise is reduced by smoothing noise components in a time direction over a plurality of frames of the blue intermediate image D3B.
- As described above, according to the third embodiment, the same effects as in the first embodiment can be obtained; in addition, since contrast enhancement and noise reduction are performed on all the color images using the gain G and the noise reduction coefficient Knr determined based on the luminance image, the image can be improved without impairing the color balance.
- FIG. 12 is a block diagram showing the configuration of an image processing apparatus according to Embodiment 4 of the present invention.
- the detection of the low-contrast portion and the determination of the enhancement coefficient Ken, the gain G, and the noise reduction coefficient Knr are performed based on the luminance image.
- the local contrast enhancement using the gain G and the noise reduction using the noise reduction coefficient Knr are performed on the red (R), green (G), and blue (B) color images.
- Like the second embodiment, the image processing apparatus according to the fourth embodiment is provided with three-dimensional noise reduction units before the low contrast portion detection unit and the local contrast enhancement unit.
- The image processing apparatus of FIG. 12 includes three-dimensional noise reduction means 105R, 105G, and 105B, low contrast portion detection means 301, enhancement coefficient determination means 302, local contrast enhancement means 303, noise reduction coefficient generation means 304, a frame memory 306, and a luminance image generation unit 307.
- The three-dimensional noise reduction means 105R, 105G, and 105B each have the same configuration as the three-dimensional noise reduction means 105 in FIG. 8, but receive the red, green, and blue input images Rin, Gin, and Bin, respectively, and process them.
- the three-dimensional noise reduction unit 105R generates a red intermediate image D105R in which noise is reduced by smoothing the noise component in the time direction over a plurality of frames of the red input image Rin.
- the three-dimensional noise reduction unit 105G generates a green intermediate image D105G in which noise is reduced by smoothing noise components in the time direction over a plurality of frames of the green input image Gin.
- the three-dimensional noise reduction unit 105B generates a blue intermediate image D105B in which noise is reduced by smoothing the noise component in the time direction over a plurality of frames of the blue input image Bin.
- The low-contrast portion detection unit 301 has the same configuration as the low-contrast portion detection unit 101 in FIG. 8, but receives the luminance image D307 instead of the intermediate image D105 and, for each pixel of the luminance image D307, detects the contrast correlation value CT of the peripheral area centered on the correction target pixel.
- the enhancement coefficient determination unit 302 determines the contrast enhancement coefficient Ken for each pixel in accordance with the contrast correlation value CT detected by the low contrast portion detection unit 301.
- The local contrast enhancement unit 303 has the same configuration as the local contrast enhancement unit 203 in FIG. 11, but receives the color intermediate images D105R, D105G, and D105B instead of the input images Rin, Gin, and Bin, and generates output images Rout, Gout, and Bout in which the local contrast is enhanced for each pixel of the intermediate images D105R, D105G, and D105B according to the enhancement coefficient Ken determined by the enhancement coefficient determination unit 302.
- The local contrast enhancement unit 303 includes delay means 331, non-linear LPF means 332, and gain determination means 333. These have the same configurations as the delay unit 131, the nonlinear LPF unit 132, and the gain determination unit 133 shown in FIG. 8, but receive the luminance image D307 instead of the intermediate image D105 and perform on it the same processing as described in the second embodiment.
- the delay means 331, the non-linear LPF means 332, and the gain determination means 333 constitute a gain generation means 336 that generates a gain based on the luminance image D307.
- the local contrast enhancing unit 303 further includes delay units 131R, 131G, and 131B.
- The delay units 131R, 131G, and 131B have the same configuration as the delay unit 331, but receive the red, green, and blue intermediate images D105R, D105G, and D105B, respectively, and output signals D131R, D131G, and D131B delayed by the same delay time as in the delay unit 331.
- Multipliers 134R, 134G, and 134B multiply the delayed signals D131R, D131G, and D131B from the delay units 131R, 131G, and 131B by a gain G for each pixel to generate output images Rout, Gout, and Bout.
- The noise reduction coefficient generation unit 304 sets the noise reduction coefficient (NR coefficient) Knr with which the three-dimensional noise reduction units 105R, 105G, and 105B perform noise reduction, according to the enhancement coefficient Ken determined by the enhancement coefficient determination unit 302.
- the frame memory 306 stores the noise reduction coefficient Knr for each pixel generated by the noise reduction coefficient generation unit 304 for one frame.
- the noise reduction coefficient Knr stored in the frame memory 306 is used by the three-dimensional noise reduction means 105R, 105G, and 105B when processing each pixel of the next frame.
- a luminance signal generated by another method may be used instead of the luminance signal D307 obtained from the color signals Rin, Gin, and Bin.
- the original luminance signal Yin may be used by the low contrast portion detection unit 301 and the gain generation unit 336.
- the same effects as those of the second embodiment can be obtained; in addition, since contrast enhancement and noise reduction are performed for all the color images using the gain G and the noise reduction coefficient Knr determined based on the luminance image, the image can be improved without impairing the color balance.
- in the third embodiment, the detection of the low-contrast portion and the determination of the enhancement coefficient Ken, the gain G, and the noise reduction coefficient Knr are performed using the luminance signal Yin (first image signal) indicating the luminance component (first component) of the image, and the local contrast enhancement using the determined gain G and the noise reduction using the noise reduction coefficient Knr are performed on the color signals Rin, Gin, and Bin (second image signals) indicating the red (R), green (G), and blue (B) color components (each of which can be referred to as a "second component") of the same image.
- alternatively, the detection of the low contrast portion and the determination of the enhancement coefficient Ken, the gain G, and the noise reduction coefficient Knr may be performed on the luminance signal Y (first image signal), and the local contrast enhancement using the determined gain G and the noise reduction using the noise reduction coefficient Knr may be performed on the luminance signal Y and the color difference signals Cb and Cr of the same image (each of which can be referred to as a "second image signal").
- in general, the detection of a low-contrast portion and the determination of the enhancement coefficient Ken, the gain G, and the noise reduction coefficient Knr can be performed based on a first image signal indicating a first component of the image, and the local contrast enhancement using the determined gain G and the noise reduction using the noise reduction coefficient Knr can be performed on a second image signal indicating a second component of the same image.
- that is, the image signal (first image signal) used for the determination of the enhancement coefficient Ken, the gain G, and the noise reduction coefficient Knr and the image signal (second image signal) subjected to the local contrast enhancement and the three-dimensional noise reduction using the determined gain G and noise reduction coefficient Knr need not be the same.
- the three-dimensional noise reduction means may be arranged after the local contrast enhancement means (3, 203), as in the first and third embodiments, or may be arranged before the local contrast enhancement means, as in the second and fourth embodiments.
- the image signal (second image signal) input to the local contrast enhancement means (3, 203) may be the same as the first image signal (Din), as in the first embodiment, or may be different from the first image signal (Yin), as in the third embodiment (Rin, Gin, Bin).
- in the latter case, a signal representing an image whose noise has been reduced by the three-dimensional noise reduction means (105, 105R, 105G, 105B) is input to the enhancement coefficient determination means (102, 301) as the first image signal. The image signal (second image signal) input to the local contrast enhancement means (103, 303) may be the same as the first image signal (D105), as in the second embodiment, or may be different from the first image signal (D307), as in the fourth embodiment (D105R, D105G, D105B).
- each "... means" may be either a means that executes its function with an electric circuit or a means that executes its function with a configuration using software.
- an image processing method executed by the image processing apparatus also forms part of the present invention.
- a program for causing a computer to function as each unit of the image processing apparatus, a program for causing a computer to execute the processing of each step of the image processing method, and a computer-readable recording medium on which such a program is recorded also form part of the present invention.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Image Processing (AREA)
- Picture Signal Circuits (AREA)
- Facsimile Image Signal Circuits (AREA)
- Studio Devices (AREA)
Abstract
Description
The image processing apparatus includes:
low-contrast portion detecting means for detecting, with each pixel of an input image as a correction target pixel, a contrast correlation value of a peripheral region of the correction target pixel in the input image;
enhancement coefficient determining means for determining a contrast enhancement coefficient for the correction target pixel according to the contrast correlation value detected by the low-contrast portion detecting means;
local contrast enhancing means for enhancing, according to the enhancement coefficient determined by the enhancement coefficient determining means, the contrast of a local region of the correction target pixel in the input image and outputting a locally contrast-enhanced image;
noise reduction coefficient generating means for generating, for the enhancement coefficient determined by the enhancement coefficient determining means, a noise reduction coefficient that increases as the enhancement coefficient increases; and
three-dimensional noise reduction means for reducing the noise component of the correction target pixel by smoothing the locally contrast-enhanced image in the temporal direction over a plurality of frames;
and is characterized in that the three-dimensional noise reduction means controls the degree of noise reduction for the correction target pixel according to the noise reduction coefficient generated by the noise reduction coefficient generating means.
The image processing apparatus includes:
three-dimensional noise reduction means for reducing, with each pixel of an input image as a correction target pixel, the noise component of the correction target pixel by smoothing the input image in the temporal direction over a plurality of frames, and outputting a three-dimensional noise-reduced image;
low-contrast portion detecting means for detecting a contrast correlation value of a peripheral region of the correction target pixel in the three-dimensional noise-reduced image;
enhancement coefficient determining means for determining a contrast enhancement coefficient for the correction target pixel according to the contrast correlation value detected by the low-contrast portion detecting means;
local contrast enhancing means for enhancing, according to the enhancement coefficient determined by the enhancement coefficient determining means, the contrast of a local region of the correction target pixel in the three-dimensional noise-reduced image and outputting a locally contrast-enhanced image signal;
noise reduction coefficient generating means for generating, for the enhancement coefficient determined by the enhancement coefficient determining means, a noise reduction coefficient that increases as the enhancement coefficient increases; and
a first frame memory that stores one frame of the noise reduction coefficient generated for each pixel by the noise reduction coefficient generating means;
and is characterized in that the three-dimensional noise reduction means controls the degree of noise reduction for the correction target pixel according to the noise reduction coefficient, stored in the first frame memory, for each pixel of the frame one frame earlier.
In addition, since three-dimensional noise reduction means is provided that smooths the noise component in the temporal direction over a plurality of frames of the image, only the random noise component can be reduced without attenuating the amplitude of the enhanced subject signal. Furthermore, noise reduction can be performed on still images without blurring edge portions.
Accordingly, for a low-contrast image such as one captured under bad weather conditions such as fog or haze, a high-quality image can be obtained by appropriately improving the contrast of the portions where the contrast has decreased and by reducing the noise that is enhanced along with the contrast improvement.
FIG. 1 is a block diagram showing the configuration of an image processing apparatus according to the first embodiment of the present invention. The image processing apparatus according to the first embodiment has low-contrast portion detecting means 1, enhancement coefficient determining means 2, local contrast enhancing means 3, noise reduction coefficient generating means 4, and three-dimensional noise reduction means 5.
The nonlinear LPF means 32 performs, for each pixel, smoothing of the peripheral region using values obtained by nonlinearly transforming the values of the peripheral pixels according to the differences between the value of the correction target pixel of the input image Din and the values of the peripheral pixels, and generates a nonlinear smoothed signal D32.
The multiplier 34 multiplies each pixel of the delayed image signal D31 by the gain G to generate the intermediate image D3.
The delay means 31, the nonlinear LPF means 32, and the gain determining means 33 constitute gain generating means 36 that generates the gain G according to the delayed image signal D31, the nonlinear smoothed signal D32, and the enhancement coefficient Ken.
The input image signal Din is a component signal such as a luminance signal Y, color difference signals Cb and Cr, or the three primary color signals R, G, and B.
That is, a contrast correlation value CT is obtained that takes a small value in a region where the contrast is locally low and a large value in a region where the contrast is locally high. In other words, the contrast correlation value obtained for each correction target pixel is small when the contrast of the peripheral region centered on that correction target pixel is low, and large when the contrast of the peripheral region centered on that correction target pixel is high.
The window size is not limited to 5×5 pixels or 11×11 pixels. Furthermore, the above "first predetermined size" and "second predetermined size" may be the same size.
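The contrast correlation value described above (small where local contrast is low, large where it is high) can be sketched as the mean of local standard deviations, following the nested-window formulation recited in the claims; the 5×5 window sizes below are illustrative, not prescribed:

```python
import numpy as np

def contrast_correlation(img, half_range=2, half_win=2):
    """Mean of local standard deviations around each correction target pixel.

    For every pixel in a (2*half_range+1)^2 neighbourhood of the target
    pixel, take the standard deviation of a (2*half_win+1)^2 window
    centred on that pixel, then average those deviations.  Edge pixels
    are handled by replicating the border (an implementation choice).
    """
    h, w = img.shape
    p = half_win
    padded = np.pad(img.astype(float), p, mode='edge')
    std = np.zeros((h, w))
    for y in range(h):                      # per-pixel windowed std-dev
        for x in range(w):
            std[y, x] = padded[y:y + 2 * p + 1, x:x + 2 * p + 1].std()
    q = half_range
    spadded = np.pad(std, q, mode='edge')
    ct = np.zeros((h, w))
    for y in range(h):                      # average the std-devs
        for x in range(w):
            ct[y, x] = spadded[y:y + 2 * q + 1, x:x + 2 * q + 1].mean()
    return ct
```

A flat region yields CT = 0, so the enhancement coefficient there would be driven to its maximum; a high-contrast texture yields a large CT.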
TH1 is a parameter corresponding to a threshold for clipping noise, and TH2 is a parameter for adjusting the effect of the contrast enhancement (TH1 ≤ TH2). These parameters may be made changeable from outside according to image quality settings made by the user or the result of image scene discrimination.
Accordingly, the nonlinear LPF means 32 can be composed of processing means that performs the operation of equation (3) on each of the 11×11 pixels to obtain a modified pixel value Dst, arithmetic means that obtains the average of the modified pixel values Dst over all of the 11×11 pixels, and adding means that adds the obtained average to the value Dc of the correction target pixel.
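The clip-and-average structure just described can be sketched as follows. Equation (3) itself is not reproduced in this extract, so the exact transfer curve below (pass small differences, taper medium ones, reject large ones) is an assumption in the spirit of an epsilon filter; TH1 and TH2 play the threshold roles described above.

```python
import numpy as np

def nonlinear_lpf(img, y, x, half=5, th1=8.0, th2=32.0):
    """Nonlinear smoothing of one correction target pixel.

    Each neighbour in the (2*half+1)^2 window contributes a modified
    value Dst derived from its difference to the centre value Dc: small
    differences (likely noise) pass through, differences between th1 and
    th2 are tapered, and large differences (likely edges) are rejected.
    The average of the Dst values is added back to Dc.
    """
    h, w = img.shape
    dc = float(img[y, x])
    dst = []
    for j in range(max(0, y - half), min(h, y + half + 1)):
        for i in range(max(0, x - half), min(w, x + half + 1)):
            d = float(img[j, i]) - dc
            if abs(d) <= th1:            # pass small (noise-like) differences
                dst.append(d)
            elif abs(d) <= th2:          # taper medium differences to zero
                sign = 1.0 if d > 0 else -1.0
                dst.append(sign * th1 * (th2 - abs(d)) / (th2 - th1))
            else:                        # reject large (edge) differences
                dst.append(0.0)
    return dc + sum(dst) / len(dst)      # average of Dst added back to Dc
```

Because edge-sized differences contribute nothing, the smoothed signal D32 follows large luminance steps instead of blurring across them.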
In the range Sin < D32 − WL, the relation between Sin and Sout is represented by the straight (dotted) line connecting the origin (Sin = 0, Sout = 0) and the point (Sin = D32 − WL, Sout = D32 − Ken × WL), and
in the range Sin > D32 + WR, the relation between Sin and Sout
is represented by the straight (dotted) line connecting the points (Sin = Smax, Sout = Smax) and (Sin = D32 + WR, Sout = D32 + Ken × WR).
When D32 − WL ≤ D31 ≤ D32 + WR, D3 is obtained by equation (10) below;
when D31 < D32 − WL, by equation (11);
and when D31 > D32 + WR, by equation (12).
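Equations (10) to (12) are not reproduced in this extract, but the three ranges and the line endpoints stated above determine the mapping; the sketch below reconstructs it under that assumption, with slope Ken around the smoothed value D32:

```python
def local_contrast_map(d31, d32, ken, wl, wr, smax=255.0):
    """Piecewise-linear input/output mapping around the smoothed value D32.

    Inside [D32-WL, D32+WR] the slope is the enhancement coefficient Ken
    (corresponding to eq. (10)); outside, straight lines join the window
    edges to (0, 0) and (Smax, Smax) (eqs. (11) and (12)), so the output
    range is not exceeded.  Assumes D32 - WL > 0 and D32 + WR < Smax.
    """
    lo, hi = d32 - wl, d32 + wr
    if lo <= d31 <= hi:
        return d32 + ken * (d31 - d32)               # slope Ken about D32
    if d31 < lo:
        return d31 * (d32 - ken * wl) / lo           # line (0,0)-(lo, D32-Ken*WL)
    # line (hi, D32+Ken*WR)-(Smax, Smax)
    return (d32 + ken * wr) + (d31 - hi) * (smax - d32 - ken * wr) / (smax - hi)
```

For example, with D32 = 100, Ken = 2, WL = WR = 10, an input of 105 maps to 110 (the contrast around the local mean is doubled), while 0 and Smax map to themselves.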
The above is the description of the operation of the image processing apparatus according to the first embodiment of the present invention.
FIG. 8 is a block diagram showing the configuration of an image processing apparatus according to the second embodiment of the present invention. Unlike the first embodiment, the image processing apparatus according to the second embodiment has a configuration in which the three-dimensional noise reduction means is provided before the low-contrast portion detecting means and the local contrast enhancing means.
The intermediate image D105 needs to have sufficient bit precision so that tone jumps are not produced by the subsequent local contrast enhancing means 103.
The internal configuration of each of the above means is the same as in the first embodiment, and its detailed description is therefore omitted.
The noise reduction coefficient Knr stored in the frame memory 106 is used by the three-dimensional noise reduction means 105 when processing each pixel of the next frame.
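The frame-recursive structure recited in the claims (a frame memory holding the previous output, two subtractors, and a coefficient multiplier) reduces to out = in − Knr × (in − prev_out); a minimal sketch, with the first-frame handling as an assumption:

```python
import numpy as np

class Cyclic3DNoiseReduction:
    """Frame-recursive (cyclic) temporal noise reduction.

    The previous output frame is subtracted from the current input
    (first subtractor), the difference is scaled by the per-pixel noise
    reduction coefficient Knr (coefficient multiplier), and the scaled
    difference is subtracted from the input (second subtractor):

        out = in - Knr * (in - prev_out)

    Knr = 0 passes the input through; larger Knr smooths more strongly
    in the temporal direction.
    """
    def __init__(self):
        self.prev = None  # frame memory: one frame of the output

    def process(self, frame, knr):
        frame = np.asarray(frame, dtype=float)
        if self.prev is None:
            self.prev = frame.copy()  # first frame: nothing to recurse on
            return frame
        out = frame - knr * (frame - self.prev)
        self.prev = out
        return out
```

On a static scene, frame-to-frame fluctuations (random noise) are attenuated while the constant subject signal is untouched, which matches the stated effect of not blurring still-image edges.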
Next, the enhancement coefficient determining means 102 determines the enhancement coefficient Ken for each pixel from the contrast correlation value CT according to equation (2) and the relation shown in FIG. 2 (S102).
The details of the operation in each step are the same as in the first embodiment, and their detailed description is therefore omitted.
FIG. 11 is a block diagram showing the configuration of an image processing apparatus according to the third embodiment of the present invention. In the image processing apparatus according to the third embodiment, unlike the first embodiment, the detection of the low-contrast portion and the determination of the enhancement coefficient Ken, the gain G, and the noise reduction coefficient Knr are performed based on the luminance signal Yin (first image signal) indicating the luminance component of the image, while the local contrast enhancement using the determined gain G and the noise reduction using the determined noise reduction coefficient Knr are performed on the color image signals (second image signals) Rin, Gin, and Bin of the red (R), green (G), and blue (B) color components of the same image.
These have the same configuration as the delay means 31, the nonlinear LPF means 32, and the gain determining means 33 shown in FIG. 1, and perform, on the luminance image Yin input in place of the input image Din, the same processing as described in the first embodiment.
The delay means 31R, 31G, and 31B have the same configuration as the delay means 231, but respectively receive the red, green, and blue input images Rin, Gin, and Bin, and output signals (delayed image signals) D31R, D31G, and D31B delayed by the same delay time as that of the delay means 231.
The three-dimensional noise reduction means 5G generates a green output image Gout whose noise has been reduced by smoothing the noise component in the temporal direction over a plurality of frames of the green intermediate image D3G.
The three-dimensional noise reduction means 5B likewise generates a blue output image Bout whose noise has been reduced by smoothing the noise component in the temporal direction over a plurality of frames of the blue intermediate image D3B.
FIG. 12 is a block diagram showing the configuration of an image processing apparatus according to the fourth embodiment of the present invention. In the image processing apparatus according to the fourth embodiment, as in the third embodiment, the detection of the low-contrast portion and the determination of the enhancement coefficient Ken, the gain G, and the noise reduction coefficient Knr are performed based on a luminance image, while the local contrast enhancement using the determined gain G and the noise reduction using the noise reduction coefficient Knr are performed on the red (R), green (G), and blue (B) color images. As in the second embodiment, the image processing apparatus of the fourth embodiment has the three-dimensional noise reduction means provided before the low-contrast portion detecting means and the local contrast enhancing means.
The three-dimensional noise reduction means 105G generates a green intermediate image D105G whose noise has been reduced by smoothing the noise component in the temporal direction over a plurality of frames of the green input image Gin.
The three-dimensional noise reduction means 105B likewise generates a blue intermediate image D105B whose noise has been reduced by smoothing the noise component in the temporal direction over a plurality of frames of the blue input image Bin.
The luminance intermediate image D307 is generated by the operation
D307 = α × D105R + β × D105G + γ × D105B,
where α + β + γ = 1. For example, α = 1/4, β = 1/2, γ = 1/4 are used; alternatively, for a more accurate calculation, α = 0.299, β = 0.587, γ = 0.114 may be used.
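The luminance computation above can be written directly; the (1/4, 1/2, 1/4) weights are a cheap, hardware-friendly choice, while (0.299, 0.587, 0.114) are the BT.601 luma weights mentioned for the more accurate calculation:

```python
def luminance(r, g, b, coeffs=(0.299, 0.587, 0.114)):
    """Weighted sum of the colour planes: D307 = alpha*R + beta*G + gamma*B.

    The weights must sum to 1 so that a grey input maps to the same
    grey luminance value.
    """
    alpha, beta, gamma = coeffs
    assert abs(alpha + beta + gamma - 1.0) < 1e-9
    return alpha * r + beta * g + gamma * b

# quarter/half/quarter variant:
#   luminance(r, g, b, coeffs=(0.25, 0.5, 0.25))
```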
These have the same configuration as the delay means 131, the nonlinear LPF means 132, and the gain determining means 133 shown in FIG. 8, but receive the luminance image D307 instead of the intermediate image D105, and perform on the luminance image D307 the same processing as described in the second embodiment.
The delay means 131R, 131G, and 131B have the same configuration as the delay means 331, but respectively receive the red, green, and blue intermediate images D105R, D105G, and D105B, and output signals D131R, D131G, and D131B delayed by the same delay time as that of the delay means 331.
Claims (26)
- An image processing apparatus comprising:
low-contrast portion detecting means for detecting, with each pixel of an input image as a correction target pixel, a contrast correlation value of a peripheral region of the correction target pixel in the input image;
enhancement coefficient determining means for determining a contrast enhancement coefficient for the correction target pixel according to the contrast correlation value detected by the low-contrast portion detecting means;
local contrast enhancing means for enhancing, according to the enhancement coefficient determined by the enhancement coefficient determining means, the contrast of a local region of the correction target pixel in the input image and outputting a locally contrast-enhanced image;
noise reduction coefficient generating means for generating, for the enhancement coefficient determined by the enhancement coefficient determining means, a noise reduction coefficient that increases as the enhancement coefficient increases; and
three-dimensional noise reduction means for reducing the noise component of the correction target pixel by smoothing the locally contrast-enhanced image in the temporal direction over a plurality of frames;
wherein the three-dimensional noise reduction means controls the degree of noise reduction for the correction target pixel according to the noise reduction coefficient generated by the noise reduction coefficient generating means. - The image processing apparatus according to claim 1, wherein the low-contrast portion detecting means outputs, as the contrast correlation value, a value that is small when the contrast of the peripheral region centered on the correction target pixel is low and large when the contrast of the peripheral region centered on the correction target pixel is high.
- The image processing apparatus according to claim 1 or 2, wherein the low-contrast portion detecting means obtains, for each pixel within a predetermined range centered on the correction target pixel, the standard deviation of the values of all pixels within a window of a predetermined size centered on that pixel, and obtains the average of the standard deviations over all pixels within the predetermined range as the contrast correlation value. - The image processing apparatus according to any one of claims 1 to 3, wherein the noise reduction coefficient generating means determines the noise reduction coefficient such that the amount of SNR degradation according to the enhancement coefficient in the local contrast enhancing means corresponds to the amount of SNR improvement according to the noise reduction coefficient in the three-dimensional noise reduction means. - The image processing apparatus according to any one of claims 1 to 4, wherein the noise reduction coefficient generating means generates the noise reduction coefficient with a characteristic in which the noise reduction coefficient is small when the enhancement coefficient is small, the noise reduction coefficient is large when the enhancement coefficient is large, and the increment of the noise reduction coefficient decreases as the enhancement coefficient increases. - The image processing apparatus according to any one of claims 1 to 5, wherein the three-dimensional noise reduction means comprises:
a frame memory that stores at least one frame of the image signal output from the three-dimensional noise reduction means;
a first subtractor that subtracts, from the locally contrast-enhanced image, the image signal output one frame earlier and stored in the frame memory;
a coefficient multiplier that multiplies the output of the first subtractor by the noise reduction coefficient generated by the noise reduction coefficient generating means; and
a second subtractor that subtracts the output of the coefficient multiplier from the locally contrast-enhanced image,
the output of the second subtractor being the image signal output from the three-dimensional noise reduction means. - The image processing apparatus according to any one of claims 1 to 7, wherein the enhancement coefficient determining means determines the enhancement coefficient with a characteristic in which
the enhancement coefficient is large when the contrast correlation value is small,
the enhancement coefficient is small when the contrast correlation value is large, and
the increment of the enhancement coefficient increases as the contrast correlation value decreases. - In the image processing apparatus, the local contrast enhancing means comprises:
nonlinear LPF means that performs smoothing on the correction target pixel and the pixels located around it using values obtained by nonlinearly transforming the values of the surrounding pixels according to the differences between the value of the correction target pixel of the input image and the values of the surrounding pixels, and outputs the result of the smoothing as the result of nonlinear smoothing for the correction target pixel;
gain determining means that determines the gain to be used in the local contrast enhancing means using the output of the nonlinear LPF means, the value of the correction target pixel of the input image, and the enhancement coefficient; and
a gain multiplier that multiplies the value of the correction target pixel of the input image by the gain,
these means characterizing the image processing apparatus according to any one of claims 1 to 9. - An image processing apparatus comprising:
three-dimensional noise reduction means for reducing, with each pixel of an input image as a correction target pixel, the noise component of the correction target pixel by smoothing the input image in the temporal direction over a plurality of frames, and outputting a three-dimensional noise-reduced image;
low-contrast portion detecting means for detecting a contrast correlation value of a peripheral region of the correction target pixel in the three-dimensional noise-reduced image;
enhancement coefficient determining means for determining a contrast enhancement coefficient for the correction target pixel according to the contrast correlation value detected by the low-contrast portion detecting means;
local contrast enhancing means for enhancing, according to the enhancement coefficient determined by the enhancement coefficient determining means, the contrast of a local region of the correction target pixel in the three-dimensional noise-reduced image and outputting a locally contrast-enhanced image signal;
noise reduction coefficient generating means for generating, for the enhancement coefficient determined by the enhancement coefficient determining means, a noise reduction coefficient that increases as the enhancement coefficient increases; and
a first frame memory that stores one frame of the noise reduction coefficient generated for each pixel by the noise reduction coefficient generating means;
wherein the three-dimensional noise reduction means controls the degree of noise reduction for the correction target pixel according to the noise reduction coefficient, stored in the first frame memory, for each pixel of the frame one frame earlier. - The image processing apparatus according to claim 11, wherein the low-contrast portion detecting means outputs, as the contrast correlation value, a value that is small when the contrast of the peripheral region centered on the correction target pixel is low and large when the contrast of the peripheral region centered on the correction target pixel is high.
- The image processing apparatus according to claim 11 or 12, wherein the low-contrast portion detecting means obtains, for each pixel within a predetermined range centered on the correction target pixel, the standard deviation of the values of all pixels within a window of a predetermined size centered on that pixel, and obtains the average of the standard deviations over all pixels within the predetermined range as the contrast correlation value. - The image processing apparatus according to any one of claims 11 to 13, wherein the noise reduction coefficient generating means determines the noise reduction coefficient such that the amount of SNR degradation according to the enhancement coefficient in the local contrast enhancing means corresponds to the amount of SNR improvement according to the noise reduction coefficient in the three-dimensional noise reduction means. - The image processing apparatus according to any one of claims 11 to 14, wherein the noise reduction coefficient generating means generates the noise reduction coefficient with a characteristic in which the noise reduction coefficient is small when the enhancement coefficient is small, the noise reduction coefficient is large when the enhancement coefficient is large, and the increment of the noise reduction coefficient decreases as the enhancement coefficient increases. - The image processing apparatus according to any one of claims 11 to 15, wherein the three-dimensional noise reduction means comprises:
a frame memory that stores at least one frame of the image signal output from the three-dimensional noise reduction means;
a first subtractor that subtracts, from the input image, the image signal output one frame earlier and stored in the frame memory;
a coefficient multiplier that multiplies the output of the first subtractor by the noise reduction coefficient generated by the noise reduction coefficient generating means; and
a second subtractor that subtracts the output of the coefficient multiplier from the input image,
the output of the second subtractor being the image signal output from the three-dimensional noise reduction means. - The image processing apparatus according to any one of claims 11 to 17, wherein the enhancement coefficient determining means determines the enhancement coefficient with a characteristic in which
the enhancement coefficient is large when the contrast correlation value is small,
the enhancement coefficient is small when the contrast correlation value is large, and
the increment of the enhancement coefficient increases as the contrast correlation value decreases. - In the image processing apparatus, the local contrast enhancing means comprises:
nonlinear LPF means that performs smoothing on the correction target pixel and the pixels located around it using values obtained by nonlinearly transforming the values of the surrounding pixels according to the differences between the value of the correction target pixel of the three-dimensional noise-reduced image and the values of the surrounding pixels, and outputs the result of the smoothing as the result of nonlinear smoothing for the correction target pixel;
gain determining means that determines the gain to be used in the local contrast enhancing means using the output of the nonlinear LPF means, the value of the correction target pixel of the three-dimensional noise-reduced image, and the enhancement coefficient; and
a gain multiplier that multiplies the value of the correction target pixel of the three-dimensional noise-reduced image by the gain,
these means characterizing the image processing apparatus according to any one of claims 11 to 19. - An image processing apparatus comprising:
low-contrast portion detecting means for detecting, with each pixel of an input image as a correction target pixel, a contrast correlation value of a peripheral region of the correction target pixel in the luminance signal of the input image;
enhancement coefficient determining means for determining a contrast enhancement coefficient for the correction target pixel according to the contrast correlation value detected by the low-contrast portion detecting means;
local contrast enhancing means for enhancing, according to the enhancement coefficient determined by the enhancement coefficient determining means, the contrast of a local region of the correction target pixel in the signal of each color component of the input image and outputting the signal of each color component of a locally contrast-enhanced image;
noise reduction coefficient generating means for generating, for the enhancement coefficient determined by the enhancement coefficient determining means, a noise reduction coefficient that increases as the enhancement coefficient increases; and
three-dimensional noise reduction means that receives the signal of each color component of the locally contrast-enhanced image and reduces the noise component of the correction target pixel by smoothing the signal of each color component of the locally contrast-enhanced image in the temporal direction over a plurality of frames;
wherein the three-dimensional noise reduction means controls the degree of noise reduction for the correction target pixel according to the noise reduction coefficient generated by the noise reduction coefficient generating means; and
the local contrast enhancing means further comprises:
nonlinear LPF means that performs smoothing on the correction target pixel and the pixels located around it using values obtained by nonlinearly transforming the values of the surrounding pixels according to the differences between the value of the correction target pixel in the luminance signal of the input image and the values of the surrounding pixels, and outputs the result of the smoothing as the result of nonlinear smoothing for the correction target pixel;
gain determining means that determines the gain to be used in the local contrast enhancing means using the output of the nonlinear LPF means, the value of the correction target pixel in the luminance signal of the input image, and the enhancement coefficient; and
a gain multiplier that multiplies the value of the correction target pixel in the signal of each color component of the input image by the gain. - An image processing apparatus comprising:
three-dimensional noise reduction means for reducing, with each pixel of an input image as a correction target pixel, the noise component of the correction target pixel by smoothing the signal of each color component of the input image in the temporal direction over a plurality of frames, and outputting the signal of each color component of a three-dimensional noise-reduced image;
luminance image generating means for generating a luminance signal of the three-dimensional noise-reduced image from the signals of the color components of the three-dimensional noise-reduced image;
low-contrast portion detecting means for detecting a contrast correlation value of a peripheral region of the correction target pixel in the luminance signal of the three-dimensional noise-reduced image;
enhancement coefficient determining means for determining a contrast enhancement coefficient for the correction target pixel according to the contrast correlation value detected by the low-contrast portion detecting means;
local contrast enhancing means for enhancing, according to the enhancement coefficient determined by the enhancement coefficient determining means, the contrast of a local region of the correction target pixel in the signal of each color component of the three-dimensional noise-reduced image;
noise reduction coefficient generating means for generating, for the enhancement coefficient determined by the enhancement coefficient determining means, a noise reduction coefficient that increases as the enhancement coefficient increases; and
a first frame memory that stores one frame of the noise reduction coefficient generated for each pixel by the noise reduction coefficient generating means;
wherein the three-dimensional noise reduction means controls the degree of noise reduction for the correction target pixel according to the noise reduction coefficient, stored in the first frame memory, for each pixel of the frame one frame earlier; and
the local contrast enhancing means further comprises:
nonlinear LPF means that performs smoothing on the correction target pixel and the pixels located around it using values obtained by nonlinearly transforming the values of the surrounding pixels according to the differences between the value of the correction target pixel in the luminance signal of the three-dimensional noise-reduced image and the values of the surrounding pixels, and outputs the result of the smoothing as the result of nonlinear smoothing for the correction target pixel;
gain determining means that determines the gain to be used in the local contrast enhancing means using the output of the nonlinear LPF means, the value of the correction target pixel in the luminance signal of the three-dimensional noise-reduced image, and the enhancement coefficient; and
a gain multiplier that multiplies the value of the correction target pixel in the signal of each color component of the three-dimensional noise-reduced image by the gain. - An image processing method comprising:
a low-contrast portion detecting step of detecting, with each pixel of an input image as a correction target pixel, a contrast correlation value of a peripheral region of the correction target pixel in the input image;
an enhancement coefficient determining step of determining a contrast enhancement coefficient for the correction target pixel according to the contrast correlation value detected in the low-contrast portion detecting step;
a local contrast enhancing step of enhancing, according to the enhancement coefficient determined in the enhancement coefficient determining step, the contrast of a local region of the correction target pixel in the input image and outputting a locally contrast-enhanced image;
a noise reduction coefficient generating step of generating, for the enhancement coefficient determined in the enhancement coefficient determining step, a noise reduction coefficient that increases as the enhancement coefficient increases; and
a three-dimensional noise reduction step of reducing, for the correction target pixel, the noise component of the correction target pixel by smoothing the locally contrast-enhanced image in the temporal direction over a plurality of frames;
wherein the three-dimensional noise reduction step controls the degree of noise reduction for the correction target pixel according to the noise reduction coefficient generated in the noise reduction coefficient generating step. - An image processing method comprising:
a three-dimensional noise reduction step of reducing, with each pixel of an input image as a correction target pixel, the noise component of the correction target pixel by smoothing the input image in the temporal direction over a plurality of frames, and outputting a three-dimensional noise-reduced image;
a low-contrast portion detecting step of detecting a contrast correlation value of a peripheral region of the correction target pixel in the three-dimensional noise-reduced image;
an enhancement coefficient determining step of determining a contrast enhancement coefficient for the correction target pixel according to the contrast correlation value detected in the low-contrast portion detecting step;
a local contrast enhancing step of enhancing, according to the enhancement coefficient determined in the enhancement coefficient determining step, the contrast of a local region of the correction target pixel in the three-dimensional noise-reduced image;
a noise reduction coefficient generating step of generating, for the enhancement coefficient determined in the enhancement coefficient determining step, a noise reduction coefficient that increases as the enhancement coefficient increases; and
a first frame memory that stores one frame of the noise reduction coefficient generated for each pixel in the noise reduction coefficient generating step;
wherein the three-dimensional noise reduction step controls the degree of noise reduction for the correction target pixel according to the noise reduction coefficient, stored in the first frame memory, for each pixel of the frame one frame earlier. - An image processing method comprising:
a low-contrast portion detecting step of detecting, with each pixel of an input image as a correction target pixel, a contrast correlation value of a peripheral region of the correction target pixel in the luminance signal of the input image;
an enhancement coefficient determining step of determining a contrast enhancement coefficient for the correction target pixel according to the contrast correlation value detected in the low-contrast portion detecting step;
a local contrast enhancing step of enhancing, according to the enhancement coefficient determined in the enhancement coefficient determining step, the contrast of a local region of the correction target pixel in the signal of each color component of the input image and outputting the signal of each color component of a locally contrast-enhanced image;
a noise reduction coefficient generating step of generating, for the enhancement coefficient determined in the enhancement coefficient determining step, a noise reduction coefficient that increases as the enhancement coefficient increases; and
a three-dimensional noise reduction step of reducing the noise of the correction target pixel by smoothing the signal of each color component of the locally contrast-enhanced image in the temporal direction over a plurality of frames;
wherein the three-dimensional noise reduction step controls the degree of noise reduction for the correction target pixel according to the noise reduction coefficient generated in the noise reduction coefficient generating step; and
the local contrast enhancing step further comprises:
a nonlinear LPF step of performing smoothing on the correction target pixel and the pixels located around it using values obtained by nonlinearly transforming the values of the surrounding pixels according to the differences between the value of the correction target pixel in the luminance signal of the input image and the values of the surrounding pixels, and outputting the result of the smoothing as the result of nonlinear smoothing for the correction target pixel;
a gain determining step of determining the gain to be used in the local contrast enhancing step using the output of the nonlinear LPF step, the value of the correction target pixel in the luminance signal of the input image, and the enhancement coefficient; and
a gain multiplying step of multiplying the value of the correction target pixel in the signal of each color component of the input image by the gain.
- An image processing method comprising:
a three-dimensional noise reduction step of reducing, with each pixel of an input image as a correction target pixel, the noise component of the correction target pixel by smoothing the signal of each color component of the input image in the temporal direction over a plurality of frames, and outputting the signal of each color component of a three-dimensional noise-reduced image;
a luminance image generating step of generating a luminance signal of the three-dimensional noise-reduced image from the signals of the color components of the three-dimensional noise-reduced image;
a low-contrast portion detecting step of detecting a contrast correlation value of a peripheral region of the correction target pixel in the luminance signal of the three-dimensional noise-reduced image;
an enhancement coefficient determining step of determining a contrast enhancement coefficient for the correction target pixel according to the contrast correlation value detected in the low-contrast portion detecting step;
a local contrast enhancing step of enhancing, according to the enhancement coefficient determined in the enhancement coefficient determining step, the contrast of a local region of the correction target pixel in the signal of each color component of the three-dimensional noise-reduced image;
a noise reduction coefficient generating step of generating, for the enhancement coefficient determined in the enhancement coefficient determining step, a noise reduction coefficient that increases as the enhancement coefficient increases; and
a first frame memory that stores one frame of the noise reduction coefficient generated for each pixel in the noise reduction coefficient generating step;
wherein the three-dimensional noise reduction step controls the degree of noise reduction for the correction target pixel according to the noise reduction coefficient, stored in the first frame memory, for each pixel of the frame one frame earlier; and
the local contrast enhancing step further comprises:
a nonlinear LPF step of performing smoothing on the correction target pixel and the pixels located around it using values obtained by nonlinearly transforming the values of the surrounding pixels according to the differences between the value of the correction target pixel in the luminance signal of the three-dimensional noise-reduced image and the values of the surrounding pixels, and outputting the result of the smoothing as the result of nonlinear smoothing for the correction target pixel;
a gain determining step of determining the gain to be used in the local contrast enhancing step using the output of the nonlinear LPF step, the value of the correction target pixel in the luminance signal of the three-dimensional noise-reduced image, and the enhancement coefficient; and
a gain multiplying step of multiplying the value of the correction target pixel in the signal of each color component of the three-dimensional noise-reduced image by the gain.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP12761251.3A EP2690860B1 (en) | 2011-03-24 | 2012-01-31 | Image processing device and method |
CN201280014653.4A CN103460682B (zh) | 2011-03-24 | 2012-01-31 | 图像处理装置和方法 |
US14/002,144 US9153015B2 (en) | 2011-03-24 | 2012-01-31 | Image processing device and method |
JP2013505836A JP5595585B2 (ja) | 2011-03-24 | 2012-01-31 | 画像処理装置及び方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011-065629 | 2011-03-24 | ||
JP2011065629 | 2011-03-24 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012127904A1 true WO2012127904A1 (ja) | 2012-09-27 |
Family
ID=46879073
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/052083 WO2012127904A1 (ja) | 2011-03-24 | 2012-01-31 | 画像処理装置及び方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US9153015B2 (ja) |
EP (1) | EP2690860B1 (ja) |
JP (1) | JP5595585B2 (ja) |
CN (1) | CN103460682B (ja) |
WO (1) | WO2012127904A1 (ja) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014077126A1 (ja) * | 2012-11-13 | 2014-05-22 | 日本電気株式会社 | 画像処理装置、画像処理方法及びプログラム |
JP2014232938A (ja) * | 2013-05-28 | 2014-12-11 | キヤノン株式会社 | 画像処理装置、画像処理方法及びプログラム |
CN110211184A (zh) * | 2019-06-25 | 2019-09-06 | 珠海格力智能装备有限公司 | 一种led显示屏幕中灯珠定位方法、定位装置 |
US20220343469A1 (en) * | 2019-11-06 | 2022-10-27 | Canon Kabushiki Kaisha | Image processing apparatus |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5948073B2 (ja) | 2012-02-08 | 2016-07-06 | 株式会社 日立産業制御ソリューションズ | 画像信号処理装置、画像信号処理方法 |
JP6057629B2 (ja) * | 2012-09-07 | 2017-01-11 | キヤノン株式会社 | 画像処理装置、その制御方法、および制御プログラム |
JP5887303B2 (ja) * | 2013-06-19 | 2016-03-16 | 株式会社 日立産業制御ソリューションズ | 画像信号処理装置,撮像装置および画像処理プログラム |
KR102244918B1 (ko) | 2014-07-11 | 2021-04-27 | 삼성전자주식회사 | 시인성을 향상시키고 전력 소모를 줄일 수 있는 디스플레이 컨트롤러와 이를 포함하는 디스플레이 시스템 |
JP6333145B2 (ja) | 2014-09-30 | 2018-05-30 | 株式会社Screenホールディングス | 画像処理方法および画像処理装置 |
US10180678B2 (en) | 2016-07-28 | 2019-01-15 | Young Optics Inc. | Method for improved 3-D printing system and system thereof |
JP7262940B2 (ja) | 2018-07-30 | 2023-04-24 | キヤノン株式会社 | 画像処理装置、撮像装置、画像処理装置の制御方法およびプログラム |
CN111179182B (zh) * | 2019-11-21 | 2023-10-03 | 珠海格力智能装备有限公司 | 图像的处理方法及装置、存储介质及处理器 |
CN112967207B (zh) * | 2021-04-23 | 2024-04-12 | 北京恒安嘉新安全技术有限公司 | 一种图像处理方法、装置、电子设备及存储介质 |
CN117408988B (zh) * | 2023-11-08 | 2024-05-14 | 北京维思陆科技有限公司 | 基于人工智能的病灶图像分析方法及装置 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008017458A (ja) * | 2006-06-08 | 2008-01-24 | Matsushita Electric Ind Co Ltd | 画像処理装置、画像処理方法、画像処理プログラムおよび集積回路 |
JP2008171059A (ja) * | 2007-01-09 | 2008-07-24 | Rohm Co Ltd | 画像処理回路、半導体装置、画像処理装置 |
JP2009005252A (ja) * | 2007-06-25 | 2009-01-08 | Olympus Corp | 画像処理装置 |
JP2010183182A (ja) * | 2009-02-03 | 2010-08-19 | Olympus Corp | 画像処理装置、プログラム、及び方法、並びに撮像システム |
JP2010220030A (ja) * | 2009-03-18 | 2010-09-30 | Mitsubishi Electric Corp | 映像補正回路および映像表示装置 |
Family Cites Families (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08512418A (ja) * | 1994-05-03 | 1996-12-24 | フィリップス エレクトロニクス ネムローゼ フェンノートシャップ | 残留映像による良好なコントラスト/雑音 |
KR100206319B1 (ko) * | 1995-12-13 | 1999-07-01 | 윤종용 | 비디오 신호의 로컬 콘트라스트 개선을 위한 방법및장치 |
KR100200628B1 (ko) * | 1996-09-30 | 1999-06-15 | 윤종용 | 화질 개선 회로 및 그 방법 |
US5978518A (en) * | 1997-02-25 | 1999-11-02 | Eastman Kodak Company | Image enhancement in digital image processing |
JP3730419B2 (ja) | 1998-09-30 | 2006-01-05 | シャープ株式会社 | 映像信号処理装置 |
US6724943B2 (en) | 2000-02-07 | 2004-04-20 | Sony Corporation | Device and method for image processing |
JP4415236B2 (ja) | 2000-02-07 | 2010-02-17 | ソニー株式会社 | 画像処理装置及び画像処理方法 |
EP1345170B1 (en) | 2002-02-22 | 2005-01-12 | Agfa-Gevaert N.V. | Noise reduction method. |
JP4397623B2 (ja) | 2003-05-19 | 2010-01-13 | パナソニック株式会社 | 階調補正装置 |
JP4069943B2 (ja) * | 2003-12-03 | 2008-04-02 | 株式会社ニコン | ノイズ除去の強弱を画面内でコントロールする画像処理装置、画像処理プログラム、画像処理方法、および電子カメラ |
US8014034B2 (en) * | 2005-04-13 | 2011-09-06 | Acd Systems International Inc. | Image contrast enhancement |
ATE512421T1 (de) * | 2005-11-23 | 2011-06-15 | Cedara Software Corp | Verfahren und system zur verbesserung von digitalen bildern |
US8023733B2 (en) | 2006-06-08 | 2011-09-20 | Panasonic Corporation | Image processing device, image processing method, image processing program, and integrated circuit |
US8111895B2 (en) * | 2006-12-06 | 2012-02-07 | Siemens Medical Solutions Usa, Inc. | Locally adaptive image enhancement for digital subtraction X-ray imaging |
JP2008199448A (ja) | 2007-02-15 | 2008-08-28 | Matsushita Electric Ind Co Ltd | 撮像装置、画像処理装置および画像処理方法 |
US8306348B2 (en) * | 2007-04-24 | 2012-11-06 | DigitalOptics Corporation Europe Limited | Techniques for adjusting the effect of applying kernels to signals to achieve desired effect on signal |
US8233548B2 (en) * | 2007-05-09 | 2012-07-31 | Panasonic Corporation | Noise reduction device and noise reduction method of compression coded image |
JP2009100373A (ja) * | 2007-10-18 | 2009-05-07 | Sanyo Electric Co Ltd | ノイズ低減処理装置、ノイズ低減処理方法、及び電子機器 |
US8238687B1 (en) * | 2008-01-09 | 2012-08-07 | Helwett-Packard Development Company, L.P. | Local contrast enhancement of images |
JP5123756B2 (ja) * | 2008-06-26 | 2013-01-23 | オリンパス株式会社 | 撮像システム、画像処理方法および画像処理プログラム |
EP2457196A4 (en) * | 2009-07-21 | 2013-02-06 | Qualcomm Inc | METHOD AND SYSTEM FOR DETECTION AND ENHANCEMENT OF VIDEO IMAGES |
US8639050B2 (en) * | 2010-10-19 | 2014-01-28 | Texas Instruments Incorporated | Dynamic adjustment of noise filter strengths for use with dynamic range enhancement of images |
-
2012
- 2012-01-31 WO PCT/JP2012/052083 patent/WO2012127904A1/ja active Application Filing
- 2012-01-31 US US14/002,144 patent/US9153015B2/en not_active Expired - Fee Related
- 2012-01-31 CN CN201280014653.4A patent/CN103460682B/zh not_active Expired - Fee Related
- 2012-01-31 JP JP2013505836A patent/JP5595585B2/ja active Active
- 2012-01-31 EP EP12761251.3A patent/EP2690860B1/en not_active Not-in-force
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008017458A (ja) * | 2006-06-08 | 2008-01-24 | Matsushita Electric Ind Co Ltd | 画像処理装置、画像処理方法、画像処理プログラムおよび集積回路 |
JP2008171059A (ja) * | 2007-01-09 | 2008-07-24 | Rohm Co Ltd | 画像処理回路、半導体装置、画像処理装置 |
JP2009005252A (ja) * | 2007-06-25 | 2009-01-08 | Olympus Corp | 画像処理装置 |
JP2010183182A (ja) * | 2009-02-03 | 2010-08-19 | Olympus Corp | 画像処理装置、プログラム、及び方法、並びに撮像システム |
JP2010220030A (ja) * | 2009-03-18 | 2010-09-30 | Mitsubishi Electric Corp | 映像補正回路および映像表示装置 |
Non-Patent Citations (1)
Title |
---|
See also references of EP2690860A4 * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014077126A1 (ja) * | 2012-11-13 | 2014-05-22 | 日本電気株式会社 | 画像処理装置、画像処理方法及びプログラム |
JPWO2014077126A1 (ja) * | 2012-11-13 | 2017-01-05 | 日本電気株式会社 | 画像処理装置、画像処理方法及びプログラム |
US9621766B2 (en) | 2012-11-13 | 2017-04-11 | Nec Corporation | Image processing apparatus, image processing method, and program capable of performing high quality mist/fog correction |
JP2014232938A (ja) * | 2013-05-28 | 2014-12-11 | キヤノン株式会社 | 画像処理装置、画像処理方法及びプログラム |
CN110211184A (zh) * | 2019-06-25 | 2019-09-06 | 珠海格力智能装备有限公司 | 一种led显示屏幕中灯珠定位方法、定位装置 |
US20220343469A1 (en) * | 2019-11-06 | 2022-10-27 | Canon Kabushiki Kaisha | Image processing apparatus |
US11756165B2 (en) | 2019-11-06 | 2023-09-12 | Canon Kabushiki Kaisha | Image processing apparatus, method, and storage medium for adding a gloss |
US11836900B2 (en) * | 2019-11-06 | 2023-12-05 | Canon Kabushiki Kaisha | Image processing apparatus |
Also Published As
Publication number | Publication date |
---|---|
CN103460682B (zh) | 2016-08-17 |
JPWO2012127904A1 (ja) | 2014-07-24 |
US9153015B2 (en) | 2015-10-06 |
EP2690860A4 (en) | 2014-10-29 |
US20130336596A1 (en) | 2013-12-19 |
JP5595585B2 (ja) | 2014-09-24 |
EP2690860B1 (en) | 2016-04-20 |
EP2690860A1 (en) | 2014-01-29 |
CN103460682A (zh) | 2013-12-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5595585B2 (ja) | 画像処理装置及び方法 | |
JP4273428B2 (ja) | 画像処理装置、画像処理方法、画像処理方法のプログラム及び画像処理方法のプログラムを記録した記録媒体 | |
US8155468B2 (en) | Image processing method and apparatus | |
US9390482B2 (en) | Image processing apparatus and method of processing image | |
JP2005323365A (ja) | デジタル映像信号フィルタリング装置及び方法 | |
JP6097588B2 (ja) | 画像処理装置及び画像処理方法 | |
JP2005341564A (ja) | ノイズ処理が可能なガンマ補正装置およびその方法 | |
US9830690B2 (en) | Wide dynamic range imaging method | |
US8526736B2 (en) | Image processing apparatus for correcting luminance and method thereof | |
EP2355039A1 (en) | Image generating apparatus and method for emphasizing edge based on image characteristics | |
JPWO2004002135A1 (ja) | 動き検出装置及びそれを用いたノイズリダクション装置 | |
CN112819721B (zh) | 一种图像彩色噪声降噪的方法和系统 | |
US20090002562A1 (en) | Image Processing Device, Image Processing Method, Program for Image Processing Method, and Recording Medium Having Program for Image Processing Method Recorded Thereon | |
JP2008072450A (ja) | 画像処理装置及び画像処理方法 | |
JP2012108898A (ja) | 画像処理装置、画像処理方法 | |
JP2006504312A5 (ja) | ||
JP2006504312A (ja) | シャープネス強調 | |
JP2012032739A (ja) | 画像処理装置及び方法、並びに画像表示装置 | |
JP4174656B2 (ja) | 画像表示装置および画像処理装置、並びに画像処理方法 | |
JP5365878B2 (ja) | フィルタ装置、フィルタリング方法 | |
Lin et al. | An adaptive color transient improvement algorithm | |
KR102270230B1 (ko) | 영상 처리 장치 및 영상 처리 방법 | |
US20100091195A1 (en) | De-ringing Device and Method | |
JP5933332B2 (ja) | 画像処理装置、画像処理方法、画像処理プログラム、および画像処理プログラムを記憶した記録媒体 | |
CN116939374B (zh) | 镜头阴影矫正方法、装置及电子设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12761251 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14002144 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 2013505836 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012761251 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |