WO2005013622A1 - Image processing apparatus, image processing program, electronic camera, and image processing method for processing an image in which color components are mixedly arranged
- Publication number
- WO2005013622A1 (PCT/JP2004/009601)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- image processing
- processing apparatus
- similarity
- component
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4015—Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/76—Circuitry for compensating brightness variation in the scene by influencing the image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/843—Demosaicing, e.g. interpolating colour pixel values
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2209/00—Details of colour television systems
- H04N2209/04—Picture signal generators
- H04N2209/041—Picture signal generators using solid-state devices
- H04N2209/042—Picture signal generators using solid-state devices having a single pick-up sensor
- H04N2209/045—Picture signal generators using solid-state devices having a single pick-up sensor using mosaic colour filter
- H04N2209/046—Colour interpolation to calculate the missing colour values
Definitions
- Image processing apparatus, image processing program, electronic camera, and image processing method
- the present invention relates to an image processing technique for converting a first image (for example, RAW data) in which color components are mixedly arranged to generate a second image in which at least one type of component is arranged in pixel units.
- this kind of spatial filter processing is applied after color interpolation processing and color system conversion processing into luminance and color difference YCbCr are performed on the RAW data (for example, Bayer array data) output from a single-chip image sensor; the filtering is applied to the luminance/color-difference YCbCr (particularly, the luminance component Y).
- a so-called ⁇-filter is known as a typical noise-removal filter.
- when color interpolation is performed on the RAW data of a single-chip image sensor, the original signal existing in the RAW data and the interpolated signal generated by averaging the original signal are arranged on one screen. At this time, the spatial frequency characteristics differ slightly between the original signal and the interpolated signal.
- in Patent Document 1, low-pass filter processing is performed only on the original signal, and the difference in spatial frequency characteristics between the original signal and the interpolated signal is thereby reduced.
- here, the original signal is subjected to low-pass filter processing using the interpolated signal close to the original signal. Therefore, even in this processing, the color interpolation and the spatial filter processing are performed stepwise, and there is still a problem that fine image information is easily lost.
- Patent Document 2 discloses an image processing apparatus that directly performs color system conversion processing on RAW data.
- the color system conversion is performed by weighted addition of the RAW data according to the coefficient table.
- coefficient terms for edge enhancement and noise removal can be fixedly included in advance.
- Patent Document 1 and Patent Document 2 do not specifically refer to a technique for changing the conversion filter of the first image according to imaging sensitivity as in the embodiments described later. Therefore, for a high-S/N image captured at low imaging sensitivity, the smoothing of the conversion filter was too strong, and the image could be excessively blurred. Conversely, for a low-S/N image captured at high imaging sensitivity, the smoothing of the conversion filter was insufficient, and false colors could be conspicuous or the image could be grainy.
- Disclosure of the Invention
- An object of the present invention is to efficiently and easily perform advanced spatial filter processing adapted to an image structure in view of the above-mentioned problems.
- Another object of the present invention is to provide an image processing technique capable of appropriately removing noise while maintaining resolution and contrast even when the imaging sensitivity changes.
- the invention will be described.
- the image processing apparatus of the present invention converts a first image, in which any one of the first to n-th color components (n ≥ 2) is distributed in pixel units, into a second image in which the first to n-th color components are arranged in pixel units.
- This image processing device includes a smoothing unit.
- the smoothing unit performs smoothing using a first color component of a peripheral pixel on a pixel position having a first color component in the first image.
- the smoothing unit uses the smoothed first color component as the first color component at the pixel position of the second image.
- the smoothing unit includes a control unit.
- the control unit changes the characteristics of the filter for smoothing according to the imaging sensitivity at which the first image was captured.
- the first color component is a color component that carries a luminance signal among the first to n-th color components.
- the first to n-th color components are red, green, and blue, and the first color component is green.
- control unit changes the size of the filter (reference pixel range) according to the imaging sensitivity.
- control unit changes the element value of the filter (contribution ratio of the peripheral reference component to the smoothing target) according to the imaging sensitivity.
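As an illustration of how both the filter size (reference pixel range) and the element values (contribution of peripheral pixels) might vary with imaging sensitivity, the following sketch builds a normalized smoothing kernel from an ISO value. The ISO breakpoints, kernel sizes, and centre weights are hypothetical assumptions for illustration, not values taken from this disclosure.

```python
def smoothing_kernel(iso):
    """Return a normalized 2-D kernel; higher ISO -> stronger, wider smoothing.

    The breakpoints and weights below are illustrative, not from the patent."""
    if iso <= 200:          # high S/N: weak 3x3 smoothing, centre dominates
        centre, size = 8.0, 3
    elif iso <= 800:        # medium sensitivity: stronger 3x3 smoothing
        centre, size = 4.0, 3
    else:                   # low S/N: widen the reference range to 5x5
        centre, size = 4.0, 5
    kernel = [[1.0] * size for _ in range(size)]
    kernel[size // 2][size // 2] = centre
    total = sum(sum(row) for row in kernel)
    return [[v / total for v in row] for row in kernel]
```

Raising the ISO both widens the reference range and lowers the relative weight of the centre pixel, which is the behaviour the two claims above describe.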
- the smoothing unit includes a similarity determination unit and a switching unit.
- the similarity determination unit determines the degree of similarity between pixels in a plurality of directions.
- according to the determination result, the switching unit switches between outputting the first color component of the first image as it is as the first color component of the second image and outputting the smoothed first color component as the first color component of the second image.
- the similarity determination unit determines the similarity by calculating the similarity between pixels in at least four directions.
- Another image processing apparatus of the present invention converts a first image, in which any one of the first to n-th color components (n ≥ 2) is distributed in pixel units, into a second image having at least one signal component in pixel units.
- This image processing device includes a signal generation unit.
- the signal generation unit generates a signal component of the second image by performing a weighted addition of the color components of the first image.
- the signal generation unit includes a control unit.
- the control unit changes a weighted addition coefficient used when performing weighted addition of the color components of the first image according to the imaging sensitivity at the time of capturing the first image.
- the signal generator generates a signal component different from the first to n-th color components.
- the signal generation unit generates a luminance component different from the first to n-th color components.
- control unit changes the weighted addition coefficient according to the imaging sensitivity for the pixel position having the first color component in the first image.
- control unit changes the range of the weighted addition according to the imaging sensitivity.
- control unit changes the weighting coefficient of the weighted addition according to the imaging sensitivity.
- the signal generation unit includes a similarity determination unit.
- the similarity determination unit determines the degree of similarity between pixels in a plurality of directions.
- the control unit changes the weight coefficient of the weighted addition according to the similarity determination result in addition to the imaging sensitivity.
- when the similarity determination result indicates that "the similarity cannot be distinguished in any direction" or that "the similarity in every direction is stronger than a predetermined level", the control unit weights and adds the color component inherent in the processing target pixel of the first image and the same color component of the peripheral pixels.
- the similarity determination unit determines the similarity by calculating the similarity between pixels in at least four directions.
- Another image processing apparatus of the present invention converts a first image in which a plurality of types of color components constituting a color system are mixedly arranged on a pixel array, and generates a second image in which at least one type of signal component (hereinafter, "new component") is aligned in pixel units.
- This image processing device includes a similarity determination unit, a coefficient selection unit, and a conversion processing unit.
- the similarity determination unit determines the similarity of the image in a plurality of directions at the processing target pixel of the first image.
- the coefficient selection unit selects a predetermined coefficient table in accordance with the similarity direction determination by the similarity determination unit.
- the conversion processing unit generates a new component by weighting and adding the color components of the local area including the pixel to be processed according to the selected coefficient table.
- the coefficient selection unit changes the coefficient table to one having a different spatial frequency characteristic according to the analysis of the image structure based on the similarity. By changing such a coefficient table, adjustment of the spatial frequency component of the new component is realized.
- the spatial frequency component of the new component to be generated is adjusted by switching the coefficient table to one having a different spatial frequency characteristic according to the analysis of the image structure based on the similarity.
- the weight ratio of the color component is made to correspond to the weight ratio of the color system conversion.
- the conventional “color interpolation processing” is not required, and the “color system conversion processing” and the “spatial filter processing considering the image structure” can be performed by a single weighted addition.
- image processing of RAW data or the like which has conventionally taken a long time, can be significantly simplified and significantly speeded up.
- the coefficient selection unit analyzes the image structure near the pixel to be processed by judging the magnitude of the similarity value.
- the coefficient selector changes the coefficient table to one having a different spatial frequency characteristic according to the analysis result.
- the coefficient selection unit changes the coefficient table to one having a different array size. With such a change, the coefficient table is changed to one having a different spatial frequency characteristic.
- preferably, the coefficient selection unit changes to a coefficient table of a type that suppresses the high-frequency component of the signal component more strongly or over a broader band, that is, a "coefficient table of a type with a stronger degree of noise removal".
- preferably, when the similarity in a plurality of directions is determined to be substantially equal in the direction determination and the similarity is determined to be strong in the analysis of the image structure, the coefficient selection unit changes to a "coefficient table with a stronger degree of noise removal" that suppresses high-frequency components strongly or broadly.
- preferably, when it is determined in the analysis of the image structure that the strength difference between the directions of similarity is large, the coefficient selection unit changes to a coefficient table of a type that more strongly emphasizes the high-frequency component of the signal component in the direction of weak similarity.
- the coefficient selection unit changes the coefficient table to one with a higher degree of noise removal as the imaging sensitivity at the time of capturing the first image is higher.
- the weight ratio between the color components is kept substantially constant before and after the change of the coefficient table.
- the weight ratio between the color components is a weight ratio for color system conversion.
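The invariant stated above — the inter-color weight ratio stays fixed across coefficient tables — can be checked mechanically. In this hypothetical sketch, a compact 3×3 table and a wider 5×5 table, both centred on an R pixel of a Bayer array, deliver the same per-color weight sums (an R:G:B ratio of 1:2:1 is assumed purely for illustration), so switching between them changes only the spatial frequency response, not the color system conversion.

```python
def color_weight_sums(table, cfa):
    """Sum the coefficients landing on each color of the CFA pattern.

    `table` and `cfa` are same-sized 2-D lists; cfa holds 'R'/'G'/'B'."""
    sums = {'R': 0.0, 'G': 0.0, 'B': 0.0}
    for trow, crow in zip(table, cfa):
        for coef, color in zip(trow, crow):
            sums[color] += coef
    return sums

# Hypothetical Bayer neighborhoods centred on an R pixel.
CFA3 = [['B', 'G', 'B'],
        ['G', 'R', 'G'],
        ['B', 'G', 'B']]
CFA5 = [['R', 'G', 'R', 'G', 'R'],
        ['G', 'B', 'G', 'B', 'G'],
        ['R', 'G', 'R', 'G', 'R'],
        ['G', 'B', 'G', 'B', 'G'],
        ['R', 'G', 'R', 'G', 'R']]

# Weak LPF: compact 3x3 table; per-color sums R=4, G=8, B=4.
WEAK = [[1, 2, 1],
        [2, 4, 2],
        [1, 2, 1]]
# Stronger LPF: weight spread over 5x5, yet the same per-color sums 4:8:4.
STRONG = [[0,   0.5, 0.5, 0.5, 0  ],
          [0.5, 1,   1,   1,   0.5],
          [0.5, 1,   2,   1,   0.5],
          [0.5, 1,   1,   1,   0.5],
          [0,   0.5, 0.5, 0.5, 0  ]]
```

Because both tables weight R, G, and B in the ratio 1:2:1, either one performs the same colorimetric conversion while smoothing to a different degree.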
- Another image processing device of the present invention includes a smoothing unit and a control unit.
- the smoothing unit performs a smoothing process on the image data by performing weighted addition of a pixel to be processed and peripheral pixels of the image data.
- control unit changes the reference range of the peripheral pixels according to the imaging sensitivity at the time of capturing the image data.
- An image processing program causes a computer to function as the image processing device according to any one of (1) to (27).
- An electronic camera includes: the image processing device according to any one of (1) to (27); and an imaging unit configured to image a subject and generate a first image.
- a first image captured by an imaging unit is processed by an image processing device to generate a second image.
- the image processing method according to the present invention converts a first image, in which any one of the first to n-th color components (n ≥ 2) is allocated to each pixel, into a second image having at least one signal component for each pixel.
- the image processing method includes a step of generating a signal component of the second image by weighted addition of the color components of the first image.
- in this image processing method, the weighted addition coefficient used when weighting and adding the color components of the first image is changed according to the imaging sensitivity at which the first image was captured.
- Another image processing method of the present invention converts a first image in which a plurality of types of color components constituting a color system are mixedly arranged on a pixel array, and generates a second image in which at least one type of signal component (hereinafter, "new component") is aligned in pixel units.
- This image processing method has the following steps.
- [S1] A step of determining image similarity in a plurality of directions in a processing target pixel of the first image.
- [S2] A step of selecting a predetermined coefficient table according to the similarity direction determination in the similarity determination step.
- [S3] A step of weight-adding the color components of the local area including the pixel to be processed by the selected coefficient table to generate a new component.
- the coefficient table is changed to one having a different spatial frequency characteristic according to the analysis of the image structure based on the similarity. With this change, the spatial frequency component of the new component is adjusted.
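Steps [S1] to [S3] above can be sketched as a single dispatch: judge the similarity direction, look up a coefficient table keyed by that direction, and produce the new component with one weighted addition. The table layout and the judgment callback here are placeholders, not the actual tables of FIGS. 9 to 13.

```python
def weighted_add(raw, i, j, table):
    """[S3] Weighted addition over the local area centred on (i, j)."""
    r = len(table) // 2
    return sum(table[r + di][r + dj] * raw[i + di][j + dj]
               for di in range(-r, r + 1)
               for dj in range(-r, r + 1))

def convert_pixel(raw, i, j, tables, judge):
    """[S1] judge similarity direction -> [S2] select table -> [S3] weight-add."""
    direction = judge(raw, i, j)      # e.g. an (HV, DN) index pair
    return weighted_add(raw, i, j, tables[direction])
```

On a flat patch any normalized table must return the flat value, which is a quick sanity check when adding new coefficient tables.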
- FIG. 1 is a block diagram showing a configuration of the electronic camera 1.
- FIG. 2 is a flowchart showing a rough operation of the color system conversion processing.
- FIG. 3 is a flowchart showing the setting operation of the index HV.
- FIG. 4 is a flowchart showing the operation of setting the index DN.
- FIG. 5 is a flowchart (1/3) showing a process of generating a luminance component.
- FIG. 6 is a flowchart (2/3) showing the process of generating the luminance component.
- FIG. 7 is a flowchart (3/3) showing a process of generating a luminance component.
- FIG. 8 is a diagram showing the relationship between the index (HV, DN) and the similar direction.
- FIG. 9 is a diagram illustrating an example of the coefficient table.
- FIG. 10 is a diagram showing an example of the coefficient table.
- FIG. 11 is a diagram illustrating an example of the coefficient table.
- FIG. 12 is a diagram illustrating an example of the coefficient table.
- FIG. 13 is a diagram illustrating an example of the coefficient table.
- FIG. 14 is a flowchart illustrating the operation of the RGB color interpolation.
- FIG. 15 is a diagram illustrating an example of the coefficient table.
- FIG. 16 is a diagram illustrating an example of the coefficient table.
- FIG. 17 is a flowchart for explaining the operation of the RGB color interpolation.
- BEST MODE FOR CARRYING OUT THE INVENTION
- FIG. 1 is a block diagram of an electronic camera 1 corresponding to the first embodiment.
- a photographing lens 20 is attached to an electronic camera 1.
- the imaging surface of the imaging element 21 is arranged.
- an RGB primary color filter having a Bayer array is arranged.
- the image signal output from the image sensor 21 is converted into digital RAW data (corresponding to the first image) through an analog signal processing unit 22 and an A/D conversion unit 10, and is then temporarily stored in the memory 13 via a bus.
- the memory 13 is connected, via the bus, to an image processing unit (for example, a one-chip microprocessor dedicated to image processing) 11, a control unit 12, a compression/expansion unit 14, an image display unit 15, a recording unit 17, and an external interface unit 19.
- the electronic camera 1 is provided with an operation unit 24, a monitor 25, and a timing control unit 23. Further, a memory card 16 is attached to the electronic camera 1. The recording unit 17 compresses and records the processed image on the memory card 16.
- the electronic camera 1 can be connected to an external computer 18 via an external interface unit 19 (such as a USB).
- the similarity determination unit described in the claims corresponds to the function of the image processing unit 11 of "determining the similarity in the vertical and horizontal directions and classifying each pixel of the RAW data into cases 1 to 12 (described later) by determining the similarity direction".
- the coefficient selection unit described in the claims corresponds to the function of the image processing unit 11 of "switching among a group of coefficient tables having different spatial frequency characteristics based on the similarity determination, and selecting a coefficient table from the group according to cases 1 to 12".
- the conversion processing unit described in the claims corresponds to the “function of obtaining a new component (here, a luminance component) by weight-adding the color components of the local area of the RAW data according to the coefficient table” of the image processing unit 11.
- the first image described in the claims corresponds to the RAW data.
- the second image described in the claims corresponds to the image data after the color system conversion.
- FIG. 2 to 7 are operation flowcharts of the image processing unit 11.
- Fig. 2 shows the general flow of color system conversion.
- Fig. 3 and Fig. 4 show the operation to find the index (HV, DN) for determining the direction of similarity.
- 5 to 7 show the processing for generating a luminance component.
- the image processing unit 11 determines the direction of similarity in the vertical and horizontal directions of the screen, centering on the pixel to be processed of the RAW data, and obtains the index HV (step S1).
- this index HV is set to "1" when the vertical similarity is stronger than the horizontal similarity, set to "−1" when the horizontal similarity is stronger than the vertical similarity, and set to "0" when these similarities cannot be distinguished.
- the image processing section 11 determines the direction of similarity with respect to the slanting direction of the screen, centering on the pixel to be processed of the RAW data, and obtains the index DN (step S2).
- this index DN is set to "1" when the similarity in the oblique 45-degree direction is stronger than in the oblique 135-degree direction, set to "−1" when the similarity in the oblique 135-degree direction is stronger than in the oblique 45-degree direction, and set to "0" when these similarities cannot be distinguished.
- the image processing unit 11 performs a “chromaticity component generation process” (step S4) together with the “luminance component generation process” (step S3).
- Step S12 First, the image processing unit 11 calculates the difference between pixels in the vertical and horizontal directions at the coordinates [i, j] of the RAW data, and sets it as the similarity.
- the image processing unit 11 calculates the similarity Cv [i, j] in the vertical direction and the similarity Ch [i, j] in the horizontal direction using Equations 1 to 4 below. (Note that the absolute value II in the equation may be replaced by a square operation or the like.)
- Step S13 Next, the image processing unit 11 compares the similarities in the vertical and horizontal directions.
- Step S14 For example, if the following condition 2 is satisfied, the image processing unit 11 determines that the vertical and horizontal similarities are substantially the same, and sets 0 to the index HV [i, j].
- Step S15 If condition 2 is not satisfied and the following condition 3 is satisfied, the image processing unit 11 determines that the vertical similarity is strong, and sets 1 to the index HV[i, j].
- Step S16 If conditions 2 and 3 are not satisfied, the image processing unit 11 determines that the horizontal similarity is strong, and sets −1 to the index HV[i, j].
- in the above description, the similarity was calculated for both the RB position and the G position.
- alternatively, for the sake of simplicity, the similarity may be calculated only for the RB position, and the direction index HV set only at the RB position.
- the direction index of the G position may be determined with reference to the surrounding HV value. For example, it is also possible to calculate the average value of the indices of four points adjacent to the G position and convert it to an integer to use it as the direction index of the G position.
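Since Equations 1 to 4 do not survive in this text, the sketch below substitutes a plausible same-direction pixel-difference measure (a smaller value means stronger similarity) and implements the HV decision of steps S13 to S16. The difference formulas and the threshold are illustrative assumptions, not the patent's equations.

```python
def vh_similarity(raw, i, j):
    """Hypothetical stand-in for Equations 1-4 (lost in the source text).

    Returns (Cv, Ch); a smaller value means stronger similarity."""
    cv = (abs(raw[i - 1][j] - raw[i + 1][j])
          + abs(2 * raw[i][j] - raw[i - 2][j] - raw[i + 2][j])) / 2
    ch = (abs(raw[i][j - 1] - raw[i][j + 1])
          + abs(2 * raw[i][j] - raw[i][j - 2] - raw[i][j + 2])) / 2
    return cv, ch

def index_hv(cv, ch, th=2):
    """Steps S13-S16: 0 if indistinguishable (condition 2),
    1 if vertical similarity is stronger, -1 if horizontal is stronger."""
    if abs(cv - ch) <= th:            # nearly equal -> cannot distinguish
        return 0
    return 1 if cv < ch else -1       # smaller value = stronger similarity
```

A column-constant image should yield HV = 1 and a row-constant image HV = −1, matching the sign convention stated above.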
- Step S31 First, at the coordinates [i, j] of the RAW data, the image processing section 11 calculates pixel-to-pixel differences in the oblique 45-degree direction and the oblique 135-degree direction and uses them as similarities. For example, the image processing unit 11 calculates the similarity C45[i, j] in the oblique 45-degree direction and the similarity C135[i, j] in the oblique 135-degree direction using the following Equations 5 to 8.
- Step S32 Having calculated the similarities in the oblique 45-degree direction and the oblique 135-degree direction in this way, the image processing unit 11 determines, based on these similarities, whether the similarities in the two oblique directions are substantially the same.
- such a determination can be realized by determining whether the following condition 5 is satisfied.
- the threshold value Th5 plays a role of preventing an erroneous determination, caused by noise, that one similarity is strong when the difference between the two similarity values C45[i, j] and C135[i, j] is very small. Therefore, it is preferable to set the threshold Th5 to a high value for a color image containing much noise.
- Step S33 The image processing unit 11 sets 0 to the index DN [i, j] when the diagonal similarities are substantially the same from such a determination.
- Step S34 On the other hand, if a direction having strong oblique similarity can be determined, it is determined whether or not the similarity in oblique 45 degrees is strong.
- such a determination can be realized by determining whether the following condition 6 is satisfied.
- Step S35 If the similarity in the oblique 45-degree direction is strong (when condition 5 is not satisfied and condition 6 is satisfied), the image processing unit 11 sets 1 to the index DN[i, j].
- Step S36 On the other hand, if the similarity in the oblique 135-degree direction is strong (when conditions 5 and 6 are not satisfied), the image processing unit 11 sets −1 to the index DN[i, j].
- in the above description, the similarity was calculated for both the RB position and the G position; however, for the sake of simplicity, the similarity may be calculated only for the RB position, and the direction index DN set only at the RB position.
- the direction index of the G position may be determined with reference to the DN values in the vicinity. For example, it is also possible to calculate the average value of the indices of four points adjacent to the G position and convert it to an integer to use it as the direction index of the G position.
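As with Equations 1 to 4, Equations 5 to 8 are lost in this text, so the following sketch substitutes an assumed diagonal pixel-difference measure and implements the DN decision of steps S32 to S36, including the Th5 dead zone of condition 5. All formulas and the threshold are illustrative.

```python
def diagonal_similarity(raw, i, j):
    """Hypothetical stand-in for Equations 5-8 (lost in the source text).

    Returns (C45, C135); a smaller value means stronger similarity."""
    c45 = (abs(raw[i - 1][j + 1] - raw[i + 1][j - 1])
           + abs(2 * raw[i][j] - raw[i - 2][j + 2] - raw[i + 2][j - 2])) / 2
    c135 = (abs(raw[i - 1][j - 1] - raw[i + 1][j + 1])
            + abs(2 * raw[i][j] - raw[i - 2][j - 2] - raw[i + 2][j + 2])) / 2
    return c45, c135

def index_dn(c45, c135, th5=2):
    """Steps S32-S36: 0 if nearly equal (condition 5), 1 if the 45-degree
    similarity is stronger (condition 6), otherwise -1."""
    if abs(c45 - c135) <= th5:        # condition 5: cannot distinguish
        return 0
    return 1 if c45 < c135 else -1    # smaller value = stronger similarity
```

An image constant along one diagonal should drive DN to ±1 accordingly, while a flat image stays at 0.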
- Step S41 The image processing section 11 determines whether or not the index (HV, DN) of the pixel to be processed is (0, 0).
- if the index is (0, 0), the image processing unit 11 shifts the operation to step S42.
- Step S42 The image processing section 11 acquires information on the imaging sensitivity (corresponding to the amplifier gain of the imaging element) at the time of capturing the RAW data from the control section 12.
- if the imaging sensitivity is high, the image processing unit 11 shifts the operation to step S46.
- otherwise, the image processing unit 11 shifts the operation to step S43.
- Step S43 The image processing section 11 performs a similarity strength determination. For example, this strength determination is made based on whether any of the similarities Cv[i, j] and Ch[i, j] used for calculating the index HV and the similarities C45[i, j] and C135[i, j] used for calculating the index DN satisfies the following condition 7.
- the threshold value th6 is a boundary value for determining whether a portion having isotropic similarity is a "flat portion" or a "portion having significant undulation information", and is set in advance according to the actual RAW data values.
- when condition 7 is satisfied, the image processing unit 11 shifts the operation to step S44. On the other hand, when condition 7 is not satisfied, the image processing unit 11 shifts the operation to step S45.
- Step S44 Here, since condition 7 is satisfied, it can be determined that the pixel to be processed has weak similarity with the peripheral pixels, that is, that it is a portion having significant undulation information. Therefore, the image processing unit 11 selects coefficient table 1 (see FIG. 9), which has a weak LPF characteristic, in order to preserve this significant undulation information.
- this coefficient table 1 is a coefficient table that can be used in common for the RGB positions. After this selection operation, the image processing section 11 shifts the operation to step S51.
- Step S45 Here, since condition 7 is not satisfied, it can be determined that the pixel to be processed has strong similarity with the surrounding pixels, that is, that it is a flat portion. Therefore, the image processing unit 11 selects one of coefficient tables 2 and 3 (see FIG. 9), which strongly and broadly suppress high-frequency components, in order to reliably remove the small-amplitude noise that stands out in flat portions.
- This coefficient table 2 is a coefficient table selected when the pixel to be processed is at the RB position.
- coefficient table 3 is a coefficient table selected when the pixel to be processed is at the G position.
- the image processing unit 11 shifts the operation to step S51.
- Step S46 Here, since the imaging sensitivity is high, it can be determined that the S/N of the RAW data is low. Therefore, the image processing unit 11 selects coefficient table 4 (see FIG. 9), which suppresses high-frequency components still more strongly and more widely, in order to reliably remove the noise of the RAW data.
- the coefficient table 4 is a coefficient table that can be used in common for the RGB positions. After this selection operation, the image processing section 11 shifts the operation to step S51.
- Step S47 Here, the pixel to be processed has anisotropic similarity. The image processing unit 11 therefore obtains the strength difference between the "similarity in the similar direction" and the "similarity in the dissimilar direction".
- Such a strength difference can be obtained, for example, from the difference or the ratio between the "vertical similarity Cv [i, j]" and the "horizontal similarity Ch [i, j]" used when calculating the index HV. It can likewise be obtained from the difference or the ratio between the "45° diagonal similarity C45 [i, j]" and the "135° diagonal similarity C135 [i, j]" used when calculating the index DN.
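A sketch of this strength-difference computation follows. Returning both the difference and the ratio, and the epsilon guard against division by zero, are illustrative choices of ours, not values given in the text.

```python
def direction_strength_difference(c_similar, c_dissimilar, eps=1e-12):
    """Strength difference between two directional similarity values.

    In this text's convention a smaller similarity value means stronger
    similarity, so the absolute difference (or the ratio) between the two
    directions measures how anisotropic the local image structure is.
    """
    diff = abs(c_similar - c_dissimilar)
    ratio = max(c_similar, c_dissimilar) / (min(c_similar, c_dissimilar) + eps)
    return diff, ratio

# A strongly vertical structure: Cv small (similar), Ch large (dissimilar).
diff, ratio = direction_strength_difference(2.0, 10.0)
```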
- Step S48 The image processing unit 11 compares the obtained strength difference with a threshold value according to the following condition 8.
- The threshold value th7 discriminates whether or not the pixel to be processed belongs to the image structure of a contour portion, and is set in advance according to the actual characteristics of the RAW data.
- If condition 8 is satisfied, the image processing unit 11 shifts the operation to step S50.
- Otherwise, the operation shifts to step S49.
- Step S49 Here, since condition 8 is not satisfied, the pixel to be processed is presumed not to be a contour of the image. Therefore, the image processing unit 11 uses a coefficient table group with an array size of 3 × 3 (see FIGS. 9 to 13).
- First, the image processing unit 11 classifies the pixel to be processed into one of cases 1 to 12 according to conditions that combine the "direction determination of similarity using the indices (HV, DN)" and the "color component of the pixel to be processed". Note that the following "x" may be any of 1, 0, −1. 〈R position or B position〉
- Next, from the coefficient table group with weak contour enhancement (coefficient tables 5, 7, 9, 11, 13, 15, 17, 19, 21, 23, 25, and 27 shown in FIGS. 9 to 13), the image processing unit 11 selects a coefficient table as follows.
- case 11: select coefficient table 25.
- After selecting such a coefficient table, the image processing unit 11 shifts the operation to step S51.
- Step S50 Here, since condition 8 is satisfied, the pixel to be processed is presumed to be a contour of the image. Therefore, the image processing unit 11 selects a coefficient table from the group with strong contour enhancement (coefficient tables 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, and 28 shown in FIGS. 9 to 13, with an array size of 5 × 5).
- First, the image processing unit 11 classifies the pixel to be processed into cases 1 to 12, as in step S49.
- Next, from this coefficient table group with strong contour enhancement, the image processing unit 11 selects a coefficient table as follows.
- As a result, a coefficient table in which coefficients are preferentially allocated in the direction of relatively strong similarity is selected. Furthermore, this coefficient table achieves contour enhancement of the image by distributing negative coefficient terms in the direction substantially orthogonal to the similar direction. After selecting such a coefficient table, the image processing unit 11 shifts the operation to step S51.
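The actual tables appear in FIGS. 9 to 13, which are not reproduced here. The following invented 3 × 3 kernel merely illustrates the described shape for a vertically similar pixel: positive coefficients along the similar direction, negative terms orthogonal to it. These are not the patent's table values.

```python
# Invented example of the described kernel shape for a vertically similar
# pixel: positive coefficients run along the similar (vertical) direction and
# negative terms sit in the orthogonal (horizontal) direction, which is what
# produces the contour enhancement. NOT the patent's actual table values.
vertical_kernel = [
    [-1, 2, -1],
    [-2, 6, -2],
    [-1, 2, -1],
]
# The coefficients sum to a positive value, so a flat area is not zeroed out.
kernel_sum = sum(sum(row) for row in vertical_kernel)
```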
- Step S51 By the series of operations described above, a coefficient table is selected for each pixel.
- The image processing unit 11 multiplies each coefficient term of the selected coefficient table by the color component at the corresponding position in the local area of the RAW data containing the pixel to be processed, and adds up the results.
- The weighting ratio of this addition equals the weighting ratio used when calculating the luminance component Y from the RGB color components. Therefore, the above weighted addition generates the luminance component Y directly from the RAW data in pixel units.
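The single weighted addition of step S51 can be sketched as below. The patch and table values are illustrative; a normalized uniform table is shown only to demonstrate that a flat area keeps its level.

```python
def weighted_addition(raw_patch, coeff_table):
    """One weighted addition over a local RAW-data patch (step S51).

    Each coefficient multiplies whichever color component (R, G or B) happens
    to sit at that Bayer position; when the per-color weight ratios of the
    table match the Y formula, the sum is the luminance component directly.
    """
    return sum(c * v
               for c_row, v_row in zip(coeff_table, raw_patch)
               for c, v in zip(c_row, v_row))

# Flat gray patch with a normalized table: the level is reproduced unchanged.
patch = [[100.0] * 3 for _ in range(3)]
table = [[1.0 / 9.0] * 3 for _ in range(3)]
y = weighted_addition(patch, table)
```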
- As described above, in the first embodiment a group of coefficient tables having different spatial frequency characteristics is prepared in advance, and the tables are switched according to the analysis result of the image structure (steps S43 to S48).
- This makes it possible to carry out the originally separate operations of "color system conversion" and "spatial filter processing that takes the image structure into account" with a single weighted addition.
- In particular, when it is determined in step S43 that "similarity in multiple directions is isotropic" and "the similarity is strong", a "coefficient table with a high degree of noise removal" is selected (steps S43 and S45). For this reason, simultaneously with the color system conversion, noise conspicuous in flat portions of the image can be strongly suppressed.
- Conversely, when the similarity is weak, a coefficient table having a weak LPF characteristic is selected (steps S43 and S44), so significant undulation information survives and high-quality image data rich in image information can be generated.
- Furthermore, when it is determined that the difference in similarity between directions is large, the selection switches to a "coefficient table with a stronger degree of contour enhancement" that emphasizes high-frequency components in the dissimilar direction (steps S48 and S50).
- the coefficient table is changed to one with a higher degree of noise removal as the imaging sensitivity is higher (steps S42 and S46). As a result, simultaneously with the color system conversion, the noise that increases at high imaging sensitivity can be further suppressed.
- The electronic camera (including the image processing device) of the second embodiment performs color interpolation on RAW data in the RGB Bayer array (corresponding to the first image) to obtain image data in which the RGB signal components are arranged in pixel units (corresponding to the second image).
- the configuration of the electronic camera (FIG. 1) is the same as in the first embodiment. Therefore, description is omitted.
- FIG. 14 is a flowchart illustrating color interpolation processing according to the second embodiment.
- Step S61 The image processing unit 11 determines, by similarity judgment, whether or not the G pixel [i, j] of the RAW data to be processed lies at a location where similarity cannot be distinguished in any direction, that is, whether the image structure has no significant directionality and is highly isotropic.
- the image processing unit 11 obtains an index (HV, DN) for the G pixel [i, j]. This processing is the same as that of the first embodiment (FIGS. 3 and 4), and thus the description is omitted.
- Next, the image processing unit 11 determines whether or not the obtained index (HV, DN) is (0, 0).
- If (HV, DN) = (0, 0), the image processing unit 11 shifts the operation to step S63.
- Otherwise, the image processing unit 11 shifts the operation to step S62.
- Step S62 In this step, there is a significant directionality in the image structure.
- the G pixel [i, j] to be processed is located at the outline or detail of the image, and is likely to be an important image structure.
- Therefore, the image processing unit 11 skips the smoothing processing described later (steps S63 and S64) in order to preserve important image structures faithfully. That is, the image processing unit 11 uses the value of the G pixel [i, j] of the RAW data, as it is, as the G color component of the pixel [i, j] after color interpolation.
- the image processing unit 11 shifts the operation to Step S65.
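The branch of steps S61 and S62 reduces to a simple test on the indices; a minimal sketch:

```python
def g_pixel_route(hv, dn):
    """Branch of steps S61-S62: a G pixel whose indices (HV, DN) are (0, 0)
    shows no significant directionality and is routed to the smoothing path
    (step S63); any other value marks a contour/detail pixel whose RAW value
    is kept as-is (step S62)."""
    return "smooth" if (hv, dn) == (0, 0) else "keep"
```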
- Step S63 On the other hand, in this step there is no significant directionality in the image structure. Therefore, the pixel is highly likely to lie in a flat part of the image or to be point-like noise isolated from its surroundings.
- the image processing unit 11 can suppress the noise of the G pixel to a low level without impairing an important image structure by performing smoothing only on such a portion.
- the image processing unit 11 determines the degree of smoothing with reference to the imaging sensitivity at the time of capturing the RAW data, in addition to the above-described similarity determination (determination of the image structure).
- FIG. 15 is a diagram showing a coefficient table prepared in advance to change the degree of smoothing. This coefficient table specifies the weighting factor when the central G pixel [i, j] to be processed and the peripheral G pixels are weighted and added.
- When the imaging sensitivity is low, the image processing unit 11 selects the coefficient table shown in FIG. 15 (A).
- This coefficient table has a weighting ratio of 4: 1 between the central G pixel and the peripheral G pixel, and is a coefficient table with a low degree of smoothing.
- When the imaging sensitivity is intermediate, the image processing unit 11 selects the coefficient table shown in FIG. 15 (B).
- This coefficient table has a weighting ratio of 2 : 1 between the central G pixel and the peripheral G pixels, and provides an intermediate degree of smoothing.
- When the imaging sensitivity is high, the image processing unit 11 selects the coefficient table shown in FIG. 15 (C).
- This coefficient table has a weighting ratio of the central G pixel to the peripheral G pixel of 1: 1 and is a coefficient table with a high degree of smoothing.
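Assuming (since FIG. 15 is not reproduced here) that the peripheral G pixels are the four diagonal neighbours of the central G pixel in the Bayer array, the three weighting ratios can be sketched as:

```python
def g_smoothing_weights(sensitivity_level):
    """Normalized weights for the central G pixel and its four diagonal G
    neighbours (our assumed layout of FIG. 15).

    sensitivity_level: 0 = low  (center:periphery = 4:1, FIG. 15(A))
                       1 = mid  (2:1, FIG. 15(B))
                       2 = high (1:1, FIG. 15(C))
    """
    center = {0: 4, 1: 2, 2: 1}[sensitivity_level]
    weights = [center, 1, 1, 1, 1]
    total = sum(weights)
    return [w / total for w in weights]

def smooth_g(center_value, diagonal_values, sensitivity_level):
    """Weighted addition of step S64 for one G pixel."""
    weights = g_smoothing_weights(sensitivity_level)
    values = [center_value] + list(diagonal_values)
    return sum(w * v for w, v in zip(weights, values))
```

A flat area is unchanged at any level, while an isolated spike is pulled down hardest at the high-sensitivity (1 : 1) setting.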
- Alternatively, the degree of smoothing may be changed using the coefficient tables shown in FIG. 16.
- Below, the case where the coefficient tables shown in FIG. 16 are used is described.
- When the imaging sensitivity is low, the coefficient table shown in FIG. 16 (A) is selected.
- This coefficient table has a size of 3 × 3 pixels vertically and horizontally, and smoothing acts on undulations of the pixel values at or below this scale (fine, high-spatial-frequency components), so the degree of smoothing can be kept relatively low.
- When the imaging sensitivity is intermediate, the coefficient table shown in FIG. 16 (B) is selected.
- In this coefficient table, the weighting coefficients are distributed in a diamond shape within a range of 5 × 5 pixels.
- In terms of the vertical and horizontal pixel pitch, this rhombus corresponds to a region of about 4.24 × 4.24 pixels measured along its diagonals.
- Undulations (middle- and high-spatial-frequency components) at or below this scale are subject to smoothing, so the degree of smoothing is slightly higher.
- When the imaging sensitivity is high, the coefficient table shown in FIG. 16 (C) is selected.
- This coefficient table has a size of 5 × 5 pixels vertically and horizontally, and smoothing acts on undulations of the pixel values at or below this scale. As a result, undulations down to middle spatial frequencies are included in the smoothing target, and the degree of smoothing becomes still higher.
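The diamond footprint of FIG. 16 (B) can be generated as a Manhattan-distance mask. Whether the patent lays out its rhombus exactly this way is our assumption; the sketch only shows one natural construction within the 5 × 5 window.

```python
def diamond_mask(radius=2):
    """Diamond (rhombus) footprint inside a (2*radius+1)-square window:
    cells whose Manhattan distance from the center is <= radius carry weight.
    With radius=2 this gives a 5x5 diamond layout in the spirit of
    FIG. 16 (B); the exact construction is our assumption."""
    size = 2 * radius + 1
    return [[1 if abs(x - radius) + abs(y - radius) <= radius else 0
             for x in range(size)]
            for y in range(size)]

mask = diamond_mask(2)  # 13 of the 25 cells carry weight
```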
- In general, to weaken the smoothing, the image processing unit 11 relatively increases the weight coefficient of the central G pixel and/or reduces the size of the coefficient table.
- Conversely, to strengthen the smoothing, it relatively reduces the weight coefficient of the central G pixel and/or increases the size of the coefficient table.
- Step S64 The image processing unit 11 weights and adds the values of the peripheral G pixels to the G pixel [i, j] to be processed according to the weight coefficient of the selected coefficient table.
- The image processing unit 11 uses the value of the G pixel [i, j] after this weighted addition as the G color component of the pixel [i, j] after color interpolation.
- the image processing unit 11 shifts the operation to Step S65.
- Step S65 The image processing section 11 repeatedly executes the above-described adaptive smoothing processing (steps S61 to S64) on the G pixels of the RAW data.
- the image processing unit 11 shifts the operation to step S66.
- Step S66 Subsequently, the image processing unit 11 performs interpolation at the R and B positions of the RAW data (the lattice positions where the G color component is missing) to generate the interpolated G color component.
- Gv = (G[i, j-1] + G[i, j+1]) / 2
- Gh = (G[i-1, j] + G[i+1, j]) / 2
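A sketch of this G interpolation, choosing between the vertical average Gv and the horizontal average Gh according to the direction of stronger similarity. The dict-based indexing and the pairing of [i, j-1]/[i, j+1] as the vertical neighbours follow our reading of the formulas above.

```python
def interpolate_g(g, i, j, vertical_similar):
    """G interpolation at an R/B position (step S66).

    g is a dict keyed by (i, j) holding the measured G samples; which index
    varies vertically versus horizontally follows our reading of the text.
    """
    if vertical_similar:
        return (g[(i, j - 1)] + g[(i, j + 1)]) / 2  # Gv
    return (g[(i - 1, j)] + g[(i + 1, j)]) / 2      # Gh
```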
- Step S67 Subsequently, the image processing unit 11 performs interpolation for the R color component. For example, for the pixels [i+1, j], [i, j+1], and [i+1, j+1] other than the R position [i, j], the following interpolation is performed.
- R[i+1, j] = (R[i, j] + R[i+2, j]) / 2 + (2·G[i+1, j] - G[i, j] - G[i+2, j]) / 2
- R[i, j+1] = (R[i, j] + R[i, j+2]) / 2 + (2·G[i, j+1] - G[i, j] - G[i, j+2]) / 2
- R[i+1, j+1] = (R[i, j] + R[i+2, j] + R[i, j+2] + R[i+2, j+2]) / 4
- Step S68 Subsequently, the image processing unit 11 performs interpolation for the B color component. For example, for the pixels [i+1, j], [i, j+1], and [i+1, j+1] other than the B position [i, j], the following interpolation is performed.
- B[i+1, j] = (B[i, j] + B[i+2, j]) / 2 + (2·G[i+1, j] - G[i, j] - G[i+2, j]) / 2
- B[i, j+1] = (B[i, j] + B[i, j+2]) / 2 + (2·G[i, j+1] - G[i, j] - G[i, j+2]) / 2
- B[i+1, j+1] = (B[i, j] + B[i+2, j] + B[i, j+2] + B[i+2, j+2]) / 4
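Since the R and B formulas have the same structure, one sketch covers both steps S67 and S68 (shown here for an R position; the dict-based indexing is an illustrative choice):

```python
def interpolate_rb(s, g, i, j):
    """Interpolation of steps S67/S68 around a known R (or B) sample at [i, j].

    s holds the measured R (or B) samples, g the already-interpolated G plane,
    both as dicts keyed by (i, j). Returns the values for the three missing
    neighbours: linear averages of the known samples plus a G-based
    correction term, following the formulas in the text.
    """
    right = ((s[(i, j)] + s[(i + 2, j)]) / 2
             + (2 * g[(i + 1, j)] - g[(i, j)] - g[(i + 2, j)]) / 2)
    down = ((s[(i, j)] + s[(i, j + 2)]) / 2
            + (2 * g[(i, j + 1)] - g[(i, j)] - g[(i, j + 2)]) / 2)
    diagonal = (s[(i, j)] + s[(i + 2, j)]
                + s[(i, j + 2)] + s[(i + 2, j + 2)]) / 4
    return right, down, diagonal
```

On flat data the G correction term vanishes and all three results equal the surrounding sample value.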
- The electronic camera (including the image processing device) of the third embodiment performs color interpolation on RAW data in the RGB Bayer array (corresponding to the first image) to obtain image data in which RGB signal components are arranged in pixel units (corresponding to the second image).
- the configuration of the electronic camera (FIG. 1) is the same as in the first embodiment. Therefore, description is omitted.
- FIG. 17 is a flowchart illustrating color interpolation processing according to the third embodiment.
- Step S71 For the G pixel [i, j] of the RAW data to be processed, the image processing unit 11 determines, by similarity judgment, whether similarity stronger than a predetermined level cannot be found in any direction, that is, whether the area is highly flat.
- the image processing unit 11 obtains similarities Cv, Ch, C45, and C135 for the G pixel [i, j]. This processing is the same as in the first embodiment, and a description thereof will not be repeated.
- Next, the image processing unit 11 determines whether or not all of the obtained similarities Cv, Ch, C45, and C135 are equal to or smaller than a predetermined threshold, based on the following conditional expression.
- The threshold value in the expression determines whether a similarity value indicates a significant change in pixel value. Therefore, as the imaging sensitivity increases, it is preferable to raise the threshold value in consideration of the accompanying increase in noise.
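A sketch of this flatness test. The linear `sensitivity_gain` scaling is our assumed way of "raising the threshold as sensitivity increases", which the text leaves unspecified.

```python
def is_flat(cv, ch, c45, c135, base_threshold, sensitivity_gain=1.0):
    """Flatness test of step S71: every directional similarity value must be
    at or below the threshold. sensitivity_gain (>= 1) models raising the
    threshold with imaging sensitivity; the linear scaling is an assumption."""
    threshold = base_threshold * sensitivity_gain
    return all(c <= threshold for c in (cv, ch, c45, c135))
```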
- If the condition is satisfied (the area is highly flat), the image processing unit 11 shifts the operation to step S73.
- Otherwise, the image processing unit 11 shifts the operation to step S72.
- Steps S72 to S78 are the same as steps S62 to S68 in the second embodiment, and thus description thereof is omitted.
- RGB color interpolation is completed.
- In step S43 of the first embodiment described above, when it is determined that "similarity in multiple directions is isotropic" and "the similarity is weak", a "coefficient table with a stronger degree of noise removal" may instead be selected.
- In that case, undulation information at an isotropic location, that is, a location that is clearly not a contour, having weak similarity can be regarded as noise and strongly removed.
- Also, in step S48 of the first embodiment described above, when it is determined that the strength difference between the directions of similarity is small, a coefficient table that emphasizes the high-frequency components of the signal may be selected. In this case, simultaneously with the color system conversion, a fine image structure having no directionality can be emphasized.
- In the embodiments described above, color system conversion into the luminance component has been described.
- However, the present invention is not limited to this.
- The present invention may also be implemented when the color system is converted into a color difference component.
- In that case too, spatial filter processing (especially LPF processing) can be combined in the same way.
- In the above description, the case where the present invention is applied to color system conversion has been described.
- However, the present invention is not limited to this. For example, by replacing the coefficient tables for color system conversion with coefficient tables for color interpolation, it becomes possible to perform "color interpolation processing" and "advanced spatial filter processing considering the image structure" together.
- an external computer 18 may execute the operations shown in FIGS. 2 to 7 by using an image processing program.
- the image processing service of the present invention may be provided through a communication line such as the Internet.
- the image processing function of the present invention may be added later to the electronic camera by rewriting the firmware of the electronic camera.
- the present invention is an invention that can be used for an image processing device and the like.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP04747070A EP1641285A4 (en) | 2003-06-30 | 2004-06-30 | IMAGE PROCESSING DEVICE COMPRISING DIFFERENT COLOR ELEMENTS, IMAGE PROCESSING PROGRAM, ELECTRONIC CAMERA, AND IMAGE PROCESSING METHOD |
JP2005512463A JPWO2005013622A1 (ja) | 2003-06-30 | 2004-06-30 | 色成分の混在配列された画像を処理する画像処理装置、画像処理プログラム、電子カメラ、および画像処理方法 |
US11/320,460 US20060119896A1 (en) | 2003-06-30 | 2005-12-29 | Image processing apparatus, image processing program, electronic camera, and image processing method for smoothing image of mixedly arranged color components |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2003-186629 | 2003-06-30 | ||
JP2003186629 | 2003-06-30 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/320,460 Continuation US20060119896A1 (en) | 2003-06-30 | 2005-12-29 | Image processing apparatus, image processing program, electronic camera, and image processing method for smoothing image of mixedly arranged color components |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005013622A1 true WO2005013622A1 (ja) | 2005-02-10 |
Family
ID=34113557
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/009601 WO2005013622A1 (ja) | 2003-06-30 | 2004-06-30 | 色成分の混在配列された画像を処理する画像処理装置、画像処理プログラム、電子カメラ、および画像処理方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US20060119896A1 (ja) |
EP (1) | EP1641285A4 (ja) |
JP (1) | JPWO2005013622A1 (ja) |
CN (1) | CN1817047A (ja) |
WO (1) | WO2005013622A1 (ja) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006338296A (ja) * | 2005-06-01 | 2006-12-14 | Noritsu Koki Co Ltd | 微小ノイズ抑制のための画像処理方法及びプログラム及びこの方法を実施するノイズ抑制モジュール |
JP2007143120A (ja) * | 2005-10-18 | 2007-06-07 | Ricoh Co Ltd | ノイズ除去装置、ノイズ除去方法、ノイズ除去プログラム及び記録媒体 |
WO2007097170A1 (ja) * | 2006-02-23 | 2007-08-30 | Nikon Corporation | スペクトル画像処理方法、コンピュータ実行可能なスペクトル画像処理プログラム、スペクトルイメージングシステム |
JP2007306501A (ja) * | 2006-05-15 | 2007-11-22 | Fujifilm Corp | 画像処理方法、画像処理装置、および画像処理プログラム |
JP2009516256A (ja) * | 2005-11-10 | 2009-04-16 | ディー−ブルアー テクノロジーズ リミティド | モザイクドメインにおける画像向上 |
JP2009100150A (ja) * | 2007-10-16 | 2009-05-07 | Acutelogic Corp | 画像処理装置及び画像処理方法、画像処理プログラム |
US7817852B2 (en) | 2006-07-20 | 2010-10-19 | Casio Computer Co., Ltd. | Color noise reduction image processing apparatus |
US8045153B2 (en) | 2006-02-23 | 2011-10-25 | Nikon Corporation | Spectral image processing method, spectral image processing program, and spectral imaging system |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5399739B2 (ja) * | 2009-02-25 | 2014-01-29 | ルネサスエレクトロニクス株式会社 | 画像処理装置 |
JP5665508B2 (ja) * | 2010-11-30 | 2015-02-04 | キヤノン株式会社 | 画像処理装置及び方法、並びにプログラム及び記憶媒体 |
US8654225B2 (en) * | 2011-05-31 | 2014-02-18 | Himax Imaging, Inc. | Color interpolation system and method thereof |
TWI455570B (zh) * | 2011-06-21 | 2014-10-01 | Himax Imaging Inc | 彩色內插系統及方法 |
CN102857765B (zh) * | 2011-06-30 | 2014-11-05 | 英属开曼群岛商恒景科技股份有限公司 | 彩色内插系统及方法 |
WO2014008329A1 (en) * | 2012-07-03 | 2014-01-09 | Marseille Networks, Inc. | System and method to enhance and process a digital image |
CN104618701B (zh) * | 2015-01-13 | 2017-03-29 | 小米科技有限责任公司 | 图像处理方法及装置、电子设备 |
WO2016158317A1 (ja) * | 2015-03-31 | 2016-10-06 | 京セラドキュメントソリューションズ株式会社 | 画像処理装置および画像形成装置 |
JP7079680B2 (ja) * | 2018-07-05 | 2022-06-02 | 富士フイルムヘルスケア株式会社 | 超音波撮像装置、および、画像処理装置 |
WO2022126516A1 (en) * | 2020-12-17 | 2022-06-23 | Covidien Lp | Adaptive image noise reduction system and method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09233395A (ja) * | 1996-02-26 | 1997-09-05 | Toshiba Corp | 固体撮像装置 |
JP2001292455A (ja) * | 2000-04-06 | 2001-10-19 | Fuji Photo Film Co Ltd | 画像処理方法および装置並びに記録媒体 |
WO2002056604A1 (fr) * | 2001-01-09 | 2002-07-18 | Sony Corporation | Dispositif de traitement d'images |
JP2003087808A (ja) * | 2001-09-10 | 2003-03-20 | Fuji Photo Film Co Ltd | 画像処理方法および装置並びにプログラム |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5450502A (en) * | 1993-10-07 | 1995-09-12 | Xerox Corporation | Image-dependent luminance enhancement |
JP3221291B2 (ja) * | 1995-07-26 | 2001-10-22 | ソニー株式会社 | 画像処理装置、画像処理方法、ノイズ除去装置及びノイズ除去方法 |
US5596367A (en) * | 1996-02-23 | 1997-01-21 | Eastman Kodak Company | Averaging green values for green photosites in electronic cameras |
US5901242A (en) * | 1996-07-03 | 1999-05-04 | Sri International | Method and apparatus for decoding spatiochromatically multiplexed color images using predetermined coefficients |
US6392699B1 (en) * | 1998-03-04 | 2002-05-21 | Intel Corporation | Integrated color interpolation and color space conversion algorithm from 8-bit bayer pattern RGB color space to 12-bit YCrCb color space |
US6075889A (en) * | 1998-06-12 | 2000-06-13 | Eastman Kodak Company | Computing color specification (luminance and chrominance) values for images |
US6625325B2 (en) * | 1998-12-16 | 2003-09-23 | Eastman Kodak Company | Noise cleaning and interpolating sparsely populated color digital image using a variable noise cleaning kernel |
EP1059811A2 (en) * | 1999-06-10 | 2000-12-13 | Fuji Photo Film Co., Ltd. | Method and system for image processing, and recording medium |
JP2001186353A (ja) * | 1999-12-27 | 2001-07-06 | Noritsu Koki Co Ltd | 画像処理方法および画像処理プログラムを記録した記録媒体 |
JP4402262B2 (ja) * | 2000-06-07 | 2010-01-20 | オリンパス株式会社 | プリンタ装置及び電子カメラ |
US20020047907A1 (en) * | 2000-08-30 | 2002-04-25 | Nikon Corporation | Image processing apparatus and storage medium for storing image processing program |
EP1337115B1 (en) * | 2000-09-07 | 2010-03-10 | Nikon Corporation | Image processor and colorimetric system converting method |
JP4406195B2 (ja) * | 2001-10-09 | 2010-01-27 | セイコーエプソン株式会社 | 画像データの出力画像調整 |
US20030169353A1 (en) * | 2002-03-11 | 2003-09-11 | Renato Keshet | Method and apparatus for processing sensor images |
-
2004
- 2004-06-30 EP EP04747070A patent/EP1641285A4/en not_active Withdrawn
- 2004-06-30 JP JP2005512463A patent/JPWO2005013622A1/ja not_active Withdrawn
- 2004-06-30 CN CNA2004800188227A patent/CN1817047A/zh active Pending
- 2004-06-30 WO PCT/JP2004/009601 patent/WO2005013622A1/ja active Application Filing
-
2005
- 2005-12-29 US US11/320,460 patent/US20060119896A1/en not_active Abandoned
Non-Patent Citations (1)
Title |
---|
See also references of EP1641285A4 * |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006338296A (ja) * | 2005-06-01 | 2006-12-14 | Noritsu Koki Co Ltd | 微小ノイズ抑制のための画像処理方法及びプログラム及びこの方法を実施するノイズ抑制モジュール |
JP4645896B2 (ja) * | 2005-06-01 | 2011-03-09 | ノーリツ鋼機株式会社 | 微小ノイズ抑制のための画像処理方法及びプログラム及びこの方法を実施するノイズ抑制モジュール |
JP2007143120A (ja) * | 2005-10-18 | 2007-06-07 | Ricoh Co Ltd | ノイズ除去装置、ノイズ除去方法、ノイズ除去プログラム及び記録媒体 |
JP2009516256A (ja) * | 2005-11-10 | 2009-04-16 | ディー−ブルアー テクノロジーズ リミティド | モザイクドメインにおける画像向上 |
KR101292458B1 (ko) * | 2005-11-10 | 2013-07-31 | 테세라 인터내셔널, 인크 | 모자이크 도메인에서의 이미지 개선 |
WO2007097170A1 (ja) * | 2006-02-23 | 2007-08-30 | Nikon Corporation | スペクトル画像処理方法、コンピュータ実行可能なスペクトル画像処理プログラム、スペクトルイメージングシステム |
US8045153B2 (en) | 2006-02-23 | 2011-10-25 | Nikon Corporation | Spectral image processing method, spectral image processing program, and spectral imaging system |
US8055035B2 (en) | 2006-02-23 | 2011-11-08 | Nikon Corporation | Spectral image processing method, computer-executable spectral image processing program, and spectral imaging system |
JP4826586B2 (ja) * | 2006-02-23 | 2011-11-30 | 株式会社ニコン | スペクトル画像処理方法、コンピュータ実行可能なスペクトル画像処理プログラム、スペクトルイメージングシステム |
JP2007306501A (ja) * | 2006-05-15 | 2007-11-22 | Fujifilm Corp | 画像処理方法、画像処理装置、および画像処理プログラム |
US7817852B2 (en) | 2006-07-20 | 2010-10-19 | Casio Computer Co., Ltd. | Color noise reduction image processing apparatus |
JP2009100150A (ja) * | 2007-10-16 | 2009-05-07 | Acutelogic Corp | 画像処理装置及び画像処理方法、画像処理プログラム |
Also Published As
Publication number | Publication date |
---|---|
EP1641285A4 (en) | 2009-07-29 |
CN1817047A (zh) | 2006-08-09 |
EP1641285A1 (en) | 2006-03-29 |
JPWO2005013622A1 (ja) | 2006-09-28 |
US20060119896A1 (en) | 2006-06-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2005512463 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 11320460 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 20048188227 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2004747070 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 2004747070 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 11320460 Country of ref document: US |