WO2005112470A1 - Image processing apparatus and image processing program - Google Patents
Image processing apparatus and image processing program
- Publication number
- WO2005112470A1 (application PCT/JP2005/008827, JP2005008827W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- interpolation
- circuit
- image processing
- sensitivity difference
- pixel
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/843—Demosaicing, e.g. interpolating colour pixel values
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
Definitions
- The present invention relates to an image processing apparatus and an image processing program that process an image output from a single-chip imaging system, a two-chip imaging system, or a three-chip pixel-shift imaging system to generate a color digital image having three color component values at each pixel. Background technology
- Single-chip imaging systems used in digital cameras and the like use a single-chip image sensor with a different color filter for each pixel, so in the image output from the sensor only one color component value is obtained at each pixel. To generate a color digital image, interpolation processing is therefore needed to compensate for the missing color component values at each pixel. This interpolation processing is also required when a two-chip imaging system or a pixel-shift three-chip imaging system is used. Unless the interpolation processing is carefully designed, the final color image is degraded, for example blurred or containing false colors. For this reason, various methods have been proposed.
- FIG. 1 is an explanatory diagram showing a conventional example based on edge detection described in Japanese Patent Application Laid-Open No. 8-289686.
- a single-chip Bayer array image sensor having a color filter arrangement shown in FIG.
- a cross-shaped neighborhood is taken around the pixel of interest B5, and interpolation values Gh and Gv for G in the horizontal and vertical directions at the pixel of interest are estimated; the vertical estimate is, as shown in equation (1),
- Gv = (G2 + G8) / 2 + (2 * B5 - B1 - B9) / 4 (1), and Gh is estimated analogously from the horizontal neighbors.
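- A minimal sketch of the estimate in equation (1) (Python; the horizontal estimate Gh is obtained analogously from the horizontal neighbors):

```python
def estimate_gv(B1, G2, B5, G8, B9):
    # Vertical G estimate at the B pixel of interest B5, equation (1):
    # average of the vertical G neighbours plus a Laplacian-like
    # correction computed from the B samples two rows above and below.
    return (G2 + G8) / 2.0 + (2.0 * B5 - B1 - B9) / 4.0
```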
- The present invention has been made in view of these problems of the related art, and its purpose is to provide an image processing apparatus and an image processing program configured to perform interpolation processing with high accuracy and few side effects even when there is a variation in sensor sensitivity. Disclosure of the invention
- The image processing apparatus of the present invention processes data captured by an imaging device having a sensitivity difference between pixels that acquire a specific color component, and comprises:
- a plurality of interpolation means for compensating for missing color components at each pixel of the data;
- a mixing means for mixing outputs from the plurality of interpolation means; and
- a control means for controlling at least one of the interpolation means and the mixing means in accordance with the sensitivity difference.
- An image processing apparatus is an image processing apparatus that processes data captured by an image sensor having a sensitivity difference between pixels for acquiring a specific color component.
- It has a plurality of interpolation means for compensating for missing color components at each pixel of the data, and a mixing means for mixing outputs from the plurality of interpolation means, wherein the interpolation means and the mixing means are set in accordance with the sensitivity difference. According to this configuration, the interpolation means and the mixing means are set so as to cope with the case where the input data has a sensitivity difference, so an interpolation result without failure is obtained.
- The image processing apparatus is characterized in that at least one of the plurality of interpolation means is a linear filter whose coefficients are set in accordance with the sensitivity difference. According to this configuration, since the filter coefficients are set so that the sensitivity difference does not affect the interpolation result, an interpolation result that does not fail is obtained even for input data having a sensitivity difference.
- In the image processing apparatus according to either of the above configurations, the coefficients are set so that the amplification degree of the frequency component generated in the data due to the sensitivity difference is suppressed within a predetermined range. According to this configuration, the frequency component that the sensitivity difference would impose on the interpolation result is removed, eliminating the influence of the sensitivity difference on the interpolation result.
- control means includes sensitivity difference estimating means for estimating a sensitivity difference of the image sensor, and performs control based on the estimation result.
- The sensitivity difference estimating means estimates the sensitivity difference based on the type of the image sensor or the imaging conditions of the data.
- Compared with a general method of estimating the sensitivity difference (for example, comparing the values of pixels of the same color component in an image), this makes it possible to respond optimally to the patterns of sensitivity difference that occur with a specific sensor or under specific shooting conditions.
- In the image processing apparatus, the mixing means has a selection means for selecting any one of the plurality of interpolation results, and the control means determines, in accordance with the sensitivity difference, which of the interpolation results the selection means selects. According to this configuration, an optimal interpolation result can be selected according to the sensitivity difference, and interpolation processing without failure can be performed even if there is a sensitivity difference.
- In the image processing apparatus, the mixing means includes a weighted averaging means for taking a weighted average of the plurality of interpolation results, and the control means determines the weights of the weighted averaging means in accordance with the sensitivity difference. According to this configuration, an optimal weighted average of the plurality of interpolation results can be taken according to the sensitivity difference, and interpolation processing without failure can be performed even when there is a sensitivity difference.
- The control means has a sensitivity difference measuring means for measuring the amount of sensitivity difference remaining in each of the plurality of interpolation results,
- and the mixing means is controlled based on the measurement result. According to this configuration, the mixing means can be controlled so as to reduce the effect of the sensitivity difference after directly measuring the effect contained in the interpolation results, so failure of the interpolation result can be prevented more reliably than with indirect sensitivity-difference estimation.
- The control means includes a direction determination means for determining an edge direction in the vicinity of each pixel of the data, and controls the mixing means based on the result of the direction determination.
- The image processing program for color component interpolation causes a computer to execute, on data captured by an image sensor having a sensitivity difference between pixels that acquire a specific color component, a plurality of interpolation procedures for compensating for missing color components at each pixel, a mixing procedure for mixing the results of the plurality of interpolation procedures, and a control procedure for controlling at least one of them in accordance with the sensitivity difference.
- Another image processing program for color component interpolation causes a computer to execute, on data captured by such an image sensor, a plurality of interpolation procedures for compensating for missing color components at each pixel and a mixing procedure for mixing their results,
- wherein the plurality of interpolation procedures and the mixing procedure are set in accordance with the sensitivity difference.
- the controlling step includes a step of setting the coefficient of the filter so that the amplification degree of the frequency component generated in the data due to the sensitivity difference is suppressed to a predetermined range.
- The image processing program for color component interpolation includes a procedure for estimating the sensitivity difference of the image sensor based on the type of the image sensor or the shooting conditions of the data.
- the step of mixing includes a step of performing a weighted average process of performing a weighted average of the results of the plurality of interpolation processes.
- The controlling procedure determines the weights of the weighted averaging process in accordance with the sensitivity difference.
- The controlling procedure includes a step of determining an edge direction near each pixel of the data, and controls the mixing procedure based on the result of that step.
- The image processing program for color component interpolation makes it possible to implement, as software, color component interpolation processing of image data having a sensitivity difference between pixels that acquire a specific color component.
- FIG. 1 is an explanatory diagram of the prior art.
- FIG. 2 is a configuration diagram of the first embodiment.
- FIG. 3 is a configuration diagram of the G interpolation circuit 106.
- FIG. 4 is an explanatory diagram of the interpolation method of the interpolation circuit A113.
- FIG. 5 is a flowchart of the processing of the G interpolation circuit 106.
- FIG. 6 is an explanatory diagram of the G step.
- FIG. 7 is a characteristic diagram of the function g.
- FIG. 8 is an explanatory diagram of the neighborhood pattern processed by the RB interpolation circuit 109.
- FIG. 9 is an explanatory diagram of the processing of the RB interpolation circuit 109.
- FIG. 10 is a flowchart corresponding to the first embodiment.
- FIG. 11 is a configuration diagram of the second embodiment.
- FIG. 12 is a configuration diagram of the G interpolation circuit 206.
- FIG. 13 is a flowchart of the processing of the G interpolation circuit 206.
- FIG. 14 is a characteristic diagram of the linear filter.
- FIG. 15 is a configuration diagram of the third embodiment.
- FIG. 16 is a configuration diagram of the G interpolation circuit 306.
- FIG. 17 is a flowchart corresponding to the second embodiment.
- FIG. 18 is a characteristic diagram of a filter used in the level difference detection circuit 320.
- FIG. 19 is a configuration diagram of the fourth embodiment.
- FIG. 20 is a configuration diagram of the RB interpolation circuit 409.
- FIG. 21 is an explanatory diagram of the interpolation method of the G interpolation circuit 406.
- FIG. 22 is a flowchart of the process of the RB interpolation circuit 409.
- FIG. 23 is an explanatory diagram of data used in the RB interpolation circuit 409.
- FIG. 24 is a characteristic diagram of a filter used in the LPF circuit 420.
- FIG. 25 is a configuration diagram of the G interpolation circuit 406.
- FIG. 26 is a configuration diagram of the G interpolation circuit 206 in a modification of the second embodiment.
- FIG. 27 is a configuration diagram of the RB interpolation circuit 409 in the modification of the fourth embodiment.
- FIG. 28 is an explanatory diagram illustrating an example of a color difference signal.
- FIG. 29 is a flowchart corresponding to a modification of the second embodiment.
- FIG. 30 is a flowchart corresponding to the fourth embodiment.
- FIG. 31 is a flowchart corresponding to a modification of the fourth embodiment.
- FIGS. 2 to 10 show a first embodiment of the present invention. FIG. 2 is a configuration diagram of the first embodiment
- FIG. 3 is a configuration diagram of the G interpolation circuit 106
- FIG. 4 is an explanatory diagram of an interpolation method of the interpolation circuit A113
- FIG. 5 shows a processing procedure of the G interpolation circuit 106.
- FIG. 6 is an explanatory diagram of the G step
- FIG. 7 is a characteristic diagram of the function g
- FIG. 8 is an explanatory diagram of a neighborhood pattern processed by the RB interpolator 109
- FIG. 9 is an explanatory diagram of the process of the RB interpolator 109
- A digital camera 100 includes an optical system 101, a single-chip Bayer-array CCD 102, and an A/D circuit 103 that performs gain adjustment and A/D conversion on the output of the single-chip Bayer-array CCD 102 and outputs a digital image.
- a noise reduction circuit 105 for reducing noise caused by the single-chip Bayer array CCD 102 with respect to the output of the A / D circuit 103 and recording the result in the single-chip image buffer 104 is provided.
- Alternatively, the output of the A/D circuit 103 may be stored in an image buffer (first buffer), and the image signal may be supplied from the first buffer to the noise reduction circuit 105.
- the single-chip image buffer 104 functions as a second buffer.
- The digital camera 100 also includes a G interpolation circuit 106 that compensates, by interpolation from surrounding pixels, for the G component at pixel positions in the image in the single-chip image buffer 104 where the G component is missing, and outputs the result to a G image buffer 107 (third buffer).
- The optical system 101 and the single-chip Bayer array CCD 102 constitute the "imaging device" of the image processing apparatus according to claim 1.
- The digital camera 100 further has a WB correction circuit 108 that gain-adjusts the values of pixels at which the R or B component is obtained in the image in the single-chip image buffer 104, and an RB interpolation circuit 109 that outputs a color image in which the color components missing at every pixel are compensated from the G components of all pixels obtained from the G image buffer 107 and from the R and B components WB-corrected by the WB correction circuit 108. It also has an image quality adjustment circuit 110 that performs gradation conversion, edge enhancement, compression and the like on the output of the RB interpolation circuit 109, a recording circuit 111 that records the output of the image quality adjustment circuit 110 on a recording medium, and a control unit 112 that controls the optical system 101 and the circuits described above.
- the G interpolation circuit 106 has the configuration shown in FIG. In FIG. 3, a G interpolation circuit 106 performs two different types of interpolation processing on the image in the single-chip image buffer 104 at the pixel where the G component is missing, and an interpolation circuit B1 14 and an interpolation circuit It has a mixing circuit 116 that mixes the outputs of Al 13 and interpolation circuit Bl 14 to generate G components for all pixels.
- the interpolation circuit A113 and the interpolation circuit B114 of the G interpolation circuit 106 correspond to the “plurality of interpolation means for compensating for missing color components” according to claim 1 of the present invention. Further, the mixing circuit 116 corresponds to “a mixing means for mixing outputs from a plurality of interpolation means”.
- the G interpolation circuit 106 includes a step amount estimating circuit 115 for reading out the vicinity of the pixel processed by the mixing circuit 116 from the single-plate image buffer 104 and estimating the sensitivity difference of the G component occurring in the vicinity.
- a mixing ratio determination circuit 117 that determines a mixing ratio in the mixing circuit 116 of the outputs from the two types of interpolation circuits based on the level difference output from the level difference estimation circuit 115 is included.
- the level difference estimating circuit 115 and the mixing ratio determining circuit 117 correspond to the “controlling means for controlling the interpolating means and the mixing means according to the sensitivity difference”.
- the operation of the digital camera 100 will be described.
- an optical image by the optical system 101 is first captured by the single-chip Bayer array CCD 102 and output as an analog signal.
- the analog signal is subjected to A / D conversion after gain adjustment by the A / D conversion circuit 103, and becomes a single-plate digital image having only one kind of color component for each pixel.
- a noise reduction circuit 105 performs processing for reducing noise caused by the characteristics of the single-chip Bayer array CCD 102 on this image, and outputs the processed noise to a single-chip image buffer 104.
- The G interpolation circuit 106 reads, for each pixel of the image in the single-chip image buffer 104, the 5x5 neighborhood of the central pixel (pixel of interest) and performs the processing shown in the flowchart of FIG. 5. By interpolating the G component at pixels where it is missing, it outputs a G component with no missing pixels to the G image buffer 107.
- The interpolation circuit A113 calculates the G component V1 for the central pixel of X. If the G component has already been obtained at the central pixel, the interpolation circuit A113 sets that value to V1. If the G component has not been obtained, a known interpolation method based on direction discrimination, described below, is used. In the following description, it is assumed that data as shown in FIG. 4 are obtained near the central pixel. In FIG. 4 the G component is obtained at both the Gr and Gb pixels, but to distinguish their arrangement, a G pixel on the same row as an R pixel is written Gr and a G pixel on the same row as a B pixel is written Gb.
- the average Gr' = (Gr0 + Gr1) / 2 of the left and right G pixels (Gr0 and Gr1) of the center pixel (R in FIG. 4) is calculated.
- the average Gb' = (Gb0 + Gb1) / 2 of the G pixels (Gb0 and Gb1) above and below the center pixel (R in FIG. 4) is calculated.
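- A minimal sketch of this step; the final choice between the two averages is truncated in the text, so picking the direction with the smaller G variation is an assumption for illustration:

```python
def interp_a_v1(Gr0, Gr1, Gb0, Gb1):
    # Horizontal and vertical G averages around the centre R pixel (FIG. 4).
    gr_avg = (Gr0 + Gr1) / 2.0
    gb_avg = (Gb0 + Gb1) / 2.0
    # Assumed direction discrimination: trust the direction whose two
    # G neighbours vary least.
    return gr_avg if abs(Gr0 - Gr1) < abs(Gb0 - Gb1) else gb_avg
```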
- the interpolation circuit B114 calculates the G component V2 for the central pixel of X.
- The interpolation circuit B114 stores the coefficients C_i,j (i, j are integers from 1 to 5) of a 5x5 linear filter in its internal ROM, and computes V2 by the following equation (3),
- V2 = ( Σ C_k,l * X(k, l) ) / Σ C_k,l (3)
- In equation (3), X(k, l) represents the pixel value at coordinates (k, l) of the neighborhood data X, and the sums are taken over all integer pairs (k, l).
- The filter coefficients C_i,j are set so that the frequency characteristic of the filter becomes 0 at the horizontal and vertical Nyquist frequencies. Therefore, the filter coefficients C_i,j are, as described in claim 5 of the present invention, set so that "the amplification degree of the frequency component generated in the data due to the sensitivity difference (between the pixels acquiring the specific color component) is suppressed within a predetermined range".
- the frequency component caused by the difference in sensitivity corresponds to the Nyquist frequency.
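- A minimal sketch (Python/NumPy) of the filtering in equation (3) and of the Nyquist-frequency gains that the coefficients C_i,j are required to suppress; the coefficient matrix itself is not given in the text and is assumed to be supplied:

```python
import numpy as np

def interp_b_v2(X, C):
    # Equation (3): weighted average of the 5x5 neighbourhood X with the
    # linear filter C, normalised by the sum of the coefficients.
    return float(np.sum(C * X) / np.sum(C))

def nyquist_gains(C):
    # Gain of the normalised filter at the horizontal and vertical Nyquist
    # frequencies; the coefficients are chosen so these stay near zero,
    # since that is the frequency a Gr/Gb sensitivity difference imprints
    # on the data.
    i, j = np.indices(C.shape)
    gain_h = np.sum(C * (-1.0) ** j) / np.sum(C)
    gain_v = np.sum(C * (-1.0) ** i) / np.sum(C)
    return float(gain_h), float(gain_v)
```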
- the step difference estimation circuit 115 estimates how much a sensitivity difference of the G component has occurred in X.
- the single-chip Bayer array CCD102 has a color filter array as shown in Fig. 6, but due to the characteristics of the sensor, under certain shooting conditions, the same G component is acquired between the Gr and Gb pixels. Regardless, there is a difference in sensitivity, and there is a difference between the pixel values of Gr and Gb even when shooting a flat subject. This difference is hereinafter referred to as a G step.
- The step difference estimation circuit 115 holds, in advance, a table that associates the surrounding R and B pixel values with the upper limit of the step difference between the Gr pixel and the Gb pixel that arises when various subjects are photographed under various lighting conditions.
- the conditions and amount of the G step difference depend on the manufacturer of the single-chip Bayer array CCD 102 and the imaging conditions. Therefore, in the present embodiment, the above-described table is prepared for each model number of the single-chip Bayer array CCD 102 and for each shutter speed. Then, when the step amount estimating circuit 115 operates, the control unit 112 obtains the model number of the single-chip Bayer array CCD 102 and the shutter speed at the time of photographing, and outputs them to the step amount estimating circuit 115. The step amount estimating circuit 115 obtains the step amount using a table that matches the model number of the designated single-chip Bayer array CCD102 and the shutter speed.
- the level difference estimating circuit 115 corresponds to “sensitivity difference estimating means for estimating the sensitivity difference of the imaging device” according to claim 6.
- That is, the sensitivity difference estimating means "estimates the sensitivity difference based on the type of the imaging device or the imaging conditions of the data", here from the model number of the single-chip Bayer array CCD 102 and the shutter speed at the time of shooting.
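- A minimal sketch of such a lookup; the table layout and the quantisation of the R and B averages are assumptions for illustration:

```python
def estimate_g_step(tables, sensor_model, shutter_speed, r_avg, b_avg):
    # Hypothetical lookup: one table per (sensor model, shutter speed),
    # each mapping quantised neighbourhood R and B averages to the upper
    # bound of the Gr/Gb step measured in advance.
    table = tables[(sensor_model, shutter_speed)]
    r_idx = min(int(r_avg) // 16, len(table) - 1)
    b_idx = min(int(b_avg) // 16, len(table[0]) - 1)
    return table[r_idx][b_idx]
```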
- S4: According to the step amount estimate F input from the step amount estimation circuit 115, the mixing ratio determination circuit 117 calculates the usage rate α, which indicates how much of the result V1, among the interpolation results V1 and V2 obtained in S1 and S2, is used in the mixing circuit 116.
- The mixing ratio determination circuit 117 determines the ratio α using the function g defined as shown in FIG. 7, so that even if a step remains in the interpolation result it does not exceed the upper-limit step amount T that is not visually noticeable, and outputs α to the mixing circuit 116.
- the mixing circuit 116 corresponds to “weighted averaging means for averaging a plurality of interpolation results”.
- the control means (mixing ratio determining circuit 117, step amount estimating circuit 115) is for "determining the weight of the weighted averaging means according to the sensitivity difference".
- The mixing circuit 116 mixes the two results as V3 = α * V1 + (1 - α) * V2 and writes V3 to the corresponding pixel position of the G image buffer 107 as the G value of the central pixel of the 5x5 neighborhood (a sketch of this step follows).
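- A minimal sketch of S4/S5; the exact shape of the function g in FIG. 7 is an assumption, chosen so the residual step does not exceed the visibility limit T:

```python
def mix_v3(V1, V2, F, T):
    # Usage rate alpha in the spirit of the function g in FIG. 7: use V1
    # fully while the estimated step F is below the visually unnoticeable
    # limit T, otherwise scale it back so alpha * F does not exceed T.
    alpha = 1.0 if F <= T else T / F
    return alpha * V1 + (1.0 - alpha) * V2
```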
- the G image buffer 107 can obtain a G component without any missing pixels.
- The WB correction circuit 108 reads one pixel at a time from the single-chip image buffer 104, and if the obtained color component is R or B, multiplies the pixel value by the gain necessary for white balance and outputs it to the RB interpolation circuit 109.
- If the obtained color component is G, the pixel value is output directly to the RB interpolation circuit 109.
- the RB interpolation circuit 109 sequentially holds the gain-adjusted pixel values output from the WB correction circuit 108 in an internal buffer. Then, after the process of the WB correction circuit 108 is completed for all the pixels of the single-chip image buffer 104, the RGB three-color components for all the pixels in the buffer are calculated by a known method. First, the RB interpolation circuit 109 reads 5 ⁇ 5 neighborhood of each pixel of the internal buffer. As shown in Figs. 8 (a) and (b), there are various nearby patterns depending on the color component obtained at the target pixel, but the processing method is the same.
- the RB interpolation circuit 109 also reads from the G image buffer 107 a 5x5 neighborhood corresponding to a 5x5 neighborhood read from the internal buffer. Then, the final RGB components are calculated by the following procedure.
- the neighborhood data read from the internal buffer is X
- the neighborhood data read from the G image buffer 107 is Y.
- data XR is generated from X by setting the values of pixels for which no R component has been obtained to 0.
- data XB is generated from X in which the values of pixels for which the B component has not been obtained are set to 0.
- R' = RL + GH - GL1.
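- A minimal illustration of this kind of reconstruction; RL, GH, and GL1 follow the names in the text, and the filtering that produces them is omitted and assumed to have been done in the preceding steps:

```python
def restore_r(RL, GH, GL1):
    # R' = RL + GH - GL1: the low-frequency R estimate plus the difference
    # between the full-band and low-frequency G estimates, i.e. the missing
    # high-frequency detail of R is borrowed from the dense G plane.
    return RL + GH - GL1
```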
- When the above processing is completed, a color image in which the missing color components have been compensated at each pixel of the single-chip image buffer 104 is output to the image quality adjustment circuit 110. The image quality adjustment circuit 110 then performs processing such as color conversion, edge enhancement, and gradation conversion on the input color image, and outputs the processed color image to the recording circuit 111.
- When the recording circuit 111 records the input image on a recording medium (not shown) and the recording process ends, the entire imaging operation of the digital camera 100 is completed.
- FIG. 10 shows a flowchart of the processing procedure when the same processing is performed by software. This software develops a RAW data file in which the A/D-converted output of the single-chip Bayer-array CCD has been saved. Each step of the flowchart is described below.
- S7 Read the single-chip data from the RAW data file and store it in buffer 1 as a two-dimensional array.
- S9 The buffer 2 is scanned first in the row direction and then in the column direction to select unprocessed pixels for S10 to S12. Here, if there is no unprocessed pixel, the process proceeds to S13. If there are unprocessed pixels, go to S10.
- S10 Read the 5x5 block near the unprocessed pixel from buffer 2.
- S13 WB processing equivalent to the WB correction circuit 108 is performed on the image in the buffer 2, and the image is output to the buffer 2.
- S14 Scan buffer 2 first in the row direction and then in the column direction to select unprocessed pixels for S15 to S17. At this time, if there is no unprocessed pixel, go to S18.If there is an unprocessed pixel, go to S15.
- S15 Read the neighboring 5x5 blocks corresponding to the pixel position of the unprocessed pixel from buffer 2 and buffer 3.
- FIGS. 11 to 14 and FIG. 26 show a second embodiment of the present invention.
- 11 is a configuration diagram of the second embodiment
- FIG. 12 is a configuration diagram of the G interpolation circuit 206
- FIG. 13 is a flowchart of the processing of the G interpolation circuit 206
- FIG. 14 is a characteristic diagram of the linear filters used in the interpolation circuit C213 and the direction discrimination circuit 218, and
- FIG. 26 is a configuration diagram of the G interpolation circuit 206 in a modification of the second embodiment.
- the second embodiment is directed to a digital camera as in the first embodiment.
- The digital camera 200 of the second embodiment is the same as the digital camera 100 (FIG. 2) of the first embodiment except that the G interpolation circuit 106 is replaced by a G interpolation circuit 206.
- FIG. 12 shows the configuration of the G interpolation circuit 206.
- the G interpolation circuit 206 has two types of interpolation circuits C213 and D214, a level difference estimation circuit 115, a direction discrimination circuit 218, and a selection circuit 216.
- The level difference estimating circuit 115 receives the image signal from the single-chip image buffer 104 and outputs its estimate to the LPF control circuit 217.
- the interpolation circuit C213 and the interpolation circuit D214 input parameters required for interpolation from the LPF control circuit 217 and an image signal from the single-chip image buffer 104, respectively, and output the signals to the selection circuit 216.
- the selection circuit 216 also receives an image signal from the direction determination circuit 218 and outputs the image signal to the G image buffer 107.
- the direction determination circuit 218 inputs an image signal from the single-plate image buffer 104.
- The G interpolation circuit 206 reads the neighborhood block of each pixel from the single-chip image in the single-chip image buffer 104 and calculates the G component of the central pixel. However, the size of the read block is NxN (N is an integer of 5 or more), which is larger than in the G interpolation circuit 106.
- the two types of interpolation circuits C213 and D214 correspond to the “plurality of interpolation means for compensating for missing color components”.
- the selection circuit 216 corresponds to the “mixing means” described in claim 1.
- the level difference estimating circuit 115, the LPF control circuit 217, and the direction discriminating circuit 218 correspond to the “control means” described in claim 1.
- the level difference estimating circuit 115 corresponds to the “sensitivity difference estimating means” described in claim 6.
- the sensitivity difference estimating means is for “estimating the sensitivity difference based on the type of the image sensor or the photographing condition of the data”.
- FIG. 13 is a flowchart of a process for the neighborhood of each pixel of the G interpolation circuit 206.
- X be the data of the read neighboring block.
- As described in the first embodiment, a sensitivity difference (G step) occurs between the Gr pixel and the Gb pixel under certain shooting conditions even though both acquire the same G component.
- the level difference estimating circuit 115 estimates the level difference between the Gr pixel and the Gb pixel in X, similarly to S3 in FIG. 5 of the first embodiment.
- The parameter with which the LPF control circuit 217 controls the processing in the interpolation circuit C213 and the interpolation circuit D214 is determined in accordance with the level difference F input from the level difference estimation circuit 115.
- The interpolation circuit C213 will be described as an example.
- the interpolator C213 has two types of linear filter coefficients.
- For example, the frequency characteristics of the two filters P_i,j and Q_i,j in the interpolation circuit C213 are set as shown in FIGS. 14(a) and (b).
- A new filter coefficient C_i,j is calculated from these according to equation (6), using the mixing ratio (control ratio) input from the LPF control circuit 217 (see the sketch below).
- The filter coefficient C_i,j corresponds to the "coefficient of each filter" described in claim 3. Further, the filter coefficient C_i,j is "set according to the sensitivity difference" as described in claim 4.
- the filter coefficient P_i, j in the interpolation circuit C213 corresponds to the “coefficient” according to claim 5.
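- Equation (6) itself is not shown above; a plausible form, assuming a simple convex combination of the two coefficient sets, is sketched here:

```python
import numpy as np

def mixed_filter(P, Q, delta):
    # Assumed form of equation (6): a convex combination of the two stored
    # coefficient sets, steered by the control parameter delta supplied by
    # the LPF control circuit 217 (delta itself is derived from the
    # estimated level difference F).
    P = np.asarray(P, dtype=float)
    Q = np.asarray(Q, dtype=float)
    return (1.0 - delta) * P + delta * Q
```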
- FIG. 7 shows an example of a desirable characteristic of the function g.
- X(k, l) represents the pixel value at coordinates (k, l) of the neighborhood data X, and in this case the sum is taken over all integer pairs (k, l) representing the coordinates of pixels at which the G component is obtained.
- The interpolation circuit D214 performs filtering with a filter that mixes two types of filter coefficients, as in the interpolation circuit C213, but differs in that it uses P'_i,j and Q'_i,j in place of P_i,j and Q_i,j. That is, instead of equation (5), equation (8) is used,
- The direction discriminating circuit 218 discriminates the direction of the edge in the read neighborhood in order to determine which of the output V1 of the interpolation circuit C213 and the output V2 of the interpolation circuit D214 is to be used in the selection circuit 216.
- a bandpass filter F1 for detecting horizontal edges and a bandpass filter F2 for detecting vertical edges are stored in the internal ROM.
- the coefficient of F1 is set such that its frequency characteristic is shown in FIG. 14 (c)
- the coefficient of F2 is set such that its frequency characteristic is shown in FIG. 14 (d).
- the direction discrimination circuit 218 multiplies the G component of X by the filters F1 and F2, and obtains Dh and Dv as a result. At that time, in order to limit the processing to the G component, the sum is calculated only for the pixel position where the G component is obtained, similarly to the interpolation circuit C213. In general, if the direction of the nearby edge is closer to horizontal than at 45 °, Dv is larger, and if it is closer to vertical than at 45 °, Dh is larger. Then, the direction discrimination circuit 218 compares the obtained Dh with the magnitude of Dv, and when Dh is less than Dv, judges that it is a horizontal edge and outputs 1 to the selection circuit 216. If Dh> Dv, it is determined as a vertical edge and 0 is output to the selection circuit 216.
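- A minimal sketch of this direction discrimination (Python/NumPy); F1, F2, and the mask marking G-pixel positions are assumed inputs:

```python
import numpy as np

def discriminate_direction(X, g_mask, F1, F2):
    # Apply the band-pass filters only at positions where a G value exists
    # (g_mask is 1 there, 0 elsewhere), then compare the horizontal and
    # vertical responses; returns 1 for a horizontal edge and 0 for a
    # vertical edge, as fed to the selection circuit 216.
    Dh = abs(float(np.sum(F1 * X * g_mask)))
    Dv = abs(float(np.sum(F2 * X * g_mask)))
    return 1 if Dh < Dv else 0
```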
- the direction discriminating circuit 218 corresponds to “a direction discriminating unit that determines an edge direction near each pixel of data” according to claim 12.
- V3 = (Dv / (Dv + Dh)) * V2 + (Dh / (Dv + Dh)) * V1 (9)
- In the above, the level difference is estimated for each pixel and the processing is changed accordingly; however, it is also conceivable to always perform only an interpolation process that is not affected by the level difference, so that no failure occurs whatever level difference arises.
- FIG. 26 shows such a modification.
- FIG. 26 shows the configuration of the G interpolation circuit 206 according to a modification of the second embodiment.
- the G interpolation circuit 206 is different from the configuration of FIG. 12 in that the level difference estimation circuit 115 and the LPF control circuit 217 are omitted.
- an interpolation circuit E223 is provided in place of the interpolation circuit C213, and an interpolation circuit F224 is provided in place of the interpolation circuit D214.
- the operation of the other circuits is the same in the modified example.
- The interpolation circuit E223 differs from the interpolation circuit C213 in that, whereas the interpolation circuit C213 mixes the two filters P_i,j and Q_i,j, the interpolation circuit E223 performs filtering only with P_i,j.
- Similarly, the interpolation circuit F224 performs filtering only with P'_i,j.
- The selection circuit 216 in the example of FIG. 26 selects either the interpolation result of the interpolation circuit E223 or the interpolation result of the interpolation circuit F224, and is therefore equivalent to a "selection means for selecting any one of the plurality of interpolation results".
- the configuration in FIG. 26 corresponds to the requirement described in claim 2. That is, the interpolation circuit E223 and the interpolation circuit F224 correspond to “a plurality of interpolation means for compensating for missing color components in each pixel of data” in claim 2.
- The selection circuit 216 corresponds to "mixing means for mixing outputs from the plurality of interpolation means".
- the interpolating means and the mixing means are “set according to a sensitivity difference (between pixels acquiring the specific color component)” according to claim 2.
- the corresponding processing can be realized by software also in the modification shown in FIG.
- the G interpolation processing of FIG. 11 may be replaced with the flowchart of FIG. 29.
- the flowchart of FIG. 29 will be described.
- S70 Performs the same filter processing as the interpolation circuit E223, and sets the result to V1.
- S71 Performs the same filter processing as interpolation circuit F224, and sets the result to V2.
- S72 The same edge direction discrimination processing as in the direction discrimination circuit 218 is performed; the process proceeds to S73 if a horizontal edge is determined and to S74 if a vertical edge is determined.
- S73 The result V1 of S70 is set as the final G component interpolation result V3.
- S74 The result V2 of S71 is set as the final G component interpolation result V3.
- FIG. 15 to 18 show a third embodiment of the present invention.
- FIG. 15 is a block diagram of the third embodiment
- FIG. 16 is a block diagram of the G interpolation circuit 306
- FIG. 17 is a flowchart of RAW development software corresponding to the third embodiment
- FIG. 18 is a characteristic diagram of a filter used in the level difference detection circuit 320.
- The third embodiment is directed to a digital camera, as in the first embodiment.
- The digital camera 300 of the third embodiment is the same as the digital camera 100 of the first embodiment (FIG. 2) except that the G interpolation circuit 106 is replaced by a G interpolation circuit 306; the configuration and operation are otherwise the same.
- the same components as those in FIG. 2 are denoted by the same reference numerals, and detailed description is omitted.
- FIG. 16 is a configuration diagram of the G interpolation circuit 306, which consists of two types of interpolation circuits A113 and B114, an internal buffer A317 and an internal buffer B318 that hold their respective outputs, a level difference detection circuit 320, and a selection circuit 319.
- the level difference detection circuit 320 and the selection circuit 319 input from the internal buffer A317 and the internal buffer B318.
- the level difference detection circuit 320 outputs to the selection circuit 319, and the selection circuit 319 outputs to the G image buffer 107.
- the interpolation circuit A113 and the interpolation circuit B114 perform the same operation as in the first embodiment.
- FIG. 17 is a flowchart showing the processing of the G interpolation circuit 306. Like the G interpolation circuit 106 of the first embodiment, the G interpolation circuit 306 generates in the G image buffer 107 an image in which the missing G components have been compensated at every pixel of the single-chip image in the single-chip image buffer 104.
- However, internal buffers are provided; the two types of G-component interpolation results for the entire single-chip image in the single-chip image buffer 104 are first generated in those buffers, and then, for each pixel, it is determined which of the two interpolation results is selected to obtain the final output.
- Hereinafter, the flowchart of FIG. 17 will be described.
- The interpolation circuit B114 performs the same processing on X as S2 in the flowchart of FIG. 5 of the first embodiment. Therefore, the coefficients C_i,j of the linear filter correspond to the "coefficient" described in claim 5. The processing result V2 is then written to the pixel Q at the pixel position corresponding to P in the internal buffer B318.
- the G interpolation circuit 306 scans the internal buffer A317, identifies one pixel P that has not been subjected to the processing described below, and reads the neighboring MxM block. At the same time, an MxM block near the pixel Q corresponding to the pixel position of P is read from the internal buffer B318.
- M is a predetermined integer of 7 or more.
- the neighborhood block of P is Na
- the neighborhood block of Q is Nb.
- The level difference detection circuit 320 determines the level difference between the Gr pixel and the Gb pixel in the read neighborhood Na based on how much Nyquist-frequency component it contains. For this purpose, the level difference detection circuit 320 internally holds a filter coefficient set of size MxM whose horizontal and vertical frequency characteristics are as shown in FIG. 18. This filter is applied to Na, the absolute value of the output is taken as the step difference estimate F, and the amplitude at the Nyquist frequency at the pixel position P is thereby estimated.
- When the step difference estimate F is smaller than the internally set threshold T, the step difference detection circuit 320 determines that the step near the pixel P is below the conspicuous level and outputs 1 to the selection circuit 319. When F is equal to or larger than the threshold T, it outputs 0 to the selection circuit 319.
- S48 On the other hand, when 0 is input from the level difference detection circuit 320, the central (attention) pixel value of the pixel Q is read from the internal buffer B318 and used as the final interpolation result V3 at the pixel position corresponding to the pixel P of the image in the single-chip image buffer 104.
- S49 The final interpolation result V3 is then written to the pixel position of the G image buffer 107 corresponding to the pixel P.
- S50 When the processes from S44 to S49 have been completed for all the pixels in the internal buffer A317, the G interpolation circuit 306 terminates all processing. If not, the process returns to S44, and the loop from S44 to S50 is repeated.
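- A minimal sketch of this per-pixel selection (Python/NumPy); the Nyquist-detection filter values and the threshold T are assumptions:

```python
import numpy as np

def select_g(Na, Nb, nyquist_filter, T):
    # Measure the residual Nyquist-frequency amplitude (i.e. the remaining
    # Gr/Gb step) around pixel P in the interpolation-A result Na; if it is
    # below the threshold T keep that value, otherwise fall back to the
    # step-free interpolation-B result Nb.
    F = abs(float(np.sum(nyquist_filter * Na)))
    c = Na.shape[0] // 2                    # centre of the MxM block
    return Na[c, c] if F < T else Nb[c, c]
```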
- the same processing as the processing by the G interpolation circuit 306 can be performed by software.
- In the flowchart of FIG. 10, the processing of S9 to S12 is replaced with the processing of S40 to S50 of FIG. 17,
- and after the processing in S50, the flow proceeds to the processing in S13 of FIG. 10.
- the two types of interpolation circuits A113 and B114 in FIG. 16 correspond to the “plurality of interpolation means” described in claim 1.
- the selection circuit 319 corresponds to the “mixing means” described in claim 1.
- The selection circuit 319 ("mixing means") selects one of the interpolation results of the two types of interpolation circuits A113 and B114, and therefore has the "selection means for selecting any one of the plurality of interpolation results" described in claim 8. Further, the level difference detection circuit 320 corresponds to the "control means" according to claim 1. Note that the level difference detection circuit 320 also corresponds to "a sensitivity difference measuring means for measuring the amount of sensitivity difference remaining in each of the plurality of interpolation results" according to claim 11.
- FIG. 19 to 25 and 27 show a fourth embodiment of the present invention.
- 19 is a configuration diagram of the fourth embodiment
- FIG. 20 is a configuration diagram of the RB interpolation circuit 409
- FIG. 21 is an explanatory diagram of the interpolation method of the G interpolation circuit 406,
- FIG. 22 is a flowchart of the processing of the RB interpolation circuit 409.
- FIG. 23 is an explanatory diagram of data used in the RB interpolation circuit 409
- FIG. 24 is a characteristic diagram of a filter used in the LPF circuit 420
- FIG. 25 is a configuration diagram of the G interpolation circuit 406
- FIG. 27 is a configuration diagram of the RB interpolation circuit 409 in a modification of the fourth embodiment.
- This embodiment is directed to a digital camera as in the first embodiment.
- The digital camera 400 differs from the digital camera 100 (FIG. 2) of the first embodiment in that a G interpolation circuit 406 is provided instead of the G interpolation circuit 106, a G image buffer H417 and a G image buffer V418 are provided instead of the G image buffer 107, and an RB interpolation circuit 409 is provided instead of the RB interpolation circuit 109; the configuration and operation are otherwise the same.
- the same components as those in FIG. 2 are denoted by the same reference numerals, and detailed description is omitted.
- the G interpolation circuit 406 has the configuration shown in FIG. 25, and includes an interpolation circuit H441 and an interpolation circuit V442.
- the interpolation circuit H441 outputs to the G image buffer H417
- the interpolation circuit V442 outputs to the G image buffer V418.
- The RB interpolation circuit 409 has the configuration shown in FIG. 20. It includes an internal buffer 424 that holds the output from the WB correction circuit 108, an LPF circuit 420 that receives input from the G image buffer H417, the G image buffer V418, and the internal buffer 424, and a step amount estimation circuit 423 and a direction determination circuit 422 connected to the internal buffer 424.
- It further includes a mixing circuit 421 connected to the direction determination circuit 422, the internal buffer 424, the G image buffer H417, the G image buffer V418, the LPF circuit 420, and the image quality adjustment circuit 110. Next, the operation of the present embodiment will be described.
- the G interpolation circuit 406 performs the following operation for each pixel of the image in the single-chip image buffer 104. First, the type of the color component obtained at each pixel in the single-chip image buffer 104 is examined. If the obtained color component is G, the processing for that pixel is skipped, and the processing moves to the next pixel.
- The interpolation circuit H441 calculates the average Gh of the left and right G pixels of the center pixel P, and the interpolation circuit V442 calculates the average Gv of the upper and lower G pixels. For example, if the 3x3 neighborhood is as shown in FIG. 21, Gh and Gv are given by equation (10).
- Gh = (Gr0 + Gr1) / 2, Gv = (Gb0 + Gb1) / 2 (10). Then, Gh is written into the address corresponding to P in the G image buffer H417, and Gv is written into the address corresponding to P in the G image buffer V418.
- the WB correction circuit 108 reads out the single-chip image buffer 104 one pixel at a time, and when the obtained color component is R or B, After multiplying the pixel value by the gain necessary for balance, the pixel value is output to the internal buffer 424 of the RB interpolation circuit 409. If the obtained color component is G, the pixel value is directly output to the internal buffer 424 of the RB interpolation circuit 409.
- a single-chip image after WB correction is obtained in the internal buffer 424. This image is hereinafter referred to as Z.
- the RB interpolation circuit 409 performs an interpolation process on the vicinity of each pixel of Z according to the following principle. Then, the obtained RGB three color components are sequentially output to the image quality adjustment circuit 110.
- (1) The horizontal and vertical color difference signals R-G and B-G are generated at the positions of the R and B pixels, and the results are interpolated to obtain the horizontal and vertical color difference components.
- (2) The color differences in the horizontal and vertical directions are mixed according to the direction of the nearby edge, and the color difference component of the central pixel of the neighborhood is estimated.
- (3) The G component of the neighboring central pixel is estimated, and the color difference component obtained in (2) is added to the estimate to obtain the final R component and B component for the central pixel.
- a sensitivity difference occurs between G pixels in different rows or columns due to the characteristics of the single-chip Bayer array CCD 102.
- FIG. 28 is an explanatory diagram showing what kind of color difference signal can be obtained in the flat part when there is a G step due to the process (1).
- FIG. 28 (a) shows single-plate data when a flat gray object is photographed, but the values differ between the Gr pixel and the Gb pixel due to the apparent sensitivity difference.
- When processing (1) is performed on this single-chip data (FIGS. 28(b) and (c)),
- a value difference arises between the horizontal color difference component and the vertical color difference component even though the region is flat.
- the difference is constant regardless of the pixel position.
- the RB interpolation circuit 409 estimates the amount of the G step in the vicinity, and changes the interpolation method based on the estimated amount of the step to remove the influence of the step.
- Specifically, the horizontal and vertical color difference components are each divided into a low-frequency component and a high-frequency component, and the high-frequency components are mixed at a ratio based on the direction of the nearby edge.
- This mixing ratio is made variable according to the level difference: when the level difference is small, the direction-based ratio is used, and when the level difference is large, the ratio is controlled so as to approach a uniform ratio regardless of direction. The mixed high-frequency component is then added to the low-frequency component to create the final color difference component (see the conceptual sketch below).
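- A conceptual sketch of this mixing; all names and the exact parameterisation are assumptions for illustration and do not reproduce the patent's equations (14) to (18):

```python
def mix_colour_difference(c_h, c_v, c_h_lo, c_v_lo, beta, gamma, u):
    # c_h / c_v: horizontal / vertical colour differences; c_h_lo / c_v_lo:
    # their low-frequency parts.  The low-frequency parts are averaged
    # uniformly, while the high-frequency parts use the direction weights
    # beta / gamma, pulled towards a uniform 1/2 by the amount u derived
    # from the estimated G step (u = 0: direction-based, u = 1: uniform).
    bh = (1.0 - u) * beta + u * 0.5
    bv = (1.0 - u) * gamma + u * 0.5
    lo = 0.5 * (c_h_lo + c_v_lo)
    hi = bh * (c_h - c_h_lo) + bv * (c_v - c_v_lo)
    return lo + hi
```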
- the interpolation circuit H441 and the interpolation circuit V442 correspond to the “interpolating means” of claim 1.
- the mixing circuit 421 corresponds to the “mixing means” of claim 1.
- the level difference estimating circuit 423 and the direction discriminating circuit 422 correspond to the “control means” of claim 1.
- the level difference estimating circuit 423 corresponds to the “sensitivity difference estimating means” of Claim 6.
- the mixing circuit 421 corresponds to “weighted averaging means” in claim 9
- the direction discriminating circuit 422 corresponds to “direction discriminating means” in claim 12.
- the RB interpolation circuit 409 reads the vicinity of N ⁇ N of the target pixel Q from the internal buffer 424.
- N is a predetermined integer.
- the read neighbor data is referred to as X.
- the G interpolation result at the pixel positions corresponding to the R and B pixels included in X is read from each of the G image buffer H417 and the G image buffer V418. Let these data be YH and YV, respectively.
- Depending on the data pattern, the color component obtained at the shaded central pixel of X differs, but the processing is the same. Therefore, the processing will be described below only for the pattern shown in FIG. 23.
- the direction discrimination circuit 422 calculates the mixing ratio of the horizontal color difference component and the vertical color difference component in the above (2).
- the direction discrimination circuit 422 first multiplies the G component of X by a bandpass filter F1 that detects a horizontal edge and a bandpass filter F2 that detects a vertical edge, and the results Dh and Get Dv.
- the method of calculation and the frequency characteristics of F 1 and F 2 are the same as S 34 in FIG. 13 in the second embodiment.
- Using Dh and Dv, the direction determination circuit 422 calculates the utilization rate β of the horizontal interpolation result and the utilization rate γ of the vertical interpolation result by equation (11),
- The step amount estimating circuit 423 determines the ratio α for mixing the result of the interpolation affected by the step with that of the interpolation not affected by the step.
- Like S3 of FIG. 5 in the first embodiment, the level difference estimating circuit 423 stores in its internal ROM a table that associates the nearby R pixel value and B pixel value with the G level difference that arises. The average R' of the R pixels and the average B' of the B pixels contained in X are calculated and used as indices into the table to obtain the G level difference F. Next, by referring to a table that associates the G step amount F with the mixing ratio α, the final mixing ratio α is output to the mixing circuit 421. This table is also provided in the ROM and preferably has characteristics like those of the function g in FIG. 7.
- After the above processing is completed, the LPF circuit 420 estimates the color differences R-G and B-G at the center of the neighborhood.
- The LPF circuit 420 first performs the following filtering on the data X, YH, and YV.
- F3, F4, and F5 are linear filters of the same size as X, YH, and YV, and their coefficient values are stored in the internal ROM of the LPF circuit 420.
- A low-pass filter F3 having the frequency characteristic indicated by characteristic A in FIG. 24 is applied to YH and YV to obtain Gh1 and Gv1.
- F3_k,l represents a coefficient of the filter F3.
- YH(k, l) indicates the pixel value at coordinates (k, l) of YH. Both the denominator and the numerator take the sum over all integer pairs (k, l) representing the coordinates of pixel positions at which data are obtained in YH.
- Gh2 and Gv2 are obtained by applying a low-pass filter F4 having a frequency characteristic indicated by characteristic B in FIG. 24 to YH and YV.
- the calculation method is the same as in equation (12).
- The low-pass filter F4 is applied to the R and B components of X to obtain R2 and B2.
- R2 = ( Σ F4_k,l * X(k, l) ) / Σ F4_k,l, B2 = ( Σ F4_s,t * X(s, t) ) / Σ F4_s,t (13)
- X(k, l) indicates the pixel value at coordinates (k, l) of X.
- For R2, both the denominator and the numerator take the sum over all integer pairs (k, l) representing the coordinates of the R pixel positions in X,
- both the denominator and the numerator take the sum of all integer pairs (s, t) representing the coordinates of the B pixel position in X.
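- A minimal sketch of this masked low-pass filtering (Python/NumPy); F is the filter, and mask marks the positions where data of the relevant component exist:

```python
import numpy as np

def masked_lowpass(F, X, mask):
    # Equations (12)/(13) style filtering: both numerator and denominator
    # sum only over positions where data exist (mask == 1), so the sparse
    # R, B, or G sampling does not bias the low-pass result.
    w = F * mask
    return float(np.sum(w * X) / np.sum(w))
```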
- The LPF circuit 420 outputs the results Gh1, Gh2, Gv1, Gv2, R2, and B2 to the mixing circuit 421.
- Based on the input data, the mixing circuit 421 obtains the color difference components R-G and B-G at the center pixel of X as follows.
- The color difference components Cr1 and Cb1, which are not affected by the G step, are calculated by the following equation (14).
- The process of S64 corresponds to the requirement of claim 10. That is, the "multiple interpolation results" of claim 10 correspond to Gh and Gv in equation (10).
- The "plurality of frequency components" correspond to Gh1 and Gv1 in equation (12) and to Gh2-Gh1 and Gv2-Gv1 in equation (14).
- "Different weights for each frequency component" correspond to the coefficient 1/2 for Gh1 and Gv1 in equation (14), and to β for Gh2-Gh1 and γ for Gv2-Gv1 in equation (14).
- The LPF circuit 420 applies a low-pass filter F5 to the G component of X, estimates the value G0 of the G component at the central pixel of X, and outputs it to the mixing circuit 421.
- F5 has a frequency characteristic in the horizontal and vertical directions indicated by characteristic C in FIG.
- the formula for G0 is:
- G0 = (Σ F5_k,l · X(k, l)) / Σ F5_k,l  (17)
- X(k, l) denotes the pixel value at coordinates (k, l) of X.
- both the denominator and the numerator take the sum over all integer pairs (k, l) representing the coordinates of the G pixel positions in X.
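Equation (17) has the same masked-average structure as Equations (12) and (13). A minimal sketch follows; the Boolean mask of G sites is an assumed input, and the F5 coefficients are ROM contents not reproduced here.

```python
import numpy as np

def estimate_g0(X, g_sites, F5):
    """Equation (17) sketch: F5-weighted average over the G pixel positions
    of X, normalized by the coefficients taken over those positions."""
    w = F5 * g_sites
    return float((w * X).sum() / w.sum())
```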
- when the processing of the RB interpolation circuit 409 has been completed for all pixels of the internal buffer 424, a color image in which the missing color components of each pixel of the single-chip image buffer 104 have been compensated is output to the image quality adjustment circuit 110. Subsequent processing is the same as in the first embodiment.
- processing similar to that of the present embodiment can also be realized as software for developing a RAW data file, as in the first embodiment. Specifically, in the flowchart of FIG. 10, a buffer 4 is provided in addition to buffer 3, and certain steps are changed.
- FIG. 30 shows the flowchart after the change. Each step of the flowchart is described below.
- S80 Same as S7 in Fig. 10.
- S81 Same as S8 in Fig. 10.
- S82 Search for unprocessed R pixel positions or B pixel positions in buffer 2. If an unprocessed one is found, proceed to S83; otherwise, proceed to S86.
- S83 Read the 3x3 neighborhood of the unprocessed pixel.
- S84 Within the read neighborhood, calculate the average value Gh of the G pixels horizontally adjacent to the center pixel, and output it to the buffer 3 at the position corresponding to the target pixel.
- S85 Calculate the average value Gv of the G pixels vertically adjacent to the central pixel in the read neighborhood, and output it to the position corresponding to the target pixel in the buffer 4.
- the calculation expression is as shown in Expression (10).
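Steps S84 and S85 reduce to simple two-pixel averages on a Bayer grid. A minimal sketch is given below; writing the results into buffer 3 and buffer 4 is omitted, and the list-of-lists representation of the 3x3 neighborhood is an assumption for illustration.

```python
def gh_gv(neighborhood):
    """S84/S85 (Expression (10)) sketch: for a 3x3 neighborhood read at an
    R or B site of a Bayer image (center at [1][1]), Gh is the average of the
    horizontally adjacent G pixels and Gv of the vertically adjacent ones."""
    nb = neighborhood
    Gh = (nb[1][0] + nb[1][2]) / 2.0   # left/right neighbors are G on a Bayer grid
    Gv = (nb[0][1] + nb[2][1]) / 2.0   # up/down neighbors are G on a Bayer grid
    return Gh, Gv
```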
- S86 Same as S13 in Fig. 10.
- S87 Search buffer 2 for a pixel that has not yet been processed in S87 to S90. If one is found, go to S88; otherwise go to S91.
- S88 For an unprocessed pixel (hereinafter referred to as Q) in buffer 2, read the neighborhood of NxN (N is a predetermined integer) around the corresponding pixel position.
- this data is referred to as X.
- in the present embodiment, the level difference amount is estimated for each pixel and the processing is switched according to that estimate; however, it is also conceivable to implement only interpolation processing that is not affected by the level difference, so that no failure occurs regardless of what level difference arises.
- FIG. 27 shows such a modification.
- FIG. 27 shows the configuration of the RB interpolation circuit 409 according to this modification, from which the step amount estimating circuit 423 is omitted.
- a mixing circuit 431 is provided instead of the mixing circuit 421; the operation of the other circuits is the same as described above. After Cr1, Cb1, and G0 are calculated according to Equations (14) and (17), the mixing circuit 431 calculates the final output using equation (19) instead of Equation (18).
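Equation (19) is not reproduced in this excerpt. The sketch below is only an assumed reconstruction of the modification: since only the level-difference-insensitive color differences Cr1 and Cb1 are computed here, the final R and B values are assumed to be recovered by adding back the estimated G component G0.

```python
def final_rb_modification(Cr1, Cb1, G0):
    """Assumed form of the FIG. 27 modification: recover R and B from the
    level-difference-insensitive color differences and the G estimate."""
    R = G0 + Cr1
    B = G0 + Cb1
    return R, B
```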
- the configuration of FIG. 27 corresponds to the requirements described in claim 2. That is, the interpolation circuit E223 and the interpolation circuit F224 correspond to the "plurality of interpolation means for compensating for missing color components in each pixel of data" in claim 2.
- the selection circuit 116 corresponds to “mixing means for mixing outputs from the plurality of interpolation means”.
- the interpolating means and the mixing means are “set in accordance with the sensitivity difference (between pixels acquiring the specific color component)” according to claim 2.
- S102 Same processing as S64 in FIG. 22, but does not calculate Equations (15) and (16).
- S103 Same as S65 in FIG. 22.
- S104 This is the same process as S66 in Fig. 22, except that the final R component and B component are calculated by using equation (19) instead of equation (18).
- the image processing program according to claim 13 corresponds to the RAW development software (FIGS. 5, 10, 13, 17, 22, 30) described in the first to fourth embodiments.
- like the invention of claim 1, the image processing program of claim 13 has the effect that interpolation without failure can be performed even when there is a sensitivity difference, by optimally mixing a plurality of interpolation results in accordance with the sensitivity difference.
- the image processing program of claim 14 corresponds to the RAW development software (FIGS. 10, 29, 30, 31) described in the modified example of the second embodiment and the modified example of the fourth embodiment.
- like the invention of claim 2, the image processing program of claim 14 has the effect that an interpolation result without failure is obtained even when there is a sensitivity difference in the input data, since the interpolation means and the mixing means are set so as to cope with such a case.
- An image processing program according to claim 15 corresponds to an embodiment of the image processing apparatus according to claims 3 and 5. That is, it corresponds to the first to third embodiments and has the following operational effects.
- An image processing program according to claim 16 corresponds to an embodiment of the image processing apparatus according to claims 4 and 5. That is, it corresponds to the first to third embodiments, but the second embodiment corresponds to the modified example shown in FIG.
- the image processing program according to claim 16 has the following effects.
- by removing the frequency component that the sensitivity difference imparts to the interpolation result, the influence of the sensitivity difference on the interpolation result can be prevented.
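As an illustration only (not taken from the text): a line-by-line sensitivity difference between Gr and Gb rows shows up as a component at the vertical Nyquist frequency, and a two-tap vertical average has a zero at exactly that frequency, so the component is suppressed while lower frequencies pass largely unchanged. The function name and the choice of filter are assumptions.

```python
import numpy as np

def remove_row_sensitivity_component(g_plane):
    """Suppress a row-wise Gr/Gb sensitivity pattern with a [1, 1]/2 filter
    applied along the vertical direction (zero response at vertical Nyquist)."""
    g = np.asarray(g_plane, dtype=float)
    return 0.5 * (g[:-1, :] + g[1:, :])
```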
- an image processing program according to claim 17 corresponds to the first and second embodiments, and has the operation and effect according to claim 7.
- that is, a general procedure for estimating the sensitivity difference can be used to obtain the amount or pattern of the sensitivity difference peculiar to a specific sensor or specific shooting conditions, so that the processing can respond to it optimally.
- the "procedure for estimating the sensitivity difference" corresponds to S3 in FIG. 5,
- the "type of image sensor" corresponds to the model number of the image sensor in the description of S8 in FIG. 5, and
- the "photographing condition" corresponds to the shutter speed in the description of S3 in FIG. 5.
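The sketch below only illustrates the kind of lookup implied by the bullets above: reading a pre-measured sensitivity-difference amount (or pattern) from a table keyed by the sensor model number and the shutter speed. The table layout, the quantization of the shutter speed, and the default value are assumptions made for illustration.

```python
def lookup_sensitivity_difference(sensor_model, shutter_speed, table):
    """Hypothetical estimation procedure: look up the expected sensitivity
    difference from a table keyed by sensor model and shutter speed."""
    key = (sensor_model, round(shutter_speed, 4))   # group nearby exposures (assumed)
    return table.get(key, 0.0)                      # default: no sensitivity difference
```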
- An image processing program according to claim 18 corresponds to the first and fourth embodiments, and has the effects and advantages described in claim 9. That is, a plurality of interpolation results can be optimally weighted and averaged according to the sensitivity difference, and interpolation processing without failure can be performed even when there is a sensitivity difference.
- An image processing program corresponds to the second and fourth embodiments, and has the effects and advantages described in the twenty-second aspect. That is, high-precision interpolation can be performed by mixing a plurality of interpolation results according to the edge direction.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Color Television Image Signal Generators (AREA)
- Facsimile Image Signal Circuits (AREA)
- Color Image Communication Systems (AREA)
- Image Input (AREA)
- Image Processing (AREA)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/596,207 US8259202B2 (en) | 2004-05-13 | 2005-05-10 | Image processing device and image processing program for acquiring specific color component based on sensitivity difference between pixels of an imaging device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004143128A JP4717371B2 (ja) | 2004-05-13 | 2004-05-13 | 画像処理装置および画像処理プログラム |
JP2004-143128 | 2004-05-13 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005112470A1 true WO2005112470A1 (ja) | 2005-11-24 |
Family
ID=35394532
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/008827 WO2005112470A1 (ja) | 2004-05-13 | 2005-05-10 | 画像処理装置および画像処理プログラム |
Country Status (3)
Country | Link |
---|---|
US (1) | US8259202B2 (ja) |
JP (1) | JP4717371B2 (ja) |
WO (1) | WO2005112470A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100231745A1 (en) * | 2006-06-16 | 2010-09-16 | Ming Li | Imaging device and signal processing method |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102005058415A1 (de) * | 2005-12-07 | 2007-06-14 | Olympus Soft Imaging Solutions Gmbh | Verfahren zur Farbkorrekturberechnung |
JP2009005166A (ja) * | 2007-06-22 | 2009-01-08 | Olympus Corp | 色補間装置及び画像処理システム |
US8035704B2 (en) | 2008-01-03 | 2011-10-11 | Aptina Imaging Corporation | Method and apparatus for processing a digital image having defective pixels |
US8111299B2 (en) * | 2008-08-15 | 2012-02-07 | Seiko Epson Corporation | Demosaicking single-sensor camera raw data |
US8131067B2 (en) * | 2008-09-11 | 2012-03-06 | Seiko Epson Corporation | Image processing apparatus, image processing method, and computer-readable media for attaining image processing |
TWI376646B (en) * | 2008-12-23 | 2012-11-11 | Silicon Motion Inc | Pixel processing method and image processing system |
JP5359553B2 (ja) * | 2009-05-25 | 2013-12-04 | 株式会社ニコン | 画像処理装置、撮像装置及び画像処理プログラム |
JP5401191B2 (ja) * | 2009-07-21 | 2014-01-29 | 富士フイルム株式会社 | 撮像装置及び信号処理方法 |
KR101665137B1 (ko) | 2010-04-07 | 2016-10-12 | 삼성전자주식회사 | 이미지 센서에서 발생되는 잡음을 제거하기 위한 장치 및 방법 |
JP5672776B2 (ja) * | 2010-06-02 | 2015-02-18 | ソニー株式会社 | 画像処理装置、および画像処理方法、並びにプログラム |
JP5780747B2 (ja) * | 2010-12-15 | 2015-09-16 | キヤノン株式会社 | 画像処理装置、画像処理方法及びプログラム |
US8654225B2 (en) * | 2011-05-31 | 2014-02-18 | Himax Imaging, Inc. | Color interpolation system and method thereof |
JP5883992B2 (ja) * | 2013-05-23 | 2016-03-15 | 富士フイルム株式会社 | 画素混合装置およびその動作制御方法 |
JP6626272B2 (ja) * | 2015-05-26 | 2019-12-25 | ハンファテクウィン株式会社 | 画像処理装置、画像処理方法及び撮像装置 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04140992A (ja) * | 1990-10-02 | 1992-05-14 | Canon Inc | 撮像装置 |
JPH07236147A (ja) * | 1993-08-31 | 1995-09-05 | Sanyo Electric Co Ltd | 単板式カラービデオカメラの色分離回路 |
JPH08298669A (ja) * | 1995-03-17 | 1996-11-12 | Eastman Kodak Co | 適応カラー補間単一センサカラー電子カメラ |
JP2002238057A (ja) * | 2001-02-08 | 2002-08-23 | Ricoh Co Ltd | 撮像装置、輝度補正方法、およびその方法をコンピュータで実行するためのプログラム |
JP2003230158A (ja) * | 2002-02-01 | 2003-08-15 | Sony Corp | 画像処理装置および方法、プログラム、並びに記録媒体 |
JP2004363902A (ja) * | 2003-06-04 | 2004-12-24 | Nikon Corp | 撮像装置 |
JP2005160044A (ja) * | 2003-10-31 | 2005-06-16 | Sony Corp | 画像処理装置および画像処理方法、並びにプログラム |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6563538B1 (en) * | 1997-09-26 | 2003-05-13 | Nikon Corporation | Interpolation device, process and recording medium on which interpolation processing program is recorded |
JP3660504B2 (ja) * | 1998-07-31 | 2005-06-15 | 株式会社東芝 | カラー固体撮像装置 |
US7084905B1 (en) * | 2000-02-23 | 2006-08-01 | The Trustees Of Columbia University In The City Of New York | Method and apparatus for obtaining high dynamic range images |
JP2002016930A (ja) * | 2000-06-28 | 2002-01-18 | Canon Inc | 撮像装置及び方法、信号処理装置及び方法 |
JP2002209223A (ja) * | 2001-01-09 | 2002-07-26 | Sony Corp | 画像処理装置および方法、並びに記録媒体 |
US7079705B2 (en) * | 2002-10-30 | 2006-07-18 | Agilent Technologies, Inc. | Color interpolation for image sensors using a local linear regression method |
- 2004
  - 2004-05-13 JP JP2004143128A patent/JP4717371B2/ja not_active Expired - Fee Related
- 2005
  - 2005-05-10 US US11/596,207 patent/US8259202B2/en not_active Expired - Fee Related
  - 2005-05-10 WO PCT/JP2005/008827 patent/WO2005112470A1/ja active Application Filing
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04140992A (ja) * | 1990-10-02 | 1992-05-14 | Canon Inc | 撮像装置 |
JPH07236147A (ja) * | 1993-08-31 | 1995-09-05 | Sanyo Electric Co Ltd | 単板式カラービデオカメラの色分離回路 |
JPH08298669A (ja) * | 1995-03-17 | 1996-11-12 | Eastman Kodak Co | 適応カラー補間単一センサカラー電子カメラ |
JP2002238057A (ja) * | 2001-02-08 | 2002-08-23 | Ricoh Co Ltd | 撮像装置、輝度補正方法、およびその方法をコンピュータで実行するためのプログラム |
JP2003230158A (ja) * | 2002-02-01 | 2003-08-15 | Sony Corp | 画像処理装置および方法、プログラム、並びに記録媒体 |
JP2004363902A (ja) * | 2003-06-04 | 2004-12-24 | Nikon Corp | 撮像装置 |
JP2005160044A (ja) * | 2003-10-31 | 2005-06-16 | Sony Corp | 画像処理装置および画像処理方法、並びにプログラム |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100231745A1 (en) * | 2006-06-16 | 2010-09-16 | Ming Li | Imaging device and signal processing method |
US8471936B2 (en) * | 2006-06-16 | 2013-06-25 | Sony Corporation | Imaging device and signal processing method |
Also Published As
Publication number | Publication date |
---|---|
JP4717371B2 (ja) | 2011-07-06 |
US20080043115A1 (en) | 2008-02-21 |
US8259202B2 (en) | 2012-09-04 |
JP2005328215A (ja) | 2005-11-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2005112470A1 (ja) | 画像処理装置および画像処理プログラム | |
JP4700445B2 (ja) | 画像処理装置および画像処理プログラム | |
JP4610930B2 (ja) | 画像処理装置、画像処理プログラム | |
JP5574615B2 (ja) | 画像処理装置、その制御方法、及びプログラム | |
KR100505681B1 (ko) | 베이어 패턴 컬러 신호에 대한 적응형 필터로 보간을수행하여 해상도를 높이는 보간기, 이를 구비한 디지털영상 신호 처리 장치, 및 그 방법 | |
US7667738B2 (en) | Image processing device for detecting chromatic difference of magnification from raw data, image processing program, and electronic camera | |
JP4350706B2 (ja) | 画像処理装置及び画像処理プログラム | |
CN1312939C (zh) | 图像处理方法、图像处理程序、图像处理装置 | |
JP5672776B2 (ja) | 画像処理装置、および画像処理方法、並びにプログラム | |
US20130057734A1 (en) | Image processing apparatus, image processing method, information recording medium, and program | |
JP2013219705A (ja) | 画像処理装置、および画像処理方法、並びにプログラム | |
JP4329542B2 (ja) | 画素の類似度判定を行う画像処理装置、および画像処理プログラム | |
US7982787B2 (en) | Image apparatus and method and program for producing interpolation signal | |
JP6282123B2 (ja) | 画像処理装置、画像処理方法及びプログラム | |
JP6415093B2 (ja) | 画像処理装置、画像処理方法、及びプログラム | |
US20120287299A1 (en) | Image processing device, storage medium storing image processing program, and electronic camera | |
JP2009194721A (ja) | 画像信号処理装置、画像信号処理方法、及び撮像装置 | |
JP5153842B2 (ja) | 画像処理装置、画像処理プログラム | |
JP4483746B2 (ja) | 欠陥画素補正方法及び装置 | |
JP4239483B2 (ja) | 画像処理方法、画像処理プログラム、画像処理装置 | |
JP4583871B2 (ja) | 画素信号生成装置、撮像装置および画素信号生成方法 | |
JP4196055B2 (ja) | 画像処理方法、画像処理プログラム、画像処理装置 | |
JP4122082B2 (ja) | 信号処理装置およびその処理方法 | |
JP2014110507A (ja) | 画像処理装置および画像処理方法 | |
JP6987621B2 (ja) | 画像処理装置、画像処理方法及びプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWW | Wipo information: withdrawn in national office |
Country of ref document: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 11596207 Country of ref document: US |
|
122 | Ep: pct application non-entry in european phase | ||
WWP | Wipo information: published in national office |
Ref document number: 11596207 Country of ref document: US |