WO2004059987A1 - Imaging Apparatus and Method - Google Patents
Imaging Apparatus and Method
- Publication number
- WO2004059987A1 (PCT/JP2003/015437)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- color
- noise
- image
- value
- signal
- Prior art date
Classifications
- H04N23/12 — Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with one sensor only
- H04N23/843 — Demosaicing, e.g. interpolating colour pixel values
- H04N25/134 — Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
Definitions
- the present invention relates to an imaging apparatus and method for imaging an object, and particularly to an imaging apparatus and method concerned with imaging sensitivity.
- the image pickup device uses, for example, a color filter 1 of the three primary colors RGB as shown in FIG. 1.
- the color filter 1 is formed by a so-called Bayer array, whose minimum unit is a total of four filters: one R filter, two G filters, and one B filter, each transmitting only the light of its color.
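The 2×2 Bayer minimum unit described above can be sketched as follows. This is a minimal illustration, not from the patent; the particular placement of R and B within the unit is an assumption (the text only fixes the 2×2 repeat with two G sites).

```python
import numpy as np

def bayer_mosaic(height, width):
    """Tile the 2x2 Bayer minimum unit over a sensor grid.

    G occupies two diagonal sites, R and B the remaining two -- one
    common Bayer arrangement, assumed here for illustration.
    Returns an array of one-letter channel codes.
    """
    unit = np.array([["G", "R"],
                     ["B", "G"]])
    reps = (height // 2 + 1, width // 2 + 1)
    return np.tile(unit, reps)[:height, :width]

cfa = bayer_mosaic(4, 4)
# In any 2x2 block, G appears twice and R, B once each.
```

Tiling the minimum unit this way is exactly how the physical filter covers the sensor: every pixel sees only one of the three colors, and the missing two are later interpolated.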
- FIG. 2 is a block diagram illustrating a configuration example of a signal processing unit 11 that performs various kinds of processing on the RGB signals obtained by a CCD (Charge Coupled Device) image sensor having the RGB color filter 1.
- the offset correction processing unit 21 removes an offset component included in the image signal supplied from the front end 13, which performs predetermined processing on the signal acquired by the CCD image sensor, and outputs the obtained image signal to the white balance correction processing section 22.
- the white balance correction processing unit 22 corrects the balance of each color based on the color temperature of the image signal supplied from the offset correction processing unit 21 and the difference in sensitivity between the filters of the color filter 1.
- the color signal obtained by this correction is output to the gamma correction processing unit 23.
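The white balance correction just described can be sketched as a per-channel gain that equalizes the response to a neutral (gray) reference. This is an illustrative sketch; the normalization with G fixed at 1.0 is a common convention assumed here, not stated by the patent.

```python
import numpy as np

def white_balance_gains(neutral_response):
    """Per-channel gains that equalize the response to a neutral patch.

    neutral_response: raw R, G, B values measured on a gray reference.
    Gains are normalized so G stays at 1.0 (an assumed convention).
    """
    r, g, b = neutral_response
    return np.array([g / r, 1.0, g / b])

gains = white_balance_gains([0.8, 1.0, 0.5])   # raw response to a gray patch
balanced = gains * np.array([0.8, 1.0, 0.5])   # gray now has equal channels
```

Applying the same gains to every pixel compensates both the illuminant color temperature and the sensitivity differences between the filters.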
- the gamma correction processing unit 23 performs gamma correction on the signal supplied from the white balance correction processing unit 22 and outputs the obtained signal to the vertical direction synchronization processing unit 24.
- a delay element provided in the vertical direction synchronization processing section 24 synchronizes (corrects) the vertical time lag of the signal supplied from the gamma correction processing section 23.
- the RGB signal generation processing unit 25 performs interpolation processing to align the color signals supplied from the vertical direction synchronization processing unit 24 to the same spatial phase, noise removal processing to remove noise components of the signal, filtering to limit the signal band, and high-frequency correction to compensate the high-frequency components of the signal band.
- the resulting RGB signals are output to the luminance signal generation processing unit 26 and the color difference signal generation processing unit 27.
- the luminance signal generation processing unit 26 combines the RGB signals supplied from the RGB signal generation processing unit 25 at a predetermined combination ratio to generate a luminance signal (Y).
- the color difference signal generation processing unit 27 synthesizes the RGB signals supplied from the RGB signal generation processing unit 25 at a predetermined synthesis ratio to generate the color difference signals (Cb, Cr).
- the luminance signal (Y) generated by the luminance signal generation processing unit 26 and the color difference signals (Cb, Cr) generated by the color difference signal generation processing unit 27 are output to, for example, a monitor provided downstream of the signal processing unit 11.
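The "predetermined combination ratio" used to form Y, Cb, Cr from RGB can be sketched with the standard ITU-R BT.601 weights. These particular coefficients are an assumption for illustration; the patent only says the ratios are predetermined.

```python
def rgb_to_ycbcr(r, g, b):
    """Combine RGB into luminance (Y) and color differences (Cb, Cr).

    BT.601 weights are used purely as an illustrative choice of the
    'predetermined combination ratio'. Inputs normalized to [0, 1].
    """
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = (b - y) * 0.564          # 0.5 / (1 - 0.114)
    cr = (r - y) * 0.713          # 0.5 / (1 - 0.299)
    return y, cb, cr

y, cb, cr = rgb_to_ycbcr(1.0, 1.0, 1.0)   # white: full luminance, no color difference
```

Note how the luminance weights sum to 1, so a neutral input yields Y equal to its level and zero color difference.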
- when an image capturing apparatus as described above captures an object and generates an image, the image may not be reproduced in the desired color, because its appearance differs depending on the viewing environment at the time of observation.
- the curve L1 shows the spectral sensitivity of R
- the curve L2 shows the spectral sensitivity of G
- the curve L3 shows the spectral sensitivity of B
- the curve L11 in FIG. 4 indicates the spectral sensitivity of R
- the curve L12 indicates the spectral sensitivity of G
- the curve L13 indicates the spectral sensitivity of B.
- the present invention aims to provide an imaging apparatus and an imaging method capable of performing linear matrix processing using coefficients that take into account both color reproducibility and noise reduction according to the imaging environment and conditions.
- the imaging apparatus according to the present invention comprises an imaging element unit that captures an object through color filters having different spectral characteristics; adjusting means for adjusting a color reproduction value and a noise value representing the perceived noise; matrix coefficient determining means for determining a matrix coefficient based on the adjustment by the adjusting means; and matrix conversion processing means for performing, based on the matrix coefficient, a matrix conversion process on the image captured by the imaging element unit.
- the imaging method according to the present invention, for an imaging apparatus having an imaging element unit that captures an object through color filters having different spectral characteristics, includes a first step of adjusting a color reproduction value and a noise value representing the perceived noise, a second step of determining a matrix coefficient based on that adjustment, and a third step of performing a matrix conversion process on the captured image based on the matrix coefficient.
- FIG. 2 is a block diagram illustrating a configuration example of a signal processing unit provided in a conventional imaging device.
- FIG. 3 is a diagram illustrating an example of spectral sensitivity characteristics.
- FIG. 4 is a diagram illustrating another example of the spectral sensitivity characteristic.
- FIG. 5 is a block diagram illustrating a configuration example of an imaging device to which the present invention has been applied.
- FIG. 6 is a diagram illustrating an example of a four-color color filter provided in an imaging device to which the present invention has been applied.
- FIG. 7 is a diagram illustrating an example of a visibility curve.
- FIG. 8 is a diagram showing characteristics of the evaluation coefficient.
- FIG. 9 is a block diagram illustrating a configuration example of a camera system LSI included in an imaging device to which the present invention has been applied.
- FIG. 10 is a block diagram illustrating a configuration example of the signal processing unit in FIG.
- FIG. 11 is a flowchart illustrating the creation process of the imaging apparatus.
- FIG. 12 is a flowchart illustrating details of the four-color color filter determination process in step S1 of FIG.
- FIG. 13 is a diagram showing an example of a virtual curve.
- FIGS. 14 (A) to 14 (C) are diagrams showing examples of UMG values for each filter.
- FIG. 15 is a diagram illustrating an example of spectral sensitivity characteristics of a four-color filter.
- FIG. 16 is a flowchart illustrating details of the linear matrix determination process in step S2 of FIG.
- FIG. 17 is a diagram illustrating an example of a color difference evaluation result.
- FIG. 18 is a diagram showing the chromaticity of a predetermined object by the four-color color filter.
- FIG. 19 is a diagram showing an example of another four-color color filter provided in an imaging apparatus to which the present invention is applied.
- FIG. 20 is a flowchart showing the adaptive determination of the linear matrix coefficient M.
- FIG. 21 is a diagram showing how the noise reduction index changes when the color reproduction index is changed.
- FIG. 22 is a diagram illustrating an example of spectral sensitivity characteristics of a four-color color filter.
- FIG. 23 is a diagram showing a histogram of an image.
- FIG. 5 is a block diagram showing a configuration example of an imaging device to which the present invention is applied.
- the imaging device shown in FIG. 5 has a color filter that separates four types of colors (light) on the front surface (the surface facing the lens 42) of an image sensor 45 such as a CCD (Charge Coupled Device).
- the color filter provided on the image sensor 45 in FIG. 5 is the four-color filter 61 shown in FIG. 6.
- the four-color filter 61 has as its minimum unit a total of four filters: an R filter that transmits only red light, a B filter that transmits only blue light, a G1 filter that transmits only green light in a first wavelength band, and a G2 filter that transmits only green light in a second wavelength band having a high correlation with the G1 filter.
- the G1 filter and the G2 filter are located diagonally to each other within the minimum unit.
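The four-color minimum unit can be sketched the same way as the Bayer unit. The diagonal placement of G1 and G2 follows the text; putting R top-right and B bottom-left is an assumption for illustration.

```python
import numpy as np

def four_color_mosaic(height, width):
    """Tile the 2x2 minimum unit of the four-color filter 61.

    G1 and G2 sit on the diagonal, as the patent requires; the
    positions of R and B are an illustrative assumption.
    """
    unit = np.array([["G1", "R"],
                     ["B", "G2"]])
    reps = (height // 2 + 1, width // 2 + 1)
    return np.tile(unit, reps)[:height, :width]

cfa4 = four_color_mosaic(4, 4)
```

Unlike the Bayer case, every 2×2 block now contains four distinct channels, so the demosaic step must interpolate three missing channels per pixel instead of two.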
- the human eye is sensitive to luminance. Therefore, in the four-color filter 61 shown in FIG. 6, obtaining more accurate luminance information increases the gradation of the luminance and improves the appearance to the eye.
- a G2 filter having a spectral sensitivity characteristic close to the visibility curve has been added; that is, a newly determined green G2 filter is added to the R, G1, and B filters corresponding to R, G, and B in FIG. 1.
- as the filter evaluation coefficient used when determining the four-color filter 61, UMG (Unified Measure of Goodness), which takes into account both "color reproducibility" and "noise reduction", is used.
- FIG. 8 is a diagram showing characteristics of each filter evaluation coefficient.
- Figure 8 shows, for each evaluation coefficient, the number of filters that can be evaluated at one time, whether the spectral reflectance of the object is taken into account, and whether noise reduction is taken into account.
- the q-factor can evaluate only one filter at a time, and considers neither the spectral reflectance of the object nor noise reduction.
- the ν-factor can evaluate multiple filters at once, but considers neither the spectral reflectance of the object nor noise reduction.
- FOM can evaluate multiple filters at once and takes into account the spectral reflectance of the object, but does not consider noise reduction.
- the UMG used to determine the four-color filter 61 can evaluate multiple filters at once, takes into account the spectral reflectance of the object, and also considers noise reduction.
- the q-factor is described in H. E. J. Neugebauer, "Quality Factor for Filters Whose Spectral Transmittances are Different from Color Mixture Curves, and Its Application to Color Photography", Journal of the Optical Society of America, Volume 46, Number 10.
- the microcomputer 41 controls the entire operation according to a predetermined control program.
- the microcomputer 41 performs exposure control via the aperture 43, open/close control of the shutter 44, electronic-shutter control of the TG (Timing Generator) 46, gain control of the front end 47, and mode and parameter control of the camera system LSI (Large Scale Integrated Circuit) 48.
- the aperture 43 adjusts the passage (aperture) of the light collected by the lens 42 and controls the amount of light taken in by the image sensor 45.
- the shutter 44 controls the passage of the light collected by the lens 42 based on instructions from the microcomputer 41.
- the image sensor 45 has an image sensing element constituted by a CCD or a CMOS (Complementary Metal Oxide Semiconductor), converts the light incident through the four-color filter 61 formed in front of it into electrical signals, and outputs four types of color signals (R signal, G1 signal, G2 signal, B signal) to the front end 47.
- the image sensor 45 is provided with the four-color filter 61 shown in FIG. 6, and extracts the wavelength components of the respective R, G1, G2, and B bands from the light incident through the lens 42. The details will be described later with reference to FIG.
- the front end 47 performs, on the color signals supplied from the image sensor 45, correlated double sampling processing to remove noise components, gain control processing, digital conversion processing, and the like, and outputs the obtained image data to the camera system LSI 48.
- the camera system LSI 48 performs various processes on the image data supplied from the front end 47, generates, for example, a luminance signal and a color signal, and outputs them to the image monitor 50, which displays the corresponding image.
- the image memory 49 is composed of, for example, a DRAM (Dynamic Random Access Memory) or an SDRAM (Synchronous Dynamic Random Access Memory), and is appropriately used when the camera system LSI 48 performs various processes.
- the external storage medium 51, composed of a semiconductor memory, a disk, or the like, is configured to be detachable from the image pickup apparatus in FIG. 5, and stores data compressed by the camera system LSI 48 in the JPEG (Joint Photographic Experts Group) format.
- the image monitor 50 is composed of, for example, an LCD (Liquid Crystal Display), and displays captured images, various menu screens, and the like.
- FIG. 9 is a block diagram showing a configuration example of the camera system LSI 48 shown in FIG. 5.
- each block constituting the camera system LSI 48 is controlled by the microcomputer 41 shown in FIG. 5 via the microcomputer interface (I/F) 73.
- the signal processing unit 71 performs various processing such as interpolation processing, filtering processing, matrix calculation processing, luminance signal generation processing, and color difference signal generation processing on the four types of color information supplied from the front end 47. Then, for example, the generated image signal is output to the image monitor 50 via the monitor interface 77.
- the image detection section 72 performs detection processing such as autofocus, autoexposure, and auto white balance based on the output of the front end 47, and outputs the result to the microcomputer 41 as appropriate.
- the memory controller 75 controls the transfer of data between the processing blocks, or between a given processing block and the image memory 49. For example, the image data supplied from the signal processing unit 71 is output to the image memory 49 via the memory interface 74 and stored there.
- the image compression/decompression unit 76 compresses, for example, the image data supplied from the signal processing unit 71 in the JPEG format, and outputs the compressed data to the external storage medium 51 via the microcomputer interface 73 for storage.
- the image compression / decompression unit 76 also decompresses (decompresses) the compressed data read from the external storage medium 51 and outputs it to the image monitor 50 via the monitor interface 77.
- FIG. 10 is a block diagram showing a detailed configuration example of the signal processing unit 71 shown in FIG. Each block constituting the signal processing section 71 is controlled by the microcomputer 41 via the microcomputer interface 73.
- the offset correction processing unit 91 removes a noise component (offset component) included in the image signal supplied from the front end 47, and outputs the obtained image signal to the white balance correction processing unit 92.
- the white balance correction processing section 92 corrects the balance of each color based on the color temperature of the image signal supplied from the offset correction processing section 91 and the difference in sensitivity between the filters of the four-color filter 61.
- the color signals obtained by this correction are output to the vertical direction synchronization processing section 93.
- a delay element is provided in the vertical synchronization processing unit 93, and the vertical time lag of the signals output from the white balance correction processing unit 92 (hereinafter referred to as the RG1G2B signal) is synchronized (corrected).
- the signal generation processing unit 94 performs interpolation processing to align the 2×2-pixel minimum-unit color signals of the RG1G2B signal supplied from the vertical synchronization processing unit 93 to the same spatial phase, noise removal processing to remove noise components of the signal, filtering to limit the signal band, and high-frequency correction to compensate the high-frequency components of the signal band.
- the linear matrix processing unit 95 applies equation (1) to the RG1G2B signal, based on a predetermined linear matrix coefficient M (a 3×4 matrix), to generate the three-color RGB signals. The R signal generated by the linear matrix processing section 95 is output to the gamma correction processing section 96-1, the G signal to the gamma correction processing section 96-2, and the B signal to the gamma correction processing section 96-3.
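The linear matrix step maps the four filter outputs to three RGB values with a 3×4 matrix M. A minimal sketch follows; the matrix values below are placeholders (averaging G1 and G2 into G), not the coefficients the patent actually derives.

```python
import numpy as np

def linear_matrix(rg1g2b, M):
    """Apply the 3x4 linear matrix: (R, G, B) = M @ (R, G1, G2, B).

    M carries the color-reproduction / noise trade-off; the values
    used below are illustrative placeholders only.
    """
    return M @ rg1g2b

# Placeholder matrix: average G1 and G2 into G, pass R and B through.
M = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
rgb = linear_matrix(np.array([0.6, 0.4, 0.2, 0.8]), M)
```

Because M mixes channels, it simultaneously shapes color reproduction and scales the sensor noise carried by each input channel, which is exactly the trade-off the coefficient determination process balances.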
- the gamma correction processing units 96-1 to 96-3 perform gamma correction on each of the RGB signals output from the linear matrix processing unit 95, and output the corrected RGB signals to the luminance (Y) signal generation processing section 97 and the color difference (C) signal generation processing section 98.
- the luminance signal generation processing unit 97 combines the RGB signals supplied from the gamma correction processing units 96-1 to 96-3 at a predetermined combination ratio, for example according to equation (2), to generate the luminance signal (Y).
- the color difference signal generation processing unit 98 generates a color difference signal (C) by synthesizing the RGB signals supplied from the gamma correction processing units 96-1 to 96-3 at a predetermined synthesis ratio, and outputs it to the band-limiting thinning processing section 99.
- the band-limiting thinning processing section 99 generates the color difference signals (Cb, Cr) from the color difference signal (C). Note that a signal obtained by single-chip 2×2 color coding generally carries less color-information bandwidth than the luminance signal.
- the band-limiting thinning processing unit 99 therefore performs band limiting and thinning on the color difference signal (C) supplied from the color difference signal generation processing unit 98, reducing the amount of color information data.
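The band-limit-then-thin step can be sketched with a 2-tap averaging filter followed by 2:1 horizontal decimation (4:2:2-style). Both the filter and the ratio are illustrative assumptions; the patent specifies neither.

```python
import numpy as np

def thin_chroma(c):
    """Band-limit and decimate a color-difference signal horizontally.

    Averaging neighboring pairs acts as the band-limiting filter and
    simultaneously keeps every other sample (2:1 thinning). Minimal
    sketch; input length is assumed even.
    """
    c = np.asarray(c, dtype=float)
    return 0.5 * (c[0::2] + c[1::2])

halved = thin_chroma([2.0, 4.0, 4.0, 8.0])
```

Filtering before decimation matters: dropping samples without the low-pass step would alias high-frequency chroma into the reduced-rate signal.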
- the luminance signal (Y) generated by the luminance signal generation processing unit 97, the color difference signal (C) generated by the color difference signal generation processing unit 98, or the color difference signals (Cb, Cr) generated by the band-limiting thinning processing unit 99 are output to, for example, the image monitor 50 via the monitor interface 77 shown in FIG. 9.
- the microcomputer 41 controls the TG 46 and causes the image sensor 45 to capture an image. That is, light of four colors is transmitted by the four-color filter 61 formed on the front surface of the image sensing element, such as a CCD (hereinafter referred to as the CCD image sensor), constituting the image sensor 45; the transmitted light is captured by the CCD image sensor, converted into four color signals, and output to the front end 47.
- the front end 47 performs correlated double sampling processing, gain control processing, digital conversion processing, and the like on the color signals supplied from the image sensor 45 to remove noise components, and outputs the obtained image data to the camera system LSI 48.
- the offset component of the color signal is removed by the offset correction processing section 91, and the white balance correction processing section 92 corrects the balance of each color based on the color temperature of the image signal and the difference in sensitivity between the filters of the four-color filter 61.
- the vertical direction synchronization processing unit 93 synchronizes (corrects) the vertical time lag of the signals corrected by the white balance correction processing unit 92, and the signal generation processing unit 94 performs interpolation processing to align the 2×2-pixel minimum-unit color signals of the RG1G2B signal supplied from the vertical synchronization unit 93 to the same phase, noise removal processing to remove noise components of the signal, filtering to limit the signal band, high-frequency correction to compensate the high-frequency components of the signal band, and the like.
- the signal (RG1G2B signal) generated by the signal generation processing section 94 is converted by the linear matrix processing unit 95 based on the predetermined linear matrix coefficient M (3×4 matrix), and the three-color RGB signals are generated.
- the R signal generated by the linear matrix processing unit 95 is output to the gamma correction processing unit 96-1, the G signal to the gamma correction processing unit 96-2, and the B signal to the gamma correction processing unit 96-3.
- gamma correction is performed on each of the RGB signals obtained by the linear matrix processing unit 95 by the gamma correction processing units 96-1 to 96-3, and the corrected RGB signals are output to the luminance signal generation processing section 97 and the color difference signal generation processing section 98.
- the R, G, and B signals supplied from the gamma correction processing sections 96-1 to 96-3 are combined at predetermined ratios to generate a luminance signal (Y) and a chrominance signal (C).
- the luminance signal (Y) generated by the luminance signal generation processing unit 97 and the color difference signal (C) generated by the color difference signal generation processing unit 98 are output to the image compression/decompression unit 76 in FIG. 9, where the image data is compressed in the JPEG format.
- the obtained image data is output to the external storage medium 51 via the microcomputer interface 73 and stored.
- the microcomputer 41 reads out the image data stored in the external storage medium 51 and outputs it to the image compression/decompression unit 76 of the camera system LSI 48.
- in the image compression/decompression unit 76, the compressed image data is expanded, and the image corresponding to the obtained data is displayed on the image monitor 50 via the monitor interface 77.
- in step S1, a four-color filter determination process for determining the spectral sensitivity characteristics of the four-color filter 61 provided in the image sensor 45 shown in FIG. 5 is performed, and in step S2, a linear matrix coefficient M determination process for determining the matrix coefficient M set in the linear matrix processing section 95 is performed.
- details of the four-color filter determination process executed in step S1 will be described later with reference to the flowchart shown in FIG. 12, and details of the linear matrix coefficient M determination process executed in step S2 with reference to the flowchart shown in FIG. 16.
- in step S3, the signal processing unit 71 shown in FIG. 10 is created, and the process proceeds to step S4, in which the camera system LSI 48 is created.
- in step S5, an entire imaging device (e.g., a digital camera) as shown in FIG. 5 is created.
- in step S6, the image quality ("color reproducibility" and "color discrimination") of the imaging device created in step S5 is evaluated, and the process ends.
- the color of an object is calculated by integrating the product of the spectral reflectance of the object, the spectral energy distribution of the standard illumination, and the spectral sensitivity distribution (characteristic) of the sensor (color filter) that senses the object over the visible light range (e.g., 400 to 700 nm). That is, the object color is calculated by equation (3):
- Object color = k ∫_vis (spectral reflectance of object) × (spectral energy distribution of illumination) × (spectral sensitivity characteristic of sensor) dλ … (3)
- where vis denotes the visible light region (usually 400 nm to 700 nm).
- the "spectral sensitivity characteristic of the sensor" in equation (3) is expressed by the color matching functions, and the object color is represented by the tristimulus values X, Y, Z.
- the value of X is calculated by equation (4-1), the value of Y by equation (4-2), and the value of Z by equation (4-3), where f(λ) is the spectral reflectance of the object, S(λ) the spectral energy distribution of the illumination, and x̄(λ), ȳ(λ), z̄(λ) the color matching functions:
- X = k ∫_vis f(λ) S(λ) x̄(λ) dλ … (4-1)
- Y = k ∫_vis f(λ) S(λ) ȳ(λ) dλ … (4-2)
- Z = k ∫_vis f(λ) S(λ) z̄(λ) dλ … (4-3)
- the value of the constant k in equations (4-1) to (4-3) is calculated by equation (4-4): k = 100 / ∫_vis S(λ) ȳ(λ) dλ … (4-4)
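Equations (4-1) through (4-4) can be evaluated numerically by replacing the integrals with sums over a sampled wavelength grid. The flat toy spectra below are placeholders chosen only so the result is easy to check; the normalization k giving Y = 100 for a perfect reflector is the usual colorimetric convention.

```python
import numpy as np

def tristimulus(reflectance, illuminant, xbar, ybar, zbar, dlam=5.0):
    """Numerically evaluate equations (4-1)-(4-4).

    All inputs are sampled on the same wavelength grid with step
    dlam (nm). k normalizes so a perfect reflector gives Y = 100.
    """
    k = 100.0 / np.sum(illuminant * ybar * dlam)        # equation (4-4)
    prod = reflectance * illuminant
    X = k * np.sum(prod * xbar * dlam)                  # equation (4-1)
    Y = k * np.sum(prod * ybar * dlam)                  # equation (4-2)
    Z = k * np.sum(prod * zbar * dlam)                  # equation (4-3)
    return X, Y, Z

# Toy spectra: flat illuminant, perfect reflector, flat matching functions.
lam = np.arange(400.0, 701.0, 5.0)
ones = np.ones_like(lam)
X, Y, Z = tristimulus(ones, ones, ones, ones, ones)
```

With real data, the color matching functions and illuminant would be tabulated (e.g., at 5 nm steps over 400 to 700 nm) rather than flat.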
- an image of a predetermined object is captured by an imaging device such as a digital camera.
- when an object is imaged by an imaging device such as a digital camera, the "spectral sensitivity characteristic of the sensor" in equation (3) is represented by the spectral sensitivity characteristics of the color filter, and the object color is obtained as one value per filter (for example, with an RGB filter of three types, the object color is the three RGB values). If the imaging device is provided with an RGB filter that detects three colors, the value of R is calculated by equation (5-1), the value of G by equation (5-2), and the value of B by equation (5-3), where R(λ), G(λ), B(λ) are the spectral sensitivity characteristics of the respective filters:
- R = k_r ∫_vis f(λ) S(λ) R(λ) dλ … (5-1)
- G = k_g ∫_vis f(λ) S(λ) G(λ) dλ … (5-2)
- B = k_b ∫_vis f(λ) S(λ) B(λ) dλ … (5-3)
- the value of the constant k_r in equation (5-1) is calculated by equation (5-4), the constant k_g in equation (5-2) by equation (5-5), and the constant k_b in equation (5-3) by equation (5-6), each normalizing the corresponding channel (for example, k_g = 1 / ∫_vis S(λ) G(λ) dλ).
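Equations (5-1) through (5-6) can be sketched the same way. The reading of the k constants as per-channel normalizations (so a perfect white reflector yields 1.0 in every channel) is an assumption, since that part of the source text is garbled.

```python
import numpy as np

def camera_rgb(reflectance, illuminant, r_sens, g_sens, b_sens, dlam=5.0):
    """Evaluate equations (5-1)-(5-3) with per-channel constants.

    Each k is chosen so a perfect white reflector yields 1.0 in that
    channel -- a plausible reading of equations (5-4)-(5-6), assumed
    here for illustration.
    """
    prod = reflectance * illuminant
    out = []
    for sens in (r_sens, g_sens, b_sens):
        k = 1.0 / np.sum(illuminant * sens * dlam)      # equations (5-4)-(5-6)
        out.append(k * np.sum(prod * sens * dlam))      # equations (5-1)-(5-3)
    return tuple(out)

lam = np.arange(400.0, 701.0, 5.0)
flat = np.ones_like(lam)
r, g, b = camera_rgb(flat, flat, flat, flat, flat)   # white reflector under flat light
```

This mirrors the tristimulus computation above, with the color matching functions replaced by the camera's own filter sensitivities, which is precisely why camera RGB and colorimetric XYZ disagree unless a correcting matrix is applied.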
- next, the four-color filter determination process performed in step S1 shown in FIG. 11 will be described with reference to the flowchart shown in FIG. 12.
- In step S21, the color target to be used for calculating the UMG value is selected. For example, a color target containing many color patches representing existing colors and many color patches emphasizing human memory colors (skin color, plant green, sky blue, etc.) is selected.
- Alternatively, color patches that can be used as a standard may be created from a database such as SOCS (Standard Object Colour Spectra database) and used. Details of SOCS are disclosed in Johji Tajima, "Statistical Color Reproduction Evaluation Using the Standard Object Colour Spectra Database (SOCS)", Color Forum JAPAN 99. The following describes the case where the Macbeth Color Checker is selected as the color target.
- the spectral sensitivity characteristics of the G2 filter are determined.
- As the spectral sensitivity characteristics, those that can be created from existing materials may be used, or a virtual curve C(λ) may be assumed using a cubic spline curve (cubic spline function) as shown in Fig. 13.
- The virtual curve C(λ) is specified by its peak position λ0, the value w (the value obtained by dividing the sum of λ1 and λ2 by 2), and the value Δw (the value obtained by subtracting λ2 from w), where the values of w and Δw are based on the half width. The values of λ0, w, and Δw are changed, for example, in 5 nm steps.
- the hypothetical curve C ( ⁇ ) is expressed by the following equation (6-1) in each range.
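The idea of this step can be sketched as follows. The actual piecewise cubic-spline coefficients of equation (6-1) are not reproduced here; a raised-cosine bell stands in for the spline as an illustrative smooth candidate curve, scanned in the 5 nm steps mentioned above.

```python
import math

# Sketch of the "virtual curve" idea in step S22: a smooth, bell-shaped
# candidate sensitivity parameterized by its peak position and width.
# The raised-cosine form below is a stand-in for the patent's cubic-spline
# pieces of equation (6-1), whose exact coefficients are not given here.

def virtual_curve(wavelength, peak, half_width):
    """C(lambda): 1.0 at the peak, falling smoothly to 0 at peak +/- 2*half_width."""
    d = abs(wavelength - peak)
    support = 2.0 * half_width
    if d >= support:
        return 0.0
    return 0.5 * (1.0 + math.cos(math.pi * d / support))

# scan candidate peaks every 5 nm, in the 495-535 nm range the text suggests
candidates = [[virtual_curve(w, peak, 40.0) for w in range(400, 701, 5)]
              for peak in range(495, 536, 5)]
print(virtual_curve(515, 515, 40.0))  # 1.0 at the peak
```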
- In step S23, the filter to be added (G2 filter) and the existing filters (R filter, G1 filter, B filter) are combined, and the minimum unit (set) of the four-color color filter is created.
- step S24 UMG is used as a filter evaluation coefficient for the four-color color filter created in step S23, and a UMG value is calculated.
- UMG gives a high evaluation to filter sets whose spectral sensitivity characteristics overlap appropriately. For example, a filter set in which the R characteristics and the G characteristics overlap over a wide wavelength band (a filter that amplifies noise when the color signals are separated) is prevented from being given a high evaluation.
- FIG. 14 is a diagram showing examples of UMG values calculated for three-color filters. A UMG value of "0.7942" is calculated for a filter having the characteristics shown in FIG. 14(A), in which the R and G characteristics overlap over a wide wavelength band, whereas a UMG value of "0.8879" is calculated for a filter having the characteristics shown in FIG. 14(C), in which the respective characteristics of RGB moderately overlap. That is, the highest evaluation is given to the filter having the characteristics shown in FIG. 14(C), in which the respective characteristics of RGB moderately overlap.
- Note that curve L31 shown in FIG. 14(A), curve L41 shown in FIG. 14(B), and curve L51 shown in FIG. 14(C) represent the spectral sensitivity of R; curve L32 shown in FIG. 14(A), curve L42 shown in FIG. 14(B), and curve L52 shown in FIG. 14(C) represent the spectral sensitivity of G; and curve L33 shown in FIG. 14(A), curve L43 shown in FIG. 14(B), and curve L53 shown in FIG. 14(C) represent the spectral sensitivity of B.
- In step S25, it is determined whether or not the UMG value calculated in step S24 is equal to or greater than a predetermined threshold value "0.95". If it is determined to be less than "0.95", the process proceeds to step S26, and the created four-color color filter is rejected (not used). If the four-color color filter is rejected in step S26, the processing is terminated (the processing from step S2 shown in FIG. 11 onward is not executed).
- If it is determined in step S25 that the UMG value calculated in step S24 is equal to or greater than 0.95, then in step S27, the four-color color filter is regarded as a candidate for the filter to be used in the digital camera.
- In step S28, it is determined whether or not the four-color color filter set as the candidate in step S27 can be realized with existing materials and dyes. If it is difficult to obtain the materials, dyes, and the like, it is determined that the filter is not feasible, the process proceeds to step S26, and the four-color color filter is rejected.
- If it is determined in step S28 that the materials, dyes, and the like can be obtained and the filter is feasible, the process proceeds to step S29, where the created four-color color filter is determined as the filter to be used in the digital camera. After that, the processes from step S2 shown in FIG. 11 onward are executed.
- FIG. 15 is a diagram showing an example of the spectral sensitivity characteristics of the four-color color filter determined in step S29.
- the curve L61 represents the spectral sensitivity of R
- the curve L62 represents the spectral sensitivity of G1.
- the curve L63 represents the spectral sensitivity of G2
- the curve L64 represents the spectral sensitivity of B.
- the spectral sensitivity curve of G2 (curve L63) has a high correlation with the spectral sensitivity curve of G1 (curve L62).
- the spectral sensitivity of R, the spectral sensitivity of G (G 1, G 2), and the spectral sensitivity of B overlap each other within an appropriate range.
- the peak value of the spectral sensitivity curve of the filter to be added should be empirically within the range of 495 to 535 nm (near the peak value of the spectral sensitivity curve of the existing G filter).
- Since one of the two G filters that make up the minimum unit (R, G, G, B) shown in Fig. 1 is simply replaced with the additional color filter, a four-color color filter can be created without major changes to the manufacturing process.
- The linear matrix processing unit 95 performs a conversion process that generates signals of three colors (R, G, B) from signals of four colors (R, G1, G2, B). Since this conversion is a matrix process applied to input signal values that are linear in luminance (the luminance value can be represented by a linear transformation), the conversion process performed in the linear matrix processing unit 95 is hereinafter referred to as linear matrix processing.
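A minimal sketch of this 4-to-3 conversion follows; the 3x4 coefficients below are illustrative placeholders, not the patent's matrix.

```python
# Sketch of the linear matrix processing of unit 95: a 3x4 matrix maps the
# four linear color signals (R, G1, G2, B) to three (R, G, B).
# The coefficients below are illustrative, not values from the patent.

LINEAR_M = [
    [1.0,  0.1, -0.1, 0.0],   # R row
    [0.0,  0.5,  0.5, 0.0],   # G row: average the two green channels
    [0.0, -0.1,  0.1, 1.0],   # B row
]

def linear_matrix(r, g1, g2, b):
    signal = (r, g1, g2, b)
    return tuple(sum(m * s for m, s in zip(row, signal)) for row in LINEAR_M)

print(linear_matrix(0.4, 0.6, 0.6, 0.2))
```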
- In step S41, for example, general daylight D65, which is used as a standard light source by the CIE (Commission Internationale de l'Eclairage), is selected as the illumination light L(λ).
- the illumination light may be changed to illumination light in an environment where the image processing apparatus is expected to be frequently used. If there are multiple possible lighting environments, it is conceivable to prepare multiple linear matrices.
- The following describes the case where daylight D65 is selected as the illumination light.
- step S42 reference values (reference values) Xr, Yr, and Zr are calculated. Specifically, the reference value Xr is calculated by equation (7-1), Yr is calculated by equation (7-2), and Zr is calculated by equation (7-3).
- Since the color target is the Macbeth Color Checker, reference values for 24 colors are calculated.
- In step S43, the output values Rf, G1f, G2f, and Bf of the four-color filter are calculated. Specifically, Rf is calculated by equation (9-1), G1f by equation (9-2), G2f by equation (9-3), and Bf by equation (9-4).
- The constant kr is calculated by equation (10-1), the constant kg1 by equation (10-2), the constant kg2 by equation (10-3), and the constant kb by equation (10-4).
- A matrix for performing a conversion that approximates the filter output values calculated in step S43 to the reference values (XYZref) calculated in step S42 is obtained, for example, by the error least squares method in the XYZ color space. The result of the matrix conversion (XYZexp) is represented by equation (12), and the squared error to be minimized is
- E² = Σ (XYZref − XYZexp)² ... (13)
- the color space used in the error least squares method may be changed to a color space other than the XYZ color space. For example, by converting to a Lab, Luv, Lch color space (uniform perceptual color space) that is equivalent to human perception, and performing similar operations, it is possible to reproduce colors with less perceptual error. A matrix can be calculated. Since these color space values are calculated from the XYZ values by a non-linear conversion, a non-linear calculation algorithm is also used in the error least squares method.
- In step S45, the linear matrix is determined. For example, if the final RGB image data to be created is represented by the following equation (15), the linear matrix (LinearM) is calculated as follows.
- RGBout = [Ro, Go, Bo]^T ... (15)
- Equation (17) is calculated using the inverse of the ITU-R BT.709 matrix. Equation (18) is then obtained from the matrix conversion equation (12), equation (15), and equation (17). The right side of equation (18) contains, as the linear matrix, the value obtained by multiplying the inverse of the ITU-R BT.709 matrix by the matrix A described above.
- LinearM: linear matrix
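The least-squares determination of such a matrix can be sketched as follows, assuming made-up patch data; each output channel's row is obtained from the normal equations, in the spirit of minimizing equation (13).

```python
# Hedged sketch of step S44/S45: find a 3x4 matrix A minimizing the squared
# error E^2 = sum ||ref - A x||^2 (equation (13)-style) over a set of color
# patches, solved per output channel via the normal equations
# (X^T X) a = X^T y. All patch data below is invented for illustration.

def solve(mat, vec):
    """Gauss-Jordan elimination with partial pivoting for a small system."""
    n = len(vec)
    m = [row[:] + [v] for row, v in zip(mat, vec)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col and m[r][col]:
                f = m[r][col] / m[col][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def fit_matrix(xs, refs):
    """Least-squares 3x4 matrix: one 4-unknown solve per output channel."""
    xtx = [[sum(x[i] * x[j] for x in xs) for j in range(4)] for i in range(4)]
    rows = []
    for ch in range(3):
        xty = [sum(x[i] * ref[ch] for x, ref in zip(xs, refs)) for i in range(4)]
        rows.append(solve(xtx, xty))
    return rows

# four hypothetical patches: 4-channel filter outputs -> 3-channel references
xs = [(1.0, 0.2, 0.1, 0.0), (0.1, 1.0, 0.3, 0.1),
      (0.0, 0.3, 1.0, 0.2), (0.1, 0.0, 0.2, 1.0)]
refs = [(0.9, 0.2, 0.1), (0.2, 0.8, 0.2), (0.1, 0.7, 0.3), (0.1, 0.1, 0.9)]
A = fit_matrix(xs, refs)
residual = sum((sum(A[c][j] * x[j] for j in range(4)) - ref[c]) ** 2
               for x, ref in zip(xs, refs) for c in range(3))
print(residual)  # ~0: four independent patches are fit exactly
```

In practice there are more patches than matrix unknowns, so the residual stays positive and the normal equations give the least-squares compromise.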
- The color difference in the Lab color space between the output values and the reference values when the Macbeth chart is captured by two types of image input devices (an imaging device provided with a four-color color filter and an imaging device provided with a three-color color filter) is calculated by the following equation (20).
- Here, L1 − L2 is the lightness difference of the two samples, and a1 − a2 and b1 − b2 represent the component differences in hue and saturation of the two samples.
- FIG. 17 is a diagram illustrating the calculation results obtained by equation (20). As shown in FIG. 17, in the case of the imaging device provided with a three-color color filter the color difference is "3.32", whereas in the case of the imaging device provided with a four-color color filter it is "1.39"; the "color appearance" is better (the color difference is smaller) for the imaging device provided with the four-color color filter.
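The color difference of equation (20) is the Euclidean distance in L*a*b* space; a minimal implementation follows (the two sample colors are arbitrary).

```python
import math

# Equation (20): Euclidean color difference in L*a*b* space.

def delta_e(lab1, lab2):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

print(delta_e((50.0, 10.0, -5.0), (52.0, 12.0, -4.0)))  # 3.0
```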
- With the four-color color filter, the R value of the object R1 is "49.4", the G value is "64.1", and the B value is "149.5", while the R value of the object R2 is "66.0", the G value is "63.7", and the B value is "155.6". Therefore, with the four-color color filter, the RGB values of the object R1 and the object R2 are different values, and the color of each object is distinguished as with the human eye. In other words, "color discrimination" is improved by providing a filter that can identify four types of colors.
- the four-color color filter 61 is configured as shown in FIG. 6 such that the B filter is provided on the left and right of the G1 filter and the R filter is provided on the left and right of the G2 filter.
- Alternatively, it may be configured in an array as shown in FIG. 19. In the four-color color filter 61 shown in FIG. 19, R filters are provided on the left and right of the G1 filter, and B filters are provided on the left and right of the G2 filter.
- Depending on the scene or environment to be imaged, image quality may be improved by determining the linear matrix coefficient M with emphasis on noise reduction rather than color reproducibility and performing the linear matrix processing adaptively; conversely, image quality may be improved by determining the linear matrix coefficient M with emphasis on color reproducibility rather than noise reduction and performing the linear matrix processing adaptively.
- Also, since the use of the imaging device differs for each user, a user may want to determine the linear matrix coefficient M arbitrarily.
- Therefore, the linear matrix coefficient M is determined according to the flowchart shown in FIG. 20 in order to address the above.
- First, the chart to be used and the illumination light are determined (step S50), the color reproducibility index ΔE(M) is defined (step S51), and the noise reduction index σN(M) is defined (step S52).
- Next, the evaluation index EEV(M) (Error Evaluation Value) is defined (step S53), and the linear matrix coefficient M is determined on the basis of it (step S54).
- In each step, the coefficients of the evaluation index EEV(M) are adaptively changed according to the imaging conditions and the like, and the corresponding linear matrix coefficient M is determined. The details of each step are described below.
- To determine the linear matrix coefficient M, it is necessary to determine the color chart and the light source that illuminates the color chart.
- As the color chart, various reflection or transmission charts consisting of multiple uniform color patches, such as the Macbeth Color Checker, the Digital Camera Color Checker, and IT8.7, can be considered.
- As the illumination light, light whose spectral distribution is close to that of the environment in which the imaging device is frequently used (for example, a D55 light source) can be considered. Note that the imaging device may be used under various light sources depending on the user's purpose, so the illumination light is not limited to light of a frequently used environment.
- the color reproducibility is defined as a difference between a target color and a color (hereinafter, referred to as an output color) indicated by a signal value subjected to linear matrix processing in the linear matrix processing unit 95 of the imaging device.
- As the color space values, L*a*b* values or L*u*v* values are used.
- Let the target color of the k-th color patch in the color chart be Lab_ref_k (L*ref_k, a*ref_k, b*ref_k), and the output color of the imaging device be Lab_shot_k (L*shot_k, a*shot_k, b*shot_k).
- Then, the color difference ΔEk of this patch is as shown in equation (22).
- As the color reproducibility index ΔE(M), the average ΔE value of the patches in the color chart, a value that emphasizes the color reproducibility of specific colors by weighting each patch, and the like can be considered.
- Here, wk indicates the weighting factor for each patch, and TotalPatchNum indicates the total number of color patches.
- L*a*b*shot_k is a function of the linear matrix coefficient M, and thus ΔEk and ΔE(M) are also functions of M.
- the noise reduction index ⁇ ( ⁇ ) is defined by the standard deviation of the signal values subjected to the linear matrix processing in the linear matrix processing unit 95 of the imaging device.
- There are various signal values, such as RGB values, YCbCr values, and XYZ values, but values in a color space perceived uniformly by the human eye (L*a*b* values, L*u*v* values) are used.
- the noise value is the standard deviation of each component of the color space, corresponding to the color space of the signal value.
- In the RGB space, the noise values are σR, σG, σB, and in the XYZ space, the noise values are σX, σY, σZ.
- these noise values are used to determine one noise index.
- The noise value σNk in the L*a*b* space is expressed as σL*k for lightness noise and σa*k and σb*k for color noise, and σNk is defined taking both lightness and color noise into account, for example as in equation (25).
- Here, wL*k, wa*k, and wb*k indicate weighting coefficients for the respective standard deviation values. These are set appropriately in correlation with the noise perceived by the human eye.
- As the noise value σNk, various other values, such as those using the variance values of other color spaces, can be considered.
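One plausible form of equation (25) is a weighted quadratic combination of the lightness and color noise standard deviations; the exact functional form and the weights in the patent may differ from this sketch.

```python
import math

# Hypothetical sketch of equation (25): combine lightness noise (sigma_L*)
# and the two color-noise standard deviations (sigma_a*, sigma_b*) into one
# noise value. Weights and sample standard deviations are illustrative.

def noise_value(sigma_l, sigma_a, sigma_b, wl=1.0, wa=0.5, wb=0.5):
    return math.sqrt((wl * sigma_l) ** 2 + (wa * sigma_a) ** 2 + (wb * sigma_b) ** 2)

print(noise_value(2.0, 1.0, 1.0))  # lightness noise dominates with these weights
```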
- As the noise reduction index σN(M), the average σN value of the patches in the color chart, a value that places importance on the noise reduction of specific colors by weighting each patch, and the like can be considered.
- EEV(M) = l[ j{wc · h(ΔE(M))} + k{wn · i(σN(M))} ] ... (28), where h, i, j, k, and l indicate functions, wc indicates a weighting coefficient for the color difference, and wn indicates a weighting coefficient for the noise value.
- In step S54, the linear matrix coefficient M is determined by applying the error least squares method to the evaluation index EEV(M) defined in step S53.
- In step S54, wc and wn are appropriately determined, and the linear matrix coefficient M is determined by applying the error least squares method using, for example, the Newton method, the Steepest Descent method, or the Conjugate Gradient method as the regression algorithm.
- In step S54, the weighting coefficient wc for the color difference and the weighting coefficient wn for the noise value of the evaluation index EEV(M) defined in step S53 are set according to the environmental conditions and the like under which the subject is imaged, and the linear matrix coefficient M is determined by the error least squares method.
- Figure 21 shows how the noise reduction index σN(M) changes when the color reproducibility index ΔE(M) is changed.
- In this way, the linear matrix coefficient M is determined adaptively according to various imaging environments and conditions.
- It is also possible to prepare several sets of linear matrix coefficients M in advance, and to have the user select a linear matrix coefficient M as necessary to adjust the color reproducibility index ΔE(M) and the noise reduction index σN(M).
- The following describes a specific example in which the linear matrix coefficient M is determined according to steps S50 to S54 described above.
- a chart and illumination light to be used are determined (step S50).
- The color chart is the Macbeth Color Checker (containing 24 color patches), and the illumination light is a D55 light source (5500 K standard daylight as defined by the CIE).
- Each set of spectral data is assumed to have been measured using, for example, a spectroradiometer.
- Next, the color reproducibility index ΔE(M) is defined (step S51).
- Let the target color be the color as seen by the human eye, and use the color difference ΔE in Lab space as the index.
- The color of an object is defined by the value obtained by integrating, over the visible light region vis (usually 400 nm to 700 nm), the product of the "spectral reflectance of the object", the "spectral energy distribution of the illumination", and the "spectral sensitivity distribution of the sensor that senses the object", as in equation (29).
- Object color = ∫vis (spectral reflectance of object) × (spectral luminance of illumination) × (spectral sensitivity of sensor that senses object) dλ ... (29)
- Using equation (29), the tristimulus values can be expressed as in equation (30).
- The color in the XYZ space is converted to the color in the L*a*b* space using equation (31).
- The raw data RGBXraw_k (Rraw_k, Graw_k, Braw_k, Xraw_k), the signal values output from the CCD image sensor, can be expressed as equation (32) using equations (29). Here, the spectral sensitivity distributions are those of the camera's CCD.
- The imaging apparatus performs linear matrix processing on the raw data RGBXraw_k (Rraw_k, Graw_k, Braw_k, Xraw_k) with the linear matrix coefficient M (m0 to m11) in the linear matrix processing unit 95, so the imaging data after the linear matrix processing is as shown in equation (33).
- Since ΔEk uses the value of L*a*b*cam_k, it is also a function of the linear matrix coefficient M, so it can be expressed as ΔEk(M).
- The color reproducibility index ΔE(M) is defined as the average value of the color differences of the color patches, as shown in equation (37).
- Next, the noise reduction index σN(M) is defined (step S52). Here, the noise reduction index σN(M) is defined based on the lightness noise component σL* included in the signal value after the linear matrix processing by the linear matrix processing unit 95 of the imaging device.
- First, the noise Noise_ccd included in the signal output from the CCD image sensor itself is defined as in equation (38).
- ShotNoiseCoef and DarkNoise are values determined by the device characteristics of the CCD image sensor.
- DarkNoise represents noise components that do not depend on the signal value (such as Fixed Pattern Noise and Sensor Dark Noise), and ShotNoise represents noise components that depend on the signal value (such as Photon Shot Noise).
- The noise component included in the raw data of the k-th color patch of the imaging device to be evaluated is defined as in equation (39).
- The literature (P. D. Burns and R. S. Berns, "Error Propagation Analysis in Color Measurement and Imaging", Color Research and Application, 1997) describes the following noise propagation theory.
- Taking the covariance components (i.e., the off-diagonal components) into account, the variance-covariance matrix Σy of the output signal y is defined as in equation (42).
- Equation (42) is the theoretical equation for the propagation of noise variance between color spaces related by a linear conversion.
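Equation (42) can be checked with a tiny dependency-free example: for a linear map y = Ax, the output covariance is Σy = A Σx Aᵀ. The 3x4 matrix and input variances below are made up for illustration.

```python
# Equation (42): for y = A x, Sigma_y = A Sigma_x A^T. A minimal numeric
# check with an illustrative 3x4 linear matrix and an uncorrelated (diagonal)
# input covariance (equal variance 0.04 per channel).

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(col) for col in zip(*a)]

A = [[1.0,  0.1, -0.1, 0.0],
     [0.0,  0.5,  0.5, 0.0],
     [0.0, -0.1,  0.1, 1.0]]
sigma_x = [[0.04 if i == j else 0.0 for j in range(4)] for i in range(4)]

sigma_y = matmul(matmul(A, sigma_x), transpose(A))
print(sigma_y[1][1])  # G output variance: (0.5**2 + 0.5**2) * 0.04 = 0.02
```

Note how the G row, which averages two channels, halves the variance relative to a pass-through channel; this is exactly the noise behavior the matrix coefficients control.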
- The RGB→XYZ conversion can be performed as a linear conversion using the BT.709-based matrix M709 represented by equation (34), whereas the XYZ→L*a*b* conversion must be performed as the nonlinear conversion shown in equation (31).
- If the value obtained by converting the signal value after the linear matrix processing into an XYZ value is XYZcam_k (Xcam_k, Ycam_k, Zcam_k), it can be expressed as equation (43).
- Equation (48) can be derived from equation (47).
- Since equation (48) is a function of the linear matrix coefficient M, it can be expressed as σL*k(M). Since the noise reduction index σN(M) is the average value of the lightness noise of the color patches, it can be defined as in equation (49).
- An evaluation index EEV(M) taking into account the color reproducibility index ΔE(M) and the noise reduction index σN(M) defined above is defined as in equation (50) (step S53).
- EEV(M) = (wc · ΔE(M))² + (wn · σN(M))² ... (50)
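Equation (50) is easy to sketch directly; wc and wn trade color fidelity against noise (the numeric values below are arbitrary).

```python
# Equation (50): combine the color reproducibility index and the noise
# reduction index into one scalar objective. wc weights color difference,
# wn weights noise; sample inputs are arbitrary.

def eev(delta_e_avg, sigma_n_avg, wc=1.0, wn=1.0):
    return (wc * delta_e_avg) ** 2 + (wn * sigma_n_avg) ** 2

print(eev(2.0, 1.5))           # 6.25
print(eev(2.0, 1.5, wn=4.0))   # noise-weighted: 40.0
```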
- Equation (51) is solved using the error least squares method, and the linear matrix coefficient M is determined as in equation (52) (step S54).
- Compared with equation (52), the matrix of equation (53) has larger differences between its coefficients and is a matrix that increases noise more.
- the imaging device amplifies or attenuates the signal input from the CCD image sensor (hereinafter referred to as the input signal) based on the ISO sensitivity setting.
- When the ISO sensitivity setting of the imaging device is changed from ISO100 to ISO200, for example, the input signal is amplified to twice the level at ISO100. However, if the imaging device performs linear matrix processing using the same linear matrix coefficient M for all input signals regardless of the ISO sensitivity setting, the noise components included in the input signal are amplified together with the input signal when a high ISO sensitivity is set. Therefore, even if an attempt is made to obtain a high-sensitivity image by raising the ISO sensitivity setting, an image containing amplified noise components is generated.
- Therefore, the linear matrix coefficient M is determined in consideration of the noise components contained in the input signal, which are amplified or attenuated based on the ISO sensitivity setting, and linear matrix processing is performed using that M. For example, as shown in Table 1, the weighting factor for noise reduction (wn) is changed according to the ISO sensitivity setting, and the linear matrix coefficient M for each ISO sensitivity setting is determined by substituting wc and wn into equation (50). The imaging apparatus can thus perform linear matrix processing using a linear matrix coefficient M determined based on the ISO sensitivity setting, so the noise components are not amplified and a high-sensitivity image can be obtained.
- Table 1 shows the weighting factor for noise reduction (wn) for each ISO sensitivity setting.
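The per-ISO lookup can be sketched as follows; the wn values are illustrative placeholders, not Table 1's actual entries.

```python
# Sketch of the ISO-adaptive idea: one noise weighting (and hence one
# precomputed linear matrix) per ISO setting. The weights below are
# hypothetical, not the values of Table 1.

WN_BY_ISO = {100: 1.0, 200: 1.5, 400: 2.0, 800: 3.0}

def noise_weight(iso):
    """Pick the noise weighting for the nearest configured ISO at or below."""
    eligible = [s for s in WN_BY_ISO if s <= iso]
    return WN_BY_ISO[max(eligible)] if eligible else WN_BY_ISO[100]

print(noise_weight(200))  # 1.5
print(noise_weight(640))  # falls back to the ISO 400 entry: 2.0
```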
- The linear matrix coefficient M can also be adaptively determined based on the environment in which the imaging device images the subject. For example, when a subject such as a night scene is imaged, most of the generated image may be occupied by dark portions, in which noise is very conspicuous. In such a case, it is better to prioritize noise reduction over color reproducibility.
- Therefore, the imaging apparatus determines the linear matrix coefficient M in consideration of noise reduction and color reproducibility based on the scene in which the subject is imaged, and performs linear matrix processing using that M. For example, as shown in the histogram of FIG. 23, when the area below half of the luminance dynamic range of the imaging device contains 70% or more of the luminance components of the generated image, the linear matrix coefficient M is determined with emphasis on noise reduction; in other cases, the linear matrix coefficient M is determined in consideration of both color reproducibility and noise reduction.
- For example, as shown in Table 2, the weighting factor for noise reduction (wn) is changed according to the imaging scene, and the linear matrix coefficient M for each imaging scene is determined by substituting wc and wn into equation (50). Since the imaging device can perform linear matrix processing using a linear matrix coefficient M determined based on the scene, noise components can be made inconspicuous even when most of the generated image is occupied by dark areas.
- Table 2
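The dark-scene test described above (70% or more of the luminance components below half of the dynamic range) can be sketched as follows; the sample histograms are made up.

```python
# Scene classification per the text: if at least 70% of the pixels fall in
# the lower half of the luminance dynamic range, treat the frame as a dark
# scene and select the noise-reduction-weighted linear matrix.

def is_dark_scene(luma_values, full_scale=255, ratio=0.7):
    dark = sum(1 for v in luma_values if v < full_scale / 2)
    return dark / len(luma_values) >= ratio

night = [20] * 80 + [200] * 20   # 80% dark pixels
day = [180] * 90 + [40] * 10     # 10% dark pixels
print(is_dark_scene(night))  # True  -> noise-reduction-weighted matrix
print(is_dark_scene(day))    # False -> balanced matrix
```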
- the linear matrix coefficient M is adaptively determined based on a request of a user who uses the imaging apparatus.
- Depending on the user's intended use, an image generated by photographing a subject with the imaging device may be required to have low noise rather than high color reproducibility.
- the intended use is unknown to the imaging device manufacturer, and is a fact known only to users.
- the linear matrix coefficient M is determined based on the condition intended by the user, and linear matrix processing is performed using the linear matrix coefficient M.
- For example, as shown in Table 3, the weighting factor for noise reduction (wn) is changed according to a noise amount adjustment variable, and the linear matrix coefficient M for each noise amount adjustment variable is determined in advance by substituting wc and wn into equation (50) and stored. Linear matrix processing is then performed using the linear matrix coefficient M corresponding to the variable selected by the user. Since the imaging apparatus can perform linear matrix processing using a linear matrix coefficient M determined according to the user's request, noise amount adjustment suited to the user's usage is possible.
- Table 3
- the four-color filter 61 formed in the front part of the image sensor 45 is determined according to the flowchart shown in FIG. 12.
- The signal processing unit 71 performs matrix processing on signals (R, G1, G2, B) whose luminance can be represented by a linear transformation. Compared with performing matrix processing on signals obtained after gamma processing, as in the signal processing unit 11 shown in FIG. 2, colors can be reproduced more faithfully in terms of color engineering.
- Since the linear matrix coefficient M is determined according to the imaging conditions and the like, it is possible to improve both color reproducibility and noise resistance.
- An image pickup apparatus includes an adjustment unit that adjusts a color reproduction value representing color reproduction faithful to the appearance of the human eye and a noise value representing noise perceived by humans, and a matrix conversion processing unit that performs matrix conversion processing, based on a matrix coefficient determined from the adjusted values, on an image captured by an image sensor unit included in the imaging apparatus.
- the linear matrix coefficient M can be determined adaptively according to the imaging environment and conditions, and linear matrix processing can be performed using the linear matrix coefficient M.
- The imaging method likewise uses color filters having different spectral characteristics, and adjusts a color reproduction value representing color reproduction faithful to the appearance of the human eye and a noise value representing the noise perceived by a human.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Color Television Image Signal Generators (AREA)
- Facsimile Image Signal Circuits (AREA)
- Image Processing (AREA)
- Color Image Communication Systems (AREA)
- Processing Of Color Television Signals (AREA)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP03813977A EP1578139A4 (en) | 2002-12-25 | 2003-12-02 | PICTURE DEVICE AND METHOD |
US10/540,404 US7489346B2 (en) | 2002-12-25 | 2003-12-02 | Image pick-up device and image pick-up method adapted with image pick-up sensitivity |
CN2003801076614A CN1732694B (zh) | 2002-12-25 | 2003-12-02 | 图像拾取装置和方法 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2002-375423 | 2002-12-25 | ||
JP2002375423A JP3861808B2 (ja) | 2002-12-25 | 2002-12-25 | 撮像装置 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2004059987A1 true WO2004059987A1 (ja) | 2004-07-15 |
Family
ID=32677336
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2003/015437 WO2004059987A1 (ja) | 2002-12-25 | 2003-12-02 | 撮像装置及び方法 |
Country Status (7)
Country | Link |
---|---|
US (1) | US7489346B2 (ja) |
EP (1) | EP1578139A4 (ja) |
JP (1) | JP3861808B2 (ja) |
KR (1) | KR20050088335A (ja) |
CN (1) | CN1732694B (ja) |
TW (1) | TWI225940B (ja) |
WO (1) | WO2004059987A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2464125A1 (en) * | 2004-12-17 | 2012-06-13 | Nikon Corporation | Image processing method |
Families Citing this family (37)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20060164533A1 (en) * | 2002-08-27 | 2006-07-27 | E-Phocus, Inc | Electronic image sensor |
JP4501634B2 (ja) | 2004-10-29 | 2010-07-14 | 富士フイルム株式会社 | マトリクス係数決定方法及び画像入力装置 |
US7864235B2 (en) | 2005-03-30 | 2011-01-04 | Hoya Corporation | Imaging device and imaging method including generation of primary color signals |
JP4859502B2 (ja) * | 2005-03-30 | 2012-01-25 | Hoya株式会社 | 撮像装置 |
JP4293174B2 (ja) * | 2005-09-28 | 2009-07-08 | ソニー株式会社 | 撮像装置および画像処理装置 |
JP2007103401A (ja) * | 2005-09-30 | 2007-04-19 | Matsushita Electric Ind Co Ltd | 撮像装置及び画像処理装置 |
KR100721543B1 (ko) * | 2005-10-11 | 2007-05-23 | (주) 넥스트칩 | 통계적 정보를 이용하여 노이즈를 제거하는 영상 처리 방법및 시스템 |
KR101147084B1 (ko) | 2005-12-20 | 2012-05-17 | 엘지디스플레이 주식회사 | 액정 표시장치의 구동장치 및 구동방법 |
JP4983093B2 (ja) * | 2006-05-15 | 2012-07-25 | ソニー株式会社 | 撮像装置および方法 |
TWI400667B (zh) * | 2006-09-06 | 2013-07-01 | Magic Pixel Inc | 色彩補插方法及應用其之影像處理裝置 |
JP4874752B2 (ja) | 2006-09-27 | 2012-02-15 | Hoya株式会社 | デジタルカメラ |
TWI318536B (en) * | 2006-09-29 | 2009-12-11 | Sony Taiwan Ltd | A method of color matching applied on the image apparatus |
US7847830B2 (en) * | 2006-11-21 | 2010-12-07 | Sony Ericsson Mobile Communications Ab | System and method for camera metering based on flesh tone detection |
US8532428B2 (en) * | 2007-02-27 | 2013-09-10 | Nec Corporation | Noise reducing apparatus, noise reducing method, and noise reducing program for improving image quality |
US8711249B2 (en) | 2007-03-29 | 2014-04-29 | Sony Corporation | Method of and apparatus for image denoising |
US8108211B2 (en) | 2007-03-29 | 2012-01-31 | Sony Corporation | Method of and apparatus for analyzing noise in a signal processing system |
US9285670B2 (en) * | 2007-09-14 | 2016-03-15 | Capso Vision, Inc. | Data communication between capsulated camera and its external environments |
JP5080934B2 (ja) * | 2007-10-22 | 2012-11-21 | Canon Inc. | Image processing apparatus and method, and imaging apparatus |
JP5178226B2 (ja) * | 2008-02-08 | 2013-04-10 | Olympus Corporation | Image processing apparatus and image processing program |
JP4735994B2 (ja) * | 2008-08-27 | 2011-07-27 | Sony Corporation | Imaging apparatus and method, program, and recording medium |
US20110075935A1 (en) * | 2009-09-25 | 2011-03-31 | Sony Corporation | Method to measure local image similarity based on the l1 distance measure |
KR101129220B1 (ko) * | 2009-11-03 | 2012-03-26 | Chung-Ang University Industry-Academic Cooperation Foundation | Apparatus and method for removing noise from range images |
JP5460276B2 (ja) * | 2009-12-04 | 2014-04-02 | Canon Inc. | Imaging apparatus and imaging system |
JP5308375B2 (ja) * | 2010-02-25 | 2013-10-09 | Sharp Corporation | Signal processing apparatus, solid-state imaging apparatus, electronic information device, signal processing method, control program, and recording medium |
JP2012010276A (ja) * | 2010-06-28 | 2012-01-12 | Sony Corp | Image processing apparatus, image processing method, and image processing program |
US8731281B2 (en) * | 2011-03-29 | 2014-05-20 | Sony Corporation | Wavelet transform on incomplete image data and its applications in image processing |
JP2012253727A (ja) * | 2011-06-07 | 2012-12-20 | Toshiba Corp | Solid-state imaging apparatus and camera module |
JP5954661B2 (ja) * | 2011-08-26 | 2016-07-20 | Panasonic Intellectual Property Management Co., Ltd. | Imaging element and imaging apparatus |
US8780225B2 (en) * | 2011-10-12 | 2014-07-15 | Apple Inc. | Use of noise-optimized selection criteria to calculate scene white points |
US8654210B2 (en) | 2011-10-13 | 2014-02-18 | Canon Kabushiki Kaisha | Adaptive color imaging |
JP5895569B2 (ja) * | 2012-02-08 | 2016-03-30 | Sony Corporation | Information processing apparatus, information processing method, and computer program |
US9531922B2 (en) * | 2013-10-30 | 2016-12-27 | Canon Kabushiki Kaisha | Image processing apparatus, image processing method, and computer readable storage medium for improving sharpness |
JP2015180045A (ja) * | 2014-02-26 | 2015-10-08 | Canon Inc. | Image processing apparatus, image processing method, and program |
JP6013382B2 (ja) * | 2014-02-27 | 2016-10-25 | Fujifilm Corporation | Endoscope system and operating method therefor |
CN104902153B (zh) * | 2015-05-25 | 2017-11-28 | Beijing Institute of Space Mechanics and Electricity | Color correction method for a multispectral camera |
CN114009148B (zh) * | 2019-04-30 | 2024-05-14 | Signify Holding B.V. | Luminance distribution determination |
EP3896967A1 (en) | 2020-04-17 | 2021-10-20 | Leica Microsystems CMS GmbH | Digital imaging device and method for generating a digital color image |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001251644A (ja) * | 2000-03-03 | 2001-09-14 | Mitsubishi Electric Corp | Imaging apparatus |
JP2001359114A (ja) * | 2000-06-09 | 2001-12-26 | Fuji Photo Film Co Ltd | Image acquisition apparatus and method using a solid-state imaging element, and recording medium storing a program for executing the method |
JP2003116146A (ja) * | 2001-08-10 | 2003-04-18 | Agilent Technol Inc | Method for improving image quality in a digital camera |
JP2003235050A (ja) * | 2002-02-12 | 2003-08-22 | Nikon Corp | Image processing apparatus, image processing program, and image processing method |
JP2003284084A (ja) * | 2002-03-20 | 2003-10-03 | Sony Corp | Image processing apparatus and method, and method of manufacturing an image processing apparatus |
JP2003284082A (ja) * | 2002-02-21 | 2003-10-03 | Eastman Kodak Co | Apparatus and method for accurate electronic color capture and reproduction |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0815338B2 (ja) * | 1984-08-13 | 1996-02-14 | Hitachi, Ltd. | Color video camera |
JP3263924B2 (ja) * | 1990-09-14 | 2002-03-11 | Sony Corporation | Color imaging apparatus |
US5668596A (en) * | 1996-02-29 | 1997-09-16 | Eastman Kodak Company | Digital imaging device optimized for color performance |
JP3750830B2 (ja) * | 1996-08-30 | 2006-03-01 | Sony Corporation | Color correction apparatus in an imaging device |
JP4534340B2 (ja) * | 2000-10-31 | 2010-09-01 | Sony Corporation | Color reproduction correction apparatus |
2002
- 2002-12-25 JP JP2002375423A patent/JP3861808B2/ja not_active Expired - Lifetime

2003
- 2003-12-02 US US10/540,404 patent/US7489346B2/en not_active Expired - Fee Related
- 2003-12-02 CN CN2003801076614A patent/CN1732694B/zh not_active Expired - Fee Related
- 2003-12-02 EP EP03813977A patent/EP1578139A4/en not_active Ceased
- 2003-12-02 WO PCT/JP2003/015437 patent/WO2004059987A1/ja active Application Filing
- 2003-12-02 KR KR1020057012037A patent/KR20050088335A/ko not_active Application Discontinuation
- 2003-12-05 TW TW092134391A patent/TWI225940B/zh not_active IP Right Cessation
Non-Patent Citations (1)
Title |
---|
See also references of EP1578139A4 * |
Also Published As
Publication number | Publication date |
---|---|
US7489346B2 (en) | 2009-02-10 |
JP3861808B2 (ja) | 2006-12-27 |
EP1578139A4 (en) | 2006-05-03 |
TW200424569A (en) | 2004-11-16 |
TWI225940B (en) | 2005-01-01 |
CN1732694B (zh) | 2012-11-14 |
US20060082665A1 (en) | 2006-04-20 |
EP1578139A1 (en) | 2005-09-21 |
KR20050088335A (ko) | 2005-09-05 |
CN1732694A (zh) | 2006-02-08 |
JP2004208079A (ja) | 2004-07-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2004059987A1 (ja) | Imaging apparatus and method | |
WO2003079696A1 (fr) | Image processing apparatus and method, and method for manufacturing the apparatus | |
US7663668B2 (en) | Imaging device | |
WO2010053029A1 (ja) | Image input device | |
EP1930853A1 (en) | Image signal processing apparatus and image signal processing | |
JP2006094112A (ja) | Imaging apparatus | |
JP2009105576A (ja) | Image processing apparatus and method, and imaging apparatus | |
EP1343331A2 (en) | Apparatus and method for accurate electronic color image capture and reproduction | |
JP4677699B2 (ja) | Image processing method, image processing apparatus, imaging apparatus evaluation method, image information storage method, and image processing system | |
WO2007148576A1 (ja) | Imaging system and imaging program | |
JP2005278213A (ja) | Manufacturing method | |
JP3863773B2 (ja) | Image capturing method and apparatus | |
JP2000278707A (ja) | Image data conversion method, image data construction method, image data conversion apparatus, and recording medium storing a program for converting image data | |
JPH11113005A (ja) | Imaging apparatus | |
JP3966868B2 (ja) | Imaging apparatus, camera, and signal processing method | |
JP3933651B2 (ja) | Imaging apparatus and signal processing method therefor | |
JP4298595B2 (ja) | Imaging apparatus and signal processing method therefor | |
JP2008085975A (ja) | Imaging element characteristic evaluation method and imaging element characteristic evaluation apparatus | |
JP2014042138A (ja) | Image processing apparatus, computer program, and digital camera | |
JP2007097202A (ja) | Image processing apparatus | |
KR100322189B1 (ko) | Method for estimating the spectral distribution of the light source of a digital image | |
JP4397724B2 (ja) | Imaging apparatus, camera, and signal processing method | |
JP2001057680A (ja) | White balance adjustment method and apparatus, and recording medium | |
JP5056006B2 (ja) | Imaging apparatus and program | |
JP2007267404A (ja) | Method of manufacturing an image processing apparatus |
Legal Events
Code | Title | Description |
---|---|---|
AK | Designated states | Kind code of ref document: A1; Designated state(s): CN KR US |
AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR |
121 | EP: the EPO has been informed by WIPO that EP was designated in this application | |
WWE | WIPO information: entry into national phase | Ref document number: 2003813977; Country of ref document: EP |
ENP | Entry into the national phase | Ref document number: 2006082665; Country of ref document: US; Kind code of ref document: A1 |
WWE | WIPO information: entry into national phase | Ref document number: 10540404; Country of ref document: US |
WWE | WIPO information: entry into national phase | Ref document number: 1020057012037; Country of ref document: KR |
WWE | WIPO information: entry into national phase | Ref document number: 20038A76614; Country of ref document: CN |
WWP | WIPO information: published in national office | Ref document number: 1020057012037; Country of ref document: KR |
WWP | WIPO information: published in national office | Ref document number: 2003813977; Country of ref document: EP |
WWP | WIPO information: published in national office | Ref document number: 10540404; Country of ref document: US |