US20090219416A1 - Image processing system and recording medium recording image processing program - Google Patents
- Publication number
- US20090219416A1 (U.S. application Ser. No. 12/400,028)
- Authority
- US
- United States
- Legal status: Granted (the status listed is an assumption by Google Patents, not a legal conclusion)
Classifications
- H04N9/646—Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
- G06T5/10—Image enhancement or restoration by non-spatial domain filtering
- G06T5/40—Image enhancement or restoration by the use of histogram techniques
- G06T5/92—
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/81—Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
- H04N25/136—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on four or more different wavelength filter elements using complementary colours
- H04N5/205—Circuitry for controlling amplitude response for correcting amplitude versus frequency characteristic
- G06T2207/20064—Wavelet transform [DWT]
Definitions
- the present invention relates to an image processing system arranged to perform a gradation conversion on an image signal and a recording medium recording an image processing program for performing the gradation conversion on the image signal.
- As gradation conversion techniques, a space-invariant method using a single gradation conversion curve for the whole image signal and a space-variant method using a plurality of gradation conversion curves that differ for each local region have been proposed.
- Japanese Patent No. 3465226 discloses a technology for dividing the image signal into a plurality of regions on the basis of texture information, performing a gradation conversion processing by calculating a gradation conversion curve for each region on the basis of a histogram, and performing a weighting interpolation on the basis of the distances between the respective regions.
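The scheme summarized above can be sketched as follows: a gradation curve for each region built from its cumulative histogram, and a distance-weighted interpolation between neighboring regions' curves. This is an illustrative reconstruction (8-bit levels and two-region blending are assumed for brevity), not the patented algorithm itself.

```python
import numpy as np

def region_curve(block, bits=8):
    """Gradation conversion curve for one region from its cumulative
    histogram (histogram equalization), as a level -> level lookup table."""
    levels = 1 << bits
    hist = np.bincount(block.ravel().astype(int), minlength=levels)
    cdf = np.cumsum(hist).astype(float)
    return (levels - 1) * cdf / cdf[-1]

def blend_two_regions(pixel, curve_a, curve_b, d_a, d_b):
    """Weighting interpolation between two regions' curves: the region
    closer to the target pixel receives the larger weight."""
    w_a = d_b / (d_a + d_b)
    w_b = d_a / (d_a + d_b)
    return w_a * curve_a[pixel] + w_b * curve_b[pixel]
```

In the patent's first embodiment (FIG. 8), the same weighting idea is extended to the four regions neighboring the target pixel.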
- Japanese Unexamined Patent Application Publication No. 8-56316 discloses a technology for separating the image signal into a high frequency component and a low frequency component, performing a contrast emphasis processing on the low frequency component, and synthesizing the low frequency component after the contrast emphasis processing with the high frequency component.
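A minimal sketch of the separation/emphasis/synthesis idea in the cited publication, assuming a box blur as the low-pass split and a simple gain-about-the-mean contrast stretch (both are illustrative choices, not the publication's actual filters):

```python
import numpy as np

def contrast_emphasis(img, kernel=5, gain=1.5):
    """Split into low/high frequency components, emphasize contrast on the
    low frequency component only, then synthesize by adding the high
    frequency component back."""
    x = np.asarray(img, dtype=float)
    pad = kernel // 2
    p = np.pad(x, pad, mode='edge')
    low = np.zeros_like(x)
    for dy in range(kernel):          # box blur = crude low-pass filter
        for dx in range(kernel):
            low += p[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    low /= kernel * kernel
    high = x - low
    low = (low - low.mean()) * gain + low.mean()  # contrast emphasis
    return low + high
```

Because the high frequency component bypasses the emphasis, edges are preserved while the overall tonal distribution is stretched.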
- Japanese Unexamined Patent Application Publication No. 2004-128985 discloses a technology for estimating a noise amount for each block unit on the basis of a noise model and performing a different noise reduction processing for each block unit.
- By employing such a technology, it is possible to perform a space-variant noise reduction processing and to obtain high quality image signals in which degradation of the edge component is small.
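A hedged sketch of the cited block-wise approach: estimate a noise amount from a signal-dependent noise model, then apply a different (here, coring-based) noise reduction per block. The model form and the coefficients `a` and `b` are hypothetical stand-ins for sensor-calibrated values.

```python
import numpy as np

def noise_amount(mean_level, gain=1.0, a=0.02, b=1.5):
    """Hypothetical noise model: noise standard deviation grows with the
    signal level and the amplifier gain (coefficients are placeholders)."""
    return gain * np.sqrt(a * mean_level + b)

def denoise_block(block, gain=1.0):
    """Space-variant noise reduction for one block: core deviations from
    the block mean by the locally estimated noise amount."""
    m = block.mean()
    sigma = noise_amount(m, gain)
    dev = block - m
    cored = np.sign(dev) * np.maximum(np.abs(dev) - sigma, 0.0)
    return m + cored
```

Because sigma is recomputed per block from the local mean, bright and dark regions receive different amounts of smoothing, which is what makes the processing space-variant.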
- an image processing system arranged to perform a gradation conversion on an image signal
- the image processing system including: separation means adapted to separate the image signal into an invalid component caused by noise and other valid component; conversion means adapted to perform the gradation conversion on the valid component; and synthesis means adapted to synthesize an image signal on which the gradation conversion has been performed on the basis of the valid component on which the gradation conversion has been performed and the invalid component.
- a recording medium recording an image processing program for instructing a computer to perform a gradation conversion on an image signal, the image processing program instructing the computer to execute: a separation step of separating the image signal into an invalid component caused by noise and other valid component; a conversion step of performing the gradation conversion on the valid component; and a synthesis step of synthesizing an image signal on which the gradation conversion has been performed on the basis of the valid component on which the gradation conversion has been performed and the invalid component.
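The separation/conversion/synthesis pipeline claimed above can be sketched compactly: split the high frequency component into valid and invalid parts by a noise threshold, apply the gradation conversion (here a hypothetical gamma curve, expressed as a per-pixel gain) to the valid part and the low frequency component, and carry the invalid part through unchanged. The threshold rule and the curve are illustrative assumptions, not the patented method.

```python
import numpy as np

def separate(high_freq, noise_sigma):
    """Split high frequency coefficients into a valid (signal) component and
    an invalid (noise) component by a magnitude threshold."""
    mask = np.abs(high_freq) > noise_sigma
    valid = np.where(mask, high_freq, 0.0)
    invalid = high_freq - valid   # separation is exact: valid + invalid == input
    return valid, invalid

def gradation_gain(low_freq, gamma=0.5, peak=255.0):
    """Hypothetical gradation conversion (gamma curve), expressed as a
    per-pixel multiplicative gain so it can also scale the valid component."""
    g = peak * (low_freq / peak) ** gamma
    return np.divide(g, low_freq, out=np.ones_like(g), where=low_freq > 0)

def convert_and_prepare(low, high, noise_sigma):
    """Gradation-convert the low frequency and valid high frequency
    components; leave the invalid component untouched so the noise it
    carries is not amplified by the conversion."""
    valid, invalid = separate(high, noise_sigma)
    gain = gradation_gain(low)
    return low * gain, valid * gain + invalid
```

Keeping the invalid component out of the conversion is the point of the claim: the gradation curve boosts signal detail without boosting the noise floor.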
- FIG. 1 is a block diagram of a configuration of an image processing system according to a first embodiment of the present invention
- FIG. 2 is a block diagram of a configuration example of a frequency decomposition unit according to the first embodiment
- FIG. 3A is an explanatory diagram for describing a wavelet transform, illustrating an image signal in a real space according to the first embodiment
- FIG. 3B is an explanatory diagram for describing the wavelet transform, illustrating the signal after the first wavelet transform has been performed according to the first embodiment
- FIG. 3C is an explanatory diagram for describing the wavelet transform, illustrating the signal after the second wavelet transform has been performed according to the first embodiment
- FIG. 4 is a block diagram of a configuration example of a conversion characteristic calculation unit according to the first embodiment
- FIG. 5 is a block diagram of a configuration example of a high frequency separation unit according to the first embodiment
- FIG. 6 is a block diagram of a configuration example of a gradation processing unit according to the first embodiment
- FIG. 7 is an explanatory diagram for describing a division into regions of a low frequency component in a synthesis operation for gradation conversion curves according to the first embodiment
- FIG. 8 is an explanatory diagram for describing distances d 1 to d 4 between a target pixel and neighboring four regions in the synthesis operation for gradation conversion curves according to the first embodiment
- FIG. 9 is a block diagram of a configuration example of a frequency synthesis unit according to the first embodiment.
- FIG. 10 is a diagram illustrating another configuration example of the image processing system according to the first embodiment.
- FIG. 11 is a flow chart showing a main routine of an image processing program according to the first embodiment
- FIG. 12 is a flow chart showing a processing for a conversion characteristic calculation in step S 3 of FIG. 11 according to the first embodiment
- FIG. 13 is a flow chart showing a processing for a high frequency separation in step S 4 of FIG. 11 according to the first embodiment
- FIG. 14 is a flow chart showing a gradation processing in step S 5 of FIG. 11 according to the first embodiment
- FIG. 15 is a block diagram of a configuration of an image processing system according to a second embodiment of the present invention.
- FIG. 16 is a diagram illustrating a configuration of a Bayer-type primary color filter according to the second embodiment
- FIG. 17 is a diagram illustrating a configuration of a color-difference line-sequential type complementary color filter according to the second embodiment
- FIG. 18 is a block diagram of a configuration example of a frequency decomposition unit according to the second embodiment.
- FIG. 19 is a block diagram of a configuration example of a conversion characteristic calculation unit according to the second embodiment;
- FIG. 20 is a block diagram of a configuration example of a high frequency separation unit according to the second embodiment.
- FIG. 21 is a block diagram of a configuration example of a gradation processing unit according to the second embodiment.
- FIG. 22 is a flow chart showing a main routine of an image processing program according to the second embodiment.
- FIG. 23 is a flow chart showing a processing for a conversion characteristic calculation in step S 51 of FIG. 22 according to the second embodiment
- FIG. 24 is a flow chart showing a processing for a high frequency separation in step S 52 of FIG. 22 according to the second embodiment
- FIG. 25 is a flow chart showing a gradation processing in step S 53 of FIG. 22 according to the second embodiment
- FIG. 26 is a block diagram of a configuration of an image processing system according to a third embodiment of the present invention.
- FIG. 27A is an explanatory diagram for describing a DCT (discrete cosine transform), illustrating an image signal in a real space according to the third embodiment
- FIG. 27B is an explanatory diagram for describing the DCT (discrete cosine transform), illustrating a signal in a frequency space after the DCT transform according to the third embodiment
- FIG. 28 is a block diagram of a configuration example of a high frequency separation unit according to the third embodiment.
- FIG. 29 is a flow chart showing a main routine of an image processing program according to the third embodiment.
- FIG. 30 is a flow chart showing a processing for a high frequency separation in step S 80 of FIG. 29 according to the third embodiment
- FIG. 31 is a block diagram of a configuration of an image processing system according to a fourth embodiment of the present invention.
- FIG. 32 is a block diagram of a configuration example of a noise reducing unit according to the fourth embodiment.
- FIG. 33 is a block diagram of a configuration example of a gradation processing unit according to the fourth embodiment.
- FIG. 34 is a flow chart showing a main routine of an image processing program according to the fourth embodiment.
- FIG. 35 is a flow chart showing a processing for a noise reduction in step S 100 of FIG. 34 according to the fourth embodiment.
- FIG. 36 is a flow chart showing a gradation processing in step S 102 of FIG. 34 according to the fourth embodiment.
- FIG. 1 to FIG. 14 illustrate a first embodiment of the present invention
- FIG. 1 is a block diagram of a configuration of an image processing system.
- the image processing system illustrated in FIG. 1 is an example constituted as an image pickup system including an image pickup unit.
- the image processing system includes a lens system 100 , an aperture 101 , a CCD 102 , an amplification unit 103 , an A/D conversion unit (in the drawing, which is simply referred to as “A/D”) 104 , a buffer 105 , an exposure control unit 106 , a focus control unit 107 , an AF motor 108 , a frequency decomposition unit 109 constituting separation means and frequency decomposition means, a buffer 110 , a conversion characteristic calculation unit 111 constituting conversion means and conversion characteristic calculation means, a high frequency separation unit 112 constituting separation means and high frequency separation means, a gradation processing unit 113 constituting conversion means and gradation processing means, a buffer 114 , a frequency synthesis unit 115 constituting synthesis means and frequency synthesis means, a signal processing unit 116 , an output unit 117 , a control unit 118 constituting control means and doubling as noise estimation means and collection means, an external I/F unit 119 , and a temperature sensor 120 .
- An analog image signal captured and output via the lens system 100 , the aperture 101 , and the CCD 102 is amplified by the amplification unit 103 and converted into a digital signal by the A/D conversion unit 104 .
- the image signal from the A/D conversion unit 104 is transferred via the buffer 105 to the frequency decomposition unit 109 .
- the buffer 105 is connected to the exposure control unit 106 and also to the focus control unit 107 .
- the exposure control unit 106 is connected to the aperture 101 , the CCD 102 , and the amplification unit 103 . Also, the focus control unit 107 is connected to the AF motor 108 .
- the signal from the frequency decomposition unit 109 is connected to the buffer 110 .
- the buffer 110 is connected to the conversion characteristic calculation unit 111 , the high frequency separation unit 112 , and the gradation processing unit 113 .
- the conversion characteristic calculation unit 111 is connected to the gradation processing unit 113 .
- the high frequency separation unit 112 is connected to the gradation processing unit 113 and the buffer 114 .
- the gradation processing unit 113 is connected to the buffer 114 .
- the buffer 114 is connected via the frequency synthesis unit 115 and the signal processing unit 116 to the output unit 117 such as a memory card.
- the control unit 118 is composed, for example, of a micro computer.
- the control unit 118 is bi-directionally connected to the amplification unit 103 , the A/D conversion unit 104 , the exposure control unit 106 , the focus control unit 107 , the frequency decomposition unit 109 , the conversion characteristic calculation unit 111 , the high frequency separation unit 112 , the gradation processing unit 113 , the frequency synthesis unit 115 , the signal processing unit 116 , and the output unit 117 , and is configured to control these units.
- the external I/F unit 119 is also bi-directionally connected to the control unit 118 .
- the external I/F unit 119 is an interface provided with a power supply switch, a shutter button, a mode button for performing switching of various modes for each shooting operation, and the like.
- the signal from the temperature sensor 120 is also connected to the control unit 118 .
- the temperature sensor 120 is arranged in a neighborhood of the CCD 102 , and is configured to substantially measure the temperature of the CCD 102 .
- Before performing the shooting operation, the user sets image pickup conditions such as an ISO sensitivity via the external I/F unit 119 .
- the image processing system then enters a pre-image-pickup mode.
- the lens system 100 forms an optical image of a subject on an image pickup plane of the CCD 102 .
- the aperture 101 regulates a passage range of the subject luminous flux which has been formed into image by the lens system to change the luminance of the optical image formed on the image pickup plane of the CCD 102 .
- the CCD 102 photoelectrically converts the formed optical image and outputs it as an analog image signal. It should be noted that according to the present embodiment, a single monochrome CCD is assumed as the CCD 102 . However, the image pickup device is not limited to a CCD; a CMOS sensor or other image pickup devices may of course be used.
- the analog signal output in this manner from the CCD 102 is amplified by the amplification unit 103 by a predetermined amount while taking into account the ISO sensitivity. Thereafter, the analog signal is converted into the digital signal by the A/D conversion unit 104 to be transferred to the buffer 105 .
- the gradation width of the digitized image signal is set, for example, to 12 bits.
- the image signal stored in the buffer 105 is transferred to the exposure control unit 106 and the focus control unit 107 .
- While taking into account the set ISO sensitivity, the shutter speed at the limit of image stability, and the like, the exposure control unit 106 controls an aperture value of the aperture 101 , an electronic shutter speed of the CCD 102 , a gain of the amplification unit 103 , and the like on the basis of the image signal to achieve the correct exposure.
- on the basis of the image signal, the focus control unit 107 obtains a focus signal by detecting the edge intensity and controls the AF motor 108 so that the edge intensity becomes largest.
- the image processing system functions as a real shooting device.
- the image signal is transferred to the buffer 105 .
- the real shooting operation is performed on the basis of the exposure conditions calculated by the exposure control unit 106 and the focus conditions calculated by the focus control unit 107 , and these conditions for each shooting operation are transferred to the control unit 118 .
- the image signal in the buffer 105 obtained by the real shooting operation is transferred to the frequency decomposition unit 109 .
- On the basis of the control of the control unit 118 , the frequency decomposition unit 109 performs a predetermined frequency decomposition on the transferred image signal to obtain a high frequency component and a low frequency component. Then, the frequency decomposition unit 109 sequentially transfers the thus obtained high frequency component and low frequency component to the buffer 110 . It should be noted that according to the present embodiment, the wavelet transform is assumed to be applied twice for the frequency decomposition.
- the conversion characteristic calculation unit 111 reads the low frequency component from the buffer 110 to calculate gradation characteristics used for the gradation conversion processing on the basis of the control of the control unit 118 . It should be noted that according to the present embodiment, as the gradation conversion processing, a space-variant processing which uses a plurality of gradation characteristics different for each local region is supposed. Then, the conversion characteristic calculation unit 111 transfers the calculated gradation characteristics to the gradation processing unit 113 .
- the high frequency separation unit 112 reads the high frequency component from the buffer 110 to separate the high frequency component into an invalid component caused by noise and other valid component. Then, the high frequency separation unit 112 transfers the thus separated valid component to the gradation processing unit 113 and the above-mentioned invalid component to the buffer 114 , respectively.
- the gradation processing unit 113 reads the low frequency component from the buffer 110 , the valid component in the high frequency component from the high frequency separation unit 112 , and the gradation characteristic from the conversion characteristic calculation unit 111 , respectively, on the basis of the control of the control unit 118 . Then, the gradation processing unit 113 performs the gradation processing on the low frequency component and the valid component in the high frequency component on the basis of the above-mentioned gradation characteristic. The gradation processing unit 113 transfers the low frequency component on which the gradation processing has been performed and the valid component in the high frequency component on which the gradation processing has been performed to the buffer 114 .
- the frequency synthesis unit 115 reads the low frequency component on which the gradation processing has been performed, the valid component in the high frequency component on which the gradation processing has been performed, and the invalid component in the high frequency component from the buffer 114 , and synthesizes the image signal on which the gradation processing has been performed on the basis of these components under the control of the control unit 118 . It should be noted that according to the present embodiment, it is supposed to use the inverse wavelet transform as the frequency synthesis. Then, the frequency synthesis unit 115 transfers the synthesized image signal to the signal processing unit 116 .
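A minimal sketch of the inverse-wavelet synthesis step performed here, assuming single-level Haar subbands normalized for each 2x2 pixel block (a, b top row; c, d bottom row) as L=(a+b+c+d)/2, Hh=(a-b+c-d)/2, Hv=(a+b-c-d)/2, Hs=(a-b-c+d)/2. This is an illustration under that assumed normalization, not the unit's actual implementation.

```python
import numpy as np

def haar_idwt2(L, Hh, Hv, Hs):
    """Resynthesize a full-resolution signal from one level of Haar
    subbands: each output pixel is a signed sum of the four subband
    coefficients, divided by 2 (matching the lead-in's normalization)."""
    h, w = L.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (L + Hh + Hv + Hs) / 2.0  # a
    out[0::2, 1::2] = (L - Hh + Hv - Hs) / 2.0  # b
    out[1::2, 0::2] = (L + Hh - Hv - Hs) / 2.0  # c
    out[1::2, 1::2] = (L - Hh - Hv + Hs) / 2.0  # d
    return out
```

In the two-level case described in this embodiment, this step is applied once per level, from the coarsest subbands outward.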
- the signal processing unit 116 performs a known compression processing or the like on the image signal from the frequency synthesis unit 115 and transfers the processed signal to the output unit 117 on the basis of the control of the control unit 118 .
- the output unit 117 records and saves the image signal output from the signal processing unit 116 in the recording medium such as a memory card.
- FIG. 2 is a block diagram of a configuration example of the frequency decomposition unit 109 .
- the frequency decomposition unit 109 includes a data reading unit 200 , a buffer 201 , a horizontal high-pass filter (in the drawing, which is simply referred to as “horizontal high-pass”, and the same applies in the following description) 202 , a horizontal low-pass filter (in the drawing, which is simply referred to as “horizontal low-pass”, and the same applies in the following description) 203 , a sub sampler 204 , a sub sampler 205 , a vertical high-pass filter (in the drawing, which is simply referred to as “vertical high-pass”, and the same applies in the following description) 206 , a vertical low-pass filter (in the drawing, which is simply referred to as “vertical low-pass”, and the same applies in the following description) 207 , a vertical high-pass filter 208 , a vertical low-pass filter 209 , a sub sampler 210 , a sub sampler 211 , a sub sampler 212 , a sub sampler 213 , a switching unit 214 , a data transfer control unit 215 , a basis function ROM 216 , and a filter coefficient reading unit 217 .
- the buffer 105 is connected via the data reading unit 200 to the buffer 201 .
- the buffer 201 is connected to the horizontal high-pass filter 202 and the horizontal low-pass filter 203 .
- the horizontal high-pass filter 202 is connected via the sub sampler 204 to the vertical high-pass filter 206 and the vertical low-pass filter 207 .
- the horizontal low-pass filter 203 is connected via the sub sampler 205 to the vertical high-pass filter 208 and the vertical low-pass filter 209 .
- the vertical high-pass filter 206 is connected to the sub sampler 210
- the vertical low-pass filter 207 is connected to the sub sampler 211
- the vertical high-pass filter 208 is connected to the sub sampler 212
- the vertical low-pass filter 209 is connected to the sub sampler 213 , respectively.
- the sub sampler 210 , the sub sampler 211 , and the sub sampler 212 are connected to the switching unit 214 .
- the sub sampler 213 is connected to the switching unit 214 and the data transfer control unit 215 .
- the switching unit 214 is connected to the buffer 110 .
- the data transfer control unit 215 is connected to the buffer 201 .
- the basis function ROM 216 is connected to the filter coefficient reading unit 217 .
- the filter coefficient reading unit 217 is connected to the horizontal high-pass filter 202 , the horizontal low-pass filter 203 , the vertical high-pass filter 206 , the vertical low-pass filter 207 , the vertical high-pass filter 208 , and the vertical low-pass filter 209 .
- the control unit 118 is bi-directionally connected to the data reading unit 200 , the switching unit 214 , the data transfer control unit 215 , and the filter coefficient reading unit 217 to control these units.
- the basis function ROM 216 records filter coefficients used for the wavelet transform such as the Haar function or the Daubechies function. Among these, for example, the high-pass filter coefficients of the Haar function are given by Numeric Expression 1 and the low-pass filter coefficients by Numeric Expression 2.
- the filter coefficient reading unit 217 reads the filter coefficients from the basis function ROM 216 , transfers the high-pass filter coefficient to the horizontal high-pass filter 202 , the vertical high-pass filter 206 , and the vertical high-pass filter 208 , and transfers the low-pass filter coefficient to the horizontal low-pass filter 203 , the vertical low-pass filter 207 , and the vertical low-pass filter 209 , respectively, on the basis of the control of the control unit 118 .
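Numeric Expressions 1 and 2 are not reproduced in this text. A common orthonormal choice for the Haar analysis filters, used here purely as an assumed stand-in (the patent's normalization may differ), is:

```python
import numpy as np

# Orthonormal Haar analysis filters (an assumed convention; the patent's
# Numeric Expressions 1 and 2 may normalize differently).
HAAR_HIGH = np.array([1.0, -1.0]) / np.sqrt(2.0)  # high-pass (differencing)
HAAR_LOW = np.array([1.0, 1.0]) / np.sqrt(2.0)    # low-pass (averaging)

def analysis_step(signal):
    """One 1-D analysis step: apply both filters, then subsample by 2."""
    s = np.asarray(signal, dtype=float)
    low = HAAR_LOW[0] * s[0::2] + HAAR_LOW[1] * s[1::2]
    high = HAAR_HIGH[0] * s[0::2] + HAAR_HIGH[1] * s[1::2]
    return low, high
```

The high-pass output is zero wherever adjacent samples are equal, which is why smooth regions concentrate their energy in the low frequency component.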
- the data reading unit 200 reads the image signal from the buffer 105 to be transferred to the buffer 201 .
- the image signal read from the buffer 105 and stored on the buffer 201 is set as L 0 .
- the image signal on the buffer 201 is subjected to the filtering processing in the horizontal direction and the vertical direction by the horizontal high-pass filter 202 , the horizontal low-pass filter 203 , the vertical high-pass filter 206 , the vertical low-pass filter 207 , the vertical high-pass filter 208 , and the vertical low-pass filter 209 .
- the sub sampler 204 and the sub sampler 205 sub-sample the input image signal by 1/2 in the horizontal direction
- the sub sampler 210 , the sub sampler 211 , the sub sampler 212 , and the sub sampler 213 sub-sample the input image signal by 1/2 in the vertical direction.
- the output of the sub sampler 210 provides a first-order high frequency component Hs 1 ij in the slanted direction in the transform performed for the first time
- the output of the sub sampler 211 provides a first-order high frequency component Hh 1 ij in the horizontal direction in the transform performed for the first time
- the output of the sub sampler 212 provides a first-order high frequency component Hv 1 ij in the vertical direction in the transform performed for the first time
- the output of the sub sampler 213 provides a first-order low frequency component L 1 ij in the transform performed for the first time, respectively.
- suffixes i and j mean coordinates in x and y directions in the first-order signal after the transform.
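The decomposition path described above (horizontal filtering, sub sampling into 1/2 in the horizontal direction, then vertical filtering with sub sampling into 1/2 in the vertical direction) can be sketched with the Haar taps. This is a minimal illustration, not the patent's implementation; the function and variable names are invented here:

```python
import math

C = 1 / math.sqrt(2)  # orthonormal Haar tap magnitude

def haar_level(img):
    """One stage of the 2-D decomposition: returns (L, Hh, Hv, Hs), each
    half-size.  `img` is a list of rows with even width and height."""
    w2 = len(img[0]) // 2
    # Horizontal low-pass / high-pass with sub sampling into 1/2.
    lo = [[C * (row[2 * i] + row[2 * i + 1]) for i in range(w2)] for row in img]
    hi = [[C * (row[2 * i] - row[2 * i + 1]) for i in range(w2)] for row in img]

    def v_lo(m):  # vertical low-pass with sub sampling into 1/2
        return [[C * (m[2 * j][i] + m[2 * j + 1][i]) for i in range(len(m[0]))]
                for j in range(len(m) // 2)]

    def v_hi(m):  # vertical high-pass with sub sampling into 1/2
        return [[C * (m[2 * j][i] - m[2 * j + 1][i]) for i in range(len(m[0]))]
                for j in range(len(m) // 2)]

    L = v_lo(lo)   # low frequency component (low-pass in both directions)
    Hh = v_lo(hi)  # high frequency in the horizontal direction (filters 202, 207)
    Hv = v_hi(lo)  # high frequency in the vertical direction (filters 203, 208)
    Hs = v_hi(hi)  # high frequency in the slanted direction (filters 202, 206)
    return L, Hh, Hv, Hs
```

Repeating `haar_level` on the returned `L` yields the second-order components illustrated in FIG. 3C.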
- FIGS. 3A to 3C are explanatory diagrams for describing the wavelet transform: FIG. 3A illustrates the image signal in the real space, FIG. 3B illustrates the signal after the wavelet transform is performed for the first time, and FIG. 3C illustrates the signal after the wavelet transform is performed for the second time, respectively.
- FIG. 3B illustrates the first-order high frequency component Hs 1 00 in the slanted direction, the first-order high frequency component Hh 1 00 in the horizontal direction, and the first-order high frequency component Hv 1 00 in the vertical direction corresponding to the low frequency component L 1 00 .
- the three first-order high frequency components Hs 1 ij , Hh 1 ij , and Hv 1 ij corresponding to the first-order low frequency component L 1 ij of one pixel are each one pixel.
- the switching unit 214 sequentially transfers the above-mentioned three first-order high frequency components Hs 1 ij , Hh 1 ij , and Hv 1 ij and the first-order low frequency component L 1 ij to the buffer 110 .
- the data transfer control unit 215 transfers the first-order low frequency component L 1 ij from the sub sampler 213 to the buffer 201 on the basis of the control of the control unit 118 .
- FIG. 3C illustrates the signal in such a transform performed for the second time.
- the second-order high frequency component in the slanted direction corresponding to the second-order low frequency component L 2 00 of one pixel becomes Hs 2 00
- the second-order high frequency component in the horizontal direction becomes Hh 2 00
- the second-order high frequency component in the vertical direction becomes Hv 2 00 , all of which are one pixel
- the first-order high frequency components in the corresponding slanted direction become Hs 1 00 , Hs 1 10 , Hs 1 01 , and Hs 1 11
- the first-order high frequency components in the horizontal direction become Hh 1 00 , Hh 1 10 , Hh 1 01 , and Hh 1 11
- the first-order high frequency components in the vertical direction become Hv 1 00 , Hv 1 10 , Hv 1 01 , and Hv 1 11 , all of which are four pixels.
- FIG. 4 is a block diagram of a configuration example of the conversion characteristic calculation unit 111 .
- the conversion characteristic calculation unit 111 includes a division unit 300 constituting division means, a buffer 301 , a correct range extraction unit 302 constituting correct range extraction means, an edge calculation unit 303 constituting region-of-interest setting means and edge calculation means, a histogram creation unit 304 constituting histogram creation means, a gradation conversion curve calculation unit 305 constituting gradation conversion curve calculation means, and a buffer 306 .
- the buffer 110 is connected via the division unit 300 to the buffer 301 .
- the buffer 301 is connected to the correct range extraction unit 302 and the histogram creation unit 304 .
- the correct range extraction unit 302 is connected via the edge calculation unit 303 to the histogram creation unit 304 .
- the histogram creation unit 304 is connected via the gradation conversion curve calculation unit 305 and the buffer 306 to the gradation processing unit 113 .
- the control unit 118 is bi-directionally connected to the division unit 300 , the correct range extraction unit 302 , the edge calculation unit 303 , the histogram creation unit 304 , and the gradation conversion curve calculation unit 305 to control these units.
- the division unit 300 reads the low frequency component of the image signal from the buffer 110 on the basis of the control of the control unit 118 and divides the low frequency component into regions of a predetermined size shown in FIG. 7 , for example, a 32×32 pixel size, so that the respective regions do not overlap one another.
- FIG. 7 is an explanatory diagram for describing the division into the regions of the low frequency component in the synthesis operation of the gradation conversion curves. Then, the division unit 300 sequentially transfers the divided regions to the buffer 301 .
- the correct range extraction unit 302 reads the low frequency components from the buffer 301 for each local region unit on the basis of the control of the control unit 118 .
- the correct range extraction unit 302 compares the low frequency components with the pre-set threshold related to the dark part (by way of an example, in the case of 12-bit gradation, for example, 128) and the pre-set threshold related to the light part (in the case of the 12-bit gradation, for example, 3968), and transfers the low frequency components which are equal to or larger than the threshold of the dark part and also equal to or smaller than the threshold of the light part as the correct exposure range to the edge calculation unit 303 .
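The extraction of the correct exposure range amounts to a band test on each low frequency value; a sketch with the 12-bit example thresholds named above (the function name is illustrative):

```python
def correct_exposure_range(values, dark_th=128, light_th=3968):
    """Keep the low frequency components that are equal to or larger than the
    dark-part threshold and equal to or smaller than the light-part threshold
    (the 12-bit example values 128 and 3968 from the text)."""
    return [v for v in values if dark_th <= v <= light_th]
```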
- the edge calculation unit 303 reads the low frequency components in the correct exposure range from the correct range extraction unit 302 on the basis of the control of the control unit 118 , and uses a Laplacian filter or the like to calculate the known edge intensity.
- the edge calculation unit 303 transfers the calculated edge intensity to the histogram creation unit 304 .
- the histogram creation unit 304 selects a pixel having an edge intensity which is equal to or larger than the pre-set threshold (in the case of the above-mentioned 12-bit gradation, for example, 64) regarding the edge intensity from the edge calculation unit 303 , and reads the low frequency components at the corresponding pixel positions from the buffer 301 on the basis of the control of the control unit 118 . Then, the histogram creation unit 304 creates a histogram related to the read low frequency components and transfers the created histogram to the gradation conversion curve calculation unit 305 .
- the gradation conversion curve calculation unit 305 accumulates and then normalizes the histograms from the histogram creation unit 304 on the basis of the control of the control unit 118 to calculate the gradation conversion curve.
- the normalization is performed in accordance with the gradation of the image signal. In the case of the above-mentioned 12-bit gradation, the normalization is performed so as to cover the range of 0 to 4095.
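Accumulating and normalizing a histogram in this way is the classic cumulative-histogram construction of a gradation conversion curve; a minimal sketch (the function name and list-based histogram layout are illustrative):

```python
def gradation_curve(hist, out_max=4095):
    """Accumulate `hist` (hist[v] = count of input level v) and normalize the
    result to the gradation of the image signal (0..4095 for the 12-bit
    example), so that curve[v] is the output level for input level v."""
    total = sum(hist)
    curve, acc = [], 0
    for count in hist:
        acc += count                              # accumulate the histogram
        curve.append(round(acc / total * out_max))  # normalize to 0..out_max
    return curve
```

By construction the curve is monotone non-decreasing and reaches `out_max` at the top level, as a gradation conversion curve must.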
- the gradation conversion curve calculation unit 305 transfers the calculated gradation conversion curve to the buffer 306 .
- the respective processings in the correct range extraction unit 302 , the edge calculation unit 303 , the histogram creation unit 304 , and the gradation conversion curve calculation unit 305 are performed in synchronization for each local region unit on the basis of the control of the control unit 118 .
- FIG. 5 is a block diagram of a configuration example of the high frequency separation unit 112 .
- the high frequency separation unit 112 includes a low frequency component extraction unit 400 , a gain calculation unit 401 constituting noise estimation means and collection means, a standard value assigning unit 402 constituting noise estimation means and assigning means, a parameter ROM 403 constituting noise estimation means and recording means, a parameter selection unit 404 constituting noise estimation means and parameter selection means, an interpolation unit 405 constituting noise estimation means and interpolation means, a high frequency component extraction unit 406 , an average calculation unit 407 constituting setting means and average calculation means, an upper limit and lower limit setting unit 408 constituting setting means and upper limit and lower limit setting means, and a determination unit 409 constituting determination means.
- the buffer 110 is connected to the low frequency component extraction unit 400 and the high frequency component extraction unit 406 .
- the low frequency component extraction unit 400 is connected to the parameter selection unit 404 .
- the gain calculation unit 401 , the standard value assigning unit 402 , and the parameter ROM 403 are connected to the parameter selection unit 404 .
- the parameter selection unit 404 is connected via the interpolation unit 405 to the upper limit and lower limit setting unit 408 .
- the high frequency component extraction unit 406 is connected to the average calculation unit 407 and the determination unit 409 .
- the average calculation unit 407 is connected via the upper limit and lower limit setting unit 408 to the determination unit 409 .
- the determination unit 409 is connected to the gradation processing unit 113 and the buffer 114 .
- the control unit 118 is bi-directionally connected to the low frequency component extraction unit 400 , the gain calculation unit 401 , the standard value assigning unit 402 , the parameter selection unit 404 , the interpolation unit 405 , the high frequency component extraction unit 406 , the average calculation unit 407 , the upper limit and lower limit setting unit 408 , and the determination unit 409 to control these units.
- the low frequency component extraction unit 400 sequentially extracts the low frequency components from the buffer 110 for each pixel on the basis of the control of the control unit 118 . It should be noted that according to the present embodiment, the wavelet transform is assumed to be performed twice. In this case, the low frequency component extracted from the buffer 110 by the low frequency component extraction unit 400 becomes the second-order low frequency component L 2 kl as illustrated in FIG. 3C .
- the gain calculation unit 401 calculates the gain information in the amplification unit 103 and transfers the calculated gain information to the parameter selection unit 404 .
- the control unit 118 obtains temperature information of the CCD 102 from the temperature sensor 120 and transfers the thus obtained temperature information to the parameter selection unit 404 .
- the standard value assigning unit 402 transfers a standard value of the information that cannot be obtained to the parameter selection unit 404 .
- the parameter selection unit 404 searches the parameter ROM 403 for a parameter of a reference noise model used for estimating the noise amount on the basis of the pixel value of the target pixel from the low frequency component extraction unit 400 , the gain information from the gain calculation unit 401 or the standard value assigning unit 402 , and the temperature information from the control unit 118 or the standard value assigning unit 402 . Then, the parameter selection unit 404 transfers the searched parameter to the interpolation unit 405 . Also, the parameter selection unit 404 transfers the image signal of the low frequency component from the low frequency component extraction unit 400 to the interpolation unit 405 .
- the interpolation unit 405 calculates a noise amount N related to the low frequency component on the basis of the parameter of the reference noise model and transfers the calculated noise amount N to the upper limit and lower limit setting unit 408 .
- the above-mentioned calculation of the noise amount N based on the parameter ROM 403 , the parameter selection unit 404 , and the interpolation unit 405 can be realized through the technology disclosed in Japanese Unexamined Patent Application Publication No. 2004-128985 described above, for example.
- the high frequency component extraction unit 406 extracts the high frequency component corresponding to the low frequency component extracted by the low frequency component extraction unit 400 and the high frequency components located in the neighborhood of the high frequency component on the basis of the control of the control unit 118 .
- the high frequency components corresponding to the second-order low frequency component L 2 00 become a total of three pixels of Hs 2 00 , Hh 2 00 , and Hv 2 00 , which are the second-order high frequency components, and a total of 12 pixels of Hs 1 00 , Hs 1 10 , Hs 1 01 , Hs 1 11 , Hh 1 00 , Hh 1 10 , Hh 1 01 , Hh 1 11 , Hv 1 00 , Hv 1 10 , Hv 1 01 , and Hv 1 11 , which are the first-order high frequency components.
- as the high frequency components located in the neighborhood of each high frequency component, for example, a region of 2×2 pixels including the corresponding high frequency component is selected.
- the high frequency component extraction unit 406 sequentially transfers the high frequency component corresponding to the low frequency component and the high frequency components located in the neighborhood of the high frequency component to the average calculation unit 407 , and sequentially transfers the high frequency component corresponding to the low frequency component to the determination unit 409 .
- the average calculation unit 407 calculates an average value AV of the high frequency components transferred from the high frequency component extraction unit 406 and transfers the calculated average value AV to the upper limit and lower limit setting unit 408 .
- the upper limit and lower limit setting unit 408 sets an upper limit App_Up and a lower limit App_Low for distinguishing the valid component and the invalid component as represented by Numeric Expression 3 as follows.
- the upper limit and lower limit setting unit 408 transfers the thus set upper limit App_Up and the lower limit App_Low to the determination unit 409 .
- the determination unit 409 reads the high frequency component corresponding to the low frequency component from the high frequency component extraction unit 406 and also reads the upper limit App_Up and the lower limit App_Low shown in Numeric Expression 3 from the upper limit and lower limit setting unit 408 . Then, in a case where the high frequency component is in the range between the upper limit App_Up and the lower limit App_Low (for example, in a range equal to or larger than the lower limit App_Low and also equal to or smaller than the upper limit App_Up), the determination unit 409 determines that the high frequency component is the invalid component caused by the noise and transfers the high frequency component to the buffer 114 .
- the determination unit 409 determines that the high frequency component is the valid component and transfers the high frequency component to the gradation processing unit 113 .
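Numeric Expression 3 is not reproduced in this text. A plausible form, assumed here purely for illustration, centers the invalid range on the neighborhood average AV with a width set by the estimated noise amount N; the function below is a hedged sketch of the determination performed by the units 408 and 409, not the patent's exact expression:

```python
def separate_high_freq(h, av, noise):
    """Classify one high frequency component `h` as the invalid component
    (noise) or the valid component.  The assumed limits
        App_Low = AV - N / 2,  App_Up = AV + N / 2
    are an illustrative stand-in for Numeric Expression 3."""
    app_low, app_up = av - noise / 2, av + noise / 2
    return "invalid" if app_low <= h <= app_up else "valid"
```

Invalid components bypass the gradation processing (they go to the buffer 114 as-is), while valid components are sent on to the gradation processing unit 113.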
- FIG. 6 is a block diagram of a configuration example of the gradation processing unit 113 .
- the gradation processing unit 113 is configured by including a low frequency component extraction unit 500 constituting first extraction means, a distance calculation unit 501 constituting distance calculation means, a gradation conversion equation setting unit 502 constituting gradation conversion equation setting means, a buffer 503 , a high frequency component extraction unit 504 constituting second extraction means, and a gradation conversion unit 505 constituting gradation conversion means.
- the conversion characteristic calculation unit 111 is connected to the gradation conversion equation setting unit 502 .
- the buffer 110 is connected to the low frequency component extraction unit 500 .
- the low frequency component extraction unit 500 is connected to the distance calculation unit 501 and the gradation conversion unit 505 .
- the distance calculation unit 501 is connected to the gradation conversion unit 505 via the gradation conversion equation setting unit 502 and the buffer 503 .
- the high frequency separation unit 112 is connected via the high frequency component extraction unit 504 to the gradation conversion unit 505 .
- the gradation conversion unit 505 is connected to the buffer 114 .
- the control unit 118 is bi-directionally connected to the low frequency component extraction unit 500 , the distance calculation unit 501 , the gradation conversion equation setting unit 502 , the high frequency component extraction unit 504 , and the gradation conversion unit 505 to control these units.
- the low frequency component extraction unit 500 sequentially extracts the low frequency components from the buffer 110 for each pixel on the basis of the control of the control unit 118 . It should be noted that according to the present embodiment, as described above, the wavelet transform is assumed to be performed twice. In this case, the target pixel of the low frequency component extracted by the low frequency component extraction unit 500 from the buffer 110 becomes the second-order low frequency component L 2 kl as illustrated in FIG. 3C .
- the low frequency component extraction unit 500 transfers the extracted low frequency components to the distance calculation unit 501 and the gradation conversion unit 505 .
- the distance calculation unit 501 calculates distances between the target pixel extracted by the low frequency component extraction unit 500 and four regions in a neighborhood of the target pixel.
- FIG. 8 is an explanatory diagram of the distances between the target pixel and the neighboring four regions d 1 to d 4 in the synthesis operation of the gradation conversion curves.
- the distances between the target pixel and the neighboring four regions are respectively calculated as a distance between the target pixel and the center of the respective regions.
- the distance calculation unit 501 transfers the calculated distances d m to the gradation conversion equation setting unit 502 .
- the gradation conversion equation setting unit 502 reads the distances d m from the distance calculation unit 501 and also reads the corresponding gradation conversion curve T m ( ) of the neighboring four regions from the conversion characteristic calculation unit 111 to set the gradation conversion equation with respect to the target pixel as shown in Numeric Expression 4 as follows.
- P in Numeric Expression 4 means a pixel of a target of the gradation conversion processing
- P′ means a pixel after the gradation conversion processing, respectively.
- the gradation conversion equation setting unit 502 transfers the gradation conversion equation set as shown in Numeric Expression 4 to the buffer 503 .
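Numeric Expression 4 is likewise not reproduced in this text. A common synthesis, assumed here for illustration, weights the four neighboring gradation conversion curves T_m by the inverse of the distances d_m, i.e. P' = (Σ_m T_m(P)/d_m) / (Σ_m 1/d_m); a sketch with invented names:

```python
def convert_pixel(p, curves, dists):
    """Distance-weighted synthesis of the gradation conversion: `curves` are
    the four neighboring-region curves T_m as callables, `dists` the four
    distances d_m of FIG. 8.  This inverse-distance form is an assumption
    standing in for Numeric Expression 4."""
    weights = [1.0 / d for d in dists]  # nearer regions get larger weight
    return sum(w * t(p) for w, t in zip(weights, curves)) / sum(weights)
```

With this form, a pixel at equal distance from all four region centers receives the plain average of the four curves, and the conversion varies smoothly between regions.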
- the high frequency component extraction unit 504 extracts, from the high frequency separation unit 112 , the high frequency components corresponding to the low frequency components extracted by the low frequency component extraction unit 500 on the basis of the control of the control unit 118 .
- the target pixel of the low frequency component is the second-order low frequency component L 2 kl shown in FIG. 3C
- the extracted high frequency components become a total of three pixels including one pixel each from the second-order high frequency components Hs 2 kl , Hh 2 kl , and Hv 2 kl and a total of 12 pixels including four pixels each from the first-order high frequency components Hs 1 ij , Hh 1 ij , and Hv 1 ij .
- the high frequency component extraction unit 504 transfers the extracted high frequency components to the gradation conversion unit 505 .
- the gradation conversion unit 505 reads the high frequency component and also reads the gradation conversion equation shown in Numeric Expression 4 from the buffer 503 . On the basis of the read gradation conversion equation, the gradation conversion unit 505 performs the gradation conversion on the high frequency components. The gradation conversion unit 505 transfers the high frequency component after the gradation conversion to the buffer 114 . On the other hand, in a case where it is determined that the corresponding high frequency component is the invalid component and the extracted high frequency component does not exist, on the basis of the control of the control unit 118 , the gradation conversion unit 505 cancels the gradation conversion on the high frequency component.
- the gradation conversion unit 505 reads the low frequency component from the low frequency component extraction unit 500 and the gradation conversion equation shown in Numeric Expression 4 from the buffer 503 , respectively, to perform the gradation conversion on the low frequency component.
- the gradation conversion unit 505 transfers the low frequency component after the gradation conversion to the buffer 114 .
- FIG. 9 is a block diagram of a configuration example of the frequency synthesis unit 115 .
- the frequency synthesis unit 115 is configured by including a data reading unit 600 , a switching unit 601 , an up sampler 602 , an up sampler 603 , an up sampler 604 , an up sampler 605 , a vertical high-pass filter 606 , a vertical low-pass filter 607 , a vertical high-pass filter 608 , a vertical low-pass filter 609 , an up sampler 610 , an up sampler 611 , a horizontal high-pass filter 612 , a horizontal low-pass filter 613 , a buffer 614 , a data transfer control unit 615 , a basis function ROM 616 , and a filter coefficient reading unit 617 .
- the buffer 114 is connected via the data reading unit 600 to the switching unit 601 .
- the switching unit 601 is connected to the up sampler 602 , the up sampler 603 , the up sampler 604 , and the up sampler 605 .
- the up sampler 602 is connected to the vertical high-pass filter 606
- the up sampler 603 is connected to the vertical low-pass filter 607
- the up sampler 604 is connected to the vertical high-pass filter 608
- the up sampler 605 is connected to the vertical low-pass filter 609 .
- the vertical high-pass filter 606 and the vertical low-pass filter 607 are connected to the up sampler 610 , and the vertical high-pass filter 608 and the vertical low-pass filter 609 are connected to the up sampler 611 .
- the up sampler 610 is connected to the horizontal high-pass filter 612
- the up sampler 611 is connected to the horizontal low-pass filter 613 .
- the horizontal high-pass filter 612 and the horizontal low-pass filter 613 are connected to the buffer 614 .
- the buffer 614 is connected to the signal processing unit 116 and the data transfer control unit 615 .
- the data transfer control unit 615 is connected to the switching unit 601 .
- the basis function ROM 616 is connected to the filter coefficient reading unit 617 .
- the filter coefficient reading unit 617 is connected to the vertical high-pass filter 606 , the vertical low-pass filter 607 , the vertical high-pass filter 608 , the vertical low-pass filter 609 , the horizontal high-pass filter 612 , and the horizontal low-pass filter 613 .
- the control unit 118 is bi-directionally connected to the data reading unit 600 , the switching unit 601 , the data transfer control unit 615 , and the filter coefficient reading unit 617 to control these units.
- the basis function ROM 616 records filter coefficients used for the inverse wavelet transform such as the Haar function or the Daubechies function.
- the filter coefficient reading unit 617 reads the filter coefficients from the basis function ROM 616 .
- the filter coefficient reading unit 617 transfers the high-pass filter coefficient to the vertical high-pass filter 606 , the vertical high-pass filter 608 , and the horizontal high-pass filter 612 and the low-pass filter coefficient to the vertical low-pass filter 607 , the vertical low-pass filter 609 , and the horizontal low-pass filter 613 , respectively.
- the data reading unit 600 reads, from the buffer 114 , the low frequency component on which the gradation processing has been performed, the valid component at the n-th stage in the high frequency component on which the gradation processing has been performed, and the invalid component at the n-th stage in the high frequency component, to be transferred to the switching unit 601 .
- the valid component at the n-th stage in the high frequency component on which the gradation processing has been performed and the invalid component at the n-th stage in the high frequency component are read by the data reading unit 600 as the integrated high frequency component at the n-th stage.
- the switching unit 601 transfers the high frequency components in the slanted direction via the up sampler 602 to the vertical high-pass filter 606 , the high frequency components in the horizontal direction via the up sampler 603 to the vertical low-pass filter 607 , the high frequency components in the vertical direction via the up sampler 604 to the vertical high-pass filter 608 , and the low frequency components via the up sampler 605 to the vertical low-pass filter 609 , respectively, to execute the filtering processing in the vertical direction.
- the frequency components from the vertical high-pass filter 606 and the vertical low-pass filter 607 are transferred via the up sampler 610 to the horizontal high-pass filter 612 , and the frequency components from the vertical high-pass filter 608 and the vertical low-pass filter 609 are transferred via the up sampler 611 to the horizontal low-pass filter 613 , and then the filtering processing in the horizontal direction is performed.
- the frequency components from the horizontal high-pass filter 612 and the horizontal low-pass filter 613 are transferred to the buffer 614 to be synthesized into one, thus generating the low frequency component at the (n−1)-th stage.
- the up sampler 602 , the up sampler 603 , the up sampler 604 , and the up sampler 605 perform up sampling of the input frequency component by a factor of two in the vertical direction
- the up sampler 610 and the up sampler 611 perform up sampling of the input frequency component by a factor of two in the horizontal direction.
- the data transfer control unit 615 transfers the low frequency components to the switching unit 601 on the basis of the control of the control unit 118 .
- the data reading unit 600 reads the three types of high frequency components in the slanted direction, the horizontal direction, and the vertical direction at the (n−1)-th stage from the buffer 114 to be transferred to the switching unit 601 . Then, as the filtering processing similar to the above is performed on the frequency components at the decomposition stage number (n−1), the low frequency component at the (n−2)-th stage is output to the buffer 614 .
- the above-mentioned procedure is repeatedly performed until the control unit 118 performs the synthesis at a predetermined n-th stage.
- the low frequency component at the zero-th stage is output to the buffer 614 and the low frequency component at the zero-th stage is transferred to the signal processing unit 116 as the image signal on which the gradation conversion has been performed.
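One stage of this synthesis can be sketched as the inverse of the Haar decomposition: the vertical pairs are recombined while up sampling by a factor of two in the vertical direction, then the horizontal pair is recombined while up sampling in the horizontal direction. A minimal illustration with invented names, not the patent's implementation:

```python
import math

C = 1 / math.sqrt(2)  # orthonormal Haar tap magnitude

def haar_inverse(L, Hh, Hv, Hs):
    """Recombine one stage's (L, Hh, Hv, Hs), each of size h x w, into the
    (n-1)-th stage low frequency component of size 2h x 2w."""
    h, w = len(L), len(L[0])
    lo = [[0.0] * w for _ in range(2 * h)]  # horizontally low-passed half
    hi = [[0.0] * w for _ in range(2 * h)]  # horizontally high-passed half
    # Vertical synthesis: interleave sum and difference rows.
    for j in range(h):
        for i in range(w):
            lo[2 * j][i] = C * (L[j][i] + Hv[j][i])
            lo[2 * j + 1][i] = C * (L[j][i] - Hv[j][i])
            hi[2 * j][i] = C * (Hh[j][i] + Hs[j][i])
            hi[2 * j + 1][i] = C * (Hh[j][i] - Hs[j][i])
    # Horizontal synthesis: interleave sum and difference columns.
    out = [[0.0] * (2 * w) for _ in range(2 * h)]
    for r in range(2 * h):
        for i in range(w):
            out[r][2 * i] = C * (lo[r][i] + hi[r][i])
            out[r][2 * i + 1] = C * (lo[r][i] - hi[r][i])
    return out
```

Applying this repeatedly, stage by stage, reproduces the zero-th stage image signal described above.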
- the image processing system in which the image pickup unit including the lens system 100 , the aperture 101 , the CCD 102 , the amplification unit 103 , the A/D conversion unit 104 , the exposure control unit 106 , the focus control unit 107 , the AF motor 108 , and the temperature sensor 120 is integrated has been described.
- the image processing system is not necessarily limited to the above-mentioned configuration.
- the image pickup unit may be provided as a separated body. That is, in the image processing system illustrated in FIG. 10 , the separated image pickup unit performs the image pickup, and an image signal recorded on a recording medium such as a memory card in an unprocessed raw data state is read out from the recording medium to be processed.
- associated information related to the image signal like the temperature of the image pickup device, the exposure conditions, and the like, for each shooting operation is recorded on a header unit or the like.
- transmission of various pieces of information from the separated image pickup unit to the image processing system is not necessarily performed via a recording medium, and may be performed via a communication circuit or the like.
- FIG. 10 is a diagram illustrating another configuration example of the image processing system.
- the image processing system illustrated in FIG. 10 has a configuration in which, with respect to the image processing system illustrated in FIG. 1 , the lens system 100 , the aperture 101 , the CCD 102 , the amplification unit 103 , the A/D conversion unit 104 , the exposure control unit 106 , the focus control unit 107 , the AF motor 108 , and the temperature sensor 120 are omitted, and an input unit 700 and a header information analysis unit 701 are added.
- the other basic configuration of the image processing system illustrated in FIG. 10 is similar to that illustrated in FIG. 1 . Therefore, the same components are allocated with the same names and reference numerals to appropriately omit the description thereof, and only a different part will be mainly described.
- the input unit 700 is connected to the buffer 105 and the header information analysis unit 701 .
- the control unit 118 is bi-directionally connected to the input unit 700 and the header information analysis unit 701 to control these units.
- the image signal and the header information saved on the recording medium such as a memory card are read via the input unit 700 .
- the image signal is transferred to the buffer 105 , and the header information is transferred to the header information analysis unit 701 , respectively.
- the header information analysis unit 701 extracts the information for each shooting operation (that is, the exposure conditions, the temperature of the image pickup device, and the like, which are described above) to be transferred to the control unit 118 on the basis of the header information transferred from the input unit 700 .
- the processing in the following stage is similar to that of the image processing system illustrated in FIG. 1 .
- the image signal from the CCD 102 is recorded on the recording medium such as a memory card as raw data without being processed, and also the associated information such as the image pickup conditions (for example, the temperature of the image pickup device, the exposure conditions, and the like, for each shooting operation from the control unit 118 ) is recorded on the recording medium as the header information.
- the processing can be performed as the computer is allowed to execute the image processing program which is separate software to instruct the computer to read the information of the recording medium.
- the transmission of various pieces of information from the image pickup unit to the computer is not necessarily performed via the recording medium and may be performed via a communication line or the like.
- FIG. 11 is a flow chart showing a main routine of an image processing program.
- the image signal is read, and also the header information such as the temperature and the exposure conditions of the image pickup device is read (step S 1 ).
- in step S 2 , the high frequency component and the low frequency component are obtained by performing the frequency decomposition such as the wavelet transform.
- in step S 3 , the conversion characteristic is calculated.
- the high frequency component is separated into the invalid component caused by the noise and the other valid component (step S 4 ).
- the gradation processing is performed on the low frequency component and the valid component in the high frequency component (step S 5 ).
- in step S 6 , on the basis of the low frequency component on which the gradation processing has been performed, the valid component in the high frequency component on which the gradation processing has been performed, and the invalid component in the high frequency component, the image signal on which the gradation conversion has been performed is synthesized.
- in step S 7 , the signal processing such as a known compression processing is performed.
- in step S 8 , the image signal after the processing is output, and the processing is ended.
- FIG. 12 is a flow chart showing the processing for the conversion characteristic calculation in the above-mentioned step S 3 .
- the low frequency component is divided into regions of a predetermined size to be sequentially extracted (step S 10 ).
- the low frequency components are compared with the pre-set threshold related to the dark part and the pre-set threshold related to the light part respectively to extract the low frequency components which are equal to or larger than the threshold of the dark part and also equal to or smaller than the threshold of the light part as the correct exposure range (step S 11 ).
- In step S12, the known calculation for the edge intensity is performed.
- the histogram is created (step S 13 ).
- the gradation conversion curve is calculated (step S 14 ).
- the gradation conversion curve calculated in the above-mentioned manner is output (step S 15 ).
- In step S16, it is determined whether or not the processing has been performed for all the regions. In a case where it is determined that the processing has not been completed, the flow is returned to the above-mentioned step S10 to repeat the above-mentioned processing. On the other hand, in a case where it is determined that the processing has been completed, the flow is returned to the processing shown in FIG. 11 .
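Steps S11 to S14 for one region can be sketched as below. The specification states only that the gradation conversion curve is calculated from the histogram; the cumulative-histogram (histogram equalization) formulation used here is an assumption, and the edge-intensity weighting of step S12 is omitted for brevity. A 12-bit signal and the example thresholds 128 and 3968 are assumed.

```python
import numpy as np

def region_tone_curve(region, dark_th=128, light_th=3968, bins=4096):
    # Step S11: keep only pixels inside the correct exposure range.
    vals = region[(region >= dark_th) & (region <= light_th)]
    # Step S13: histogram over the 12-bit levels.
    hist, _ = np.histogram(vals, bins=bins, range=(0, bins))
    # Step S14: cumulative histogram normalized to the output range serves
    # as the gradation conversion curve (assumed formulation).
    cdf = np.cumsum(hist).astype(np.float64)
    if cdf[-1] == 0:
        return np.arange(bins, dtype=np.float64)  # identity fallback
    return cdf / cdf[-1] * (bins - 1)  # curve[v] is the converted level for v
```

The returned table is monotonic, so it can be applied directly as a per-level lookup in step S15.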
- FIG. 13 is a flow chart showing the processing for the high frequency separation in the above-mentioned step S 4 .
- In step S20, the low frequency components are sequentially extracted for each pixel.
- the information such as the temperature and the gain of the image pickup device is set.
- a pre-set standard value is assigned to the relevant information (step S 21 ).
- In step S22, the parameter related to the reference noise model is read.
- the noise amount related to the low frequency component is calculated through the interpolation processing (step S 23 ).
- the high frequency component corresponding to the low frequency component and the high frequency components located in the neighborhood of the high frequency component are sequentially extracted (step S 24 ).
- the average value is calculated (step S 25 ).
- the upper limit and the lower limit are set as shown in Numeric Expression 3 (step S 26 ).
- In step S27, in a case where the high frequency component is in the range between the upper limit and the lower limit, it is determined that the high frequency component is the invalid component caused by the noise, and in a case where the high frequency component exceeds the upper limit or falls short of the lower limit, it is determined that the high frequency component is the valid component.
- the valid component and the invalid component are output while being separated from each other (step S 28 ).
- In step S29, it is determined whether or not the processing for all the high frequency components has been completed. In a case where it is determined that the processing has not been completed, the flow is returned to the above-mentioned step S24 to repeat the above-mentioned processing.
- In a case where it is determined in step S29 that the processing for all the high frequency components has been completed, it is determined whether or not the processing for all the low frequency components has been completed (step S30). In a case where it is determined that the processing has not been completed, the flow is returned to the above-mentioned step S20 to repeat the above-mentioned processing. On the other hand, in a case where it is determined that the processing has been completed, the flow is returned to the processing shown in FIG. 11 .
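Steps S24 to S28 can be sketched as follows. Numeric Expression 3 is not reproduced in this text, so the bounds are assumed to be the neighborhood average plus or minus a multiple of the estimated noise amount; a 3×3 neighborhood and the parameter `k` are likewise assumptions.

```python
import numpy as np

def separate_high_freq(high, noise_amount, k=1.0):
    # Step S25: 3x3 neighborhood average of the high frequency component.
    h, w = high.shape
    pad = np.pad(high, 1, mode="edge")
    avg = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    # Step S26: upper and lower limits (assumed form of Numeric Expression 3).
    upper = avg + k * noise_amount
    lower = avg - k * noise_amount
    # Step S27: within the limits -> invalid (noise); outside -> valid.
    invalid_mask = (high >= lower) & (high <= upper)
    # Step S28: output the two components separately.
    invalid = np.where(invalid_mask, high, 0.0)
    valid = np.where(invalid_mask, 0.0, high)
    return valid, invalid
```

An isolated strong coefficient exceeds the upper limit and is kept as valid, while small fluctuations around the local mean are classified as the invalid component.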
- FIG. 14 is a flow chart showing the processing for the gradation processing in the above-mentioned step S 5 .
- In step S40, the low frequency components are sequentially extracted for each pixel.
- In step S41, the distances between the target pixel of the low frequency component and the centers of the four neighboring regions are calculated.
- In step S42, the gradation conversion curves in the four neighboring regions are read.
- the gradation conversion equation with respect to the target pixel is set (step S 43 ).
- In step S44, the high frequency components regarded as the valid components corresponding to the low frequency components are sequentially extracted.
- In step S45, it is determined whether or not a high frequency component regarded as the valid component exists.
- the gradation conversion equation shown in Numeric Expression 4 is applied to the high frequency component regarded as the valid component to perform the gradation conversion (step S 46 ).
- When the processing in step S46 is ended, or in a case where it is determined in the above-mentioned step S45 that a high frequency component regarded as the valid component does not exist, the gradation conversion equation shown in Numeric Expression 4 is applied to the low frequency component to perform the gradation conversion (step S47).
- In step S48, the low frequency component on which the gradation processing has been performed and the valid component in the high frequency component on which the gradation processing has been performed are output.
- In step S49, it is determined whether or not the processing for all the low frequency components has been completed. In a case where it is determined that the processing has not been completed, the flow is returned to the above-mentioned step S40 to repeat the above-mentioned processing. On the other hand, in a case where it is determined that the processing has been completed, the flow is returned to the processing shown in FIG. 11 .
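Steps S41 to S43 can be sketched as below. Numeric Expression 4 is not reproduced in this text; the sketch assumes it blends the conversion results of the four neighboring regions' curves with weights inversely proportional to the distance from each region center, which is one common space-variant formulation.

```python
import numpy as np

def convert_pixel(value, curves, dists):
    """value: target pixel level; curves: four per-region conversion curves
    (callables); dists: distances to the four region centers (step S41)."""
    # Inverse-distance weights, normalized to sum to 1 (assumed form of
    # Numeric Expression 4); the epsilon guards against a zero distance.
    w = 1.0 / (np.asarray(dists, dtype=np.float64) + 1e-12)
    w /= w.sum()
    # Step S42/S43: apply each region's curve and blend the results.
    results = np.array([c(value) for c in curves])
    return float(np.dot(w, results))
```

Because the blended equation is continuous in the distances, no discontinuity appears at region boundaries, which matches the advantage stated for the first embodiment.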
- the configuration of using the wavelet transform for the frequency decomposition and the frequency synthesis is adopted, but the configuration is not necessarily limited to the above.
- a configuration using another known transform, such as the Fourier transform or the discrete cosine transform, for the frequency decomposition and the frequency synthesis can also be adopted.
- the number of times to perform the wavelet transform is set as two, but the configuration is not necessarily limited to the above.
- such a configuration can also be adopted that the separation of the invalid component caused by the noise from the other valid component is improved by increasing the number of times the conversion is performed, or that the uniformity of the image is improved by decreasing the number of times the conversion is performed.
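A multi-level decomposition of the kind described can be illustrated with the Haar wavelet, the simplest case (the specification does not fix the wavelet type, so Haar is an assumption); increasing the number of levels means recursing on the low frequency band `a`, and an even-sized input is assumed.

```python
import numpy as np

def haar2d(img):
    """One level of the 2D Haar wavelet transform: low (LL) band plus
    horizontal, vertical, and diagonal high frequency bands."""
    p00, p01 = img[0::2, 0::2], img[0::2, 1::2]
    p10, p11 = img[1::2, 0::2], img[1::2, 1::2]
    a = (p00 + p01 + p10 + p11) / 4.0  # low frequency component
    h = (p00 - p01 + p10 - p11) / 4.0
    v = (p00 + p01 - p10 - p11) / 4.0
    d = (p00 - p01 - p10 + p11) / 4.0
    return a, (h, v, d)

def ihaar2d(a, hvd):
    """Inverse of one Haar level; exact reconstruction (frequency synthesis)."""
    h, v, d = hvd
    out = np.empty((a.shape[0] * 2, a.shape[1] * 2))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a - h + v - d
    out[1::2, 0::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out
```

Performing the transform twice, as in the embodiment, is `haar2d(haar2d(img)[0])` on the LL band; each extra level halves the low frequency resolution, trading image uniformity for better noise separation.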
- the gradation conversion curve is calculated for each image on the basis of the histogram, but the increase in the noise components is not taken into account. For this reason, for example, when a ratio of the dark part in the image is large, the gradation conversion curve based on the histogram provides a wide gradation to the dark part. However, in this case, the noise in the dark part prominently appears, and there is a problem that an optimal gradation conversion processing is not performed in terms of image quality.
- the contrast emphasis processing is performed only on the low frequency component. Therefore, there is a problem that the sharpness is degraded in a region containing a large number of high frequency components such as an edge region. Also, according to the technology disclosed in the publication, different processings are performed on the low frequency component and other components. Therefore, there is a problem that the continuity and integrity for the image as a whole may be lost.
- according to the first embodiment of the present invention, only the high frequency component, in which the influence of the noise appears visually prominent, is separated into the invalid component and the valid component.
- the gradation processing is performed on the valid component but not on the invalid component, so an increase in noise accompanying the gradation processing is suppressed. Thus, it is possible to generate a high quality image signal.
- since the low frequency component is excluded from the target of the separation into the valid component and the invalid component, the possibility of an adverse effect accompanying the processing is decreased, and the stability can be improved.
- since the image signal is synthesized including the invalid component, it is possible to obtain an image signal with little sense of visual discomfort, and the stability and reliability of the processing can be improved.
- the wavelet transform is excellent at frequency separation, and it is therefore possible to perform high accuracy processing.
- since the gradation conversion curve is adaptively and independently calculated for each region from the low frequency component of the image signal, it is possible to perform the gradation conversion with high accuracy on various image signals.
- since the gradation conversion curve is calculated on the basis of the low frequency component, it is possible to calculate an appropriate gradation conversion curve with little influence from the noise.
- since the gradation conversion with an identical conversion characteristic is performed on the low frequency component and the valid component in the high frequency component located at the same position, it is possible to obtain an image signal providing a sense of integrity with little sense of visual discomfort.
- discontinuity between the regions is not generated, and it is possible to obtain high quality image signals.
- FIGS. 15 to 25 illustrate a second embodiment of the present invention.
- FIG. 15 is a block diagram of a configuration of an image processing system.
- the same part as that of the first embodiment described above is allocated with the same name and reference numeral to appropriately omit a description thereof, and only a different part will be mainly described.
- the image processing system has a configuration in which with respect to the above-mentioned image processing system illustrated in FIG. 1 according to the first embodiment, a pre-white balance unit 801 , a Y/C separation unit 802 constituting Y/C separation means, a buffer 803 , and a Y/C synthesis unit 809 constituting Y/C synthesis means are added, and the CCD 102 , the frequency decomposition unit 109 , the conversion characteristic calculation unit 111 , the high frequency separation unit 112 , the gradation processing unit 113 , and the frequency synthesis unit 115 are replaced by a color CCD 800 , a frequency decomposition unit 804 constituting separation means and frequency decomposition means, a conversion characteristic calculation unit 805 constituting conversion means and conversion characteristic calculation means, a high frequency separation unit 806 constituting separation means and high frequency separation means, a gradation processing unit 807 constituting conversion means and gradation processing means, and a frequency synthesis unit 808 constituting synthesis means
- the color image signal captured via the lens system 100 , the aperture 101 , and the color CCD 800 is transferred to the amplification unit 103 .
- the buffer 105 is connected to the exposure control unit 106 , the focus control unit 107 , the pre-white balance unit 801 , and the Y/C separation unit 802 .
- the pre-white balance unit 801 is connected to the amplification unit 103 .
- the Y/C separation unit 802 is connected to the buffer 803 , and the buffer 803 is connected to the frequency decomposition unit 804 , the conversion characteristic calculation unit 805 , and the Y/C synthesis unit 809 .
- the frequency decomposition unit 804 is connected to the buffer 110 .
- the buffer 110 is connected to the conversion characteristic calculation unit 805 , the high frequency separation unit 806 , and the gradation processing unit 807 .
- the conversion characteristic calculation unit 805 is connected to the gradation processing unit 807 .
- the high frequency separation unit 806 is connected to the buffer 114 and the gradation processing unit 807 .
- the gradation processing unit 807 is connected to the buffer 114 .
- the buffer 114 is connected via the frequency synthesis unit 808 and the Y/C synthesis unit 809 to the signal processing unit 116 .
- the control unit 118 is also bi-directionally connected to the pre-white balance unit 801 , the Y/C separation unit 802 , the frequency decomposition unit 804 , the conversion characteristic calculation unit 805 , the high frequency separation unit 806 , the gradation processing unit 807 , the frequency synthesis unit 808 , and the Y/C synthesis unit 809 to control these units.
- the temperature sensor 120 is arranged in a neighborhood of the color CCD 800 , and the signal from the temperature sensor 120 is also connected to the control unit 118 .
- the image processing system functions as the pre-image pickup device.
- the color image signal captured via the lens system 100 , the aperture 101 , and the color CCD 800 is transferred via the amplification unit 103 and the A/D conversion unit 104 to the buffer 105 .
- as the color CCD 800 , a single CCD in which a Bayer-type primary color filter is arranged on the front face is supposed.
- FIG. 16 is a diagram illustrating a configuration of the Bayer-type primary color filter.
- the Bayer-type primary color filter has such a configuration that the basic unit is 2 × 2 pixels, one red (R) filter and one blue (B) filter are arranged at the pixel positions at opposite corners in the basic unit, and green (G) filters are arranged at the pixel positions at the remaining opposite corners.
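The 2×2 basic unit can be sketched as below. The specification fixes only that G occupies one diagonal and R/B the other; the particular phase used here (R at the top-right, B at the bottom-left of the unit) is an assumption, since the actual phase depends on the sensor.

```python
import numpy as np

def bayer_mosaic(h, w):
    """Color-filter label for each pixel of an h x w sensor (assumed phase)."""
    pattern = np.empty((h, w), dtype="<U1")
    pattern[0::2, 0::2] = "G"   # green on one diagonal of the 2x2 unit
    pattern[1::2, 1::2] = "G"
    pattern[0::2, 1::2] = "R"   # red and blue at the remaining opposite corners
    pattern[1::2, 0::2] = "B"
    return pattern
```

Half of the pixels are green, which is why the known interpolation processing mentioned later is needed to recover full R, G, and B planes at every pixel.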
- the color image signal in the buffer 105 is transferred to the pre-white balance unit 801 .
- the pre-white balance unit 801 sums (in other words, cumulatively adds) signals of a predetermined level for each color signal to calculate a simplified white balance coefficient.
- the pre-white balance unit 801 transfers the calculated coefficient to the amplification unit 103 , which multiplies each color signal by a different gain to perform the white balance.
- the user performs the full press of the shutter button composed of the two-stage switch of the external I/F unit 119 .
- the digital camera functions as the real shooting device.
- the color image signal is transferred to the buffer 105 .
- the white balance coefficient calculated by the pre-white balance unit 801 at this time is transferred to the control unit 118 .
- the color image signal in the buffer 105 obtained through the real shooting operation is transferred to the Y/C separation unit 802 .
- on the basis of the control of the control unit 118 , the Y/C separation unit 802 generates, through a known interpolation processing, the three color image signals composed of R, G, and B, and further separates the R, G, and B signals into a luminance signal Y and color difference signals Cb and Cr as shown in Numeric Expression 5 below.
- the luminance signal and the color difference signals separated by the Y/C separation unit 802 are transferred to the buffer 803 .
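Numeric Expression 5 is not reproduced in this text; the sketch below assumes the standard ITU-R BT.601 luminance/color-difference coefficients, which is the usual form of this separation.

```python
import numpy as np

# Assumed coefficients of Numeric Expression 5 (ITU-R BT.601):
# rows give Y, Cb, Cr as linear combinations of R, G, B.
M_YCC = np.array([[ 0.29900,  0.58700,  0.11400],
                  [-0.16874, -0.33126,  0.50000],
                  [ 0.50000, -0.41869, -0.08131]])

def rgb_to_ycc(rgb):
    """rgb: (..., 3) array -> (Y, Cb, Cr) stacked on the last axis."""
    return rgb @ M_YCC.T
```

With these coefficients a neutral gray (R = G = B) yields zero color differences, so the gradation conversion applied to Y alone leaves the hue unchanged.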
- on the basis of the control of the control unit 118 , the frequency decomposition unit 804 performs the frequency decomposition on the luminance signal in the buffer 803 to obtain the high frequency component and the low frequency component. Then, the frequency decomposition unit 804 sequentially transfers the high frequency component and the low frequency component thus obtained to the buffer 110 .
- on the basis of the control of the control unit 118 , the conversion characteristic calculation unit 805 reads the low frequency component from the buffer 110 and the color difference signals from the buffer 803 , respectively, to calculate the gradation characteristic used for the gradation conversion processing. It should be noted that according to the present embodiment, space-invariant processing using a single gradation conversion curve for the image signal is supposed as the gradation conversion processing. Then, the conversion characteristic calculation unit 805 transfers the calculated gradation characteristic to the gradation processing unit 807 .
- on the basis of the control of the control unit 118 , the high frequency separation unit 806 reads the high frequency component from the buffer 110 and separates it into the invalid component caused by the noise and the other valid component. Then, the high frequency separation unit 806 transfers the thus separated valid component to the gradation processing unit 807 and the invalid component to the buffer 114 , respectively.
- the gradation processing unit 807 reads the low frequency component from the buffer 110 , the valid components in the high frequency component from the high frequency separation unit 806 , and the gradation characteristic from the conversion characteristic calculation unit 805 , respectively, on the basis of the control of the control unit 118 . Then, on the basis of the above-mentioned gradation characteristic, the gradation processing unit 807 performs the gradation processing on the low frequency component and the valid component in the high frequency component. The gradation processing unit 807 transfers the low frequency component on which the gradation processing has been performed and the valid component in the high frequency component on which the gradation processing has been performed to the buffer 114 .
- on the basis of the control of the control unit 118 , the frequency synthesis unit 808 reads the low frequency component on which the gradation processing has been performed, the valid component in the high frequency component on which the gradation processing has been performed, and the invalid component in the high frequency component from the buffer 114 , and performs addition processing on these components to synthesize the luminance signal on which the gradation conversion has been performed. Then, the frequency synthesis unit 808 transfers the synthesized luminance signal to the Y/C synthesis unit 809 .
- on the basis of the control of the control unit 118 , the Y/C synthesis unit 809 reads the luminance signal Y′ on which the gradation conversion has been performed from the frequency synthesis unit 808 and the color difference signals Cb and Cr from the buffer 803 , respectively, to synthesize color image signals R′, G′, and B′ on which the gradation conversion has been performed, as shown in Numeric Expression 6 below.
- the Y/C synthesis unit 809 transfers the synthesized color image signals R′, G′, and B′ to the signal processing unit 116 .
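Numeric Expression 6 is likewise not reproduced here; the sketch assumes it is the matrix inverse of the (assumed BT.601) separation in Numeric Expression 5, so that converting Y′, Cb, Cr back yields R′, G′, B′.

```python
import numpy as np

# Assumed Numeric Expression 5 matrix (ITU-R BT.601) and its inverse,
# taken here as Numeric Expression 6.
M_YCC = np.array([[ 0.29900,  0.58700,  0.11400],
                  [-0.16874, -0.33126,  0.50000],
                  [ 0.50000, -0.41869, -0.08131]])
M_RGB = np.linalg.inv(M_YCC)

def ycc_to_rgb(ycc):
    """(Y', Cb, Cr) on the last axis -> (R', G', B')."""
    return ycc @ M_RGB.T
```

Since Cb and Cr pass through unchanged from the Y/C separation, the round trip is exact up to floating point error when Y is not modified.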
- on the basis of the control of the control unit 118 , the signal processing unit 116 performs known compression processing or the like on the image signal from the Y/C synthesis unit 809 and transfers the processed signal to the output unit 117 .
- the output unit 117 records and saves the image signal output from the signal processing unit 116 in the recording medium such as a memory card.
- FIG. 18 is a block diagram of a configuration example of the frequency decomposition unit 804 .
- the frequency decomposition unit 804 is configured by including a signal extraction unit 900 , a low-pass filter unit 901 , a low frequency buffer 902 , and a difference filter unit 903 .
- the buffer 803 is connected to the signal extraction unit 900 .
- the signal extraction unit 900 is connected to the low-pass filter unit 901 and the difference filter unit 903 .
- the low-pass filter unit 901 is connected to the low frequency buffer 902 .
- the low frequency buffer 902 is connected to the difference filter unit 903 .
- the difference filter unit 903 is connected to the buffer 110 .
- the control unit 118 is bi-directionally connected to the signal extraction unit 900 , the low-pass filter unit 901 , and the difference filter unit 903 to control these units.
- the signal extraction unit 900 reads the luminance signals from the buffer 803 on the basis of the control of the control unit 118 to transfer the luminance signals to the low-pass filter unit 901 and the difference filter unit 903 .
- on the basis of the control of the control unit 118 , the low-pass filter unit 901 performs a known low-pass filter processing on the luminance signals from the signal extraction unit 900 to calculate the low frequency components of the luminance signals. It should be noted that according to the present embodiment, as the low-pass filter used by the low-pass filter unit 901 , for example, an average value filter having a pixel size of 7 × 7 is used. The low-pass filter unit 901 transfers the calculated low frequency components to the low frequency buffer 902 .
- the difference filter unit 903 reads the luminance signals from the signal extraction unit 900 and the low frequency components of the luminance signals from the low frequency buffer 902 , respectively, and takes a difference thereof to calculate the high frequency components of the luminance signals.
- the difference filter unit 903 transfers the calculated high frequency components and the read low frequency components to the buffer 110 .
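The decomposition performed by the frequency decomposition unit 804 can be sketched directly from the description: a 7×7 average value filter gives the low frequency component, and the difference from the original luminance gives the high frequency component (edge replication at the borders is an assumption; the specification does not state the border handling).

```python
import numpy as np

def decompose(luma):
    """Low-pass filter unit 901 + difference filter unit 903 (sketch)."""
    h, w = luma.shape
    pad = np.pad(luma, 3, mode="edge")  # assumed border handling
    low = np.zeros((h, w), dtype=np.float64)
    # 7x7 average value filter: sum of the 49 shifted views, divided by 49.
    for i in range(7):
        for j in range(7):
            low += pad[i:i + h, j:j + w]
    low /= 49.0
    high = luma - low  # difference filter: the residual high frequency component
    return low, high
```

By construction `low + high` reproduces the input exactly, so the pixel configurations of the two components have the same size and the frequency synthesis of unit 808 is a simple addition, as stated.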
- FIG. 19 is a block diagram of a configuration example of the conversion characteristic calculation unit 805 .
- the conversion characteristic calculation unit 805 has such a configuration that with respect to the conversion characteristic calculation unit 111 shown in FIG. 4 of the above-mentioned first embodiment, a hue calculation unit 1000 constituting region-of-interest setting means, a person determination unit 1001 constituting region-of-interest setting means, a weighting factor setting unit 1002 constituting weighting factor setting means, and a histogram correction unit 1003 constituting histogram correction means are added, and the division unit 300 and the buffer 301 are omitted.
- Other basic configuration is similar to that of the conversion characteristic calculation unit 111 shown in FIG. 4 . Therefore, the same components are allocated with the same names and reference numerals to appropriately omit the description thereof, and only a different part will be mainly described.
- the buffer 803 and the buffer 110 are connected to the correct range extraction unit 302 .
- the correct range extraction unit 302 is connected to the edge calculation unit 303 and the hue calculation unit 1000 .
- the hue calculation unit 1000 is connected via the person determination unit 1001 and the weighting factor setting unit 1002 to the histogram correction unit 1003 .
- the histogram creation unit 304 is connected to the histogram correction unit 1003 .
- the histogram correction unit 1003 is connected via the gradation conversion curve calculation unit 305 and the buffer 306 to the gradation processing unit 807 .
- the control unit 118 is also bi-directionally connected to the hue calculation unit 1000 , the person determination unit 1001 , the weighting factor setting unit 1002 , and the histogram correction unit 1003 to control these units.
- on the basis of the control of the control unit 118 , the correct range extraction unit 302 reads the luminance signals from the buffer 110 , compares them with the pre-set threshold related to the dark part (by way of example, 128 in the case of 12-bit gradation) and the pre-set threshold related to the light part (for example, 3968 in the case of 12-bit gradation), respectively, and transfers the luminance signals which are equal to or larger than the threshold of the dark part and also equal to or smaller than the threshold of the light part to the edge calculation unit 303 as the correct exposure range.
- the correct range extraction unit 302 reads the color difference signals Cb and Cr at coordinates corresponding to the luminance signals in the correct exposure range from the buffer 803 to be transferred to the hue calculation unit 1000 .
- the edge calculation unit 303 and the histogram creation unit 304 create the histogram of edge regions from the luminance signals similarly to the above-mentioned first embodiment, and transfer the created histogram to the histogram correction unit 1003 .
- on the basis of the control of the control unit 118 , the hue calculation unit 1000 reads the color difference signals Cb and Cr from the correct range extraction unit 302 , compares them with pre-set thresholds to extract a skin color region, and transfers the result to the person determination unit 1001 .
- on the basis of the control of the control unit 118 , the person determination unit 1001 uses the information related to the skin color region from the hue calculation unit 1000 and the edge amount from the edge calculation unit 303 to extract a region determined to be a human face, and transfers the result to the weighting factor setting unit 1002 .
- the weighting factor setting unit 1002 multiplies the luminance information in the region determined to be the human face by a predetermined coefficient, thereby calculating weighting factors for the correction at the respective luminance levels. It should be noted that the weighting factors at luminance levels which do not exist in the region determined to be the human face are 0. The weighting factor setting unit 1002 transfers the calculated weighting factors to the histogram correction unit 1003 .
- the histogram correction unit 1003 reads the histogram from the histogram creation unit 304 and also reads the weighting factors from the weighting factor setting unit 1002 on the basis of the control of the control unit 118 . Then, the histogram correction unit 1003 adds the weighting factors to the respective luminance levels of the histogram to perform the correction.
- the corrected histogram is transferred to the gradation conversion curve calculation unit 305 , and similarly to the above-mentioned first embodiment, the gradation conversion curve is calculated.
- the calculated gradation conversion curve is transferred to the buffer 306 , and when necessary, transferred to the gradation processing unit 807 . It should be noted that according to the present embodiment, the space-invariant processing is supposed, and the calculated gradation conversion curve is of one type.
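The weighting factor setting and histogram correction can be sketched as below. The exact formulation of the "predetermined coefficient" is not given, so the sketch assumes the weighting factor at each luminance level is the count of face-region pixels at that level times the coefficient, with 0 at levels absent from the face region, added to the edge-region histogram.

```python
import numpy as np

def corrected_histogram(hist, face_vals, coeff=0.5):
    """hist: edge-region luminance histogram (one bin per level);
    face_vals: integer luminance levels inside the region judged to be a face."""
    weights = np.zeros_like(hist, dtype=np.float64)
    levels, counts = np.unique(face_vals, return_counts=True)
    weights[levels] = counts * coeff  # 0 at levels not present in the face region
    # Histogram correction unit 1003: add the weighting factors per level.
    return hist + weights
```

Because the corrected histogram gains mass at face luminance levels, the gradation conversion curve derived from it allocates a wider output range to the face, which is the intended effect of the person determination.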
- FIG. 20 is a block diagram of a configuration example of the high frequency separation unit 806 .
- the high frequency separation unit 806 has such a configuration that with respect to the high frequency separation unit 112 shown in FIG. 5 of the above-mentioned first embodiment, a noise LUT 1100 constituting noise estimation means and table conversion means is added, and the parameter ROM 403 , the parameter selection unit 404 , and the interpolation unit 405 are omitted.
- Other basic configuration is similar to that of the high frequency separation unit 112 shown in FIG. 5 . Therefore, the same components are allocated with the same names and reference numerals to appropriately omit the description thereof, and only a different part will be mainly described.
- the low frequency component extraction unit 400 , the gain calculation unit 401 , and the standard value assigning unit 402 are connected to the noise LUT 1100 .
- the noise LUT 1100 is connected to the upper limit and lower limit setting unit 408 .
- the determination unit 409 is connected to the gradation processing unit 807 and the buffer 114 .
- the control unit 118 is also bi-directionally connected to the noise LUT 1100 to control the table.
- on the basis of the ISO sensitivity, the information related to the exposure conditions, and the white balance coefficient sent from the control unit 118 , the gain calculation unit 401 calculates the gain information in the amplification unit 103 , which is transferred to the noise LUT 1100 .
- the control unit 118 obtains temperature information of the color CCD 800 from the temperature sensor 120 and transfers the thus obtained temperature information to the noise LUT 1100 .
- the standard value assigning unit 402 transfers a standard value of the information that cannot be obtained to the noise LUT 1100 .
- the noise LUT 1100 is a lookup table in which the relation among the signal value level of the image signal, the gain of the image signal, the operation temperature of the image pickup device, and the noise amount is recorded.
- the look up table is designed, for example, by using the technology disclosed in Japanese Unexamined Patent Application Publication No. 2004-128985 described above.
- the noise LUT 1100 outputs the noise amount on the basis of the pixel value of the target pixel from the low frequency component extraction unit 400 , the gain information from the gain calculation unit 401 or the standard value assigning unit 402 , and the temperature information from the control unit 118 or the standard value assigning unit 402 .
- the output noise amount is transferred to the upper limit and lower limit setting unit 408 .
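The table lookup can be sketched as follows. The real table would be measured from the image pickup device (e.g., by the technology of the cited publication); the entries here follow a hypothetical model, noise = gain × (a·√level + b·temperature), purely so the sketch is runnable, and all names and constants are illustrative.

```python
import numpy as np

def build_noise_lut(levels=16, gains=(1.0, 2.0, 4.0), temps=(20, 40, 60)):
    """Table indexed by (signal level, gain index, temperature index)."""
    a, b = 0.1, 0.01  # hypothetical model constants, not from the specification
    lut = np.empty((levels, len(gains), len(temps)))
    for li in range(levels):
        for gi, g in enumerate(gains):
            for ti, t in enumerate(temps):
                lut[li, gi, ti] = g * (a * np.sqrt(li) + b * t)
    return lut

def lookup_noise(lut, level, gain_idx, temp_idx):
    # Direct table read: unlike the first embodiment, no interpolation
    # processing is needed, which is why the interpolation unit is omitted.
    return lut[level, gain_idx, temp_idx]
```

The looked-up noise amount then feeds the upper limit and lower limit setting unit 408, exactly as the interpolated reference-model output did in the first embodiment.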
- the high frequency component extraction unit 406 extracts the high frequency component corresponding to the low frequency component extracted by the low frequency component extraction unit 400 and the high frequency components located in the neighborhood of the high frequency component on the basis of the control of the control unit 118 .
- the frequency decomposition unit 804 uses the low-pass filter and the difference filter to extract the low frequency component and the high frequency component. Therefore, the pixel configurations of the low frequency component and the high frequency component are of the same size, and the high frequency component corresponding to the low frequency component is one pixel.
- the action of the high frequency separation unit 806 thereafter is similar to that of the high frequency separation unit 112 of the above-mentioned first embodiment.
- the high frequency component is separated into the valid component and the invalid component.
- the valid component is transferred to the gradation processing unit 807 , and the invalid component is transferred to the buffer 114 , respectively.
- FIG. 21 is a block diagram of a configuration example of the gradation processing unit 807 .
- the gradation processing unit 807 has such a configuration that with respect to the gradation processing unit 113 shown in FIG. 6 of the above-mentioned first embodiment, the distance calculation unit 501 , the gradation conversion equation setting unit 502 , and the buffer 503 are deleted.
- Other basic configuration is similar to that of the gradation processing unit 113 shown in FIG. 6 . Therefore, the same components are allocated with the same names and reference numerals to appropriately omit the description thereof, and only a different part will be mainly described.
- the conversion characteristic calculation unit 805 is connected to the gradation conversion unit 505 .
- the buffer 110 is connected via the low frequency component extraction unit 500 to the gradation conversion unit 505 .
- the high frequency separation unit 806 is connected via the high frequency component extraction unit 504 to the gradation conversion unit 505 .
- the control unit 118 is bi-directionally connected to the low frequency component extraction unit 500 , the high frequency component extraction unit 504 , and the gradation conversion unit 505 to control these units.
- the low frequency component extraction unit 500 sequentially extracts the low frequency components from the buffer 110 for each pixel on the basis of the control of the control unit 118 .
- the low frequency component extraction unit 500 transfers the extracted low frequency components to the gradation conversion unit 505 .
- the high frequency component extraction unit 504 extracts the high frequency components corresponding to the low frequency components extracted by the low frequency component extraction unit 500 from the high frequency separation unit 806 on the basis of the control of the control unit 118 .
- the pixel configurations of the low frequency component and the high frequency component are of the same size, and the high frequency component corresponding to the low frequency component is one pixel. It should be noted that in a case where it is determined that the high frequency component corresponding to the low frequency component is the invalid component and the extracted high frequency component does not exist, the high frequency component extraction unit 504 transfers the error information to the control unit 118 .
- the gradation conversion unit 505 reads the low frequency components from the low frequency component extraction unit 500 on the basis of the control of the control unit 118 and reads the gradation conversion curve from the conversion characteristic calculation unit 805 to perform the gradation conversion on the low frequency components.
- the gradation conversion unit 505 transfers the low frequency component after the gradation conversion to the buffer 114 .
- the gradation conversion unit 505 reads the high frequency component of the valid component corresponding to the low frequency component from the high frequency component extraction unit 504 to perform the gradation conversion. Then, the gradation conversion unit 505 transfers the high frequency component after the gradation conversion to the buffer 114 . It should be noted that in a case where the high frequency component corresponding to the low frequency component does not exist, the gradation conversion unit 505 cancels the gradation conversion on the high frequency component on the basis of the control of the control unit 118 .
- the image processing system in which the image pickup unit is separately provided may be used.
- the configuration is not necessarily limited to the above.
- the color image signal from the color CCD 800 is recorded on the recording medium such as a memory card as raw data while being unprocessed, and the associated information such as image pickup conditions (for example, the temperature of the image pickup device, the exposure conditions, and the like, for each shooting operation from the control unit 118 ) is recorded in the recording medium as the header information.
- the processing can be performed by causing a computer to execute an image processing program, which is separate software, so that the computer reads the information from the recording medium.
- the transmission of various pieces of information from the image pickup unit to the computer is not necessarily performed via the recording medium and may be performed via a communication line or the like.
- FIG. 22 is a flow chart showing a main routine of an image processing program.
- the color image signal is read, and also the header information such as the temperature and the exposure conditions of the image pickup device is read (step S 1 ).
- the luminance signals and the color difference signals are calculated (step S 50 ).
- the frequency decomposition is performed on the luminance signals, and the high frequency component and the low frequency component are obtained (step S 2 ).
- the conversion characteristic is calculated (step S 51 ).
- the high frequency component is separated into the invalid component caused by the noise and the other valid component (step S 52 ).
- the gradation processing is performed on the low frequency component and the valid component in the high frequency component (step S 53 ).
- on the basis of the low frequency component on which the gradation processing has been performed, the valid component in the high frequency component on which the gradation processing has been performed, and the invalid component in the high frequency component, the luminance signals on which the gradation conversion has been performed are synthesized (step S 6 ).
- the luminance signals and the color difference signals are synthesized to obtain the color image signal on which the gradation conversion has been performed (step S 54 ).
- the signal processing such as a known compression processing is performed (step S 7 ).
- the color image signal after the processing is output (step S 8 ), and the processing is ended.
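The split into luminance and color difference signals in step S50 and the re-synthesis in step S54 can be sketched with a standard Y/Cb/Cr conversion. The patent does not fix the exact coefficients; the ITU-R BT.601 values below are an assumption for illustration only.

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Split an RGB color signal into a luminance signal Y and
    color difference signals Cb, Cr (BT.601 coefficients, assumed)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.169 * r - 0.331 * g + 0.500 * b
    cr = 0.500 * r - 0.419 * g - 0.081 * b
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    """Re-synthesize the color signal after the luminance signal
    has been gradation-converted (the counterpart of step S54)."""
    r = y + 1.402 * cr
    g = y - 0.344 * cb - 0.714 * cr
    b = y + 1.772 * cb
    return np.stack([r, g, b], axis=-1)
```

Because only the luminance channel carries the gradation processing, the color difference signals pass through unchanged and are recombined at the end, which is what makes the round trip above (approximately) lossless.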
- FIG. 23 is a flow chart showing the processing for the conversion characteristic calculation in the above-mentioned step S 51 .
- in FIG. 23 , processing steps substantially identical to those in the processing shown in FIG. 12 of the above-mentioned first embodiment are allocated the same step numbers.
- the luminance signals are compared with the pre-set threshold related to the dark part and the pre-set threshold related to the light part to extract the luminance signals which are equal to or larger than the threshold of the dark part and also equal to or smaller than the threshold of the light part as the correct exposure range (step S 11 ).
- the known calculation for the edge intensity is performed on the luminance signals in the correct exposure range by using the Laplacian filter or the like (step S 12 ).
- the histogram is created (step S 13 ).
- a particular hue region, for example, a skin color region, is extracted (step S 60 ).
- the region determined as the human face is extracted and set as a region-of-interest (step S 61 ).
- the luminance information in the region-of-interest is calculated and multiplied by a pre-set coefficient to calculate the weighting factors for the correction related to the respective luminance levels (step S 62 ).
- the weighting factors are added to the respective luminance levels of the histogram to perform the correction on the histogram (step S 63 ).
- the gradation conversion curve is calculated (step S 14 ).
- the gradation conversion curve calculated in the above-mentioned manner is output (step S 15 ), and the flow returns from this processing to the processing shown in FIG. 22 .
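Steps S11 to S15 amount to building a weighted histogram and deriving a gradation conversion curve from its cumulative distribution. The sketch below is a simplification under stated assumptions: plain histogram equalization stands in for the curve-generation rule of step S14, the region-of-interest weighting of steps S60 to S63 is modeled as a per-bin additive weight, and the edge intensity weighting of step S12 is omitted; none of these is the patent's concrete Numeric Expression.

```python
import numpy as np

def gradation_curve(luma, roi_mask, roi_gain=2.0, dark_th=16, light_th=240):
    """Derive a 256-entry gradation conversion curve.

    luma     : integer array of luminance values (0..255)
    roi_mask : boolean mask of the region-of-interest (e.g. a face)
    roi_gain : weighting factor applied to histogram bins occurring
               inside the region-of-interest (assumed form)
    """
    # step S11: keep only the correct exposure range
    valid = (luma >= dark_th) & (luma <= light_th)
    hist = np.bincount(luma[valid].ravel(), minlength=256).astype(float)
    # steps S60-S63: weight the histogram toward ROI luminance levels
    roi_hist = np.bincount(luma[valid & roi_mask].ravel(), minlength=256)
    hist += roi_gain * roi_hist
    # step S14: cumulative histogram -> monotone conversion curve
    cdf = np.cumsum(hist)
    cdf /= cdf[-1]
    return np.round(255 * cdf).astype(np.uint8)
```

Since the curve is a cumulative sum of non-negative weights, it is monotone by construction, which is what keeps the gradation conversion order-preserving.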
- FIG. 24 is a flow chart showing the processing for the high frequency separation in the above-mentioned step S 52 .
- the low frequency components are sequentially extracted for each pixel (step S 20 ).
- the information such as the temperature and the gain of the image pickup device is set on the basis of the read header information. In a case where a necessary parameter does not exist in the header information, a pre-set standard value is assigned to the relevant information (step S 21 ).
- the table related to the noise amount, in which a relation among the signal value level of the image signal, the gain of the image signal, the operation temperature of the image pickup device, and the noise amount is recorded, is read (step S 70 ).
- the noise amount is calculated (step S 71 ).
- the high frequency component corresponding to the low frequency component and the high frequency components located in the neighborhood of the high frequency component are extracted (step S 24 ).
- the average value is calculated (step S 25 ).
- the upper limit and the lower limit are set as shown in Numeric Expression 3 (step S 26 ).
- in a case where the high frequency component is in the range between the upper limit and the lower limit, it is determined that the high frequency component is the invalid component caused by the noise, and in a case where the high frequency component exceeds the upper limit or falls short of the lower limit, it is determined that the high frequency component is the valid component (step S 27 ).
- the valid component and the invalid component are output while being separated from each other (step S 28 ).
- it is determined whether or not the processing for all the low frequency components has been completed (step S 30 ). In a case where it is determined that the processing has not been completed, the flow is returned to the above-mentioned step S 20 to repeat the above-mentioned processing. On the other hand, in a case where it is determined that the processing has been completed, the flow is returned to the processing shown in FIG. 22 .
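The separation loop of FIG. 24 reduces to a range test per high frequency component: components lying inside a noise-derived band around the neighborhood average are judged invalid, the rest valid. In the sketch below the band of Numeric Expression 3 is assumed to be `AV ± N/2`; the expression itself is not reproduced in this excerpt, so that form is an illustrative assumption.

```python
def separate_high_frequency(highs, noise_amount):
    """Split one high frequency component into the valid or the
    invalid (noise) class (the counterpart of steps S24-S27).

    highs        : high frequency component values for one low
                   frequency pixel and its neighborhood; the first
                   entry is the component under test
    noise_amount : noise amount N estimated from the noise model
    """
    av = sum(highs) / len(highs)        # step S25: neighborhood average
    app_up = av + noise_amount / 2      # step S26 (assumed form of
    app_low = av - noise_amount / 2     #   Numeric Expression 3)
    p = highs[0]                        # component under test
    if p > app_up or p < app_low:
        return "valid", p               # step S27: outside the band
    return "invalid", p                 # inside the band -> noise

# e.g. a mostly flat neighborhood with one spike: the spike survives
kind, _ = separate_high_frequency([10.0, 1.0, 0.5, -1.0], noise_amount=4.0)
```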
- FIG. 25 is a flow chart showing the gradation processing in the above-mentioned step S 53 .
- in FIG. 25 , processing steps substantially identical to those in the processing shown in FIG. 14 of the above-mentioned first embodiment are allocated the same step numbers.
- the low frequency components are sequentially extracted for each pixel (step S 40 ).
- the gradation conversion curve is read (step S 42 ).
- the high frequency component regarded as the valid component corresponding to the low frequency component is extracted (step S 44 ).
- it is determined whether or not the high frequency component regarded as the valid component exists (step S 45 ).
- the gradation conversion is performed on the high frequency component regarded as the valid component (step S 46 ).
- when the processing in step S 46 is ended, or in a case where it is determined in the above-mentioned step S 45 that the high frequency component regarded as the valid component does not exist, the gradation conversion is performed on the low frequency components (step S 47 ).
- the low frequency component on which the gradation processing has been performed and the valid component in the high frequency component on which the gradation processing has been performed are output (step S 48 ).
- it is determined whether or not the processing for all the low frequency components has been completed (step S 49 ). In a case where it is determined that the processing has not been completed, the flow is returned to the above-mentioned step S 40 to repeat the above-mentioned processing. On the other hand, in a case where it is determined that the processing has been completed, the flow is returned to the processing shown in FIG. 22 .
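The loop of FIG. 25 can be sketched as follows. The gradation conversion itself is modeled as a lookup-table application, and invalid high frequency components are simply skipped; the identity curve at the end is an illustrative stand-in for the curve calculated adaptively in step S42.

```python
def gradation_processing(lows, highs, curve):
    """Per-pixel gradation processing loop of FIG. 25.

    lows  : list of low frequency pixel values (0..255)
    highs : corresponding valid high frequency component per pixel,
            or None where the component was judged invalid
    curve : 256-entry gradation conversion lookup table (step S42)
    """
    out_lows, out_highs = [], []
    for low, high in zip(lows, highs):      # step S40: per-pixel loop
        if high is not None:                # steps S44-S45: valid exists?
            out_highs.append(curve[high])   # step S46: convert valid component
        else:
            out_highs.append(None)          # conversion is cancelled
        out_lows.append(curve[low])         # step S47: convert low component
    return out_lows, out_highs              # steps S48-S49: output results

# illustrative identity curve (an assumption, not the adaptive curve)
identity = list(range(256))
```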
- the configuration of using the low-pass filter and the difference filter for the frequency decomposition and the frequency synthesis is adopted, but the configuration is not necessarily limited to the above.
- a configuration of using a Gaussian filter and the Laplacian filter for the frequency decomposition and the frequency synthesis may also be adopted.
- although the operation amount is increased, an advantage is provided in that the performance of the frequency decomposition is better.
- also in a case where the Gaussian filter and the Laplacian filter are used, similarly to the above-mentioned first embodiment, a configuration of performing the frequency decomposition and the frequency synthesis in multiple stages can be adopted.
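A multi-stage decomposition with a Gaussian filter and a Laplacian (difference) filter is the classic Gaussian/Laplacian pyramid: at each stage the difference between the signal and its smoothed version is stored as the high frequency component, and the smoothed version carries on to the next stage. A 1-D sketch follows; the 3-tap kernel weights and the absence of downsampling are illustrative assumptions.

```python
import numpy as np

def gaussian_smooth(x):
    """3-tap Gaussian smoothing with edge replication."""
    padded = np.pad(x, 1, mode="edge")
    return 0.25 * padded[:-2] + 0.5 * padded[1:-1] + 0.25 * padded[2:]

def decompose(x, stages):
    """Multi-stage frequency decomposition: per stage, the difference
    (Laplacian) is the high frequency component and the Gaussian-
    smoothed signal is passed on to the next stage."""
    highs = []
    for _ in range(stages):
        low = gaussian_smooth(x)
        highs.append(x - low)   # high frequency component of this stage
        x = low
    return highs, x             # x is the final low frequency component

def synthesize(highs, low):
    """Frequency synthesis: add the high frequency components back
    in reverse stage order."""
    for high in reversed(highs):
        low = low + high
    return low
```

Without subsampling, the synthesis recovers the original signal exactly, since each stage stores precisely what smoothing removed.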
- the configuration of using the Bayer-type primary color filter is adopted, but the configuration is not necessarily limited to the above.
- the single image pickup device using a color-difference line-sequential type complementary color filter shown in FIG. 17 , or a two- or three-image-pickup-device configuration, may also be applied.
- FIG. 17 is a diagram illustrating the configuration of the color-difference line-sequential type complementary color filter.
- the color-difference line-sequential type complementary color filter has a basic unit of 2 × 2 pixels. Cyan (Cy) and yellow (Ye) are arranged on the same line of the 2 × 2 pixels, and magenta (Mg) and green (G) are arranged on the other line of the 2 × 2 pixels. It should be noted that such a configuration is adopted that the positions of magenta (Mg) and green (G) are inverted for each line.
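The arrangement described above can be written out as a small pattern builder: the sketch below alternates Cy/Ye lines with Mg/G lines and swaps the positions of Mg and G each time an Mg/G line recurs. This is a reading of the text, not a reproduction of FIG. 17 itself.

```python
def complementary_filter(height, width):
    """Build a color-difference line-sequential complementary color
    filter pattern of the given size as a list of rows."""
    pattern = []
    mg_g_row = 0  # counts Mg/G lines so far, to alternate their order
    for y in range(height):
        row = []
        if y % 2 == 0:
            # Cy and Ye share one line of the 2x2 basic unit
            for x in range(width):
                row.append("Cy" if x % 2 == 0 else "Ye")
        else:
            # Mg and G share the other line; their positions invert
            # on each recurrence of this line
            pair = ("Mg", "G") if mg_g_row % 2 == 0 else ("G", "Mg")
            for x in range(width):
                row.append(pair[x % 2])
            mg_g_row += 1
        pattern.append(row)
    return pattern
```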
- according to the second embodiment described above, only the high frequency component, in which the influence of the noise visually appears prominently, of the color signal is separated into the invalid component and the valid component.
- the gradation processing is performed on the valid component while the gradation processing is not performed on the invalid component, so that an increase in noise accompanying the gradation processing is suppressed.
- it is possible to generate the high quality color image signal.
- since the low frequency component is excluded from the target of the processing of separating into the valid component and the invalid component, the possibility of generating an adverse effect accompanying the processing is decreased, and it is possible to improve the stability.
- since the image signal is synthesized by also using the invalid component, it is possible to obtain the color image signal with little sense of visual discomfort, and the stability and reliability of the processing can be improved.
- the image processing system in which the processing can be performed at a high speed can be configured at a low cost.
- the gradation conversion curve is obtained adaptively from the low frequency components of the luminance signals, it is possible to perform the high accuracy gradation conversion on various types of the color image signals.
- the gradation conversion curve is calculated on the basis of the low frequency component, it is possible to calculate the appropriate gradation conversion curve with little influence from the noise.
- the gradation processing can be performed while weighting the region-of-interest such as a human being, it is possible to obtain the high quality image signals which are subjectively preferable.
- the gradation conversion with the identical conversion characteristic is performed on the low frequency component and the valid component in the high frequency component located at the same position, it is possible to obtain the image signal providing the sense of integrity with little sense of visual discomfort.
- FIGS. 26 to 30 illustrate a third embodiment of the present invention.
- FIG. 26 is a block diagram of a configuration of an image processing system.
- the image processing system has such a configuration that with respect to the above-mentioned image processing system illustrated in FIG. 1 according to the first embodiment, an edge emphasis unit 1202 constituting edge emphasis means is added, and the frequency decomposition unit 109 , the high frequency separation unit 112 , and the frequency synthesis unit 115 are respectively replaced by a frequency decomposition unit 1200 constituting separation means and frequency decomposition means, a high frequency separation unit 1201 constituting separation means and high frequency separation means, and a frequency synthesis unit 1203 constituting synthesis means and frequency synthesis means.
- The rest of the basic configuration is similar to that of the above-mentioned first embodiment. Therefore, the same components are allocated the same names and reference numerals to appropriately omit the description thereof, and only the different parts will be mainly described.
- the buffer 105 is connected to the exposure control unit 106 , the focus control unit 107 , the conversion characteristic calculation unit 111 , and the frequency decomposition unit 1200 .
- the frequency decomposition unit 1200 is connected to the buffer 110 .
- the buffer 110 is connected to the conversion characteristic calculation unit 111 , the high frequency separation unit 1201 , and the gradation processing unit 113 .
- the high frequency separation unit 1201 is connected to the edge emphasis unit 1202 and the buffer 114 .
- the edge emphasis unit 1202 is connected to the gradation processing unit 113 .
- the buffer 114 is connected via the frequency synthesis unit 1203 to the signal processing unit 116 .
- the control unit 118 is also bi-directionally connected to the frequency decomposition unit 1200 , the high frequency separation unit 1201 , the edge emphasis unit 1202 , and the frequency synthesis unit 1203 to control these units.
- the action of the image processing system illustrated in FIG. 26 is basically similar to that of the first embodiment, and therefore only a different part will be mainly described along the flow of the image signal.
- the image signal in the buffer 105 is transferred to the frequency decomposition unit 1200 .
- the frequency decomposition unit 1200 performs a predetermined frequency decomposition on the transferred image signal to obtain a high frequency component and a low frequency component on the basis of the control of the control unit 118 . Then, the frequency decomposition unit 1200 sequentially transfers the thus obtained high frequency component and the low frequency components to the buffer 110 .
- for the frequency decomposition, for example, it is supposed to use a known discrete cosine transform of a 64 × 64 pixel unit.
- FIGS. 27A and 27B are explanatory diagrams for describing the discrete cosine transform;
- FIG. 27A illustrates the image signal in the real space and
- FIG. 27B illustrates the signal in the frequency space after the discrete cosine transform, respectively.
- the upper left is set as the origin, that is, as the zero-th order component, and the high frequency components at the first order or above are arranged on concentric circles centered on the zero-th order component.
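The split used below — frequency components up to a predetermined n-th order as the low frequency component, the rest as the high frequency component — can be sketched by masking a block of DCT coefficients by their order. Following the concentric-circle arrangement of FIG. 27B, the order of coefficient (u, v) is taken as its radial distance from the zero-th order component at the upper left; the DCT itself is omitted here, and the sketch operates directly on a coefficient block.

```python
import numpy as np

def split_by_order(coeffs, n):
    """Split a square block of DCT coefficients into low and high
    frequency components by order.

    The zero-th order component sits at (0, 0); the order of
    coefficient (u, v) is its radial distance sqrt(u^2 + v^2).
    Components of order <= n form the low frequency component and
    the rest form the high frequency component.
    """
    size = coeffs.shape[0]
    u, v = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    order = np.sqrt(u ** 2 + v ** 2)
    low = np.where(order <= n, coeffs, 0.0)
    high = np.where(order > n, coeffs, 0.0)
    return low, high

# a 4x4 stand-in for one DCT block (the patent uses 64x64 units)
block = np.arange(1.0, 17.0).reshape(4, 4)
low, high = split_by_order(block, n=1)
```

Since the two masks are complements, the low and high components always sum back to the original block, which is what the inverse DCT relies on at the synthesis stage.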
- the conversion characteristic calculation unit 111 reads the image signal from the buffer 105 for each 64 × 64 pixel unit used in the frequency decomposition unit 1200 on the basis of the control of the control unit 118 . After that, the conversion characteristic calculation unit 111 calculates the gradation characteristic used for the gradation conversion processing similarly to the above-mentioned first embodiment. That is, according to the present embodiment, for the gradation conversion processing, it is supposed to employ the space-variant processing using a plurality of gradation characteristics different for each region of the 64 × 64 pixel unit. Then, the conversion characteristic calculation unit 111 transfers the calculated gradation characteristic to the gradation processing unit 113 .
- the high frequency separation unit 1201 reads the high frequency components from the buffer 110 and performs the noise reducing processing on the high frequency components on the basis of the control of the control unit 118 . After that, the high frequency component is separated into the invalid component caused by the noise and the other valid component. Then, the high frequency separation unit 1201 transfers the thus separated valid components to the edge emphasis unit 1202 and the above-mentioned invalid components to the buffer 114 , respectively.
- the edge emphasis unit 1202 multiplies the valid component transferred by the high frequency separation unit 1201 by a pre-set coefficient to perform the edge emphasis processing, and transfers the processing result to the gradation processing unit 113 .
- the gradation processing unit 113 reads the low frequency components from the buffer 110 , the valid components in the high frequency components from the edge emphasis unit 1202 , and the gradation characteristic from the conversion characteristic calculation unit 111 , respectively, on the basis of the control of the control unit 118 . Then, on the basis of the above-mentioned gradation characteristic, the gradation processing unit 113 performs the gradation processing on the low frequency component and the valid components in the high frequency components. The gradation processing unit 113 transfers the low frequency component on which the gradation processing has been performed and the valid components in the high frequency components to the buffer 114 .
- the frequency synthesis unit 1203 reads the low frequency component on which the gradation processing has been performed, the valid component in the high frequency component on which the gradation processing has been performed, and the invalid component in the high frequency component from the buffer 114 on the basis of the control of the control unit 118 , and synthesizes the image signal on which the gradation processing has been performed on the basis of these components. It should be noted that according to the present embodiment, for the frequency synthesis, it is supposed to use a known inverse DCT (Discrete Cosine Transform). Then, the frequency synthesis unit 1203 transfers the synthesized image signal to the signal processing unit 116 .
- the signal processing unit 116 performs a known compression processing or the like on the image signal from the frequency synthesis unit 1203 and transfers the signal after the processing to the output unit 117 on the basis of the control of the control unit 118 .
- the output unit 117 records and saves the image signal output from the signal processing unit 116 in the recording medium such as a memory card.
- FIG. 28 is a block diagram of a configuration example of the high frequency separation unit 1201 .
- the high frequency separation unit 1201 has such a configuration that with respect to the high frequency separation unit 112 shown in FIG. 5 of the above-mentioned first embodiment, a first smoothing unit 1300 constituting noise reducing means and first smoothing means and a second smoothing unit 1301 constituting noise reducing means and second smoothing means are added.
- The rest of the basic configuration is similar to that of the high frequency separation unit 112 shown in FIG. 5 . Therefore, the same components are allocated the same names and reference numerals to appropriately omit the description thereof, and only the different parts will be mainly described.
- the determination unit 409 is connected to the first smoothing unit 1300 and the second smoothing unit 1301 .
- the first smoothing unit 1300 is connected to the edge emphasis unit 1202 .
- the second smoothing unit 1301 is connected to the buffer 114 .
- the control unit 118 is bi-directionally connected to the first smoothing unit 1300 and the second smoothing unit 1301 to control these units.
- the low frequency component extraction unit 400 sequentially extracts the low frequency components from the buffer 110 on the basis of the control of the control unit 118 . It should be noted that according to the present embodiment, as described above, it is supposed to use the discrete cosine transform of the 64 × 64 pixels. Then, the low frequency component extraction unit 400 extracts frequency components equal to or smaller than a predetermined n-th order among the frequency components at the respective orders shown in FIG. 27B as the low frequency components from the respective regions of the 64 × 64 pixels.
- the noise amount is calculated via the parameter selection unit 404 and the interpolation unit 405 similarly to the above-mentioned first embodiment. Then, the interpolation unit 405 transfers the calculated noise amount to the upper limit and lower limit setting unit 408 .
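The parameter selection and interpolation described above can be sketched as a piecewise-linear reference noise model: the signal value level selects the surrounding pair of model points (parameter selection unit 404) and the noise amount is interpolated between them (interpolation unit 405). The gain and temperature dependence is reduced here to simple multiplicative factors, which is an assumed form, not the patent's noise model.

```python
import bisect

def noise_amount(level, model, gain_factor=1.0, temp_factor=1.0):
    """Estimate the noise amount N for a signal value level.

    model : sorted list of (signal_level, noise) reference points
            of the reference noise model
    """
    levels = [p[0] for p in model]
    i = bisect.bisect_right(levels, level)
    i = min(max(i, 1), len(model) - 1)   # clamp to a valid segment
    (l0, n0), (l1, n1) = model[i - 1], model[i]
    t = (level - l0) / float(l1 - l0)    # interpolation between points
    base = n0 + t * (n1 - n0)
    return base * gain_factor * temp_factor   # assumed scaling form
```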
- the high frequency component extraction unit 406 extracts, as the high frequency components, frequency components equal to or larger than the (n+1)-th order from the respective regions of the 64 × 64 pixels corresponding to the low frequency components extracted by the low frequency component extraction unit 400 , on the basis of the control of the control unit 118 .
- the average calculation unit 407 separates the high frequency components for each order to calculate the respective average values AV on the basis of the control of the control unit 118 and transfers the calculated average value AV to the upper limit and lower limit setting unit 408 .
- the upper limit and lower limit setting unit 408 sets an upper limit App_Up and a lower limit App_Low for distinguishing the valid component and the invalid component as represented by Numeric Expression 3 as follows for each order.
- the upper limit and lower limit setting unit 408 transfers the thus set upper limit App_Up and the lower limit App_Low to the determination unit 409 , transfers the average value AV to the second smoothing unit 1301 , and transfers the average value AV and the noise amount N to the first smoothing unit 1300 , respectively.
- the determination unit 409 reads the high frequency components from the high frequency component extraction unit 406 , and also reads the upper limit App_Up and the lower limit App_Low corresponding to the order of the high frequency components from the upper limit and lower limit setting unit 408 . Then, in a case where the high frequency component exceeds the upper limit App_Up or falls short of the lower limit App_Low, the determination unit 409 determines that the high frequency component is the valid component and transfers the high frequency components to the first smoothing unit 1300 .
- the determination unit 409 determines that the high frequency component is the invalid component caused by the noise and transfers the high frequency component to the second smoothing unit 1301 .
- the second smoothing unit 1301 performs a processing of substituting the high frequency component (herein, the high frequency component is set as P) with the average value AV from the upper limit and lower limit setting unit 408 as shown in Numeric Expression 7 below.
- the first smoothing unit 1300 uses the average value AV from the upper limit and lower limit setting unit 408 and the noise amount N to perform the correction on the high frequency component P.
- the correction includes two types of processing. First, in a case where the high frequency component exceeds the upper limit App_Up, the first smoothing unit 1300 performs a correction as shown in Numeric Expression 8 below.
- on the other hand, in a case where the high frequency component falls short of the lower limit App_Low, the first smoothing unit 1300 performs a correction as shown in Numeric Expression 9 below.
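Taken together, the second smoothing unit 1301 replaces an invalid component with the average value, while the first smoothing unit 1300 pulls a valid component back toward the permitted band. Numeric Expressions 7 to 9 are not reproduced in this excerpt, so the sketch below assumes plausible forms: P' = AV for the invalid case, and P' = P ∓ N/2 for a valid component above the upper limit or below the lower limit.

```python
def smooth_high_frequency(p, av, n, app_up, app_low, valid):
    """Noise-reducing smoothing of one high frequency component P.

    Invalid components are substituted with the neighborhood
    average AV (assumed form of Numeric Expression 7). Valid
    components are corrected toward the permitted band by half the
    noise amount N (assumed forms of Numeric Expressions 8 and 9).
    """
    if not valid:
        return av            # second smoothing unit 1301
    if p > app_up:
        return p - n / 2.0   # first smoothing unit 1300, above the band
    if p < app_low:
        return p + n / 2.0   # first smoothing unit 1300, below the band
    return p                 # already inside the band: no correction
```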
- the processing result obtained by the first smoothing unit 1300 is transferred to the edge emphasis unit 1202 , and the processing result obtained by the second smoothing unit 1301 is transferred to the buffer 114 , respectively.
- the high frequency component determined as the valid component is transferred via the edge emphasis unit 1202 to the gradation processing unit 113 , and the gradation processing is performed.
- the high frequency component determined as the invalid component is transferred to the buffer 114 without performing the gradation processing thereon.
- the image processing system in which the image pickup unit is separately provided may be used.
- the image signal from the CCD 112 is recorded in the recording medium such as a memory card as raw data while being unprocessed, and also the associated information such as the image pickup conditions (for example, the temperature of the image pickup device, the exposure conditions, and the like, for each shooting operation from the control unit 118 ) is recorded in the recording medium as the header information.
- the processing can be performed by causing a computer to execute an image processing program, which is separate software, so that the computer reads the information from the recording medium.
- the transmission of various pieces of information from the image pickup unit to the computer is not necessarily performed via the recording medium and may be performed via a communication line or the like.
- FIG. 29 is a flow chart showing a main routine of an image processing program.
- the image signal is read, and also the header information such as the temperature and the exposure conditions of the image pickup device is read (step S 1 ).
- by performing the frequency decomposition such as the discrete cosine transform, the high frequency component and the low frequency component are obtained (step S 2 ).
- the conversion characteristic is calculated (step S 3 ).
- the high frequency component is separated into the invalid component caused by the noise and the other valid component (step S 80 ).
- the gradation processing is performed on the low frequency component and the valid component in the high frequency component (step S 5 ).
- on the basis of the low frequency component on which the gradation processing has been performed, the valid component in the high frequency component on which the gradation processing has been performed, and the invalid component in the high frequency component, the image signal on which the gradation conversion has been performed is synthesized (step S 6 ).
- the signal processing such as a known compression processing is performed (step S 7 ).
- the image signal after the processing is output (step S 8 ), and the processing is ended.
- FIG. 30 is a flow chart showing the processing for the high frequency separation in the above-mentioned step S 80 .
- in FIG. 30 , processing steps substantially identical to those in the processing shown in FIG. 13 of the above-mentioned first embodiment are allocated the same step numbers.
- the low frequency components are sequentially extracted for each pixel (step S 20 ).
- the information such as the temperature and the gain of the image pickup device is set on the basis of the read header information. In a case where a necessary parameter does not exist in the header information, a pre-set standard value is assigned to the relevant information (step S 21 ).
- the parameter related to the reference noise model is read (step S 22 ).
- the noise amount related to the low frequency component is calculated through the interpolation processing (step S 23 ).
- the high frequency components corresponding to the low frequency components are sequentially extracted (step S 24 ).
- the average values of the high frequency components corresponding to the low frequency components are calculated for each order (step S 25 ).
- the upper limit and the lower limit are set as shown in Numeric Expression 3 (step S 26 ).
- in a case where the high frequency component is in the range between the upper limit and the lower limit, it is determined that the high frequency component is the invalid component caused by the noise, and in a case where the high frequency component exceeds the upper limit or falls short of the lower limit, it is determined that the high frequency component is the valid component (step S 90 ).
- in a case where it is determined in the above-mentioned step S 90 that the high frequency component is the valid component, the correction processing shown in Numeric Expression 8 or 9 is performed on the high frequency component (step S 91 ). On the other hand, in a case where it is determined that the high frequency component is the invalid component, the correction processing shown in Numeric Expression 7 is performed on the high frequency component (step S 92 ).
- when the processing in step S 91 or S 92 is ended, the valid component and the invalid component are output while being separated from each other (step S 93 ).
- it is determined whether or not the processing for all the high frequency components has been completed (step S 29 ). In a case where it is determined that the processing has not been completed, the flow is returned to the above-mentioned step S 24 to repeat the above-mentioned processing.
- on the other hand, in a case where it is determined in step S 29 that the processing for all the high frequency components has been completed, it is determined whether or not the processing for all the low frequency components has been completed (step S 30 ). In a case where it is determined that the processing has not been completed, the flow is returned to the above-mentioned step S 20 to repeat the above-mentioned processing. On the other hand, in a case where it is determined that the processing has been completed, the flow is returned to the processing shown in FIG. 29 .
- the configuration of using the discrete cosine transform for the frequency decomposition and the frequency synthesis is adopted, but the configuration is not necessarily limited to the above.
- a configuration of using the wavelet transform can be adopted, and similarly to the second embodiment described above, a configuration of using the low-pass filter and the difference filter in combination can also be adopted.
- the configuration of processing the monochrome image signal is adopted, but the configuration is not necessarily limited to the above.
- a configuration of calculating the luminance signals from the color image signal obtained from the color image pickup device for the processing can also be adopted.
- according to the third embodiment described above, only the high frequency component, in which the influence of the noise visually appears prominently, is separated into the invalid component and the valid component.
- the gradation processing is performed on the valid component while the gradation processing is not performed on the invalid component, so that an increase in noise accompanying the gradation processing is suppressed. Thus, it is possible to generate the high quality image signal.
- since the low frequency component is excluded from the target of the processing of separating into the valid component and the invalid component, the possibility of generating an adverse effect accompanying the processing is decreased, and it is possible to improve the stability.
- since the image signal is synthesized by also using the invalid component, it is possible to obtain the image signal with little sense of visual discomfort, and the stability and reliability of the processing can be improved.
- the discrete cosine transform is excellent at the separation of the frequency, and it is therefore possible to perform the high accuracy processing.
- the gradation conversion curve is adaptively and also independently calculated for each region from the low frequency component of the image signal, it is possible to perform the gradation conversion at the high accuracy on various image signals.
- Because the correction processing is performed on the valid component in the high frequency component and the smoothing processing is performed on the invalid component in the high frequency component, the generation of discontinuity accompanying the noise reducing processing is prevented, and it is possible to generate the high quality image signal.
- Because the edge emphasis processing is performed only on the valid component in the high frequency component and not on the invalid component in the high frequency component, it is possible to emphasize only the edge component without emphasizing the noise component. With this configuration, it is possible to generate the high quality image signal.
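The valid/invalid handling of the high frequency component summarized above can be sketched as follows. The simple amplitude threshold, the emphasis gain, and the attenuation factor are illustrative assumptions, not the patent's actual Numeric Expressions:

```python
def process_high_frequency(high, noise_amount, gain=1.5, attenuation=0.5):
    """Separate high-frequency coefficients into a valid (edge) component and
    an invalid (noise) component, then emphasize only the valid component.
    Threshold rule, gain, and attenuation are illustrative assumptions."""
    out = []
    for h in high:
        if abs(h) <= noise_amount:
            # invalid component: smoothed (here simply attenuated), never emphasized
            out.append(h * attenuation)
        else:
            # valid component: edge emphasis is applied
            out.append(h * gain)
    return out

# small coefficients are treated as noise; large ones as edges
result = process_high_frequency([-0.2, 3.0, 0.1, -4.0], noise_amount=0.5)
```

Because the noise component never passes through the emphasis branch, the edges are strengthened without amplifying the noise, which is the effect the passage above describes.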
- FIGS. 31 to 36 illustrate a fourth embodiment of the present invention
- FIG. 31 is a block diagram of a configuration of an image processing system.
- the same configuration as that of the above-mentioned first to third embodiments is allocated with the same reference numerals to appropriately omit the description thereof, and only a different part will be mainly described.
- the image processing system has such a configuration that with respect to the above-mentioned image processing system illustrated in FIG. 1 according to the first embodiment, a noise reducing unit 1400 constituting separation means and noise reducing means, a difference unit 1401 constituting separation means and difference means, and a signal synthesis unit 1403 constituting synthesis means and signal synthesis means are added, the gradation processing unit 113 is replaced by a gradation processing unit 1402 constituting conversion means and gradation processing means, and the frequency decomposition unit 109 , the high frequency separation unit 112 , and the frequency synthesis unit 115 are omitted.
- The other basic configuration is similar to that of the above-mentioned first embodiment. Therefore, the same components are allocated with the same names and reference numerals, their description is appropriately omitted, and only the different parts will be mainly described.
- the buffer 105 is connected to the exposure control unit 106 , the focus control unit 107 , the noise reducing unit 1400 , and the difference unit 1401 .
- the noise reducing unit 1400 is connected to the buffer 110 .
- the buffer 110 is connected to the conversion characteristic calculation unit 111 , the difference unit 1401 , and the gradation processing unit 1402 .
- the conversion characteristic calculation unit 111 is connected to the gradation processing unit 1402 .
- the difference unit 1401 is connected to the buffer 114 .
- the gradation processing unit 1402 is connected to the buffer 114 .
- the buffer 114 is connected via the signal synthesis unit 1403 to the signal processing unit 116 .
- the control unit 118 is also bi-directionally connected to the noise reducing unit 1400 , the difference unit 1401 , the gradation processing unit 1402 , and the signal synthesis unit 1403 to control these units.
- the image signal in the buffer 105 is transferred to the noise reducing unit 1400 .
- the noise reducing unit 1400 performs the noise reducing processing on the basis of the control of the control unit 118 and transfers the image signal after the noise reducing processing as the valid component to the buffer 110 .
- The conversion characteristic calculation unit 111 reads the valid component from the buffer 110 and, similarly to the above-mentioned first embodiment, calculates the gradation characteristic used for the gradation conversion processing. It should be noted that according to the present embodiment, for the gradation conversion processing, it is supposed, for example, to use a space-variant processing using a plurality of gradation characteristics different for each region of a 64 × 64 pixel unit. Then, the conversion characteristic calculation unit 111 transfers the calculated gradation characteristic to the gradation processing unit 1402 .
- The difference unit 1401 reads the image signal before the noise reducing processing from the buffer 105 and the image signal after the noise reducing processing (the valid component) from the buffer 110 , and performs a processing of taking the difference between them.
- the difference unit 1401 transfers a signal obtained as the result of taking the difference as the invalid component to the buffer 114 .
- the gradation processing unit 1402 reads the valid component from the buffer 110 and the gradation characteristic from the conversion characteristic calculation unit 111 , respectively, on the basis of the control of the control unit 118 . Then, on the basis of the above-mentioned gradation characteristic, the gradation processing unit 1402 performs the gradation processing on the above-mentioned valid component. The gradation processing unit 1402 transfers the valid component on which the gradation processing has been performed to the buffer 114 .
- the signal synthesis unit 1403 reads the valid component on which the gradation processing has been performed and the invalid component from the buffer 114 on the basis of the control of the control unit 118 and adds these components, so that the image signal on which the gradation conversion has been performed is synthesized.
- the signal synthesis unit 1403 transfers the image signal thus synthesized to the signal processing unit 116 .
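The overall data flow of FIG. 31 can be summarized in a short sketch. `denoise` and `gradation` are hypothetical stand-ins for the noise reducing unit 1400 and the gradation processing unit 1402, and the signal values are illustrative toy data:

```python
def gradation_pipeline(image, denoise, gradation):
    """Sketch of the fourth-embodiment flow: noise reduction yields the valid
    component, the difference yields the invalid component, gradation is
    applied to the valid component only, and the two are re-synthesized."""
    valid = [denoise(p) for p in image]                 # noise reducing unit 1400
    invalid = [p - v for p, v in zip(image, valid)]     # difference unit 1401
    converted = [gradation(v) for v in valid]           # gradation processing unit 1402
    return [c + i for c, i in zip(converted, invalid)]  # signal synthesis unit 1403

# identity denoiser and a square-root-style conversion, for illustration only
out = gradation_pipeline([0.0, 0.25, 1.0],
                         denoise=lambda p: p,
                         gradation=lambda v: v ** 0.5)
```

The key design point is that the invalid (noise) component bypasses the gradation conversion entirely and is only added back at the end, so the conversion cannot amplify it.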
- the signal processing unit 116 performs a known compression processing or the like on the image signal from the signal synthesis unit 1403 and transfers the signal after the processing to the output unit 117 on the basis of the control of the control unit 118 .
- the output unit 117 records and saves the image signal output from the signal processing unit 116 in the recording medium such as a memory card.
- FIG. 32 is a block diagram of a configuration example of the noise reducing unit 1400 .
- the noise reducing unit 1400 is configured by including an image signal extraction unit 1500 , an average calculation unit 1501 constituting noise estimation means and average calculation means, a gain calculation unit 1502 constituting noise estimation means and collection means, a standard value assigning unit 1503 constituting noise estimation means and assigning means, a noise LUT 1504 constituting noise estimation means and table conversion means, an upper limit and lower limit setting unit 1505 constituting setting means and upper limit and lower limit setting means, a determination unit 1506 constituting determination means, a first smoothing unit 1507 constituting first smoothing means, and a second smoothing unit 1508 constituting second smoothing means.
- the buffer 105 is connected to the image signal extraction unit 1500 .
- the image signal extraction unit 1500 is connected to the average calculation unit 1501 and the determination unit 1506 .
- the average calculation unit 1501 , the gain calculation unit 1502 , and the standard value assigning unit 1503 are connected to the noise LUT 1504 .
- the noise LUT 1504 is connected to the upper limit and lower limit setting unit 1505 .
- the upper limit and lower limit setting unit 1505 is connected to the determination unit 1506 , the first smoothing unit 1507 , and the second smoothing unit 1508 .
- the determination unit 1506 is connected to the first smoothing unit 1507 and the second smoothing unit 1508 .
- the first smoothing unit 1507 and the second smoothing unit 1508 are connected to the buffer 110 .
- the control unit 118 is bi-directionally connected to the image signal extraction unit 1500 , the average calculation unit 1501 , the gain calculation unit 1502 , the standard value assigning unit 1503 , the noise LUT 1504 , the upper limit and lower limit setting unit 1505 , the determination unit 1506 , the first smoothing unit 1507 , and the second smoothing unit 1508 to control these units.
- the image signal extraction unit 1500 sequentially extracts the target pixel on which the noise reducing processing should be performed and neighboring pixels of, for example, 3 ⁇ 3 pixels including the target pixel from the buffer 105 on the basis of the control of the control unit 118 .
- the image signal extraction unit 1500 transfers the target pixel and the neighboring pixels to the average calculation unit 1501 , and the target pixel to the determination unit 1506 , respectively.
- the average calculation unit 1501 reads the target pixel and the neighboring pixels from the image signal extraction unit 1500 and calculates the average value AV thereof on the basis of the control of the control unit 118 .
- the average calculation unit 1501 transfers the calculated average value AV to the noise LUT 1504 .
- The gain calculation unit 1502 calculates the gain information in the amplification unit 103 on the basis of the information related to the ISO sensitivity and the exposure condition transferred from the control unit 118 , and transfers the gain information to the noise LUT 1504 .
- The control unit 118 obtains temperature information of the CCD 102 from the temperature sensor 120 and transfers the thus obtained temperature information to the noise LUT 1504 .
- the standard value assigning unit 1503 transfers a standard value of the information that cannot be obtained to the noise LUT 1504 .
- the noise LUT 1504 is a look up table where a relation among the signal value level of the image signal, the gain of the image signal, the operation temperature of the image pickup device, and the noise amount is recorded.
- the look up table is designed, for example, by using the technology disclosed in Japanese Unexamined Patent Application Publication No. 2004-128985.
- the noise LUT 1504 outputs the noise amount N on the basis of the average value AV related to the target pixel from the average calculation unit 1501 , the gain information from the gain calculation unit 1502 or the standard value assigning unit 1503 , and the temperature information from the control unit 118 or the standard value assigning unit 1503 .
- the noise amount N and the average value AV from the average calculation unit 1501 are transferred from the noise LUT 1504 to the upper limit and lower limit setting unit 1505 .
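A noise look-up table of this kind can be pictured as a small indexed table. The nearest-bin lookup and every numeric value below are assumptions made purely for illustration; the actual table design follows Japanese Unexamined Patent Application Publication No. 2004-128985:

```python
import bisect

def make_noise_lut(levels, gains, temps, noise_table):
    """Hypothetical noise LUT: noise_table[g][t][i] holds the noise amount for
    gain index g, temperature index t, and signal-level bin i. Nearest-bin
    lookup is an assumed simplification of the patent's table conversion."""
    def lookup(level, gain, temp):
        g = min(range(len(gains)), key=lambda i: abs(gains[i] - gain))
        t = min(range(len(temps)), key=lambda i: abs(temps[i] - temp))
        i = min(bisect.bisect_left(levels, level), len(levels) - 1)
        return noise_table[g][t][i]
    return lookup

lut = make_noise_lut(
    levels=[64, 128, 192, 256],
    gains=[1.0, 2.0],
    temps=[20.0, 40.0],
    noise_table=[[[1.0, 1.5, 2.0, 2.5], [1.2, 1.8, 2.4, 3.0]],
                 [[2.0, 3.0, 4.0, 5.0], [2.4, 3.6, 4.8, 6.0]]],
)
n = lut(level=100, gain=1.0, temp=20.0)
```

The table captures the qualitative behavior described in the text: the estimated noise amount grows with the signal level, the gain, and the operating temperature.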
- the upper limit and lower limit setting unit 1505 uses the average value AV and the noise amount N from the noise LUT 1504 to set the upper limit App_Up and the lower limit App_Low for identifying whether the target pixel belongs to the noise or not as shown in Numeric Expression 3.
- the upper limit and lower limit setting unit 1505 transfers the thus set upper limit App_Up and the lower limit App_Low to the determination unit 1506 , transfers the average value AV to the second smoothing unit 1508 , and transfers the average value AV and the noise amount N to the first smoothing unit 1507 , respectively.
- the determination unit 1506 reads the target pixel from the image signal extraction unit 1500 and the upper limit App_Up and the lower limit App_Low from the upper limit and lower limit setting unit 1505 , respectively, on the basis of the control of the control unit 118 . Then, in a case where the target pixel exceeds the upper limit App_Up or falls short of the lower limit App_Low, the determination unit 1506 determines that the target pixel does not belong to the noise and transfers the target pixel to the first smoothing unit 1507 .
- On the other hand, in a case where the target pixel is in the range between the upper limit App_Up and the lower limit App_Low, the determination unit 1506 determines that the target pixel belongs to the noise and transfers the target pixel to the second smoothing unit 1508 .
- the second smoothing unit 1508 performs the processing of substituting the target pixel (herein, the target pixel is set as P) with the average value AV from the upper limit and lower limit setting unit 1505 as shown in Numeric Expression 7.
- the first smoothing unit 1507 uses the average value AV and the noise amount N from the upper limit and lower limit setting unit 1505 to perform the correction on the target pixel P.
- The correction has two types of processing. In a case where the target pixel P exceeds the upper limit App_Up, the first smoothing unit 1507 performs the correction shown in Numeric Expression 8. On the other hand, in a case where the target pixel P falls short of the lower limit App_Low, the first smoothing unit 1507 performs the correction shown in Numeric Expression 9.
- the processing result obtained by the first smoothing unit 1507 and the processing result obtained by the second smoothing unit 1508 are both transferred to the buffer 110 .
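The per-pixel operation of the noise reducing unit 1400 can be sketched as below. Numeric Expressions 3 and 7 to 9 are not reproduced in this excerpt, so the symmetric limits around the average and the N/2 corrections are assumed forms, and `noise_lut` is a hypothetical callable standing in for the noise LUT 1504:

```python
def reduce_noise_pixel(patch, noise_lut):
    """patch is a 3x3 neighborhood (list of lists) with the target pixel at
    its center; noise_lut maps the local average to a noise amount N."""
    p = patch[1][1]                            # target pixel
    flat = [v for row in patch for v in row]
    av = sum(flat) / len(flat)                 # average calculation unit 1501
    n = noise_lut(av)                          # noise LUT 1504
    app_up, app_low = av + n / 2, av - n / 2   # upper/lower limit setting unit 1505
    if p > app_up:
        return p - n / 2                       # first smoothing (assumed form of Exp. 8)
    if p < app_low:
        return p + n / 2                       # first smoothing (assumed form of Exp. 9)
    return av                                  # second smoothing: substitute average (Exp. 7)

# an isolated bright pixel is pulled toward, but not flattened onto, the average
patch = [[10, 10, 10], [10, 40, 10], [10, 10, 10]]
out = reduce_noise_pixel(patch, noise_lut=lambda av: 4.0)
```

Note the asymmetry the text describes: pixels judged to be noise are replaced by the neighborhood average, while pixels judged not to be noise receive only a small correction, preserving edges.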
- FIG. 33 is a block diagram of a configuration example of the gradation processing unit 1402 .
- The gradation processing unit 1402 has such a configuration that, with respect to the gradation processing unit 113 shown in FIG. 6 of the above-mentioned first embodiment, the low frequency component extraction unit 500 and the high frequency component extraction unit 504 are omitted, and an image signal extraction unit 1600 constituting extraction means is added.
- The other basic configuration is similar to that of the gradation processing unit 113 shown in FIG. 6 . Therefore, the same components are allocated with the same names and reference numerals, their description is appropriately omitted, and only the different parts will be mainly described.
- the buffer 110 is connected to the image signal extraction unit 1600 .
- the image signal extraction unit 1600 is connected to the distance calculation unit 501 and the gradation conversion unit 505 .
- the control unit 118 is also bi-directionally connected to the image signal extraction unit 1600 to control the unit.
- the image signal extraction unit 1600 sequentially extracts the image signals after the noise reducing processing as valid components from the buffer 110 for each pixel on the basis of the control of the control unit 118 .
- the image signal extraction unit 1600 transfers the extracted valid component to the distance calculation unit 501 and the gradation conversion unit 505 .
- The distance calculation unit 501 and the gradation conversion equation setting unit 502 set the gradation conversion equation with respect to the target pixel as shown in Numeric Expression 4. Then, the gradation conversion equation setting unit 502 transfers the set gradation conversion equation to the buffer 503 .
- the gradation conversion unit 505 reads the valid component from the image signal extraction unit 1600 and also reads the gradation conversion equation from the buffer 503 to perform the gradation conversion on the valid component.
- the gradation conversion unit 505 transfers the valid component after the gradation conversion to the buffer 114 .
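The space-variant gradation conversion performed here can be illustrated as an interpolation of the gradation curves of the four neighboring regions, weighted by the distances d1 to d4. The inverse-distance weighting below is an assumption standing in for Numeric Expression 4, which is not reproduced in this excerpt:

```python
def space_variant_gradation(pixel, curves, distances):
    """Apply the four neighboring regions' gradation curves to the target
    pixel and blend the results with weights that decrease with distance.
    Inverse-distance weights are an illustrative assumption."""
    weights = [1.0 / d for d in distances]
    total = sum(weights)
    return sum(w / total * curve(pixel) for w, curve in zip(weights, curves))

# two hypothetical region curves: identity and a simple square-law curve
curves = [lambda x: x, lambda x: x ** 2, lambda x: x, lambda x: x ** 2]
out = space_variant_gradation(0.5, curves, distances=[1.0, 1.0, 1.0, 1.0])
```

Blending per-region curves in this way is what maintains continuity between regions: a pixel near a region boundary receives nearly equal contributions from the curves on both sides.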
- the image processing system in which the image pickup unit is separately provided may be used.
- The image signal from the CCD 102 is recorded in the recording medium such as a memory card as raw data without being processed, and the associated information such as the image pickup conditions (for example, the temperature of the image pickup device, the exposure conditions, and the like, obtained for each shooting operation from the control unit 118 ) is also recorded in the recording medium as the header information.
- The processing can then be performed by having a computer execute an image processing program, which is separate software, after instructing the computer to read the information from the recording medium.
- the transmission of various pieces of information from the image pickup unit to the computer is not necessarily performed via the recording medium and may be performed via a communication line or the like.
- FIG. 34 is a flow chart showing a main routine of an image processing program.
- First, the image signal is read, and also the header information such as the temperature and the exposure conditions of the image pickup device is read (step S1).
- Next, the noise reducing processing is performed to calculate the image signal after the noise reducing processing as the valid component (step S100).
- Then, the conversion characteristic is calculated (step S3).
- Next, the invalid component is calculated (step S101).
- Then, the gradation processing is performed on the valid component (step S102).
- Next, the image signal on which the gradation conversion has been performed is synthesized (step S103).
- Then, the signal processing such as a known compression processing is performed (step S7).
- Finally, the image signal after the processing is output (step S8), and the processing is ended.
- FIG. 35 is a flow chart showing the processing for the noise reduction in the above-mentioned step S100.
- First, the target pixel on which the noise reducing processing should be performed and neighboring pixels of, for example, 3 × 3 pixels including the target pixel are sequentially extracted (step S110).
- Next, an average value of the target pixel and the neighboring pixels is calculated (step S111).
- Then, the information such as the temperature and the gain of the image pickup device is set on the basis of the read header information, and a pre-set standard value is assigned to any information that cannot be obtained (step S112).
- Next, the table related to the noise amount, in which the relation among the signal value level of the image signal, the gain of the image signal, the operation temperature of the image pickup device, and the noise amount is recorded, is read (step S113).
- On the basis of the table, the noise amount is calculated (step S114).
- Then, the upper limit and the lower limit are set as shown in Numeric Expression 3 (step S115).
- Next, it is determined whether or not the target pixel belongs to the noise through comparison with the upper limit and the lower limit (step S116).
- In a case where it is determined in step S116 that the target pixel is in the range between the upper limit and the lower limit, the target pixel is determined to belong to the noise, and the correction processing shown in Numeric Expression 7 is performed on the target pixel (step S118).
- The corrected target pixel is output as the pixel after the noise reducing processing (step S119).
- Then, the image signal after the noise reducing processing is set as the valid component, and it is determined whether or not the processing has been completed for all the valid components (step S120). In a case where it is determined that the processing has not been completed, the flow returns to the above-mentioned step S110 to repeat the above-mentioned processing. On the other hand, in a case where it is determined that the processing has been completed, the flow returns to the processing shown in FIG. 34 .
- FIG. 36 is a flow chart showing the gradation processing in the above-mentioned step S102.
- In FIG. 36 , processing steps that are basically identical to those of the processing shown in FIG. 14 of the above-mentioned first embodiment are allocated with the same step numbers.
- First, the image signals after the noise reducing processing are sequentially extracted as valid components for each pixel (step S130).
- Next, the distances between the target pixel of the valid component and the centers of the four neighboring regions are calculated (step S41).
- Then, the gradation conversion curves in the four neighboring regions are read (step S42).
- On the basis of these, the gradation conversion equation with respect to the target pixel is set (step S43).
- Next, by applying the gradation conversion equation shown in Numeric Expression 4 to the target pixel of the valid component, the gradation conversion is performed (step S47).
- Then, the target pixel on which the gradation processing has been performed is output (step S48).
- Thereafter, it is determined whether or not the processing has been completed for all the image signals after the noise reducing processing (step S131). In a case where it is determined that the processing has not been completed, the flow returns to the above-mentioned step S130 to repeat the above-mentioned processing. On the other hand, in a case where it is determined that the processing has been completed, the flow returns to the processing shown in FIG. 34 .
- the configuration of processing the monochrome image signal is adopted, but the configuration is not necessarily limited to the above.
- Because the gradation processing is performed only on the image signal after the noise reduction, an increase in noise accompanying the gradation processing is suppressed. Thus, it is possible to generate the high quality image signal.
- the conversion characteristic is calculated on the basis of the image signal after the noise reduction, the appropriate conversion characteristic with little influence from the noise can be calculated, and it is possible to improve the stability and reliability of the processing.
- the gradation conversion curve is adaptively calculated from the image signal after the noise reduction, it is possible to perform the high accuracy gradation conversion on various types of image signals.
- the present embodiment corresponds to the processing system in which the gradation conversion processing is combined with the noise reducing processing. Therefore, the affinity and compatibility with the existing system are high, and the present embodiment can be applied to a large number of image processing systems. Furthermore, the higher performance can be achieved as a whole, and the system scale can be reduced, which leads to the realization of the lower cost.
- the image signal after the noise reduction on which the gradation processing has been performed and the invalid component are synthesized with each other.
- the error generated in the noise reducing processing can be suppressed, and it is possible to perform the stable gradation processing. Also, it is possible to generate the high quality image signal with little sense of visual discomfort.
- the gradation conversion curve is adaptively obtained, it is possible to perform the high accuracy gradation conversion on various types of image signals.
- the degree of freedom is further improved, and also it is possible to obtain the high quality image signals for scenes with a large contrast.
Description
- This application is a continuation application of PCT/JP2007/067222 filed on Sep. 4, 2007 and claims benefit of Japanese Application No. 2006-247169 filed in Japan on Sep. 12, 2006, the entire contents of which are incorporated herein by this reference.
- 1. Field of the Invention
- The present invention relates to an image processing system arranged to perform a gradation conversion on an image signal and a recording medium recording an image processing program for performing the gradation conversion on the image signal.
- 2. Description of the Related Art
- As a gradation processing to be performed on an image signal, a space-invariant method of using a single gradation conversion curve for the image signal and a space-variant method of using a plurality of gradation conversion curves different for each local region are proposed.
- For example, Japanese Patent No. 3465226 discloses a technology for dividing the image signal into a plurality of regions on the basis of texture information, performing a gradation conversion processing by calculating a gradation conversion curve for each region on the basis of a histogram, and performing a weighting interpolation on the basis of the distances between the respective regions. With this configuration, it is possible to perform the space-variant gradation processing while maintaining the continuity between the regions, and it is possible to obtain the high quality image signals in which light-dark crush is prevented also for an image having a wide dynamic range.
- Also, Japanese Unexamined Patent Application Publication No. 8-56316 discloses a technology for separating the image signal into a high frequency component and a low frequency component, performing a contrast emphasis processing on the low frequency component, and synthesizing the low frequency component after the contrast emphasis processing with the high frequency component. By employing such a technology, an emphasis on the noise of the high frequency component is prevented, and it is possible to obtain the high quality image signals.
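That separation-then-emphasis idea can be sketched in one dimension. The 3-tap moving-average low-pass filter and the gain-about-the-mean contrast emphasis below are assumed stand-ins for the publication's actual filters:

```python
def split_and_emphasize(signal, k=2.0):
    """Split a 1-D signal into low and high frequency components with a
    3-tap moving average, emphasize contrast only in the low component,
    then add back the untouched high component, so that noise carried in
    the high frequency component is not emphasized."""
    low = []
    for i in range(len(signal)):
        lo, hi = max(0, i - 1), min(len(signal), i + 2)
        low.append(sum(signal[lo:hi]) / (hi - lo))
    high = [s - l for s, l in zip(signal, low)]
    mean = sum(low) / len(low)
    emphasized = [mean + k * (l - mean) for l in low]   # contrast emphasis (assumed form)
    return [e + h for e, h in zip(emphasized, high)]

out = split_and_emphasize([1.0, 2.0, 3.0, 4.0])
```

Because the high frequency component passes through unchanged, the overall contrast is stretched while the fine-scale detail (and any noise riding on it) keeps its original amplitude.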
- Furthermore, Japanese Unexamined Patent Application Publication No. 2004-128985 discloses a technology for estimating a noise amount for each block unit on the basis of a noise model and performing different noise reducing processings for each block unit. By employing such a technology, it is possible to perform a space-variant noise reducing processing, and it is possible to obtain the high quality image signals in which degradation of the edge component is little.
- According to an aspect of the present invention, there is provided an image processing system arranged to perform a gradation conversion on an image signal, the image processing system including: separation means adapted to separate the image signal into an invalid component caused by noise and other valid component; conversion means adapted to perform the gradation conversion on the valid component; and synthesis means adapted to synthesize an image signal on which the gradation conversion has been performed on the basis of the valid component on which the gradation conversion has been performed and the invalid component.
- Also, according to an aspect of the present invention, there is provided a recording medium recording an image processing program for instructing a computer to perform a gradation conversion on an image signal, the image processing program instructing the computer to execute: a separation step of separating the image signal into an invalid component caused by noise and other valid component; a conversion step of performing the gradation conversion on the valid component; and a synthesis step of synthesizing an image signal on which the gradation conversion has been performed on the basis of the valid component on which the gradation conversion has been performed and the invalid component.
- FIG. 1 is a block diagram of a configuration of an image processing system according to a first embodiment of the present invention;
- FIG. 2 is a block diagram of a configuration example of a frequency decomposition unit according to the first embodiment;
- FIG. 3A is an explanatory diagram for describing a wavelet transform, illustrating an image signal in a real space according to the first embodiment;
- FIG. 3B is an explanatory diagram for describing the wavelet transform, illustrating the signal after the first wavelet transform has been performed according to the first embodiment;
- FIG. 3C is an explanatory diagram for describing the wavelet transform, illustrating the signal after the second wavelet transform has been performed according to the first embodiment;
- FIG. 4 is a block diagram of a configuration example of a conversion characteristic calculation unit according to the first embodiment;
- FIG. 5 is a block diagram of a configuration example of a high frequency separation unit according to the first embodiment;
- FIG. 6 is a block diagram of a configuration example of a gradation processing unit according to the first embodiment;
- FIG. 7 is an explanatory diagram for describing a division into regions of a low frequency component in a synthesis operation for gradation conversion curves according to the first embodiment;
- FIG. 8 is an explanatory diagram for describing distances d1 to d4 between a target pixel and neighboring four regions in the synthesis operation for gradation conversion curves according to the first embodiment;
- FIG. 9 is a block diagram of a configuration example of a frequency synthesis unit according to the first embodiment;
- FIG. 10 is a diagram illustrating another configuration example of the image processing system according to the first embodiment;
- FIG. 11 is a flow chart showing a main routine of an image processing program according to the first embodiment;
- FIG. 12 is a flow chart showing a processing for a conversion characteristic calculation in step S3 of FIG. 11 according to the first embodiment;
- FIG. 13 is a flow chart showing a processing for a high frequency separation in step S4 of FIG. 11 according to the first embodiment;
- FIG. 14 is a flow chart showing a gradation processing in step S5 of FIG. 11 according to the first embodiment;
- FIG. 15 is a block diagram of a configuration of an image processing system according to a second embodiment of the present invention;
- FIG. 16 is a diagram illustrating a configuration of a Bayer-type primary color filter according to the second embodiment;
- FIG. 17 is a diagram illustrating a configuration of a color-difference line-sequential type complementary color filter according to the second embodiment;
- FIG. 18 is a block diagram of a configuration example of a frequency decomposition unit according to the second embodiment;
- FIG. 19 is a block diagram of a configuration example of a conversion characteristic calculation unit according to the second embodiment;
- FIG. 20 is a block diagram of a configuration example of a high frequency separation unit according to the second embodiment;
- FIG. 21 is a block diagram of a configuration example of a gradation processing unit according to the second embodiment;
- FIG. 22 is a flow chart showing a main routine of an image processing program according to the second embodiment;
- FIG. 23 is a flow chart showing a processing for a conversion characteristic calculation in step S51 of FIG. 22 according to the second embodiment;
- FIG. 24 is a flow chart showing a processing for a high frequency separation in step S52 of FIG. 22 according to the second embodiment;
- FIG. 25 is a flow chart showing a gradation processing in step S53 of FIG. 22 according to the second embodiment;
- FIG. 26 is a block diagram of a configuration of an image processing system according to a third embodiment of the present invention;
- FIG. 27A is an explanatory diagram for describing a DCT (discrete cosine transform), illustrating an image signal in a real space according to the third embodiment;
- FIG. 27B is an explanatory diagram for describing the DCT (discrete cosine transform), illustrating a signal in a frequency space after the DCT transform according to the third embodiment;
- FIG. 28 is a block diagram of a configuration example of a high frequency separation unit according to the third embodiment;
- FIG. 29 is a flow chart showing a main routine of an image processing program according to the third embodiment;
- FIG. 30 is a flow chart showing a processing for a high frequency separation in step S80 of FIG. 29 according to the third embodiment;
- FIG. 31 is a block diagram of a configuration of an image processing system according to a fourth embodiment of the present invention;
- FIG. 32 is a block diagram of a configuration example of a noise reducing unit according to the fourth embodiment;
- FIG. 33 is a block diagram of a configuration example of a gradation processing unit according to the fourth embodiment;
- FIG. 34 is a flow chart showing a main routine of an image processing program according to the fourth embodiment;
- FIG. 35 is a flow chart showing a processing for a noise reduction in step S100 of FIG. 34 according to the fourth embodiment; and
- FIG. 36 is a flow chart showing a gradation processing in step S102 of FIG. 34 according to the fourth embodiment.
- Hereinafter, embodiments of the present invention will be described with reference to the drawings.
-
FIG. 1 toFIG. 14 illustrate a first embodiment of the present invention, andFIG. 1 is a block diagram of a configuration of an image processing system. - The image processing system illustrated in
FIG. 1 is an example constituted as an image pickup system including an image pickup unit. - That is, the image processing system includes a
lens system 100, anaperture 101, aCCD 102, anamplification unit 103, an A/D conversion unit (in the drawing, which is simply referred to as “A/D”) 104, abuffer 105, anexposure control unit 106, afocus control unit 107, anAF motor 108, afrequency decomposition unit 109 constituting separation means and frequency decomposition means, abuffer 110, a conversioncharacteristic calculation unit 111 constituting conversion means and conversion characteristic calculation means, a highfrequency separation unit 112 constituting separation means and high frequency separation means, agradation processing unit 113 constituting conversion means and gradation processing means, abuffer 114, afrequency synthesis unit 115 constituting synthesis means and frequency synthesis means, asignal processing unit 116, anoutput unit 117, acontrol unit 118 constituting control means and doubling as noise estimation means and collection means, an external I/F unit 119, and atemperature sensor 120. - An analog image signal captured and output via the
lens system 100, the aperture 101, and the CCD 102 is amplified by the amplification unit 103 and converted into a digital signal by the A/D conversion unit 104. - The image signal from the A/
D conversion unit 104 is transferred via the buffer 105 to the frequency decomposition unit 109. The buffer 105 is connected to the exposure control unit 106 and also to the focus control unit 107. - The
exposure control unit 106 is connected to the aperture 101, the CCD 102, and the amplification unit 103. Also, the focus control unit 107 is connected to the AF motor 108. - The signal from the
frequency decomposition unit 109 is connected to the buffer 110. The buffer 110 is connected to the conversion characteristic calculation unit 111, the high frequency separation unit 112, and the gradation processing unit 113. - The conversion
characteristic calculation unit 111 is connected to the gradation processing unit 113. The high frequency separation unit 112 is connected to the gradation processing unit 113 and the buffer 114. The gradation processing unit 113 is connected to the buffer 114. - The
buffer 114 is connected via the frequency synthesis unit 115 and the signal processing unit 116 to the output unit 117 such as a memory card. - The
control unit 118 is composed, for example, of a microcomputer. The control unit 118 is bi-directionally connected to the amplification unit 103, the A/D conversion unit 104, the exposure control unit 106, the focus control unit 107, the frequency decomposition unit 109, the conversion characteristic calculation unit 111, the high frequency separation unit 112, the gradation processing unit 113, the frequency synthesis unit 115, the signal processing unit 116, and the output unit 117, and is configured to control these units. - In addition, the external I/
F unit 119 is also bi-directionally connected to the control unit 118. The external I/F unit 119 is an interface provided with a power supply switch, a shutter button, a mode button for performing switching of various modes for each shooting operation, and the like. - Furthermore, the signal from the
temperature sensor 120 is also connected to the control unit 118. The temperature sensor 120 is arranged in a neighborhood of the CCD 102, and is configured to substantially measure the temperature of the CCD 102. - Next, the action of the image processing system illustrated in
FIG. 1 will be described along the flow of the image signal. - Before performing the shooting operation, the user sets image pickup conditions such as an ISO sensitivity via the external I/
F unit 119. - After that, when the user performs a half press of the shutter button which is composed of a two-stage switch of the external I/
F unit 119, the image processing system enters a pre-image-pickup state. - The
lens system 100 forms an optical image of a subject on an image pickup plane of the CCD 102. - The
aperture 101 regulates a passage range of the subject luminous flux focused by the lens system, thereby changing the luminance of the optical image formed on the image pickup plane of the CCD 102. - The
CCD 102 photoelectrically converts the formed optical image and outputs it as an analog image signal. It should be noted that according to the present embodiment, a monochrome single CCD is considered as the CCD 102. The image pickup device is not, however, limited to a CCD; a CMOS sensor or another image pickup device may of course be used. - The analog signal output in this manner from the
CCD 102 is amplified by the amplification unit 103 by a predetermined amount while taking into account the ISO sensitivity. Thereafter, the analog signal is converted into the digital signal by the A/D conversion unit 104 to be transferred to the buffer 105. It should be noted that according to the present embodiment, the gradation width of the digitalized image signal is set, for example, to 12 bits. - The image signal stored in the
buffer 105 is transferred to the exposure control unit 106 and the focus control unit 107. - While taking into account the set ISO sensitivity, the shutter speed at a limit of image stability, and the like, the
exposure control unit 106 controls an aperture value of the aperture 101, an electronic shutter speed of the CCD 102, a gain of the amplification unit 103, and the like on the basis of the image signal so as to achieve the correct exposure. - Also, the
focus control unit 107 obtains a focus signal by detecting the edge intensity and controls the AF motor 108 on the basis of the image signal so that the edge intensity becomes largest. - In this way, after the focus adjustment, the exposure adjustment, or the like is performed, when the user performs a full press of a shutter button which is composed of a two-stage switch of the external I/
F unit 119, the image processing system performs the real shooting operation. - After that, similarly to the pre shooting, the image signal is transferred to the
buffer 105. The real shooting operation is performed on the basis of the exposure conditions calculated by the exposure control unit 106 and the focus conditions calculated by the focus control unit 107, and these shooting conditions are transferred to the control unit 118. - The image signal in the
buffer 105 obtained by the real shooting operation is transferred to the frequency decomposition unit 109. - On the basis of the control of the
control unit 118, the frequency decomposition unit 109 performs a predetermined frequency decomposition on the transferred image signal to obtain a high frequency component and a low frequency component. Then, the frequency decomposition unit 109 sequentially transfers the thus obtained high frequency component and low frequency component to the buffer 110. It should be noted that according to the present embodiment, the wavelet transform is applied twice for the frequency decomposition. - The conversion
characteristic calculation unit 111 reads the low frequency component from the buffer 110 to calculate gradation characteristics used for the gradation conversion processing on the basis of the control of the control unit 118. It should be noted that according to the present embodiment, as the gradation conversion processing, a space-variant processing which uses a plurality of gradation characteristics different for each local region is supposed. Then, the conversion characteristic calculation unit 111 transfers the calculated gradation characteristics to the gradation processing unit 113. - The high
frequency separation unit 112 reads the high frequency component from the buffer 110 to separate the high frequency component into an invalid component caused by noise and a remaining valid component. Then, the high frequency separation unit 112 transfers the thus separated valid component to the gradation processing unit 113 and the above-mentioned invalid component to the buffer 114, respectively. - The
gradation processing unit 113 reads the low frequency component from the buffer 110, the valid component in the high frequency component from the high frequency separation unit 112, and the gradation characteristic from the conversion characteristic calculation unit 111, respectively, on the basis of the control of the control unit 118. Then, the gradation processing unit 113 performs the gradation processing on the low frequency component and the valid component in the high frequency component on the basis of the above-mentioned gradation characteristic. The gradation processing unit 113 transfers the low frequency component and the valid component in the high frequency component, on both of which the gradation processing has been performed, to the buffer 114. - The
frequency synthesis unit 115 reads the low frequency component on which the gradation processing has been performed, the valid component in the high frequency component on which the gradation processing has been performed, and the invalid component in the high frequency component from the buffer 114, and synthesizes the image signal on which the gradation processing has been performed on the basis of these components under the control of the control unit 118. It should be noted that according to the present embodiment, the inverse wavelet transform is used as the frequency synthesis. Then, the frequency synthesis unit 115 transfers the synthesized image signal to the signal processing unit 116. - The
signal processing unit 116 performs a known compression processing or the like on the image signal from the frequency synthesis unit 115 and transfers the signal after the processing to the output unit 117 on the basis of the control of the control unit 118. - The
output unit 117 records and saves the image signal output from the signal processing unit 116 in a recording medium such as a memory card. - Next,
FIG. 2 is a block diagram of a configuration example of the frequency decomposition unit 109. - The
frequency decomposition unit 109 includes a data reading unit 200, a buffer 201, a horizontal high-pass filter (in the drawing, which is simply referred to as “horizontal high-pass”, and the same applies in the following description) 202, a horizontal low-pass filter (in the drawing, which is simply referred to as “horizontal low-pass”, and the same applies in the following description) 203, a sub sampler 204, a sub sampler 205, a vertical high-pass filter (in the drawing, which is simply referred to as “vertical high-pass”, and the same applies in the following description) 206, a vertical low-pass filter (in the drawing, which is simply referred to as “vertical low-pass”, and the same applies in the following description) 207, a vertical high-pass filter 208, a vertical low-pass filter 209, a sub sampler 210, a sub sampler 211, a sub sampler 212, a sub sampler 213, a switching unit 214, a data transfer control unit 215, a basis function ROM 216, and a filter coefficient reading unit 217. - The
buffer 105 is connected via the data reading unit 200 to the buffer 201. - The
buffer 201 is connected to the horizontal high-pass filter 202 and the horizontal low-pass filter 203. - The horizontal high-
pass filter 202 is connected via the sub sampler 204 to the vertical high-pass filter 206 and the vertical low-pass filter 207. The horizontal low-pass filter 203 is connected via the sub sampler 205 to the vertical high-pass filter 208 and the vertical low-pass filter 209. - The vertical high-
pass filter 206 is connected to the sub sampler 210, the vertical low-pass filter 207 is connected to the sub sampler 211, the vertical high-pass filter 208 is connected to the sub sampler 212, and the vertical low-pass filter 209 is connected to the sub sampler 213, respectively. - The
sub sampler 210, the sub sampler 211, and the sub sampler 212 are connected to the switching unit 214. - The
sub sampler 213 is connected to the switching unit 214 and the data transfer control unit 215. The switching unit 214 is connected to the buffer 110. The data transfer control unit 215 is connected to the buffer 201. - The
basis function ROM 216 is connected to the filter coefficient reading unit 217. The filter coefficient reading unit 217 is connected to the horizontal high-pass filter 202, the horizontal low-pass filter 203, the vertical high-pass filter 206, the vertical low-pass filter 207, the vertical high-pass filter 208, and the vertical low-pass filter 209. - The
control unit 118 is bi-directionally connected to the data reading unit 200, the switching unit 214, the data transfer control unit 215, and the filter coefficient reading unit 217 to control these units. - The
basis function ROM 216 records filter coefficients used for the wavelet transform, such as the Haar function or the Daubechies function. Among these, for example, the coefficient of the high-pass filter in the Haar function is represented by Numeric Expression 1 and the coefficient of the low-pass filter is represented by Numeric Expression 2, respectively. -
High-pass filter coefficient={0.5,−0.5} [Expression 1] -
Low-pass filter coefficient={0.5,0.5} [Expression 2] - It should be noted that these filter coefficients are commonly used in the horizontal direction and the vertical direction.
- The filter
coefficient reading unit 217 reads the filter coefficients from the basis function ROM 216, transfers the high-pass filter coefficient to the horizontal high-pass filter 202, the vertical high-pass filter 206, and the vertical high-pass filter 208, and transfers the low-pass filter coefficient to the horizontal low-pass filter 203, the vertical low-pass filter 207, and the vertical low-pass filter 209, respectively, on the basis of the control of the control unit 118. - In this way, after the filter coefficients are transferred to the respective high-pass filters and the respective low-pass filters, on the basis of the control of the
control unit 118, the data reading unit 200 reads the image signal from the buffer 105 to be transferred to the buffer 201. It should be noted that in the following description, the image signal read from the buffer 105 and stored on the buffer 201 is denoted by L0. - The image signal on the
buffer 201 is subjected to the filtering processing in the horizontal direction and the vertical direction by the horizontal high-pass filter 202, the horizontal low-pass filter 203, the vertical high-pass filter 206, the vertical low-pass filter 207, the vertical high-pass filter 208, and the vertical low-pass filter 209. - At this time, the
sub sampler 204 and the sub sampler 205 perform the sub sampling on the input image signal in the horizontal direction by 1/2, and the sub sampler 210, the sub sampler 211, the sub sampler 212, and the sub sampler 213 perform the sub sampling on the input image signal in the vertical direction by 1/2. - Therefore, the output of the
sub sampler 210 provides a first-order high frequency component Hs1 ij in the slanted direction in the transform performed for the first time, the output of the sub sampler 211 provides a first-order high frequency component Hh1 ij in the horizontal direction, the output of the sub sampler 212 provides a first-order high frequency component Hv1 ij in the vertical direction, and the output of the sub sampler 213 provides a first-order low frequency component L1 ij, respectively. Herein, suffixes i and j denote coordinates in the x and y directions in the first-order signal after the transform. -
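As a hedged, minimal sketch of one level of the separable decomposition described above (horizontal low/high-pass filtering, then vertical low/high-pass filtering, each followed by 1/2 sub-sampling), the following Python fragment assumes the Haar-type analysis coefficients {0.5, 0.5} and {0.5, −0.5} of Numeric Expressions 1 and 2; the function name and data layout are illustrative assumptions, not taken from the embodiment.

```python
# Hedged sketch: one level of a Haar-type 2D wavelet decomposition,
# assuming low-pass {0.5, 0.5} and high-pass {0.5, -0.5} coefficients.
# Names (haar_analysis_level, L, Hh, Hv, Hs) are illustrative only.

def haar_analysis_level(img):
    """Decompose a 2D list into half-size (L, Hh, Hv, Hs) components."""
    h, w = len(img), len(img[0])
    L = [[0.0] * (w // 2) for _ in range(h // 2)]
    Hh = [[0.0] * (w // 2) for _ in range(h // 2)]
    Hv = [[0.0] * (w // 2) for _ in range(h // 2)]
    Hs = [[0.0] * (w // 2) for _ in range(h // 2)]
    for i in range(0, h, 2):
        for j in range(0, w, 2):
            a, b = img[i][j], img[i][j + 1]
            c, d = img[i + 1][j], img[i + 1][j + 1]
            # horizontal low/high pass with 1/2 sub-sampling
            lo_top, hi_top = 0.5 * (a + b), 0.5 * (a - b)
            lo_bot, hi_bot = 0.5 * (c + d), 0.5 * (c - d)
            # vertical low/high pass with 1/2 sub-sampling
            L[i // 2][j // 2] = 0.5 * (lo_top + lo_bot)    # low frequency
            Hv[i // 2][j // 2] = 0.5 * (lo_top - lo_bot)   # vertical detail
            Hh[i // 2][j // 2] = 0.5 * (hi_top + hi_bot)   # horizontal detail
            Hs[i // 2][j // 2] = 0.5 * (hi_top - hi_bot)   # slanted detail
    return L, Hh, Hv, Hs
```

Applying the same function again to the returned L would yield the second-order components, matching the two-stage (n=2) decomposition supposed in the present embodiment.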
FIGS. 3A to 3C are explanatory diagrams for describing the wavelet transform: FIG. 3A illustrates the image signal in the real space, FIG. 3B illustrates the signal after the wavelet transform is performed for the first time, and FIG. 3C illustrates the signal after the wavelet transform is performed for the second time, respectively. - When the wavelet transform is performed for the first time on the image signal in the real space as illustrated in
FIG. 3A, the signal becomes as illustrated in FIG. 3B. Also, FIG. 3B illustrates the first-order high frequency component Hs1 00 in the slanted direction, the first-order high frequency component Hh1 00 in the horizontal direction, and the first-order high frequency component Hv1 00 in the vertical direction corresponding to the low frequency component L1 00. -
- On the basis of the control of the
control unit 118, the switching unit 214 sequentially transfers the above-mentioned three first-order high frequency components Hs1 ij, Hh1 ij, and Hv1 ij and the first-order low frequency component L1 ij to the buffer 110. - Also, the data
transfer control unit 215 transfers the first-order low frequency component L1 ij from the sub sampler 213 to the buffer 201 on the basis of the control of the control unit 118. - As the filtering processing similar to the above is performed on the first-order low frequency component L1 ij on the
buffer 201, three second-order high frequency components Hs2 kl, Hh2 kl, and Hv2 kl and a second-order low frequency component L2 kl are output. Herein, suffixes k and l mean coordinates in the x and y directions in the second-order signal after the transform. -
FIG. 3C illustrates the signal after the transform performed for the second time. - As illustrated in
FIG. 3C, in the transform performed for the second time, the second-order high frequency components corresponding to the second-order low frequency component L2 00 of one pixel are Hs2 00 in the slanted direction, Hh2 00 in the horizontal direction, and Hv2 00 in the vertical direction, each of which consists of one pixel. In contrast, the corresponding first-order high frequency components are Hs1 00, Hs1 10, Hs1 01, and Hs1 11 in the slanted direction, Hh1 00, Hh1 10, Hh1 01, and Hh1 11 in the horizontal direction, and Hv1 00, Hv1 10, Hv1 01, and Hv1 11 in the vertical direction, each of which consists of four pixels. The above-mentioned procedure is repeatedly performed until the decomposition at a predetermined stage n (n is an integer equal to or larger than 1; according to the present embodiment, as described above, n=2 is supposed) is performed on the basis of the control of the control unit 118. - Next,
FIG. 4 is a block diagram of a configuration example of the conversion characteristic calculation unit 111. - The conversion
characteristic calculation unit 111 includes a division unit 300 constituting division means, a buffer 301, a correct range extraction unit 302 constituting correct range extraction means, an edge calculation unit 303 constituting region-of-interest setting means and edge calculation means, a histogram creation unit 304 constituting histogram creation means, a gradation conversion curve calculation unit 305 constituting gradation conversion curve calculation means, and a buffer 306. - The
buffer 110 is connected via the division unit 300 to the buffer 301. - The
buffer 301 is connected to the correct range extraction unit 302 and the histogram creation unit 304. The correct range extraction unit 302 is connected via the edge calculation unit 303 to the histogram creation unit 304. - The
histogram creation unit 304 is connected via the gradation conversion curve calculation unit 305 and the buffer 306 to the gradation processing unit 113. - The
control unit 118 is bi-directionally connected to the division unit 300, the correct range extraction unit 302, the edge calculation unit 303, the histogram creation unit 304, and the gradation conversion curve calculation unit 305 to control these units. - Subsequently, a description will be given of the action of the conversion
characteristic calculation unit 111. - The
division unit 300 reads the low frequency component of the image signal from the buffer 110 on the basis of the control of the control unit 118 and divides the low frequency component into regions of a predetermined size shown in FIG. 7, for example, a 32×32 pixel size, so that the respective regions do not overlap one another. Herein, FIG. 7 is an explanatory diagram for describing the division into the regions of the low frequency component in the synthesis operation of the gradation conversion curves. Then, the division unit 300 sequentially transfers the divided regions to the buffer 301. - The correct
range extraction unit 302 reads the low frequency components from the buffer 301 for each local region unit on the basis of the control of the control unit 118. The correct range extraction unit 302 compares the low frequency components with a pre-set threshold related to the dark part (in the case of 12-bit gradation, for example, 128) and a pre-set threshold related to the light part (in the case of the 12-bit gradation, for example, 3968), and transfers the low frequency components which are equal to or larger than the threshold of the dark part and also equal to or smaller than the threshold of the light part, as the correct exposure range, to the edge calculation unit 303. - The
edge calculation unit 303 reads the low frequency components in the correct exposure range from the correct range extraction unit 302 on the basis of the control of the control unit 118, and uses a Laplacian filter or the like to calculate the known edge intensity. The edge calculation unit 303 transfers the calculated edge intensity to the histogram creation unit 304. - The
histogram creation unit 304 selects pixels whose edge intensity from the edge calculation unit 303 is equal to or larger than a pre-set threshold (in the case of the above-mentioned 12-bit gradation, for example, 64), and reads the low frequency components at the corresponding pixel positions from the buffer 301 on the basis of the control of the control unit 118. Then, the histogram creation unit 304 creates a histogram related to the read low frequency components and transfers the created histogram to the gradation conversion curve calculation unit 305. - The gradation conversion
curve calculation unit 305 accumulates the histograms from the histogram creation unit 304 and then normalizes the result on the basis of the control of the control unit 118 to calculate the gradation conversion curve. The normalization follows the gradation of the image signal; in the case of the above-mentioned 12-bit gradation, the normalization is performed so as to have the range of 0 to 4095. The gradation conversion curve calculation unit 305 transfers the calculated gradation conversion curve to the buffer 306. - It should be noted that the respective processings in the correct
range extraction unit 302, the edge calculation unit 303, the histogram creation unit 304, and the gradation conversion curve calculation unit 305 are performed in synchronization for each local region unit on the basis of the control of the control unit 118. -
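As a hedged, minimal sketch of the sequence just described (correct-range extraction, edge-based pixel selection, histogram accumulation, and normalization into a gradation conversion curve), the following Python fragment uses the 12-bit example thresholds from the text (128, 3968, 64). The 4-neighbour Laplacian stands in for "a Laplacian filter or the like"; all names, and the identity-curve fallback when no edge pixel exists, are illustrative assumptions.

```python
# Hedged sketch of one local region's gradation conversion curve:
# range-limited Laplacian edge selection, then a cumulative histogram
# normalized to 0..4095. Names and the fallback are illustrative only.

def gradation_curve_for_region(low, dark_thr=128, light_thr=3968,
                               edge_thr=64, bits=12):
    """Histogram of edge pixels inside the correct exposure range,
    accumulated and normalized into a 0..(2**bits - 1) curve."""
    levels = 1 << bits
    hist = [0] * levels
    h, w = len(low), len(low[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            v = low[y][x]
            if not (dark_thr <= v <= light_thr):
                continue  # outside the correct exposure range
            # 4-neighbour Laplacian as one example of edge intensity
            lap = (low[y - 1][x] + low[y + 1][x]
                   + low[y][x - 1] + low[y][x + 1] - 4 * v)
            if abs(lap) >= edge_thr:
                hist[v] += 1
    # accumulate the histogram, then normalize to the signal gradation
    curve, total = [], 0
    for count in hist:
        total += count
        curve.append(total)
    n = curve[-1]
    if n == 0:
        return list(range(levels))  # no edge pixels: identity curve
    return [round((levels - 1) * c / n) for c in curve]
```

Selecting only edge pixels inside the correct exposure range biases the cumulative histogram toward structurally meaningful gradations, which is the point of the region-of-interest setting described above.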
FIG. 5 is a block diagram of a configuration example of the high frequency separation unit 112. - The high
frequency separation unit 112 includes a low frequency component extraction unit 400, a gain calculation unit 401 constituting noise estimation means and collection means, a standard value assigning unit 402 constituting noise estimation means and assigning means, a parameter ROM 403 constituting noise estimation means and recording means, a parameter selection unit 404 constituting noise estimation means and parameter selection means, an interpolation unit 405 constituting noise estimation means and interpolation means, a high frequency component extraction unit 406, an average calculation unit 407 constituting setting means and average calculation means, an upper limit and lower limit setting unit 408 constituting setting means and upper limit and lower limit setting means, and a determination unit 409 constituting determination means. - The
buffer 110 is connected to the low frequency component extraction unit 400 and the high frequency component extraction unit 406. The low frequency component extraction unit 400 is connected to the parameter selection unit 404. - The
gain calculation unit 401, the standard value assigning unit 402, and the parameter ROM 403 are connected to the parameter selection unit 404. The parameter selection unit 404 is connected via the interpolation unit 405 to the upper limit and lower limit setting unit 408. - The high frequency
component extraction unit 406 is connected to the average calculation unit 407 and the determination unit 409. The average calculation unit 407 is connected via the upper limit and lower limit setting unit 408 to the determination unit 409. - The
determination unit 409 is connected to the gradation processing unit 113 and the buffer 114. - The
control unit 118 is bi-directionally connected to the low frequency component extraction unit 400, the gain calculation unit 401, the standard value assigning unit 402, the parameter selection unit 404, the interpolation unit 405, the high frequency component extraction unit 406, the average calculation unit 407, the upper limit and lower limit setting unit 408, and the determination unit 409 to control these units. - Subsequently, a description will be given of the action of the high
frequency separation unit 112. - The low frequency
component extraction unit 400 sequentially extracts the low frequency components from the buffer 110 for each pixel on the basis of the control of the control unit 118. It should be noted that according to the present embodiment, the wavelet transform is performed twice. In this case, the low frequency component extracted from the buffer 110 by the low frequency component extraction unit 400 becomes the second-order low frequency component L2 kl as illustrated in FIG. 3C. - On the basis of the information related to the ISO sensitivity and the exposure condition transferred from the
control unit 118, the gain calculation unit 401 calculates the gain information in the amplification unit 103 and transfers the calculated gain information to the parameter selection unit 404. - Also, the
control unit 118 obtains temperature information of the CCD 102 from the temperature sensor 120 and transfers the thus obtained temperature information to the parameter selection unit 404. - On the basis of the control of the
control unit 118, in a case where at least one of the above-mentioned gain information and the temperature information cannot be obtained, the standard value assigning unit 402 transfers a standard value for the information that cannot be obtained to the parameter selection unit 404. - The
parameter selection unit 404 searches the parameter ROM 403 for a parameter of a reference noise model used for estimating the noise amount on the basis of the pixel value of the target pixel from the low frequency component extraction unit 400, the gain information from the gain calculation unit 401 or the standard value assigning unit 402, and the temperature information from the control unit 118 or the standard value assigning unit 402. Then, the parameter selection unit 404 transfers the retrieved parameter to the interpolation unit 405. Also, the parameter selection unit 404 transfers the image signal of the low frequency component from the low frequency component extraction unit 400 to the interpolation unit 405. - The
interpolation unit 405 calculates a noise amount N related to the low frequency component on the basis of the parameter of the reference noise model and transfers the calculated noise amount N to the upper limit and lower limit setting unit 408. - It should be noted that to be more specific, the above-mentioned calculation of the noise amount N based on the
parameter ROM 403, the parameter selection unit 404, and the interpolation unit 405 can be realized through the technology disclosed in Japanese Unexamined Patent Application Publication No. 2004-128985 described above, for example. - The high frequency
component extraction unit 406 extracts the high frequency component corresponding to the low frequency component extracted by the low frequency component extraction unit 400 and the high frequency components located in the neighborhood of that high frequency component, on the basis of the control of the control unit 118. - For example, in a case where the second-order low frequency component L2 00 illustrated in
FIG. 3C is extracted as the low frequency component, the high frequency components corresponding to the second-order low frequency component L2 00 are the three pixels in total of Hs2 00, Hh2 00, and Hv2 00, which are the second-order high frequency components, and the 12 pixels in total of Hs1 00, Hs1 10, Hs1 01, Hs1 11, Hh1 00, Hh1 10, Hh1 01, Hh1 11, Hv1 00, Hv1 10, Hv1 01, and Hv1 11, which are the first-order high frequency components. - Also, as the high frequency components located in the neighborhood, for example, a region of 2×2 pixels including the corresponding high frequency component is selected. - The high frequency
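The index correspondence in the example above can be sketched as a small helper: one second-order coefficient at position (k, l) maps to the 2×2 block of first-order coefficients starting at (2k, 2l) in each directional sub-band. This is a hedged illustration; the helper name is hypothetical and not part of the embodiment.

```python
# Hedged sketch of the index correspondence described above: one
# second-order coefficient at (k, l) corresponds to the 2x2 block of
# first-order coefficients (2k, 2l)..(2k+1, 2l+1) in each sub-band.
# The helper name is illustrative only.

def corresponding_first_order(k, l):
    """Return the four first-order (i, j) positions for one
    second-order coefficient position (k, l)."""
    return [(2 * k + di, 2 * l + dj) for di in (0, 1) for dj in (0, 1)]
```

For L2 00 (k=0, l=0) this yields the four positions 00, 01, 10, 11 cited in the text for each of the slanted, horizontal, and vertical sub-bands.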
component extraction unit 406 sequentially transfers the high frequency component corresponding to the low frequency component and the high frequency components located in its neighborhood to the average calculation unit 407, and sequentially transfers the high frequency component corresponding to the low frequency component to the determination unit 409. - On the basis of the control of the
control unit 118, from the high frequency component corresponding to the low frequency component and the high frequency components located in its neighborhood, the average calculation unit 407 calculates an average value AV and transfers the calculated average value AV to the upper limit and lower limit setting unit 408. - On the basis of the control of the
control unit 118, by using the average value AV from the average calculation unit 407 and the noise amount N from the interpolation unit 405, the upper limit and lower limit setting unit 408 sets an upper limit App_Up and a lower limit App_Low for distinguishing the valid component and the invalid component as represented by Numeric Expression 3 as follows. -
App_Up=AV+N/2 -
App_Low=AV−N/2 [Expression 3] - The upper limit and lower
limit setting unit 408 transfers the thus set upper limit App_Up and lower limit App_Low to the determination unit 409. - On the basis of the control of the
control unit 118, the determination unit 409 reads the high frequency component corresponding to the low frequency component from the high frequency component extraction unit 406 and also reads the upper limit App_Up and the lower limit App_Low shown in Numeric Expression 3 from the upper limit and lower limit setting unit 408. Then, in a case where the high frequency component is in the range between the upper limit App_Up and the lower limit App_Low (for example, in a range equal to or larger than the lower limit App_Low and also equal to or smaller than the upper limit App_Up), the determination unit 409 determines that the high frequency component is the invalid component caused by the noise and transfers the high frequency component to the buffer 114. On the other hand, in a case where the high frequency component exceeds the upper limit App_Up (larger than the upper limit App_Up) or falls short of the lower limit App_Low (smaller than the lower limit App_Low), the determination unit 409 determines that the high frequency component is the valid component and transfers the high frequency component to the gradation processing unit 113. - It should be noted that the respective processings in the
average calculation unit 407, the upper limit and lower limit setting unit 408, and the determination unit 409 described above are performed in synchronization for the respective pixels of the corresponding high frequency components on the basis of the control of the control unit 118. -
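The determination just described can be summarized in a hedged sketch: coefficients lying inside the band AV ± N/2 of Numeric Expression 3 are treated as the invalid (noise) component, and coefficients outside it as the valid component. The function name and list-based interface are illustrative assumptions, not the embodiment's implementation.

```python
# Hedged sketch of the valid/invalid split of Numeric Expression 3:
# App_Up = AV + N/2, App_Low = AV - N/2. Coefficients inside the band
# are invalid (noise); the rest are valid. Names are illustrative only.

def split_high_freq(coeffs, avg, noise):
    """Split high frequency coefficients into (valid, invalid) lists
    using the noise band around the neighborhood average."""
    app_up = avg + noise / 2
    app_low = avg - noise / 2
    valid, invalid = [], []
    for h in coeffs:
        if app_low <= h <= app_up:
            invalid.append(h)   # within the noise band around the mean
        else:
            valid.append(h)     # exceeds the band: treated as signal
    return valid, invalid
```

Only the valid list would then undergo the gradation processing, while the invalid list bypasses it and is recombined at the frequency synthesis, which is how the scheme avoids amplifying noise during gradation conversion.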
FIG. 6 is a block diagram of a configuration example of thegradation processing unit 113. - The
gradation processing unit 113 is configured by including a low frequencycomponent extraction unit 500 constituting first extraction means, adistance calculation unit 501 constituting distance calculation means, a gradation conversionequation setting unit 502 constituting gradation conversion equation setting means, abuffer 503, a high frequencycomponent extraction unit 504 constituting second extraction means, and agradation conversion unit 505 constituting gradation conversion means. - The conversion
characteristic calculation unit 111 is connected to the gradation conversionequation setting unit 502. - The
buffer 110 is connected to the low frequencycomponent extraction unit 500. The low frequencycomponent extraction unit 500 is connected to thedistance calculation unit 501 and thegradation conversion unit 505. Thedistance calculation unit 501 is connected to thegradation conversion unit 505 via the gradation conversionequation setting unit 502 and thebuffer 503. - The high
frequency separation unit 112 is connected via the high frequency component extraction unit 504 to the gradation conversion unit 505. - The
gradation conversion unit 505 is connected to the buffer 114. - The
control unit 118 is bi-directionally connected to the low frequency component extraction unit 500, the distance calculation unit 501, the gradation conversion equation setting unit 502, the high frequency component extraction unit 504, and the gradation conversion unit 505 to control these units. - Subsequently, a description will be given of the action of the
gradation processing unit 113. - The low frequency
component extraction unit 500 sequentially extracts the low frequency components from the buffer 110 for each pixel on the basis of the control of the control unit 118. It should be noted that according to the present embodiment, as described above, the wavelet transform is assumed to be performed twice. In this case, the target pixel of the low frequency component extracted by the low frequency component extraction unit 500 from the buffer 110 is the second-order low frequency component L2 kl as illustrated in FIG. 3C. - The low frequency
component extraction unit 500 transfers the extracted low frequency components to the distance calculation unit 501 and the gradation conversion unit 505. - The
distance calculation unit 501 calculates distances between the target pixel extracted by the low frequency component extraction unit 500 and four regions in a neighborhood of the target pixel. -
FIG. 8 is an explanatory diagram of the distances d1 to d4 between the target pixel and the neighboring four regions in the synthesis operation of the gradation conversion curves. - The distance between the target pixel and each of the neighboring four regions is calculated as the distance between the target pixel and the center of the respective region. In the following description, the calculated distances between the target pixel and the neighboring four regions are represented by dm (m=1 to 4), and the respective gradation conversion curves of the neighboring four regions are represented by Tm( ). The
distance calculation unit 501 transfers the calculated distances dm to the gradation conversion equation setting unit 502. - On the basis of the control of the
control unit 118, the gradation conversion equation setting unit 502 reads the distances dm from the distance calculation unit 501 and also reads the corresponding gradation conversion curves Tm( ) of the neighboring four regions from the conversion characteristic calculation unit 111 to set the gradation conversion equation with respect to the target pixel as shown in Numeric Expression 4 as follows. -
- Herein, P in
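The image of Numeric Expression 4 is not reproduced in this text. As an illustration only, the sketch below blends the four neighboring regions' curves Tm( ) with weights that fall off with the distances dm; inverse-distance weighting is one plausible form, an assumption rather than the patent's exact expression, and the function names are hypothetical.

```python
def set_gradation_equation(curves, distances):
    """Build the per-pixel gradation conversion function from the four
    neighboring regions' curves Tm() and the distances dm of FIG. 8.

    Assumption: weights proportional to 1/dm, normalized to sum to one,
    stand in for the patent's Numeric Expression 4.
    """
    weights = [1.0 / d for d in distances]       # closer regions weigh more
    total = sum(weights)

    def convert(p):
        # P' = sum_m w_m * Tm(P) / sum_m w_m
        return sum(w * t(p) for w, t in zip(weights, curves)) / total

    return convert
```

When all four regional curves agree, the blended equation reduces to that common curve, so the synthesis introduces no discontinuity between regions.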
Numeric Expression 4 means a pixel of a target of the gradation conversion processing, and P′ means a pixel after the gradation conversion processing, respectively. - The gradation conversion
equation setting unit 502 transfers the gradation conversion equation set as shown in Numeric Expression 4 to the buffer 503. - On the other hand, the high frequency
component extraction unit 504 extracts, from the high frequency separation unit 112, the high frequency components corresponding to the low frequency components extracted by the low frequency component extraction unit 500, on the basis of the control of the control unit 118. According to the present embodiment, as the target pixel of the low frequency component is the second-order low frequency component L2 kl shown in FIG. 3C, the extracted high frequency components become a total of three pixels, one pixel each from the second-order high frequency components Hs2 kl, Hh2 kl, and Hv2 kl, and a total of 12 pixels, four pixels each from the first-order high frequency components Hs1 ij, Hh1 ij, and Hv1 ij. Then, the high frequency component extraction unit 504 transfers the extracted high frequency components to the gradation conversion unit 505. - After that, in a case where the high frequency component from the high frequency
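The pixel correspondence across decomposition levels can be sketched as follows; the band names (Hs, Hh, Hv) follow the text, while the helper function itself is hypothetical.

```python
def corresponding_high_freq_pixels(k, l, levels=2):
    """Enumerate the high frequency coefficients covering the same image
    area as the second-order low frequency pixel L2_kl (FIG. 3C):
    one pixel per band at level 2 and a 2x2 block per band at level 1."""
    coords = {}
    for level in range(levels, 0, -1):
        scale = 2 ** (levels - level)       # 1 at level 2, 2 at level 1
        block = [(k * scale + dy, l * scale + dx)
                 for dy in range(scale) for dx in range(scale)]
        for band in ("Hs", "Hh", "Hv"):     # slanted, horizontal, vertical
            coords[f"{band}{level}"] = block
    return coords
```

For a two-level decomposition this yields 3 second-order pixels plus 12 first-order pixels, matching the counts stated above.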
component extraction unit 504 exists, the gradation conversion unit 505 reads the high frequency component and also reads the gradation conversion equation shown in Numeric Expression 4 from the buffer 503. On the basis of the read gradation conversion equation, the gradation conversion unit 505 performs the gradation conversion on the high frequency components. The gradation conversion unit 505 transfers the high frequency component after the gradation conversion to the buffer 114. On the other hand, in a case where it is determined that the corresponding high frequency component is the invalid component and the extracted high frequency component does not exist, the gradation conversion unit 505 cancels the gradation conversion on the high frequency component on the basis of the control of the control unit 118. - Also, the
gradation conversion unit 505 reads the low frequency component from the low frequency component extraction unit 500 and the gradation conversion equation shown in Numeric Expression 4 from the buffer 503, respectively, to perform the gradation conversion on the low frequency component. The gradation conversion unit 505 transfers the low frequency component after the gradation conversion to the buffer 114. - Next,
FIG. 9 is a block diagram of a configuration example of the frequency synthesis unit 115. - The
frequency synthesis unit 115 is configured by including a data reading unit 600, a switching unit 601, an upsampler 602, an upsampler 603, an upsampler 604, an upsampler 605, a vertical high-pass filter 606, a vertical low-pass filter 607, a vertical high-pass filter 608, a vertical low-pass filter 609, an upsampler 610, an upsampler 611, a horizontal high-pass filter 612, a horizontal low-pass filter 613, a buffer 614, a data transfer control unit 615, a basis function ROM 616, and a filter coefficient reading unit 617. - The
buffer 114 is connected via the data reading unit 600 to the switching unit 601. The switching unit 601 is connected to the upsampler 602, the upsampler 603, the upsampler 604, and the upsampler 605. The upsampler 602 is connected to the vertical high-pass filter 606, the upsampler 603 is connected to the vertical low-pass filter 607, the upsampler 604 is connected to the vertical high-pass filter 608, and the upsampler 605 is connected to the vertical low-pass filter 609. - The vertical high-
pass filter 606 and the vertical low-pass filter 607 are connected to the upsampler 610, and the vertical high-pass filter 608 and the vertical low-pass filter 609 are connected to the upsampler 611. The upsampler 610 is connected to the horizontal high-pass filter 612, and the upsampler 611 is connected to the horizontal low-pass filter 613. The horizontal high-pass filter 612 and the horizontal low-pass filter 613 are connected to the buffer 614. The buffer 614 is connected to the signal processing unit 116 and the data transfer control unit 615. - The data
transfer control unit 615 is connected to the switching unit 601. - The
basis function ROM 616 is connected to the filter coefficient reading unit 617. The filter coefficient reading unit 617 is connected to the vertical high-pass filter 606, the vertical low-pass filter 607, the vertical high-pass filter 608, the vertical low-pass filter 609, the horizontal high-pass filter 612, and the horizontal low-pass filter 613. - The
control unit 118 is bi-directionally connected to the data reading unit 600, the switching unit 601, the data transfer control unit 615, and the filter coefficient reading unit 617 to control these units. - Subsequently, a description will be given of the action of the
frequency synthesis unit 115. - The
basis function ROM 616 records filter coefficients used for the inverse wavelet transform, such as those of the Haar function or the Daubechies function. - On the basis of the control of the
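As a concrete illustration, the orthonormal Haar pair below is one possible set of coefficients such a ROM could hold; the patent does not fix the normalization, so this choice is an assumption, and the helper name is hypothetical.

```python
import math

# Orthonormal Haar synthesis filters: one low-pass (smoothing) and one
# high-pass (detail) coefficient pair. This normalization is an assumed
# example of what the basis function ROM 616 could record.
HAAR_LOW  = [1 / math.sqrt(2),  1 / math.sqrt(2)]
HAAR_HIGH = [1 / math.sqrt(2), -1 / math.sqrt(2)]

def synthesize_pair(low, high):
    """Invert one 1-D Haar step: rebuild two adjacent samples from one
    approximation (low) and one detail (high) coefficient."""
    a = (low + high) / math.sqrt(2)
    b = (low - high) / math.sqrt(2)
    return a, b
```

Applying the forward step (sum and difference, each divided by the square root of two) and then `synthesize_pair` returns the original samples, which is the perfect-reconstruction property the synthesis unit relies on.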
control unit 118, the filter coefficient reading unit 617 reads the filter coefficients from the basis function ROM 616. The filter coefficient reading unit 617 transfers the high-pass filter coefficient to the vertical high-pass filter 606, the vertical high-pass filter 608, and the horizontal high-pass filter 612, and the low-pass filter coefficient to the vertical low-pass filter 607, the vertical low-pass filter 609, and the horizontal low-pass filter 613, respectively. - After the filter coefficients are transferred, on the basis of the control of the
control unit 118, the data reading unit 600 reads, from the buffer 114, the low frequency component on which the gradation processing has been performed, the valid component at the n-th stage in the high frequency component on which the gradation processing has been performed, and the invalid component at the n-th stage in the high frequency component, and transfers them to the switching unit 601. It should be noted that the valid component at the n-th stage in the high frequency component on which the gradation processing has been performed and the invalid component at the n-th stage in the high frequency component are read by the data reading unit 600 as the integrated high frequency component at the n-th stage. - On the basis of the control of the
control unit 118, the switching unit 601 transfers the high frequency components in the slanted direction via the upsampler 602 to the vertical high-pass filter 606, the high frequency components in the horizontal direction via the upsampler 603 to the vertical low-pass filter 607, the high frequency components in the vertical direction via the upsampler 604 to the vertical high-pass filter 608, and the low frequency components via the upsampler 605 to the vertical low-pass filter 609, respectively, to execute the filtering processing in the vertical direction. - The frequency components from the vertical high-
pass filter 606 and the vertical low-pass filter 607 are transferred via the upsampler 610 to the horizontal high-pass filter 612, and the frequency components from the vertical high-pass filter 608 and the vertical low-pass filter 609 are transferred via the upsampler 611 to the horizontal low-pass filter 613, and then the filtering processing in the horizontal direction is performed. - The frequency components from the horizontal high-
pass filter 612 and the horizontal low-pass filter 613 are transferred to the buffer 614 to be synthesized into one, thus generating the low frequency component at the (n−1)-th stage. - At this time, the up
sampler 602, the upsampler 603, the upsampler 604, and the upsampler 605 perform up-sampling that doubles the input frequency component in the vertical direction, and the upsampler 610 and the upsampler 611 perform up-sampling that doubles the input frequency component in the horizontal direction. - The data
transfer control unit 615 transfers the low frequency components to the switching unit 601 on the basis of the control of the control unit 118. - On the basis of the control of the
control unit 118, the data reading unit 600 reads the three types of high frequency components in the slanted direction, the horizontal direction, and the vertical direction at the (n−1)-th stage from the buffer 114 and transfers them to the switching unit 601. Then, as the filtering processing similar to the above is performed on the frequency components at the decomposition stage number (n−1), the low frequency component at the (n−2)-th stage is output to the buffer 614. - The above-mentioned procedure is repeatedly performed until the
control unit 118 has performed the synthesis for the predetermined n stages. With this configuration, in the end, the low frequency component at the zero-th stage is output to the buffer 614, and the low frequency component at the zero-th stage is transferred to the signal processing unit 116 as the image signal on which the gradation conversion has been performed. - It should be noted that in the above, the image processing system in which the image pickup unit including the
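For the Haar case, the repeated upsample-filter-add path described above collapses into a direct per-block reconstruction, sketched below. The band layout and normalization are illustrative assumptions consistent with an orthonormal 2-D Haar transform, not a transcription of the patent's filters.

```python
import numpy as np

def inverse_haar_2d(ll, hl, lh, hh):
    """One synthesis stage: combine the n-th stage low frequency component
    (ll) with the three high frequency bands (horizontal hl, vertical lh,
    slanted hh) into the (n-1)-th stage low frequency component, mirroring
    the role of the frequency synthesis unit 115."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    # Each 2x2 output block is rebuilt from one coefficient of each band.
    out[0::2, 0::2] = (ll + hl + lh + hh) / 2.0
    out[0::2, 1::2] = (ll - hl + lh - hh) / 2.0
    out[1::2, 0::2] = (ll + hl - lh - hh) / 2.0
    out[1::2, 1::2] = (ll - hl - lh + hh) / 2.0
    return out
```

Iterating this from stage n down to stage 1 yields the zero-th stage image, just as the repeated procedure in the text does.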
lens system 100, the aperture 101, the CCD 102, the amplification unit 103, the A/D conversion unit 104, the exposure control unit 106, the focus control unit 107, the AF motor 108, and the temperature sensor 120 is integrated has been described. However, the image processing system is not necessarily limited to the above-mentioned configuration. For example, as illustrated in FIG. 10, the image pickup unit may be provided as a separate body. That is, in the image processing system illustrated in FIG. 10, the separate image pickup unit performs the image pickup, and an image signal recorded on a recording medium such as a memory card in an unprocessed raw data state is read out from the recording medium to be processed. It should be noted that at this time, associated information related to the image signal, such as the temperature of the image pickup device and the exposure conditions for each shooting operation, is recorded in a header unit or the like. It should be noted that transmission of various pieces of information from the separate image pickup unit to the image processing system is not necessarily performed via a recording medium, and may be performed via a communication circuit or the like. - Herein,
FIG. 10 is a diagram illustrating another configuration example of the image processing system. - The image processing system illustrated in
FIG. 10 has a configuration in which, with respect to the image processing system illustrated in FIG. 1, the lens system 100, the aperture 101, the CCD 102, the amplification unit 103, the A/D conversion unit 104, the exposure control unit 106, the focus control unit 107, the AF motor 108, and the temperature sensor 120 are omitted, and an input unit 700 and a header information analysis unit 701 are added. The other basic configuration of the image processing system illustrated in FIG. 10 is similar to that illustrated in FIG. 1. Therefore, the same components are allocated with the same names and reference numerals to appropriately omit the description thereof, and only a different part will be mainly described. - The
input unit 700 is connected to the buffer 105 and the header information analysis unit 701. The control unit 118 is bi-directionally connected to the input unit 700 and the header information analysis unit 701 to control these units. - Next, a different action in the image processing system illustrated in
FIG. 10 is as follows. - For example, when a reproduction operation is started via the external I/
F unit 119 such as a mouse or a keyboard, the image signal and the header information saved on the recording medium such as a memory card are read via the input unit 700. - Among the information read from the
input unit 700, the image signal is transferred to the buffer 105, and the header information is transferred to the header information analysis unit 701, respectively. - The header
information analysis unit 701 extracts the information for each shooting operation (that is, the exposure conditions, the temperature of the image pickup device, and the like, which are described above) from the header information transferred from the input unit 700 and transfers the extracted information to the control unit 118. - The processing in the following stage is similar to that of the image processing system illustrated in
FIG. 1. - Furthermore, in the above, the processing is assumed to be performed by hardware, but the configuration is not necessarily limited to the above. For example, the image signal from the
CCD 102 is recorded on a recording medium such as a memory card as unprocessed raw data, and the associated information such as the image pickup conditions (for example, the temperature of the image pickup device and the exposure conditions for each shooting operation from the control unit 118) is also recorded on the recording medium as the header information. Then, the processing can be performed by causing a computer to execute an image processing program, which is separate software, and to read the information from the recording medium. It should be noted that the transmission of various pieces of information from the image pickup unit to the computer is not necessarily performed via the recording medium and may be performed via a communication line or the like. -
FIG. 11 is a flow chart showing a main routine of an image processing program. - When the processing is started, first, the image signal is read, and also the header information such as the temperature and the exposure conditions of the image pickup device is read (step S1).
- Next, by performing the frequency decomposition such as the wavelet transform, the high frequency component and the low frequency component are obtained (step S2).
- Subsequently, as is described below with reference to
FIG. 12 , the conversion characteristic is calculated (step S3). - Furthermore, as is described below with reference to
FIG. 13 , the high frequency component is separated into the invalid component caused by the noise and the other valid component (step S4). - Then, as is described below with reference to
FIG. 14 , the gradation processing is performed on the low frequency component and the valid component in the high frequency component (step S5). - Next, on the basis of the low frequency component on which the gradation processing has been performed, the valid component in the high frequency component on which the gradation processing has been performed, and the invalid component in the high frequency component, the image signal on which the gradation conversion has been performed is synthesized (step S6).
- Subsequently, the signal processing such as a known compression processing is performed (step S7).
- Then, the image signal after the processing is output (step S8), and the processing is ended.
-
FIG. 12 is a flow chart showing the processing for the conversion characteristic calculation in the above-mentioned step S3. - When the processing is started, as illustrated in
FIG. 7 , the low frequency component is divided into regions of a predetermined size to be sequentially extracted (step S10). - Next, the low frequency components are compared with the pre-set threshold related to the dark part and the pre-set threshold related to the light part respectively to extract the low frequency components which are equal to or larger than the threshold of the dark part and also equal to or smaller than the threshold of the light part as the correct exposure range (step S11).
- Subsequently, by using the Laplacian filter with respect to the low frequency components in the correct exposure range, the known calculation for the edge intensity is performed (step S12).
- Then, by selecting the pixels having the edge intensity equal to or larger than the pre-set threshold, the histogram is created (step S13).
- After that, by accumulating the histograms and further performing the normalization, the gradation conversion curve is calculated (step S14).
- The gradation conversion curve calculated in the above-mentioned manner is output (step S15).
- Subsequently, it is determined whether or not the processing has been performed for all the regions (step S16). In a case where it is determined that the processing has not been completed, the flow is returned to the above-mentioned step S10 to repeat the above-mentioned processing. On the other hand, in a case where it is determined that the processing has been completed, the flow is returned to the processing shown in
FIG. 11 . -
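Steps S10 to S14 can be sketched for a single region as follows. The Laplacian kernel, the thresholds, and the 8-bit range are illustrative defaults rather than values fixed by the patent, and the function name is hypothetical.

```python
import numpy as np

def gradation_curve_for_region(region, dark_th=16, light_th=240,
                               edge_th=8.0, levels=256):
    """Steps S10-S14 for one extracted region: keep correct-exposure
    pixels, estimate edge intensity with a Laplacian, histogram the edge
    pixels, then accumulate and normalize into a gradation conversion
    curve (a lookup table mapping input level to output level)."""
    # 4-neighbor Laplacian via np.roll (wrap-around borders, for brevity).
    lap = np.abs(4 * region
                 - np.roll(region, 1, 0) - np.roll(region, -1, 0)
                 - np.roll(region, 1, 1) - np.roll(region, -1, 1))
    correct = (region >= dark_th) & (region <= light_th)   # step S11
    edge = correct & (lap >= edge_th)                      # steps S12-S13
    hist, _ = np.histogram(region[edge], bins=levels, range=(0, levels))
    cum = np.cumsum(hist).astype(float)                    # step S14
    if cum[-1] > 0:
        curve = cum / cum[-1] * (levels - 1)               # normalize
    else:
        curve = np.arange(levels, dtype=float)             # flat region: identity
    return curve
```

Restricting the histogram to edge pixels in the correct exposure range is what keeps the resulting curve from being dominated by flat, noisy areas.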
FIG. 13 is a flow chart showing the processing for the high frequency separation in the above-mentioned step S4. - When the processing is started, first, the low frequency components are sequentially extracted for each pixel (step S20).
- Next, from the read header information, the information such as the temperature and the gain of the image pickup device is set. At this time, if a necessary parameter does not exist in the header information, a pre-set standard value is assigned to the relevant information (step S21).
- Subsequently, the parameter related to the reference noise model is read (step S22).
- Then, on the basis of the parameter of the reference noise model, the noise amount related to the low frequency component is calculated through the interpolation processing (step S23).
- After that, as illustrated in
FIG. 3B or 3C, the high frequency component corresponding to the low frequency component and the high frequency components located in the neighborhood of the high frequency component are sequentially extracted (step S24). - Next, from the high frequency component corresponding to the low frequency component and the high frequency components located in the neighborhood of the high frequency component, the average value is calculated (step S25).
- Subsequently, on the basis of the average value and the noise amount, the upper limit and the lower limit are set as shown in Numeric Expression 3 (step S26).
- Then, in a case where the high frequency component is in the range between the upper limit and the lower limit, it is determined that the high frequency component is the invalid component caused by the noise, and in a case where the high frequency component exceeds the upper limit or falls short of the lower limit, it is determined that the high frequency component is the valid component (step S27).
- Furthermore, the valid component and the invalid component are output while being separated from each other (step S28).
- Then, it is determined whether or not the processing for all the high frequency components has been completed (step S29). In a case where it is determined that the processing has not been completed, the flow is returned to the above-mentioned step S24 to repeat the above-mentioned processing.
- On the other hand, in the step S29, in a case where it is determined that the processing for all the high frequency components has been completed, it is determined whether or not the processing for all the low frequency components has been completed (step S30). In a case where it is determined that the processing has not been completed, the flow is returned to the above-mentioned step S20 to repeat the above-mentioned processing. On the other hand, in a case where it is determined that the processing has been completed, the flow is returned to the processing shown in
FIG. 11 . -
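The interpolation of step S23 can be sketched as follows, assuming the reference noise model read in step S22 is a small table of (signal level, noise amount) points; the sample table in the usage below is made up for illustration, and the function name is hypothetical.

```python
def interpolate_noise_amount(signal_level, model_points):
    """Step S23: recover the noise amount for an arbitrary low frequency
    signal level by linear interpolation between the two bracketing
    points of the reference noise model; levels outside the model range
    are clamped to the nearest endpoint."""
    pts = sorted(model_points)
    if signal_level <= pts[0][0]:
        return pts[0][1]
    for (x0, n0), (x1, n1) in zip(pts, pts[1:]):
        if signal_level <= x1:
            t = (signal_level - x0) / (x1 - x0)
            return n0 + t * (n1 - n0)
    return pts[-1][1]
```

For example, with a made-up model [(0, 1.0), (128, 3.0), (255, 4.0)], a signal level of 64 interpolates to a noise amount of 2.0.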
FIG. 14 is a flow chart showing the processing for the gradation processing in the above-mentioned step S5. - When the processing is started, first, the low frequency components are sequentially extracted for each pixel (step S40).
- Next, as illustrated in
FIG. 8 , the distances between the target pixel of the low frequency component and the centers of the four neighboring regions are calculated (step S41). - Subsequently, the gradation conversion curves in the four neighboring regions are read (step S42).
- Furthermore, as shown in
Numeric Expression 4, the gradation conversion equation with respect to the target pixel is set (step S43). - Then, as illustrated in
FIG. 3B or 3C, the high frequency components regarded as the valid components corresponding to the low frequency components are sequentially extracted (step S44). - After that, it is determined whether or not the high frequency component regarded as the valid component exists (step S45).
- At this time, in a case where it is determined that the high frequency component regarded as the valid component exists, the gradation conversion equation shown in
Numeric Expression 4 is applied to the high frequency component regarded as the valid component to perform the gradation conversion (step S46). - When the processing in the step S46 is ended or in a case where it is determined that the high frequency component regarded as the valid component does not exist in the above-mentioned step S45, the gradation conversion equation shown in
Numeric Expression 4 is applied to the low frequency component to perform the gradation conversion (step S47). - Then, the low frequency component on which the gradation processing has been performed and the valid component in the high frequency component on which the gradation processing has been performed are output (step S48).
- After that, it is determined whether or not the processing for all the low frequency components has been completed (step S49). In a case where it is determined that the processing has not been completed, the flow is returned to the above-mentioned step S40 to repeat the above-mentioned processing. On the other hand, in a case where it is determined that the processing has been completed, the flow is returned to the processing shown in
FIG. 11 . - It should be noted that in the above, the configuration of using the wavelet transform for the frequency decomposition and the frequency synthesis is adopted, but the configuration is not necessarily limited to the above. For example, a configuration of using the known frequency decomposition such as the Fourier transform, the discrete cosine transform or the transform for the frequency synthesis can also be adopted.
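Steps S44 to S48 for one target pixel reduce to the following sketch; `convert` stands for the equation set in step S43, and all names are hypothetical.

```python
def gradation_process_pixel(low_pixel, valid_high_pixels, convert):
    """Apply the synthesized conversion equation to the valid high
    frequency components when any exist (step S46) and always to the low
    frequency component itself (step S47); invalid components are not
    passed in, so they bypass the conversion entirely."""
    converted_high = [convert(h) for h in valid_high_pixels]  # empty list: skipped
    converted_low = convert(low_pixel)
    return converted_low, converted_high
```

Because an empty list of valid components produces no work, the skip of step S46 for invalid-only pixels falls out naturally, matching the cancellation described for the gradation conversion unit.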
- Also, in the above, the number of times to perform the wavelet transform is set as two, but the configuration is not necessarily limited to the above. For example, such a configuration can be adopted that by increasing the number of times to perform the conversion, the separation of the invalid component caused by the noise and the other valid component is improved, or by decreasing the number of times to perform the conversion, the uniformity of the image is improved.
- With the method of the space-invariant gradation processing using the single gradation conversion curve described above in the background section in the related art, in a non-standard situation such as a backlight, there is a problem that it is difficult to obtain an appropriate image signal.
- Also, according to the technology disclosed in Japanese Patent No. 3465226 described above in the background section, the gradation conversion curve is calculated for each image on the basis of the histogram, but the increase in the noise components is not taken into account. For this reason, for example, when a ratio of the dark part in the image is large, the gradation conversion curve based on the histogram provides a wide gradation to the dark part. However, in this case, the noise in the dark part prominently appears, and there is a problem that an optimal gradation conversion processing is not performed in terms of image quality.
- Furthermore, according to the technology disclosed in Japanese Unexamined Patent Application Publication No. 8-56316 described above in the background section, the contrast emphasis processing is performed only on the low frequency component. Therefore, there is a problem that the sharpness is degraded in a region containing a large number of high frequency components such as an edge region. Also, according to the technology disclosed in the publication, different processings are performed on the low frequency component and other components. Therefore, there is a problem that the continuity and integrity for the image as a whole may be lost.
- Then, according to the technology disclosed in Japanese Unexamined Patent Application Publication No. 2004-128985 described above in the background section, the noise reducing processing and other gradation processing are independent from each other. Therefore, there is a problem that it is difficult to mutually utilize the processings in an optimal manner.
- In contrast with the above-mentioned background technology, according to the first embodiment of the present invention, only the high frequency component where the influence of the noise prominently visually appears is separated into the invalid component and the valid component. The gradation processing is performed on the valid component, and the gradation processing is not performed on the invalid component, and an increase in noise accompanying with the gradation processing is suppressed. Thus, it is possible to generate the high quality image signal.
- Also, as the low frequency component is excluded from the target of the processing after being separated into the valid component and the invalid component, a possibility of generating an adverse effect accompanying with the processing is decreased, and it is possible to improve the stability.
- Furthermore, as the image signal is synthesized with the invalid component, it is possible to obtain the image signal with little sense of visual discomfort, and the stability and reliability of the processing can be improved.
- Also, the wavelet transform is excellent at the separation of the frequency, and it is therefore possible to perform the high accuracy processing.
- As the gradation conversion curve is adaptively and also independently calculated for each region from the low frequency component of the image signal, it is possible to perform the gradation conversion at the high accuracy on various image signals.
- Also, as the gradation conversion curve is calculated on the basis of the low frequency component, it is possible to calculate the appropriate gradation conversion curve with little influence from the noise.
- As the gradation conversion with the identical conversion characteristic is performed on the low frequency component and the valid component in the high frequency component located at the same position, it is possible to obtain the image signal providing the sense of integrity with little sense of visual discomfort.
- Also, as the gradation conversion curves independently obtained for each region are synthesized to set the gradation conversion equation used for the gradation conversion of the target pixel, the discontinuity between the regions is not generated, and it is possible to obtain the high quality image signals.
- Then, in a case where the valid component in the high frequency component does not exist, the unnecessary gradation conversion is cancelled, and it is thus possible to improve the processing speed.
-
FIGS. 15 to 25 illustrate a second embodiment of the present invention, and FIG. 15 is a block diagram of a configuration of an image processing system. -
- The image processing system according to the present embodiment has a configuration in which with respect to the above-mentioned image processing system illustrated in
FIG. 1 according to the first embodiment, a pre-white balance unit 801, a Y/C separation unit 802 constituting Y/C separation means, a buffer 803, and a Y/C synthesis unit 809 constituting Y/C synthesis means are added, and the CCD 102, the frequency decomposition unit 109, the conversion characteristic calculation unit 111, the high frequency separation unit 112, the gradation processing unit 113, and the frequency synthesis unit 115 are replaced by a color CCD 800, a frequency decomposition unit 804 constituting separation means and frequency decomposition means, a conversion characteristic calculation unit 805 constituting conversion means and conversion characteristic calculation means, a high frequency separation unit 806 constituting separation means and high frequency separation means, a gradation processing unit 807 constituting conversion means and gradation processing means, and a frequency synthesis unit 808 constituting synthesis means and frequency synthesis means. The other basic configuration is similar to that of the above-mentioned first embodiment. Therefore, the same components are allocated with the same names and reference numerals to appropriately omit the description thereof, and only a different part will be mainly described. - The color image signal captured via the
lens system 100, the aperture 101, and the color CCD 800 is transferred to the amplification unit 103. - The
buffer 105 is connected to the exposure control unit 106, the focus control unit 107, the pre-white balance unit 801, and the Y/C separation unit 802. - The
pre-white balance unit 801 is connected to theamplification unit 103. - The Y/
C separation unit 802 is connected to thebuffer 803, and thebuffer 803 is connected to thefrequency decomposition unit 804, the conversioncharacteristic calculation unit 805, and the Y/C synthesis unit 809. - The
frequency decomposition unit 804 is connected to thebuffer 110. Thebuffer 110 is connected to the conversioncharacteristic calculation unit 805, the highfrequency separation unit 806, and thegradation processing unit 807. The conversioncharacteristic calculation unit 805 is connected to thegradation processing unit 807. The highfrequency separation unit 806 is connected to thebuffer 114 and thegradation processing unit 807. Thegradation processing unit 807 is connected to thebuffer 114. - The
buffer 114 is connected via thefrequency synthesis unit 808 and the Y/C synthesis unit 809 to thesignal processing unit 116. - The
control unit 118 is also bi-directionally connected to thepre-white balance unit 801, the Y/C separation unit 802, thefrequency decomposition unit 804, the conversioncharacteristic calculation unit 805, the highfrequency separation unit 806, thegradation processing unit 807, thefrequency synthesis unit 808, and the Y/C synthesis unit 809 to control these units. - Also, the
temperature sensor 120 according to the present embodiment is arranged in a neighborhood of thecolor CCD 800, and the signal from thetemperature sensor 120 is also connected to thecontrol unit 118. - Next, the action of the image processing system illustrated in
FIG. 15 is basically similar to that of the first embodiment, so only the differences are mainly described along the flow of the image signal. - When the user performs a half press of the shutter button, which is composed of a two-stage switch of the external I/
F unit 119, the image processing system functions as the pre-image pickup device. - After that, the color image signal captured via the
lens system 100, the aperture 101, and the color CCD 800 is transferred via the amplification unit 103 and the A/D conversion unit 104 to the buffer 105. It should be noted that in the present embodiment, the color CCD 800 is assumed to be a single CCD in which a Bayer-type primary color filter is arranged on its front face. - Herein,
FIG. 16 is a diagram illustrating a configuration of the Bayer-type primary color filter. - The Bayer-type primary color filter has a such configuration that that the basic unit is 2×2 pixels, one each of a red (R) filter and a blue (B) filter are arranged at pixel positions at opposite corners in the basis unit, and green (G) filters are arranged at pixel positions at remaining opposite corners.
- Subsequently, the color image signal in the
buffer 105 is transferred to the pre-white balance unit 801. The pre-white balance unit 801 sums (in other words, cumulatively adds) the signals at a predetermined level for each color signal to calculate simplified white balance coefficients. The pre-white balance unit 801 transfers the calculated coefficients to the amplification unit 103, which multiplies each color signal by a different gain to perform the white balance adjustment. - In this way, when the focus adjustment, the exposure adjustment, the simplified white balance adjustment, and the like have been performed, the user performs the full press of the shutter button composed of the two-stage switch of the external I/
F unit 119. After that, the digital camera functions as the real shooting device. - After that, similarly to the pre-shooting, the color image signal is transferred to the
buffer 105. The white balance coefficient calculated by the pre-white balance unit 801 at this time is transferred to the control unit 118. - The color image signal in the
buffer 105 obtained through the real shooting operation is transferred to the Y/C separation unit 802. - On the basis of the control of the
control unit 118, through a known interpolation processing, the Y/C separation unit 802 generates the three color image signals composed of R, G, and B, and further separates the R, G, and B signals into a luminance signal Y and color difference signals Cb and Cr as shown in Numeric Expression 5 below. -
Y=0.29900R+0.58700G+0.11400B -
Cb=−0.16874R−0.33126G+0.50000B -
Cr=0.50000R−0.41869G−0.08131B [Expression 5] - The luminance signal and the color difference signals separated by the Y/
C separation unit 802 are transferred to the buffer 803. - On the basis of the control of the
control unit 118, the frequency decomposition unit 804 performs the frequency decomposition on the luminance signal in the buffer 803 to obtain the high frequency component and the low frequency component. Then, the frequency decomposition unit 804 sequentially transfers the high frequency component and the low frequency component thus obtained to the buffer 110. - The conversion
characteristic calculation unit 805 reads the low frequency component from the buffer 110 and the color difference signals from the buffer 803, respectively, on the basis of the control of the control unit 118, to calculate the gradation characteristic used for the gradation conversion processing. It should be noted that in the present embodiment, the gradation conversion processing is assumed to be space-invariant processing using a single gradation conversion curve for the image signal. Then, the conversion characteristic calculation unit 805 transfers the calculated gradation characteristic to the gradation processing unit 807. - The high
frequency separation unit 806 reads the high frequency component from the buffer 110 and, on the basis of the control of the control unit 118, separates the high frequency component into the invalid component caused by the noise and the other, valid component. Then, the high frequency separation unit 806 transfers the thus separated valid component to the gradation processing unit 807 and the invalid component to the buffer 114, respectively. - The
gradation processing unit 807 reads the low frequency component from the buffer 110, the valid component in the high frequency component from the high frequency separation unit 806, and the gradation characteristic from the conversion characteristic calculation unit 805, respectively, on the basis of the control of the control unit 118. Then, on the basis of the gradation characteristic, the gradation processing unit 807 performs the gradation processing on the low frequency component and on the valid component in the high frequency component. The gradation processing unit 807 transfers the low frequency component and the valid component on which the gradation processing has been performed to the buffer 114. - The
frequency synthesis unit 808, on the basis of the control of the control unit 118, reads the gradation-processed low frequency component, the gradation-processed valid component in the high frequency component, and the invalid component in the high frequency component from the buffer 114, and performs an addition processing on these components to synthesize the luminance signal on which the gradation conversion has been performed. Then, the frequency synthesis unit 808 transfers the synthesized luminance signal to the Y/C synthesis unit 809. - The Y/
C synthesis unit 809 reads the luminance signal Y′ on which the gradation conversion has been performed from the frequency synthesis unit 808 and the color difference signals Cb and Cr from the buffer 803, respectively, and, on the basis of the control of the control unit 118, synthesizes the color image signals R′, G′, and B′ on which the gradation conversion has been performed as shown in Numeric Expression 6 below. -
R′=Y′+1.40200Cr -
G′=Y′−0.34414Cb−0.71414Cr -
B′=Y′+1.77200Cb [Expression 6] - The Y/
C synthesis unit 809 transfers the synthesized color image signals R′, G′, and B′ to the signal processing unit 116. - The
signal processing unit 116 performs a known compression processing or the like on the image signal from the Y/C synthesis unit 809 and transfers the signal after the processing to the output unit 117 on the basis of the control of the control unit 118. - The
output unit 117 records and saves the image signal output from the signal processing unit 116 in a recording medium such as a memory card. - Next,
FIG. 18 is a block diagram of a configuration example of the frequency decomposition unit 804. - The
frequency decomposition unit 804 is configured by including a signal extraction unit 900, a low-pass filter unit 901, a low frequency buffer 902, and a difference filter unit 903. - The
buffer 803 is connected to the signal extraction unit 900. The signal extraction unit 900 is connected to the low-pass filter unit 901 and the difference filter unit 903. The low-pass filter unit 901 is connected to the low frequency buffer 902. The low frequency buffer 902 is connected to the difference filter unit 903. The difference filter unit 903 is connected to the buffer 110. - The
control unit 118 is bi-directionally connected to the signal extraction unit 900, the low-pass filter unit 901, and the difference filter unit 903 to control these units. - Subsequently, a description will be given of the action of the
frequency decomposition unit 804. - The
signal extraction unit 900 reads the luminance signals from the buffer 803 on the basis of the control of the control unit 118 and transfers them to the low-pass filter unit 901 and the difference filter unit 903. - The low-
pass filter unit 901, on the basis of the control of the control unit 118, performs a known low-pass filter processing on the luminance signals from the signal extraction unit 900 to calculate the low frequency components of the luminance signals. It should be noted that in the present embodiment, the low-pass filter used by the low-pass filter unit 901 is, for example, an average value filter having a pixel size of 7×7. The low-pass filter unit 901 transfers the calculated low frequency components to the low frequency buffer 902. - The
difference filter unit 903 reads the luminance signals from the signal extraction unit 900 and the low frequency components of the luminance signals from the low frequency buffer 902, respectively, and takes their difference to calculate the high frequency components of the luminance signals. The difference filter unit 903 transfers the calculated high frequency components and the read low frequency components to the buffer 110. - Next,
FIG. 19 is a block diagram of a configuration example of the conversion characteristic calculation unit 805. - The conversion
characteristic calculation unit 805 has such a configuration that, with respect to the conversion characteristic calculation unit 111 shown in FIG. 4 of the first embodiment, a hue calculation unit 1000 constituting region-of-interest setting means, a person determination unit 1001 constituting region-of-interest setting means, a weighting factor setting unit 1002 constituting weighting factor setting means, and a histogram correction unit 1003 constituting histogram correction means are added, and the division unit 300 and the buffer 301 are omitted. The rest of the basic configuration is similar to that of the conversion characteristic calculation unit 111 shown in FIG. 4 ; the same components are given the same names and reference numerals, their description is omitted where appropriate, and only the differences are mainly described. - The
buffer 803 and the buffer 110 are connected to the correct range extraction unit 302. The correct range extraction unit 302 is connected to the edge calculation unit 303 and the hue calculation unit 1000. - The
hue calculation unit 1000 is connected via the person determination unit 1001 and the weighting factor setting unit 1002 to the histogram correction unit 1003. - The
histogram creation unit 304 is connected to the histogram correction unit 1003. - The
histogram correction unit 1003 is connected via the gradation conversion curve calculation unit 305 and the buffer 306 to the gradation processing unit 807. - The
control unit 118 is also bi-directionally connected to the hue calculation unit 1000, the person determination unit 1001, the weighting factor setting unit 1002, and the histogram correction unit 1003 to control these units. - Subsequently, a description will be given of the action of the conversion
characteristic calculation unit 805. - The correct
range extraction unit 302, on the basis of the control of the control unit 118, reads the luminance signals from the buffer 110, compares them with a pre-set threshold related to the dark part (for example, 128 in the case of 12-bit gradation) and a pre-set threshold related to the light part (for example, 3968 in the case of 12-bit gradation), and transfers the luminance signals which are equal to or larger than the dark-part threshold and also equal to or smaller than the light-part threshold, as the correct exposure range, to the edge calculation unit 303. - Also, the correct
range extraction unit 302 reads the color difference signals Cb and Cr at the coordinates corresponding to the luminance signals in the correct exposure range from the buffer 803 and transfers them to the hue calculation unit 1000. - The
edge calculation unit 303 and the histogram creation unit 304 create the histogram of edge regions from the luminance signals similarly to the first embodiment, and transfer the created histogram to the histogram correction unit 1003. - The
hue calculation unit 1000, on the basis of the control of the control unit 118, reads the color difference signals Cb and Cr from the correct range extraction unit 302, compares them with pre-set thresholds to extract a skin color region, and transfers the result to the person determination unit 1001. - The
person determination unit 1001, on the basis of the control of the control unit 118, uses the information related to the skin color region from the hue calculation unit 1000 and the edge amount from the edge calculation unit 303 to extract a region determined as a human face, and transfers the result to the weighting factor setting unit 1002. - On the basis of the control of the
control unit 118, the weighting factor setting unit 1002 calculates luminance information in the region determined as the human face and multiplies it by a predetermined coefficient, thereby calculating weighting factors for the corrections at the respective luminance levels. It should be noted that the weighting factors at luminance levels which do not exist in the region determined as the human face are 0. The weighting factor setting unit 1002 transfers the calculated weighting factors to the histogram correction unit 1003. - The
histogram correction unit 1003 reads the histogram from the histogram creation unit 304 and also reads the weighting factors from the weighting factor setting unit 1002 on the basis of the control of the control unit 118. Then, the histogram correction unit 1003 adds the weighting factors to the respective luminance levels of the histogram to perform the correction. The corrected histogram is transferred to the gradation conversion curve calculation unit 305, and, similarly to the first embodiment, the gradation conversion curve is calculated. - The calculated gradation conversion curve is transferred to the
buffer 306 and, when necessary, transferred to the gradation processing unit 807. It should be noted that because the present embodiment assumes space-invariant processing, the calculated gradation conversion curve is of one type. - Next,
FIG. 20 is a block diagram of a configuration example of the high frequency separation unit 806. - The high
frequency separation unit 806 has such a configuration that, with respect to the high frequency separation unit 112 shown in FIG. 5 of the first embodiment, a noise LUT 1100 constituting noise estimation means and table conversion means is added, and the parameter ROM 403, the parameter selection unit 404, and the interpolation unit 405 are omitted. The rest of the basic configuration is similar to that of the high frequency separation unit 112 shown in FIG. 5 ; the same components are given the same names and reference numerals, their description is omitted where appropriate, and only the differences are mainly described. - The low frequency
component extraction unit 400, the gain calculation unit 401, and the standard value assigning unit 402 are connected to the noise LUT 1100. The noise LUT 1100 is connected to the upper limit and lower limit setting unit 408. - The
determination unit 409 is connected to the gradation processing unit 807 and the buffer 114. - The
control unit 118 is also bi-directionally connected to the noise LUT 1100 to control the table. - Subsequently, a description will be given of the action of the high
frequency separation unit 806. - The
gain calculation unit 401 calculates the gain information in the amplification unit 103 on the basis of the ISO sensitivity, the information related to the exposure conditions, and the white balance coefficient sent from the control unit 118, and transfers the gain information to the noise LUT 1100. - Also, the
control unit 118 obtains temperature information of the color CCD 800 from the temperature sensor 120 and transfers the thus obtained temperature information to the noise LUT 1100. - On the basis of the control of the
control unit 118, in a case where at least one of the above-mentioned gain information and temperature information cannot be obtained, the standard value assigning unit 402 transfers a standard value for the information that cannot be obtained to the noise LUT 1100. - The
noise LUT 1100 is a lookup table in which the relation among the signal value level of the image signal, the gain of the image signal, the operation temperature of the image pickup device, and the noise amount is recorded. The lookup table is designed, for example, by using the technology disclosed in Japanese Unexamined Patent Application Publication No. 2004-128985 described above. The noise LUT 1100 outputs the noise amount on the basis of the pixel value of the target pixel from the low frequency component extraction unit 400, the gain information from the gain calculation unit 401 or the standard value assigning unit 402, and the temperature information from the control unit 118 or the standard value assigning unit 402. The output noise amount is transferred to the upper limit and lower limit setting unit 408. - The high frequency
component extraction unit 406, on the basis of the control of the control unit 118, extracts the high frequency component corresponding to the low frequency component extracted by the low frequency component extraction unit 400 and the high frequency components located in the neighborhood of that high frequency component. - It should be noted that in the present embodiment, as described above, the
frequency decomposition unit 804 uses the low-pass filter and the difference filter to extract the low frequency component and the high frequency component. Therefore, the pixel configurations of the low frequency component and the high frequency component are of the same size, and the high frequency component corresponding to the low frequency component is one pixel. - The action of the high
frequency separation unit 806 thereafter is similar to that of the high frequency separation unit 112 of the first embodiment. The high frequency component is separated into the valid component and the invalid component; the valid component is transferred to the gradation processing unit 807, and the invalid component is transferred to the buffer 114, respectively. - Next,
FIG. 21 is a block diagram of a configuration example of the gradation processing unit 807. - The
gradation processing unit 807 has such a configuration that, with respect to the gradation processing unit 113 shown in FIG. 6 of the first embodiment, the distance calculation unit 501, the gradation conversion equation setting unit 502, and the buffer 503 are omitted. The rest of the basic configuration is similar to that of the gradation processing unit 113 shown in FIG. 6 ; the same components are given the same names and reference numerals, their description is omitted where appropriate, and only the differences are mainly described. - The conversion
characteristic calculation unit 805 is connected to the gradation conversion unit 505. - The
buffer 110 is connected via the low frequency component extraction unit 500 to the gradation conversion unit 505. The high frequency separation unit 806 is connected via the high frequency component extraction unit 504 to the gradation conversion unit 505. - The
control unit 118 is bi-directionally connected to the low frequency component extraction unit 500, the high frequency component extraction unit 504, and the gradation conversion unit 505 to control these units. - Subsequently, a description will be given of the action of the
gradation processing unit 807. - The low frequency
component extraction unit 500 sequentially extracts the low frequency components from the buffer 110 for each pixel on the basis of the control of the control unit 118. The low frequency component extraction unit 500 transfers the extracted low frequency components to the gradation conversion unit 505. - The high frequency
component extraction unit 504 extracts the high frequency components corresponding to the low frequency components extracted by the low frequency component extraction unit 500 from the high frequency separation unit 806 on the basis of the control of the control unit 118. In the present embodiment, as described above, the pixel configurations of the low frequency component and the high frequency component are of the same size, and the high frequency component corresponding to the low frequency component is one pixel. It should be noted that in a case where the high frequency component corresponding to the low frequency component is determined to be the invalid component and thus no high frequency component is extracted, the high frequency component extraction unit 504 transfers error information to the control unit 118. - The
gradation conversion unit 505 reads the low frequency components from the low frequency component extraction unit 500 on the basis of the control of the control unit 118 and reads the gradation conversion curve from the conversion characteristic calculation unit 805 to perform the gradation conversion on the low frequency components. The gradation conversion unit 505 transfers the low frequency components after the gradation conversion to the buffer 114. - After that, the
gradation conversion unit 505 reads the high frequency component of the valid component corresponding to the low frequency component from the high frequency component extraction unit 504 to perform the gradation conversion. Then, the gradation conversion unit 505 transfers the high frequency component after the gradation conversion to the buffer 114. It should be noted that in a case where the high frequency component corresponding to the low frequency component does not exist, the gradation conversion unit 505 cancels the gradation conversion on the high frequency component on the basis of the control of the control unit 118. -
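The per-pixel behavior of the gradation conversion unit 505 described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the gradation conversion curve is assumed here to be supplied as a callable, and `None` is used to stand in for an absent (invalid) high frequency component.

```python
def apply_gradation(low, high_valid, curve):
    """Apply the gradation conversion curve to the low frequency component
    and, when present, to the corresponding valid high frequency component;
    an absent (invalid) component is passed through unconverted."""
    low_out = curve(low)
    high_out = curve(high_valid) if high_valid is not None else None
    return low_out, high_out
```

For example, with `curve = lambda v: 2 * v`, a pixel whose high frequency component was classified as invalid is converted only in its low frequency part.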
- Also, in the above, it is supposed to perform the processing by way of the hardware, but the configuration is not necessarily limited to the above. For example, the color image signal from the
color CCD 800 is recorded on the recording medium such as a memory card as raw data while being unprocessed, and the associated information such as image pickup conditions (for example, the temperature of the image pickup device, the exposure conditions, and the like, for each shooting operation from the control unit 118) is recorded in the recording medium as the header information. Then, the processing can be performed as the computer is allowed to execute the image processing program which is separate software to instruct the computer to read the information of the recording medium. It should be noted that the transmission of various pieces of information from the image pickup unit to the computer is not necessarily performed via the recording medium and may be performed via a communication line or the like. -
FIG. 22 is a flow chart showing a main routine of an image processing program. - It should be noted that in
FIG. 22 , processing steps substantially identical to the processing shown in FIG. 11 of the first embodiment are allocated with the same step numbers. -
- Next, as shown in
Numeric Expression 5, the luminance signals and the color difference signals are calculated (step S50). - Subsequently, by using the low-pass filter and the difference filter, the frequency decomposition on the luminance signals is performed, and the high frequency component and the low frequency component are obtained (step S2).
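The Y/C separation of step S50 can be sketched per Numeric Expression 5 as follows; this is a minimal per-pixel illustration with floating point values, not the patent's implementation.

```python
def rgb_to_ycbcr(r, g, b):
    """Separate an R, G, B triple into Y, Cb, Cr per Numeric Expression 5."""
    y = 0.29900 * r + 0.58700 * g + 0.11400 * b
    cb = -0.16874 * r - 0.33126 * g + 0.50000 * b
    cr = 0.50000 * r - 0.41869 * g - 0.08131 * b
    return y, cb, cr
```

Because the Y coefficients sum to 1 and the Cb and Cr coefficients each sum to 0, a neutral gray (r = g = b) yields Y equal to that gray level and Cb = Cr = 0.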
- Furthermore, as is described below with reference to
FIG. 23 , the conversion characteristic is calculated (step S51). - Then, as is described below with reference to
FIG. 24 , the high frequency component is separated into the invalid component caused by the noise and the other valid component (step S52). - Next, as is described below with reference to
FIG. 25 , the gradation processing is performed on the low frequency component and the valid component in the high frequency component (step S53). - Subsequently, on the basis of the low frequency component on which the gradation processing has been performed, the valid component in the high frequency component on which the gradation processing has been performed, and the invalid component in the high frequency component, the luminance signals on which the gradation conversion has been performed are synthesized one another (step S6).
- Then, as shown in Numeric Expression 6, the luminance signals and the color difference signals are synthesized to obtain the color image signal on which the gradation conversion has been performed (step S54).
- Furthermore, the signal processing such as a known compression processing is performed (step S7).
- After that, the color image signal after the processing is output (step S8), and the processing is ended.
-
FIG. 23 is a flow chart showing the processing for the conversion characteristic calculation in the above-mentioned step S51. - It should be noted that in
FIG. 23 , processing steps substantially identical to the processing shown in FIG. 12 of the first embodiment are allocated with the same step numbers. -
- Subsequently, the known calculation for the edge intensity is performed on the luminance signals in the correct exposure range by using the Laplacian filter or the like (step S12).
- Then, by selecting the pixels having the edge intensity equal to or larger than the pre-set threshold, the histogram is created (step S13).
- After that, by comparing the color difference signal with the pre-set threshold, a particular hue region, for example, a skin color region is extracted (step S60).
- Furthermore, on the basis of the skin color region and the information on the edge intensity, the region determined as the human face is extracted and set as a region-of-interest (step S61).
- Next, the luminance information in the region-of-interest is calculated and multiplied by a pre-set coefficient to calculate the weighting factors for the correction related to the respective luminance levels (step S62).
- Subsequently, the weighting factors are added to the respective luminance levels of the histogram to perform the correction on the histogram (step S63).
- After that, by accumulating the histograms and further performing the normalization, the gradation conversion curve is calculated (step S14).
- The gradation conversion curve calculated in the above-mentioned manner is output (step S15), and the flow is returned from the processing to the processing shown in
FIG. 22 . -
FIG. 24 is a flow chart showing the processing for the high frequency separation. - It should be noted that in
FIG. 24 , processing steps substantially identical to the processing shown in FIG. 13 of the first embodiment are allocated with the same step numbers. -
- Next, from the read header information, the information such as the temperature and the gain of the image pickup device is set. At this time, if a necessary parameter does not exist for the header information, a pre-set standard value is assigned to the relevant information (step S21).
- Subsequently, the table related to the noise amount where a relation among the signal value level of the image signal, the gain of the image signal, the operation temperature of the image pickup device, and the noise amount is recorded is read (step S70).
- Then, on the basis of the table related to the noise amount, the noise amount is calculated (step S71).
- After that, the high frequency component corresponding to the low frequency component and the high frequency components located in the neighborhood of the high frequency component are extracted (step S24).
- Furthermore, from the high frequency component corresponding to the low frequency component and the high frequency components located in the neighborhood of the high frequency component, the average value is calculated (step S25).
- Next, on the basis of the average value and the noise amount, the upper limit and the lower limit are set as shown in Numeric Expression 3 (step S26).
- Subsequently, in a case where the high frequency component is in the range between the upper limit and the lower limit, it is determined that the high frequency component is the invalid component caused by the noise, and in a case where the high frequency component exceeds the upper limit or falls short of the lower limit, it is determined that the high frequency component is the valid component (step S27).
- Then, the valid component and the invalid component are output while being separated from each other (step S28).
- Furthermore, it is determined whether or not the processing for all the low frequency components has been completed (step S30). In a case where it is determined that the processing has not been completed, the flow is returned to the above-mentioned step S20 to repeat the above-mentioned processing. On the other hand, in a case where it is determined that the processing has been completed, the flow is returned to the processing shown in
FIG. 22 . -
FIG. 25 is a flow chart showing the gradation processing in the above-mentioned step S53. - It should be noted that in
FIG. 25 , processing steps substantially identical to the processing shown in FIG. 14 of the first embodiment are allocated with the same step numbers. -
- Next, the gradation conversion curve is read (step S42).
- Subsequently, the high frequency component regarded as the valid component corresponding to the low frequency component is extracted (step S44).
- Then, it is determined whether or not the high frequency component regarded as the valid component exists (step S45).
- At this time, in a case where it is determined that the high frequency component regarded as the valid component exists, the gradation conversion is performed on the high frequency component regarded as the valid component (step S46).
- When the processing in the step S46 is ended or in a case where it is determined that the high frequency component regarded as the valid component does not exist in the above-mentioned step S45, the gradation conversion is performed on the low frequency components (step S47).
- Next, the low frequency component on which the gradation processing has been performed and the valid component in the high frequency component on which the gradation processing has been performed are output (step S48).
- After that, it is determined whether or not the processing for all the low frequency components has been completed (step S49). In a case where it is determined that the processing has not been completed, the flow is returned to the above-mentioned step S40 to repeat the above-mentioned processing. On the other hand, in a case where it is determined that the processing has been completed, the flow is returned to the processing shown in FIG. 22.
- It should be noted that in the above description, the configuration of using the low-pass filter and the difference filter for the frequency decomposition and the frequency synthesis is adopted, but the configuration is not necessarily limited to the above. For example, a configuration of using a Gaussian filter and a Laplacian filter for the frequency decomposition and the frequency synthesis may also be adopted. In this case, although the operation amount is increased, an advantage is provided that the performance of the frequency decomposition is better. Then, in a case where the Gaussian filter and the Laplacian filter are used, similarly to the above-mentioned first embodiment, a configuration of performing the frequency decomposition and the frequency synthesis at multiple stages can be adopted.
- Also, in the above description, for the color image pickup device, the configuration of using the Bayer-type primary color filter is adopted, but the configuration is not necessarily limited to the above. For example, a single image pickup device using the color-difference line-sequential type complementary color filter shown in FIG. 17, or a two- or three-image-pickup-device configuration, may also be applied.
- Herein, FIG. 17 is a diagram illustrating the configuration of the color-difference line-sequential type complementary color filter.
- The color-difference line-sequential type complementary color filter has a basic unit of 2×2 pixels. Cyan (Cy) and yellow (Ye) are arranged on one line of the 2×2 pixels, and magenta (Mg) and green (G) are arranged on the other line of the 2×2 pixels. It should be noted that such a configuration is adopted that the positions of magenta (Mg) and green (G) are inverted for each line.
- According to the second embodiment described above, only the high frequency component in which the influence of the noise appears visually prominently with respect to the color signal is separated into the invalid component and the valid component. The gradation processing is performed on the valid component and not on the invalid component, so that an increase in noise accompanying the gradation processing is suppressed. Thus, it is possible to generate a high quality color image signal.
- Also, as the low frequency component is excluded from the target of the separation into the valid component and the invalid component, the possibility of generating an adverse effect accompanying the processing is decreased, and it is possible to improve the stability.
- Furthermore, as the image signal is synthesized with the invalid component, it is possible to obtain the color image signal with little sense of visual discomfort, and the stability and reliability of the processing can be improved.
- Then, as the low-pass filter and the difference filter have simple filter configurations, the image processing system in which the processing can be performed at a high speed can be configured at a low cost.
- In addition, as the gradation conversion curve is obtained adaptively from the low frequency components of the luminance signals, it is possible to perform the high accuracy gradation conversion on various types of the color image signals.
- Also, as the gradation conversion curve is calculated on the basis of the low frequency component, it is possible to calculate the appropriate gradation conversion curve with little influence from the noise.
- Furthermore, as the gradation processing can be performed while weighting the region-of-interest such as a human being, it is possible to obtain the high quality image signals which are subjectively preferable.
- Then, as the gradation conversion with the identical conversion characteristic is performed on the low frequency component and the valid component in the high frequency component located at the same position, it is possible to obtain the image signal providing the sense of integrity with little sense of visual discomfort.
- In addition, in a case where the valid component in the high frequency component does not exist, the unnecessary gradation conversion is cancelled, and it is thus possible to improve the processing speed.
- FIGS. 26 to 30 illustrate a third embodiment of the present invention, and FIG. 26 is a block diagram of a configuration of an image processing system.
- According to the third embodiment, a part similar to that of the above-mentioned first and second embodiments is allocated with the same reference numerals to appropriately omit the description thereof, and only a different part will be mainly described.
- The image processing system according to the present embodiment has such a configuration that with respect to the above-mentioned image processing system illustrated in FIG. 1 according to the first embodiment, an edge emphasis unit 1202 constituting edge emphasis means is added, and the frequency decomposition unit 109, the high frequency separation unit 112, and the frequency synthesis unit 115 are respectively replaced by a frequency decomposition unit 1200 constituting separation means and frequency decomposition means, a high frequency separation unit 1201 constituting separation means and high frequency separation means, and a frequency synthesis unit 1203 constituting synthesis means and frequency synthesis means. The other basic configuration is similar to that of the above-mentioned first embodiment. Therefore, the same components are allocated with the same names and reference numerals to appropriately omit the description thereof, and only a different part will be mainly described.
- The buffer 105 is connected to the exposure control unit 106, the focus control unit 107, the conversion characteristic calculation unit 111, and the frequency decomposition unit 1200.
- The frequency decomposition unit 1200 is connected to the buffer 110. The buffer 110 is connected to the conversion characteristic calculation unit 111, the high frequency separation unit 1201, and the gradation processing unit 113.
- The high frequency separation unit 1201 is connected to the edge emphasis unit 1202 and the buffer 114. The edge emphasis unit 1202 is connected to the gradation processing unit 113.
- The buffer 114 is connected via the frequency synthesis unit 1203 to the signal processing unit 116.
- The control unit 118 is also bi-directionally connected to the frequency decomposition unit 1200, the high frequency separation unit 1201, the edge emphasis unit 1202, and the frequency synthesis unit 1203 to control these units.
- Next, the action of the image processing system illustrated in FIG. 26 is basically similar to that of the first embodiment, and therefore only a different part will be mainly described along the flow of the image signal.
- The image signal in the buffer 105 is transferred to the frequency decomposition unit 1200.
- The frequency decomposition unit 1200 performs a predetermined frequency decomposition on the transferred image signal to obtain a high frequency component and a low frequency component on the basis of the control of the control unit 118. Then, the frequency decomposition unit 1200 sequentially transfers the thus obtained high frequency component and the low frequency components to the buffer 110. It should be noted that according to the present embodiment, for the frequency decomposition, for example, it is supposed to use a known discrete cosine transform of a 64×64 pixel unit. -
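For reference, the block discrete cosine transform mentioned here is built from the 1-D DCT-II, applied to the rows and then the columns of each block; the sketch below is the textbook O(N^2) form for illustration only, not the system's actual implementation.

```python
import math

def dct_1d(x):
    """Orthonormal 1-D DCT-II of a sequence x (textbook definition)."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out
```

For a constant input only the zero-th order (DC) coefficient is non-zero, which corresponds to the zero-th order component at the upper-left origin of the frequency space described next.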
FIGS. 27A and 27B are explanatory diagrams for describing the discrete cosine transform; FIG. 27A illustrates the image signal in the real space and FIG. 27B illustrates the signal in the frequency space after the discrete cosine transform, respectively.
- In the frequency space of FIG. 27B, the upper left is set as the origin, that is, as the zero-th order component, and the high frequency components at the first order or above are arranged on concentric circles while using the zero-th order component as the origin.
- The conversion characteristic calculation unit 111 reads the image signal from the buffer 105 for each 64×64 pixel unit used in the frequency decomposition unit 1200 on the basis of the control of the control unit 118. After that, the conversion characteristic calculation unit 111 calculates the gradation characteristic used for the gradation conversion processing similarly to the above-mentioned first embodiment. That is, according to the present embodiment, for the gradation conversion processing, it is supposed to employ the space-variant processing using a plurality of gradation characteristics different for each region of the 64×64 pixel unit. Then, the conversion characteristic calculation unit 111 transfers the calculated gradation characteristic to the gradation processing unit 113. - The high
frequency separation unit 1201 reads the high frequency components from the buffer 110 and performs the noise reducing processing on the high frequency components on the basis of the control of the control unit 118. After that, the high frequency component is separated into the invalid component caused by the noise and the other valid component. Then, the high frequency separation unit 1201 transfers the thus separated valid components to the edge emphasis unit 1202 and the above-mentioned invalid components to the buffer 114, respectively.
- The edge emphasis unit 1202 multiplies the valid component transferred from the high frequency separation unit 1201 by a pre-set coefficient to perform the edge emphasis processing, and transfers the processing result to the gradation processing unit 113.
- The gradation processing unit 113 reads the low frequency components from the buffer 110, the valid components in the high frequency components from the edge emphasis unit 1202, and the gradation characteristic from the conversion characteristic calculation unit 111, respectively, on the basis of the control of the control unit 118. Then, on the basis of the above-mentioned gradation characteristic, the gradation processing unit 113 performs the gradation processing on the low frequency component and the valid components in the high frequency components. The gradation processing unit 113 transfers the low frequency component on which the gradation processing has been performed and the valid components in the high frequency components to the buffer 114.
- The frequency synthesis unit 1203 reads the low frequency component on which the gradation processing has been performed, the valid component in the high frequency component on which the gradation processing has been performed, and the invalid component in the high frequency component from the buffer 114 on the basis of the control of the control unit 118, and synthesizes the image signal on which the gradation processing has been performed on the basis of these components. It should be noted that according to the present embodiment, for the frequency synthesis, it is supposed to use a known inverse DCT (Discrete Cosine Transform). Then, the frequency synthesis unit 1203 transfers the synthesized image signal to the signal processing unit 116.
- The signal processing unit 116 performs a known compression processing or the like on the image signal from the frequency synthesis unit 1203 and transfers the signal after the processing to the output unit 117 on the basis of the control of the control unit 118.
- The output unit 117 records and saves the image signal output from the signal processing unit 116 in the recording medium such as a memory card.
- Next, FIG. 28 is a block diagram of a configuration example of the high frequency separation unit 1201.
- The high frequency separation unit 1201 has such a configuration that with respect to the high frequency separation unit 112 shown in FIG. 5 of the above-mentioned first embodiment, a first smoothing unit 1300 constituting noise reducing means and first smoothing means and a second smoothing unit 1301 constituting noise reducing means and second smoothing means are added. The other basic configuration is similar to that of the high frequency separation unit 112 shown in FIG. 5. Therefore, the same components are allocated with the same names and reference numerals to appropriately omit the description thereof, and only a different part will be mainly described.
- The determination unit 409 is connected to the first smoothing unit 1300 and the second smoothing unit 1301. The first smoothing unit 1300 is connected to the edge emphasis unit 1202. The second smoothing unit 1301 is connected to the buffer 114.
- The control unit 118 is bi-directionally connected to the first smoothing unit 1300 and the second smoothing unit 1301 to control these units.
- Subsequently, a description will be given of the action of the high frequency separation unit 1201.
- The low frequency component extraction unit 400 sequentially extracts the low frequency components from the buffer 110 on the basis of the control of the control unit 118. It should be noted that according to the present embodiment, as described above, it is supposed to use the discrete cosine transform of the 64×64 pixels. Then, the low frequency component extraction unit 400 extracts frequency components equal to or smaller than a predetermined n-th order among the frequency components at the respective orders shown in FIG. 27B as the low frequency components from the respective regions of the 64×64 pixels.
- Regarding the extracted low frequency components, the noise amount is calculated via the parameter selection unit 404 and the interpolation unit 405 similarly to the above-mentioned first embodiment. Then, the interpolation unit 405 transfers the calculated noise amount to the upper limit and lower limit setting unit 408.
- The high frequency component extraction unit 406 extracts frequency components equal to or larger than the (n+1)-th order from the respective regions of the 64×64 pixels corresponding to the low frequency components extracted by the low frequency component extraction unit 400 as the high frequency components on the basis of the control of the control unit 118.
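The order-based split performed by the low frequency component extraction unit 400 (orders up to the n-th) and the high frequency component extraction unit 406 (orders at the (n+1)-th and above) can be sketched as follows. Treating the Euclidean distance from the upper-left DC term as the order is an assumption suggested by the concentric-circle description of FIG. 27B, and the function name is hypothetical.

```python
import math

def split_dct_block(coeffs, n):
    """Split a square block of DCT coefficients into low frequency
    components (order <= n) and high frequency components (order > n).
    The order metric, distance from the DC term at (0, 0), is assumed.
    """
    low, high = {}, {}
    for u in range(len(coeffs)):
        for v in range(len(coeffs[u])):
            order = math.hypot(u, v)
            (low if order <= n else high)[(u, v)] = coeffs[u][v]
    return low, high
```

Together the two dictionaries cover every coefficient of the block exactly once, mirroring how the two extraction units partition each 64×64 region.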
- The average calculation unit 407 separates the high frequency components for each order to calculate the respective average values AV on the basis of the control of the control unit 118 and transfers the calculated average values AV to the upper limit and lower limit setting unit 408.
- On the basis of the control of the control unit 118, by using the average value AV from the average calculation unit 407 and the noise amount N from the interpolation unit 405, the upper limit and lower limit setting unit 408 sets an upper limit App_Up and a lower limit App_Low for distinguishing the valid component from the invalid component, as represented by Numeric Expression 3, for each order.
- The upper limit and lower limit setting unit 408 transfers the thus set upper limit App_Up and lower limit App_Low to the determination unit 409, transfers the average value AV to the second smoothing unit 1301, and transfers the average value AV and the noise amount N to the first smoothing unit 1300, respectively.
- On the basis of the control of the control unit 118, the determination unit 409 reads the high frequency components from the high frequency component extraction unit 406, and also reads the upper limit App_Up and the lower limit App_Low corresponding to the order of the high frequency components from the upper limit and lower limit setting unit 408. Then, in a case where the high frequency component exceeds the upper limit App_Up or falls short of the lower limit App_Low, the determination unit 409 determines that the high frequency component is the valid component and transfers the high frequency component to the first smoothing unit 1300.
- On the other hand, in a case where the high frequency component is in the range between the upper limit App_Up and the lower limit App_Low, the determination unit 409 determines that the high frequency component is the invalid component caused by the noise and transfers the high frequency component to the second smoothing unit 1301.
- The second smoothing unit 1301 performs a processing of substituting the high frequency component (herein, the high frequency component is denoted by P) with the average value AV from the upper limit and lower limit setting unit 408, as shown in Numeric Expression 7 below.
- P=AV [Expression 7]
- Also, the first smoothing unit 1300 uses the average value AV from the upper limit and lower limit setting unit 408 and the noise amount N to perform the correction on the high frequency component P. The correction has two types of processing. First, in a case where the high frequency component exceeds the upper limit App_Up, the first smoothing unit 1300 performs a correction as shown in Numeric Expression 8 below.
- P=AV−N/2 [Expression 8]
- On the other hand, in a case where the high frequency component falls short of the lower limit App_Low, the first smoothing unit 1300 performs a correction as shown in Numeric Expression 9 below.
- P=AV+N/2 [Expression 9]
- Then, the processing result obtained by the first smoothing unit 1300 is transferred to the edge emphasis unit 1202, and the processing result obtained by the second smoothing unit 1301 is transferred to the buffer 114, respectively.
- Therefore, only the high frequency component determined as the valid component is transferred via the edge emphasis unit 1202 to the gradation processing unit 113, and the gradation processing is performed thereon. On the other hand, the high frequency component determined as the invalid component is transferred to the buffer 114 without the gradation processing being performed thereon. It should be noted that according to the present embodiment too, similarly to the above-mentioned first and second embodiments, an image processing system in which the image pickup unit is separately provided may be used.
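Taking Numeric Expressions 7 to 9 exactly as printed above, the determination unit 409 and the two smoothing units can be sketched in a single function (the name and argument layout are hypothetical):

```python
def smooth_high_frequency(p, av, n, app_up, app_low):
    """Apply the corrections of Numeric Expressions 7-9 as printed.

    Valid component above App_Up  -> AV - N/2   (Expression 8)
    Valid component below App_Low -> AV + N/2   (Expression 9)
    Invalid component (in band)   -> AV         (Expression 7)
    """
    if p > app_up:        # valid: first smoothing unit 1300
        return av - n / 2.0
    if p < app_low:       # valid: first smoothing unit 1300
        return av + n / 2.0
    return av             # invalid: second smoothing unit 1301
```

Whether the printed right-hand sides of Expressions 8 and 9 are meant to be centered on AV or on the component P itself cannot be resolved from this excerpt; the sketch follows the printed form.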
- Also, in the above, it is supposed to perform the processing by way of the hardware, but the configuration is not necessarily limited to the above. For example, the image signal from the CCD 112 is recorded in the recording medium such as a memory card as raw data without applying the processing, and also the associated information such as the image pickup conditions (for example, the temperature of the image pickup device, the exposure conditions, and the like, for each shooting operation from the control unit 118) is recorded in the recording medium as the header information. Then, the processing can be performed as the computer is allowed to execute the image processing program which is separate software to instruct the computer to read the information of the recording medium. It should be noted that the transmission of various pieces of information from the image pickup unit to the computer is not necessarily performed via the recording medium and may be performed via a communication line or the like.
- FIG. 29 is a flow chart showing a main routine of an image processing program.
- It should be noted that in FIG. 29, processing steps substantially identical to the processing shown in FIG. 11 of the above-mentioned first embodiment are allocated with the same step numbers.
- When the processing is started, first, the image signal is read, and also the header information such as the temperature and the exposure conditions of the image pickup device is read (step S1).
- Next, by performing the frequency decomposition such as the discrete cosine transform, the high frequency component and the low frequency component are obtained (step S2).
- Subsequently, as shown in
FIG. 12, the conversion characteristic is calculated (step S3).
- Furthermore, as described below with reference to FIG. 30, the high frequency component is separated into the invalid component caused by the noise and the other valid component (step S80).
- Then, as shown in FIG. 14, the gradation processing is performed on the low frequency component and the valid component in the high frequency component (step S5).
- Subsequently, the signal processing such as a known compression processing is performed (step S7).
- Then, the image signal after the processing is output (step S8), and the processing is ended.
-
FIG. 30 is a flow chart showing the processing for the high frequency separation in the above-mentioned step S80.
- It should be noted that in FIG. 30, processing steps substantially identical to the processing shown in FIG. 13 of the above-mentioned first embodiment are allocated with the same step numbers.
- When the processing is started, first, the low frequency components are sequentially extracted for each pixel (step S20).
- Next, from the read header information, the information such as the temperature and the gain of the image pickup device is set. At this time, if a necessary parameter does not exist for the header information, a pre-set standard value is assigned to the relevant information (step S21).
- Subsequently, the parameter related to the reference noise model is read (step S22).
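A reference noise model of this kind is commonly stored as a small set of measured (signal level, noise amount) points and interpolated at run time, which is what the following interpolation step does. The sketch below is a hypothetical illustration: the actual model form and parameters are not given in this excerpt, and the gain factor is an assumed simplification of the parameter selection from the header information.

```python
def estimate_noise(level, model_points, gain=1.0):
    """Interpolate a noise amount for `level` from reference points
    [(signal_level, noise_amount), ...], clamping outside the model range.
    The linear model form and the gain scaling are assumptions."""
    pts = sorted(model_points)
    if level <= pts[0][0]:
        return pts[0][1] * gain
    for (l0, n0), (l1, n1) in zip(pts, pts[1:]):
        if level <= l1:
            t = (level - l0) / (l1 - l0)
            return (n0 + t * (n1 - n0)) * gain
    return pts[-1][1] * gain
```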
- Then, on the basis of the parameter of the reference noise model, the noise amount related to the low frequency component is calculated through the interpolation processing (step S23).
- After that, as illustrated in
FIG. 27B , the high frequency components corresponding to the low frequency components are sequentially extracted (step S24). - Next, the average values of the high frequency components corresponding to the low frequency components are calculated for each order (step S25).
- Subsequently, on the basis of the average value and the noise amount, the upper limit and the lower limit are set as shown in Numeric Expression 3 (step S26).
- Then, in a case where the high frequency component is in the range between the upper limit and the lower limit, it is determined that the high frequency component is the invalid component caused by the noise, and in a case where the high frequency component exceeds the upper limit or falls short of the lower limit, it is determined that the high frequency component is the valid component (step S90).
- At this time, in a case where it is determined that the high frequency component is the valid component, the correction processing shown in Numeric Expression 8 or Numeric Expression 9 is performed on the high frequency component (step S91).
- On the other hand, in step S90, in a case where it is determined that the high frequency component is the invalid component, the correction processing shown in Numeric Expression 7 is performed on the high frequency component (step S92).
- When the processing in step S91 or S92 is ended, the valid component and the invalid component are output while being separated from each other (step S93).
- Then, it is determined whether or not the processing for all the high frequency components has been completed (step S29). In a case where it is determined that the processing has not been completed, the flow is returned to the above-mentioned step S24 to repeat the above-mentioned processing.
- On the other hand, in step S29, in a case where it is determined that the processing for all the high frequency components has been completed, it is determined whether or not the processing for all the low frequency components has been completed (step S30). In a case where it is determined that the processing has not been completed, the flow is returned to the above-mentioned step S20 to repeat the above-mentioned processing. On the other hand, in a case where it is determined that the processing has been completed, the flow is returned to the processing shown in FIG. 29.
- It should be noted that in the above description, the configuration of using the discrete cosine transform for the frequency decomposition and the frequency synthesis is adopted, but the configuration is not necessarily limited to the above. For example, similarly to the above-mentioned first embodiment, a configuration of using the wavelet transform can be adopted, and similarly to the second embodiment described above, a configuration of using the low-pass filter and the difference filter in combination can also be adopted.
- Furthermore, in the above description, the configuration of processing the monochrome image signal is adopted, but the configuration is not necessarily limited to the above. For example, similarly to the second embodiment described above, a configuration of calculating the luminance signals from the color image signal obtained from the color image pickup device for the processing can also be adopted.
- According to the third embodiment described above, only the high frequency component in which the influence of the noise appears visually prominently is separated into the invalid component and the valid component. The gradation processing is performed on the valid component and not on the invalid component, so that an increase in noise accompanying the gradation processing is suppressed. Thus, it is possible to generate a high quality image signal.
- Also, as the low frequency component is excluded from the target of the separation into the valid component and the invalid component, the possibility of generating an adverse effect accompanying the processing is decreased, and it is possible to improve the stability.
- Furthermore, as the image signal is synthesized with the invalid component, it is possible to obtain the image signal with little sense of visual discomfort, and the stability and reliability of the processing can be improved.
- Also, the discrete cosine transform is excellent at the separation of the frequency, and it is therefore possible to perform the high accuracy processing.
- As the gradation conversion curve is adaptively and also independently calculated for each region from the low frequency component of the image signal, it is possible to perform the gradation conversion at the high accuracy on various image signals.
- Then, as the gradation conversion is performed on the high frequency component on which the noise reducing processing has been performed, an increase in noise accompanying the gradation processing is suppressed. Thus, it is possible to generate a high quality image signal.
- Also, as the correction processing is performed on the valid component in the high frequency component and the smoothing processing is performed on the invalid component in the high frequency component, the generation of discontinuity accompanying the noise reducing processing is prevented, and it is possible to generate a high quality image signal.
- Furthermore, as the edge emphasis processing is performed only on the valid component in the high frequency component and the edge emphasis processing is not performed on the invalid component in the high frequency component, it is possible to emphasize only the edge component without emphasizing the noise component. With the configuration, it is possible to generate the high quality image signal.
-
FIGS. 31 to 36 illustrate a fourth embodiment of the present invention, andFIG. 31 is a block diagram of a configuration of an image processing system. - According to the fourth embodiment, the same configuration as that of the above-mentioned first to third embodiments is allocated with the same reference numerals to appropriately omit the description thereof, and only a different part will be mainly described.
- The image processing system according to the present embodiment has such a configuration that with respect to the above-mentioned image processing system illustrated in
FIG. 1 according to the first embodiment, anoise reducing unit 1400 constituting separation means and noise reducing means, adifference unit 1401 constituting separation means and difference means, and asignal synthesis unit 1403 constituting synthesis means and signal synthesis means are added, thegradation processing unit 113 is replaced by agradation processing unit 1402 constituting conversion means and gradation processing means, and thefrequency decomposition unit 109, the highfrequency separation unit 112, and thefrequency synthesis unit 115 are omitted. Other basic configuration is similar to that of the above-mentioned first embodiment. Therefore, the same components are allocated with the same names and reference numerals to appropriately omit the description thereof, and only a different part will be mainly described. - The
buffer 105 is connected to theexposure control unit 106, thefocus control unit 107, thenoise reducing unit 1400, and thedifference unit 1401. - The
noise reducing unit 1400 is connected to thebuffer 110. Thebuffer 110 is connected to the conversioncharacteristic calculation unit 111, thedifference unit 1401, and thegradation processing unit 1402. - The conversion
characteristic calculation unit 111 is connected to thegradation processing unit 1402. Thedifference unit 1401 is connected to thebuffer 114. Thegradation processing unit 1402 is connected to thebuffer 114. Thebuffer 114 is connected via thesignal synthesis unit 1403 to thesignal processing unit 116. - The
control unit 118 is also bi-directionally connected to thenoise reducing unit 1400, thedifference unit 1401, thegradation processing unit 1402, and thesignal synthesis unit 1403 to control these units. - Next, the action of the image processing system illustrated in
FIG. 31 is basically similar to that of the first embodiment, and therefore only a different part will be mainly described along the flow of the image signal. - The image signal in the
buffer 105 is transferred to the noise reducing unit 1400. - The noise reducing unit 1400 performs the noise reducing processing on the basis of the control of the control unit 118 and transfers the image signal after the noise reducing processing, as the valid component, to the buffer 110. - The conversion characteristic calculation unit 111 reads the valid component from the buffer 110 and, similarly to the above-mentioned first embodiment, calculates the gradation characteristic used for the gradation conversion processing. It should be noted that according to the present embodiment, the gradation conversion processing is supposed to be, for example, a space-variant processing using a plurality of gradation characteristics, one for each region of a 64×64 pixel unit. Then, the conversion characteristic calculation unit 111 transfers the calculated gradation characteristic to the gradation processing unit 1402. - On the basis of the control of the control unit 118, the difference unit 1401 reads the image signal before the noise reducing processing from the buffer 105 and also reads the image signal after the noise reducing processing (the valid component) from the buffer 110, and takes the difference between the two. The difference unit 1401 transfers the signal obtained as the result of taking the difference, as the invalid component, to the buffer 114. - The
gradation processing unit 1402 reads the valid component from the buffer 110 and the gradation characteristic from the conversion characteristic calculation unit 111, respectively, on the basis of the control of the control unit 118. Then, on the basis of the above-mentioned gradation characteristic, the gradation processing unit 1402 performs the gradation processing on the above-mentioned valid component. The gradation processing unit 1402 transfers the valid component on which the gradation processing has been performed to the buffer 114. - The signal synthesis unit 1403 reads the valid component on which the gradation processing has been performed and the invalid component from the buffer 114 on the basis of the control of the control unit 118 and adds these components, so that the image signal on which the gradation conversion has been performed is synthesized. The signal synthesis unit 1403 transfers the image signal thus synthesized to the signal processing unit 116. - The
signal processing unit 116 performs a known compression processing or the like on the image signal from the signal synthesis unit 1403 and transfers the signal after the processing to the output unit 117, on the basis of the control of the control unit 118. - The output unit 117 records and saves the image signal output from the signal processing unit 116 in the recording medium such as a memory card. - Next, FIG. 32 is a block diagram of a configuration example of the noise reducing unit 1400. - The
noise reducing unit 1400 is configured by including an image signal extraction unit 1500, an average calculation unit 1501 constituting noise estimation means and average calculation means, a gain calculation unit 1502 constituting noise estimation means and collection means, a standard value assigning unit 1503 constituting noise estimation means and assigning means, a noise LUT 1504 constituting noise estimation means and table conversion means, an upper limit and lower limit setting unit 1505 constituting setting means and upper limit and lower limit setting means, a determination unit 1506 constituting determination means, a first smoothing unit 1507 constituting first smoothing means, and a second smoothing unit 1508 constituting second smoothing means. - The buffer 105 is connected to the image signal extraction unit 1500. The image signal extraction unit 1500 is connected to the average calculation unit 1501 and the determination unit 1506. - The average calculation unit 1501, the gain calculation unit 1502, and the standard value assigning unit 1503 are connected to the noise LUT 1504. The noise LUT 1504 is connected to the upper limit and lower limit setting unit 1505. The upper limit and lower limit setting unit 1505 is connected to the determination unit 1506, the first smoothing unit 1507, and the second smoothing unit 1508. - The determination unit 1506 is connected to the first smoothing unit 1507 and the second smoothing unit 1508. The first smoothing unit 1507 and the second smoothing unit 1508 are connected to the buffer 110. - The control unit 118 is bi-directionally connected to the image signal extraction unit 1500, the average calculation unit 1501, the gain calculation unit 1502, the standard value assigning unit 1503, the noise LUT 1504, the upper limit and lower limit setting unit 1505, the determination unit 1506, the first smoothing unit 1507, and the second smoothing unit 1508 to control these units. - Subsequently, a description will be given of the action of the
noise reducing unit 1400. - The image signal extraction unit 1500 sequentially extracts, from the buffer 105 on the basis of the control of the control unit 118, the target pixel on which the noise reducing processing should be performed and neighboring pixels of, for example, 3×3 pixels including the target pixel. The image signal extraction unit 1500 transfers the target pixel and the neighboring pixels to the average calculation unit 1501, and the target pixel to the determination unit 1506, respectively. - The average calculation unit 1501 reads the target pixel and the neighboring pixels from the image signal extraction unit 1500 and calculates the average value AV thereof on the basis of the control of the control unit 118. The average calculation unit 1501 transfers the calculated average value AV to the noise LUT 1504. - The
gain calculation unit 1502 calculates the gain information in the amplification unit 103, on the basis of the information related to the ISO sensitivity and the exposure condition transferred from the control unit 118, and transfers the gain information to the noise LUT 1504. - Also, the control unit 118 obtains temperature information of the CCD 102 from the temperature sensor 120 and transfers the thus obtained temperature information to the noise LUT 1504. - On the basis of the control of the control unit 118, in a case where at least one of the above-mentioned gain information and the temperature information cannot be obtained, the standard value assigning unit 1503 transfers a standard value of the information that cannot be obtained to the noise LUT 1504. - The
noise LUT 1504 is a look-up table in which a relation among the signal value level of the image signal, the gain of the image signal, the operation temperature of the image pickup device, and the noise amount is recorded. The look-up table is designed, for example, by using the technology disclosed in Japanese Unexamined Patent Application Publication No. 2004-128985. - The noise LUT 1504 outputs the noise amount N on the basis of the average value AV related to the target pixel from the average calculation unit 1501, the gain information from the gain calculation unit 1502 or the standard value assigning unit 1503, and the temperature information from the control unit 118 or the standard value assigning unit 1503. The noise amount N and the average value AV from the average calculation unit 1501 are transferred from the noise LUT 1504 to the upper limit and lower limit setting unit 1505. - On the basis of the control of the
control unit 118, the upper limit and lower limit setting unit 1505 uses the average value AV and the noise amount N from the noise LUT 1504 to set the upper limit App_Up and the lower limit App_Low for identifying whether the target pixel belongs to the noise or not, as shown in Numeric Expression 3. - The upper limit and lower limit setting unit 1505 transfers the thus set upper limit App_Up and lower limit App_Low to the determination unit 1506, transfers the average value AV to the second smoothing unit 1508, and transfers the average value AV and the noise amount N to the first smoothing unit 1507, respectively. - The
determination unit 1506 reads the target pixel from the image signal extraction unit 1500 and the upper limit App_Up and the lower limit App_Low from the upper limit and lower limit setting unit 1505, respectively, on the basis of the control of the control unit 118. Then, in a case where the target pixel exceeds the upper limit App_Up or falls short of the lower limit App_Low, the determination unit 1506 determines that the target pixel does not belong to the noise and transfers the target pixel to the first smoothing unit 1507. - On the other hand, in a case where the target pixel is in the range between the upper limit App_Up and the lower limit App_Low, the determination unit 1506 determines that the target pixel belongs to the noise and transfers the target pixel to the second smoothing unit 1508. - The second smoothing unit 1508 performs the processing of substituting the target pixel (herein, the target pixel is set as P) with the average value AV from the upper limit and lower limit setting unit 1505, as shown in Numeric Expression 7. - Also, the first smoothing unit 1507 uses the average value AV and the noise amount N from the upper limit and lower limit setting unit 1505 to perform the correction on the target pixel P. The correction has two types of processing: in a case where the target pixel P exceeds the upper limit App_Up, the first smoothing unit 1507 performs the correction shown in Numeric Expression 8; on the other hand, in a case where the target pixel P falls short of the lower limit App_Low, it performs the correction shown in Numeric Expression 9. - Then, the processing result obtained by the first smoothing unit 1507 and the processing result obtained by the second smoothing unit 1508 are both transferred to the buffer 110. - Next,
FIG. 33 is a block diagram of a configuration example of the gradation processing unit 1402. - The gradation processing unit 1402 has a configuration in which, with respect to the gradation processing unit 113 shown in FIG. 6 of the above-mentioned first embodiment, the low frequency component extraction unit 500 and the high frequency component extraction unit 504 are omitted, and an image signal extraction unit 1600 constituting extraction means is added. The other basic configuration is similar to that of the gradation processing unit 113 shown in FIG. 6. Therefore, the same components are allocated the same names and reference numerals, the description thereof is appropriately omitted, and only the differing parts will be described. - The buffer 110 is connected to the image signal extraction unit 1600. The image signal extraction unit 1600 is connected to the distance calculation unit 501 and the gradation conversion unit 505. - The control unit 118 is also bi-directionally connected to the image signal extraction unit 1600 to control the unit. - Subsequently, a description will be given of the action of the
gradation processing unit 1402. - The image signal extraction unit 1600 sequentially extracts the image signals after the noise reducing processing, as valid components, from the buffer 110 for each pixel on the basis of the control of the control unit 118. The image signal extraction unit 1600 transfers the extracted valid component to the distance calculation unit 501 and the gradation conversion unit 505. - After that, similarly to the above-mentioned first embodiment, the distance calculation unit 501 and the gradation conversion equation setting unit 502 set the gradation conversion equation with respect to the target pixel, as shown in Numeric Expression 4. Then, the gradation conversion equation setting unit 502 transfers the set gradation conversion equation to the buffer 503. - On the basis of the control of the control unit 118, the gradation conversion unit 505 reads the valid component from the image signal extraction unit 1600 and also reads the gradation conversion equation from the buffer 503 to perform the gradation conversion on the valid component. The gradation conversion unit 505 transfers the valid component after the gradation conversion to the buffer 114. - It should be noted that according to the present embodiment too, similarly to the above-mentioned first to third embodiments, the image processing system in which the image pickup unit is separately provided may be used.
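Numeric Expression 4 is not reproduced in this excerpt, so the sketch below assumes one common space-variant scheme: the gradation conversion equation for a target pixel blends the conversion curves of the four neighboring regions with weights inversely proportional to the calculated distances to the region centers. The curve shapes and distance values are hypothetical stand-ins.

```python
import numpy as np

# Hypothetical stand-ins for the gradation conversion curves of four
# neighboring 64x64-pixel regions; the real curves are calculated by the
# conversion characteristic calculation unit 111.
curves = [lambda p: p ** 0.5, lambda p: p ** 0.5,
          lambda p: p ** 1.0, lambda p: p ** 1.0]

def set_conversion_equation(curves, distances):
    """Assumed form of Numeric Expression 4: blend the curves of the four
    neighboring regions with weights inversely proportional to the distance
    from the target pixel to each region center."""
    w = 1.0 / (np.asarray(distances, dtype=float) + 1e-9)
    w = w / w.sum()  # normalize so the blended curve is a convex combination
    return lambda p: sum(wi * c(p) for wi, c in zip(w, curves))

convert = set_conversion_equation(curves, distances=[10.0, 10.0, 10.0, 10.0])
y = convert(0.25)  # equal weights: the average of the four curve outputs
```

Because the weights vary smoothly with the distances, neighboring pixels near a region boundary receive nearly identical blended curves, which avoids visible seams between the 64×64 regions.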
- Also, in the above, the processing is supposed to be performed by way of hardware, but the configuration is not necessarily limited to the above. For example, the image signal from the CCD 112 may be recorded in a recording medium such as a memory card as unprocessed raw data, and the associated information such as the image pickup conditions (for example, the temperature of the image pickup device, the exposure conditions, and the like, for each shooting operation, from the control unit 118) may be recorded in the recording medium as header information. Then, the processing can be performed by causing a computer to execute an image processing program, which is separate software, after the computer reads the information from the recording medium. It should be noted that the transmission of various pieces of information from the image pickup unit to the computer is not necessarily performed via the recording medium and may be performed via a communication line or the like. -
FIG. 34 is a flow chart showing a main routine of an image processing program. - It should be noted that in
FIG. 34 , processing steps basically substantially identified with the processing shown inFIG. 11 of the above-mentioned first embodiment are allocated with the same step numbers. - When the processing is started, first, the image signal is read, and also the header information such as the temperature and the exposure conditions of the image pickup device is read (step S1).
- Next, as is described below with reference to
FIG. 35 , the noise reducing processing is performed to calculate the image signal after the noise reducing processing as the valid component (step S100). - Subsequently, as shown in
FIG. 12 , the conversion characteristic is calculated (step S3). - Furthermore, from the difference between the image signal and the image signal after the noise reducing processing, the invalid component is calculated (step S101).
- Then, as is described below with reference to
FIG. 36 , the gradation processing is performed on the valid component (step S102). - Next, on the basis of the valid component on which the gradation processing has been performed and the invalid component, the image signal on which the gradation conversion has been performed is synthesized (step S103).
- Subsequently, the signal processing such as a known compression processing is performed (step S7).
- Then, the image signal after the processing is output (step S8), and the processing is ended.
-
FIG. 35 is a flow chart showing the processing for the noise reduction in the above-mentioned step S100. - When the processing is started, first, the target pixel on which the noise reducing processing should be performed and neighboring pixels, for example, of 3×3 pixels including the target pixel are sequentially extracted (step S110).
- Next, an average value of the target pixel and the neighboring pixels is calculated (step S111).
- Subsequently, from the read header information, the information such as the temperature and the gain of the image pickup device is set. At this time, if a necessary parameter does not exist for the header information, a pre-set standard value is assigned to the relevant information (step S112).
- Then, the table related to the noise amount where a relation among the signal value level of the image signal, the gain of the image signal, the operation temperature of the image pickup device, and the noise amount is recorded is read (step S113).
- Furthermore, on the basis of the table related to the noise amount, the noise amount is calculated (step S114).
- After that, on the basis of the average value and the noise amount, the upper limit and the lower limit are set as shown in Numeric Expression 3 (step S115).
- Next, it is determined whether the target pixel belongs to the noise or not through the comparison with the upper limit and the lower limit (step S116).
- At this time, in a case where the target pixel exceeds the upper limit or falls short of the lower limit, it is determined that the target pixel does not belong to the noise, and the correction processing shown in Numeric Expression 8 or Numeric Expression 9 is performed on the target pixel (step S117).
- On the other hand, in step S116, in a case where the target pixel is in the range between the upper limit and the lower limit, it is determined that the target pixel belongs to the noise, the correction processing shown in Numeric Expression 7 is performed on the target pixel (step S118).
- Then, the corrected target pixel is output as the pixel after the noise reducing processing (step S119).
- After that, the image signal after the noise reducing processing is set as the valid component, and it is determined whether the processing has been completed for all the valid components or not (step S120). In a case where it is determined that the processing has not been completed, the flow is returned to the above-mentioned step S110 to repeat the above-mentioned processing. On the other hand, in a case where it is determined that the processing has been completed, the flow is returned to the processing shown in
FIG. 34 . -
FIG. 36 is a flow chart showing the gradation processing in the above-mentioned step S102. - It should be noted that in
FIG. 36 , processing steps basically substantially identified with the processing shown inFIG. 14 of the above-mentioned first embodiment are allocated with the same step numbers. - When the processing is started, first, the image signals after the noise reducing processing are sequentially extracted as valid components for each pixel (step S130).
- Next, as illustrated in
FIG. 8 , the distances between the target pixel of the valid component and the centers of the four neighboring regions are calculated (step S41). - Subsequently, the gradation conversion curves in the four neighboring regions are read (step S42).
- Furthermore, as shown in
Numeric Expression 4, the gradation conversion equation with respect to the target pixel is set (step S43). - Then, by applying the gradation conversion equation shown in
Numeric Expression 4 with respect to the target pixel of the valid component, the gradation conversion is performed (step S47). - Next, the target pixel on which the gradation processing has been performed is output (step S48).
- After that, it is determined whether the processing has been completed for all the image signals after the noise reducing processing or not (step S131). In a case where it is determined that the processing has not been completed, the flow is returned to the above-mentioned step S130 to repeat the above-mentioned processing. On the other hand, in a case where it is determined that the processing has been completed, the flow is returned to the processing shown in
FIG. 34 . - It should be noted that in the above description, the configuration of processing the monochrome image signal is adopted, but the configuration is not necessarily limited to the above. For example, similarly to the second embodiment described above, it is possible to adopt a configuration of processing the color image signal obtained from the color image pickup device.
- According to the fourth embodiment described above, the gradation processing is performed only on the image signal after the noise reduction, and an increase in noise accompanying with the gradation processing is suppressed. Thus, it is possible to generate the high quality image signal.
- Also, as the conversion characteristic is calculated on the basis of the image signal after the noise reduction, the appropriate conversion characteristic with little influence from the noise can be calculated, and it is possible to improve the stability and reliability of the processing. At this time, as the gradation conversion curve is adaptively calculated from the image signal after the noise reduction, it is possible to perform the high accuracy gradation conversion on various types of image signals.
- Furthermore, the present embodiment corresponds to the processing system in which the gradation conversion processing is combined with the noise reducing processing. Therefore, the affinity and compatibility with the existing system are high, and the present embodiment can be applied to a large number of image processing systems. Furthermore, the higher performance can be achieved as a whole, and the system scale can be reduced, which leads to the realization of the lower cost.
- Then, the image signal after the noise reduction on which the gradation processing has been performed and the invalid component are synthesized with each other. Thus, the error generated in the noise reducing processing can be suppressed, and it is possible to perform the stable gradation processing. Also, it is possible to generate the high quality image signal with little sense of visual discomfort.
- In addition, as the gradation conversion curve is adaptively obtained, it is possible to perform the high accuracy gradation conversion on various types of image signals.
- Also, as the gradation conversion curve is obtained independently for each region, the degree of freedom is further improved, and also it is possible to obtain the high quality image signals for scenes with a large contrast.
- It should be noted that the present invention is not limited to the embodiments described above, and various modifications and applications can of course be made without departing from the gist of the present invention.
Claims (46)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006247169A JP4837503B2 (en) | 2006-09-12 | 2006-09-12 | Image processing system and image processing program |
JP2006-247169 | 2006-09-12 | ||
PCT/JP2007/067222 WO2008032610A1 (en) | 2006-09-12 | 2007-09-04 | Image processing system and image processing program |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2007/067222 Continuation WO2008032610A1 (en) | 2006-09-12 | 2007-09-04 | Image processing system and image processing program |
Publications (2)
Publication Number | Publication Date |
---|---|
US20090219416A1 true US20090219416A1 (en) | 2009-09-03 |
US8194160B2 US8194160B2 (en) | 2012-06-05 |
Family
ID=39183671
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/400,028 Active 2028-12-02 US8194160B2 (en) | 2006-09-12 | 2009-03-09 | Image gradation processing apparatus and recording |
Country Status (3)
Country | Link |
---|---|
US (1) | US8194160B2 (en) |
JP (1) | JP4837503B2 (en) |
WO (1) | WO2008032610A1 (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080240556A1 (en) * | 2007-02-28 | 2008-10-02 | Takao Tsuruoka | Image processing apparatus, image processing program, and image processing method |
US20090201387A1 (en) * | 2008-02-05 | 2009-08-13 | Fujifilm Corporation | Image capturing apparatus, image capturing method, image processing apparatus, image processing method, and program storing medium |
US20090232395A1 (en) * | 2006-05-24 | 2009-09-17 | Matsushita Electric Industrial Co., Ltd. | Image processing device |
US20100238356A1 (en) * | 2009-03-18 | 2010-09-23 | Victor Company Of Japan, Ltd. | Video signal processing method and apparatus |
US20100329559A1 (en) * | 2009-06-29 | 2010-12-30 | Canon Kabushiki Kaisha | Image processing apparatus and control method thereof |
US20110002539A1 (en) * | 2009-07-03 | 2011-01-06 | Olympus Corporation | Image processing device, image processing method, and storage medium storing image processing program |
US20110267542A1 (en) * | 2010-04-30 | 2011-11-03 | Canon Kabushiki Kaisha | Image processing apparatus and control method thereof |
US20120147226A1 (en) * | 2010-12-10 | 2012-06-14 | Sony Corporation | Image processing device, image processing method, and program |
CN109684926A (en) * | 2018-11-21 | 2019-04-26 | 佛山市第一人民医院(中山大学附属佛山医院) | Non-contact vein image acquisition method and device |
CN113423024A (en) * | 2021-06-21 | 2021-09-21 | 上海宏英智能科技股份有限公司 | Vehicle-mounted wireless remote control method and system |
EP3945713A1 (en) * | 2020-07-29 | 2022-02-02 | Beijing Xiaomi Mobile Software Co., Ltd. | Image processing method and apparatus, and storage medium |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102696217A (en) * | 2009-09-30 | 2012-09-26 | 夏普株式会社 | Image enlargement device, image enlargement program, and display apparatus |
KR101025569B1 (en) | 2009-10-13 | 2011-03-28 | 중앙대학교 산학협력단 | Apparatus and method for reducing noise of image based on discrete wavelet transform |
JP2012004787A (en) * | 2010-06-16 | 2012-01-05 | Sony Corp | Image processing system, image processing method and computer program |
JP5743815B2 (en) * | 2011-09-05 | 2015-07-01 | 日立マクセル株式会社 | Imaging device |
JP6287100B2 (en) | 2013-11-20 | 2018-03-07 | 株式会社リコー | Image processing apparatus, image processing method, program, and storage medium |
JP6327869B2 (en) * | 2014-01-29 | 2018-05-23 | キヤノン株式会社 | Image processing apparatus, imaging apparatus, control method, and program |
KR102104414B1 (en) * | 2014-02-25 | 2020-04-24 | 한화테크윈 주식회사 | Auto focussing method |
US9667842B2 (en) | 2014-08-30 | 2017-05-30 | Apple Inc. | Multi-band YCbCr locally-adaptive noise modeling and noise reduction based on scene metadata |
US9525804B2 (en) * | 2014-08-30 | 2016-12-20 | Apple Inc. | Multi-band YCbCr noise modeling and noise reduction based on scene metadata |
WO2016203690A1 (en) * | 2015-06-19 | 2016-12-22 | パナソニックIpマネジメント株式会社 | Image capture device and image processing method |
US9626745B2 (en) | 2015-09-04 | 2017-04-18 | Apple Inc. | Temporal multi-band noise reduction |
JP6724982B2 (en) * | 2016-04-13 | 2020-07-15 | ソニー株式会社 | Signal processing device and imaging device |
WO2018227943A1 (en) * | 2017-06-14 | 2018-12-20 | Shenzhen United Imaging Healthcare Co., Ltd. | System and method for image processing |
JP6761964B2 (en) * | 2017-12-18 | 2020-09-30 | パナソニックIpマネジメント株式会社 | Communication system, image generation method, and communication device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6754398B1 (en) * | 1999-06-10 | 2004-06-22 | Fuji Photo Film Co., Ltd. | Method of and system for image processing and recording medium for carrying out the method |
US20040252907A1 (en) * | 2001-10-26 | 2004-12-16 | Tsukasa Ito | Image processing method, apparatus, and program |
US20050157189A1 (en) * | 2003-10-24 | 2005-07-21 | Olympus Corporation | Signal-processing system, signal-processing method, and signal-processing program |
US20060050157A1 (en) * | 2002-10-03 | 2006-03-09 | Olympus Corporation | Imaging system and reproducing system |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0856316A (en) | 1994-06-09 | 1996-02-27 | Sony Corp | Image processor |
JP3424060B2 (en) * | 1997-01-27 | 2003-07-07 | 松下電器産業株式会社 | Gradation correction device and video signal processing device using the same |
JP2001167264A (en) * | 1999-09-30 | 2001-06-22 | Fuji Photo Film Co Ltd | Method and device for image processing and recording medium |
JP3465226B2 (en) | 1999-10-18 | 2003-11-10 | 学校法人慶應義塾 | Image density conversion processing method |
JP2004287794A (en) * | 2003-03-20 | 2004-10-14 | Minolta Co Ltd | Image processor |
-
2006
- 2006-09-12 JP JP2006247169A patent/JP4837503B2/en not_active Expired - Fee Related
-
2007
- 2007-09-04 WO PCT/JP2007/067222 patent/WO2008032610A1/en active Application Filing
-
2009
- 2009-03-09 US US12/400,028 patent/US8194160B2/en active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6754398B1 (en) * | 1999-06-10 | 2004-06-22 | Fuji Photo Film Co., Ltd. | Method of and system for image processing and recording medium for carrying out the method |
US20040252907A1 (en) * | 2001-10-26 | 2004-12-16 | Tsukasa Ito | Image processing method, apparatus, and program |
US20060050157A1 (en) * | 2002-10-03 | 2006-03-09 | Olympus Corporation | Imaging system and reproducing system |
US20050157189A1 (en) * | 2003-10-24 | 2005-07-21 | Olympus Corporation | Signal-processing system, signal-processing method, and signal-processing program |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090232395A1 (en) * | 2006-05-24 | 2009-09-17 | Matsushita Electric Industrial Co., Ltd. | Image processing device |
US20080240556A1 (en) * | 2007-02-28 | 2008-10-02 | Takao Tsuruoka | Image processing apparatus, image processing program, and image processing method |
US8351695B2 (en) * | 2007-02-28 | 2013-01-08 | Olympus Corporation | Image processing apparatus, image processing program, and image processing method |
US20090201387A1 (en) * | 2008-02-05 | 2009-08-13 | Fujifilm Corporation | Image capturing apparatus, image capturing method, image processing apparatus, image processing method, and program storing medium |
US8134610B2 (en) * | 2008-02-05 | 2012-03-13 | Fujifilm Corporation | Image capturing apparatus, image capturing method, image processing apparatus, image processing method, and program storing medium using spatial frequency transfer characteristics |
US20100238356A1 (en) * | 2009-03-18 | 2010-09-23 | Victor Company Of Japan, Ltd. | Video signal processing method and apparatus |
US8339518B2 (en) * | 2009-03-18 | 2012-12-25 | JVC Kenwood Corporation | Video signal processing method and apparatus using histogram |
US20100329559A1 (en) * | 2009-06-29 | 2010-12-30 | Canon Kabushiki Kaisha | Image processing apparatus and control method thereof |
US8649597B2 (en) * | 2009-06-29 | 2014-02-11 | Canon Kabushiki Kaisha | Image processing apparatus and control method thereof detecting from a histogram a gradation level whose frequency is a peak value |
US8355597B2 (en) * | 2009-07-03 | 2013-01-15 | Olympus Corporation | Image processing device including gradation conversion processor, noise reduction processor, and combining-raio calculator, and method and storage device storing progam for same |
US20110002539A1 (en) * | 2009-07-03 | 2011-01-06 | Olympus Corporation | Image processing device, image processing method, and storage medium storing image processing program |
US8456578B2 (en) * | 2010-04-30 | 2013-06-04 | Canon Kabushiki Kaisha | Image processing apparatus and control method thereof for correcting image signal gradation using a gradation correction curve |
US20110267542A1 (en) * | 2010-04-30 | 2011-11-03 | Canon Kabushiki Kaisha | Image processing apparatus and control method thereof |
US20120147226A1 (en) * | 2010-12-10 | 2012-06-14 | Sony Corporation | Image processing device, image processing method, and program |
US8477219B2 (en) * | 2010-12-10 | 2013-07-02 | Sony Corporation | Image processing device, image processing method, and program |
CN109684926A (en) * | 2018-11-21 | 2019-04-26 | 佛山市第一人民医院(中山大学附属佛山医院) | Non-contact vein image acquisition method and device |
EP3945713A1 (en) * | 2020-07-29 | 2022-02-02 | Beijing Xiaomi Mobile Software Co., Ltd. | Image processing method and apparatus, and storage medium |
CN113423024A (en) * | 2021-06-21 | 2021-09-21 | 上海宏英智能科技股份有限公司 | Vehicle-mounted wireless remote control method and system |
Also Published As
Publication number | Publication date |
---|---|
JP4837503B2 (en) | 2011-12-14 |
WO2008032610A1 (en) | 2008-03-20 |
JP2008072233A (en) | 2008-03-27 |
US8194160B2 (en) | 2012-06-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8194160B2 (en) | Image gradation processing apparatus and recording | |
US8184924B2 (en) | Image processing apparatus and image processing program | |
US8553111B2 (en) | Noise reduction system, image pickup system and computer readable storage medium | |
US7738699B2 (en) | Image processing apparatus | |
US8184181B2 (en) | Image capturing system and computer readable recording medium for recording image processing program | |
US8363123B2 (en) | Image pickup apparatus, color noise reduction method, and color noise reduction program | |
US8115833B2 (en) | Image-acquisition apparatus | |
US8736723B2 (en) | Image processing system, method and program, including a correction coefficient calculation section for gradation correction | |
US8310566B2 (en) | Image pickup system and image processing method with an edge extraction section | |
US8223226B2 (en) | Image processing apparatus and storage medium storing image processing program | |
US8154630B2 (en) | Image processing apparatus, image processing method, and computer readable storage medium which stores image processing program | |
US7734110B2 (en) | Method for filtering the noise of a digital image sequence | |
US20060012693A1 (en) | Imaging process system, program and memory medium | |
US8351695B2 (en) | Image processing apparatus, image processing program, and image processing method | |
US8463034B2 (en) | Image processing system and computer-readable recording medium for recording image processing program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: OLYMPUS CORPORATION, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TSURUOKA, TAKAO;REEL/FRAME:022362/0418 Effective date: 20090226 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
AS | Assignment |
Owner name: OLYMPUS CORPORATION, JAPAN Free format text: CHANGE OF ADDRESS;ASSIGNOR:OLYMPUS CORPORATION;REEL/FRAME:039344/0502 Effective date: 20160401 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |