WO2006004151A1 - Signal Processing System and Signal Processing Program
- Publication number: WO2006004151A1 (PCT/JP2005/012478)
- Authority: WIPO (PCT)
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/646—Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/618—Noise processing, e.g. detecting, correcting, reducing or removing noise for random or high-frequency noise
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/135—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on four or more different wavelength filter elements
- H04N25/136—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on four or more different wavelength filter elements using complementary colours
Definitions
- The present invention relates to processing for reducing random noise in the color signals and luminance signal caused by the image sensor system; by dynamically estimating the amount of noise generated, only the noise components are reduced.
- The noise components contained in the digitized signal obtained from the image sensor and its accompanying analog circuitry and A/D converter can be broadly classified into fixed pattern noise and random noise.
- Fixed pattern noise is noise caused mainly by the image sensor, as typified by defective pixels.
- Random noise is generated in the image sensor and the analog circuitry and has characteristics close to those of white noise.
- For random noise, for example, Japanese Patent Laid-Open No. 2001-157057 discloses a method in which the luminance noise amount is modeled as a function of the signal level, the luminance noise amount for a given signal level is estimated from this function, and the frequency characteristics of the filtering are controlled based on the estimated noise amount. As a result, noise reduction processing appropriate to the signal level is performed.
- In another example, an input signal is separated into a luminance signal and a color difference signal, an edge strength is obtained from the luminance signal and the color difference signal, and smoothing is applied to the color difference signal in areas other than the edge portions.
- In the above prior art, with N denoting the luminance noise amount and D the signal level converted to a density value, the noise model is given by N = ab^(cD), where a, b, and c are constant terms that are given statically.
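The following minimal sketch (Python, with placeholder constants that are not from the cited publication) evaluates this prior-art model for a few signal levels.

```python
# A minimal sketch evaluating the prior-art noise model N = a * b**(c * D).
# The constants a, b, and c below are placeholder values chosen for
# illustration, not values from the cited publication.
def luminance_noise(D, a=0.1, b=1.5, c=0.01):
    """Estimated luminance noise amount N for a signal level D (density value)."""
    return a * b ** (c * D)

if __name__ == "__main__":
    for level in (0, 64, 128, 255):
        print(level, round(luminance_noise(level), 4))
```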
- the amount of luminance noise changes dynamically depending on factors such as temperature, exposure time, and gain during shooting. That is, there is a problem that the function cannot be adapted to the noise amount at the time of shooting and the estimation accuracy of the noise amount is poor.
- Second, the frequency characteristics of the filtering are controlled from the noise amount, but signal and noise components are treated equally without distinction. For this reason, edge portions in regions where the noise amount is estimated from the signal level to be large are degraded. That is, processing that distinguishes the original signal from noise cannot be performed, and preservation of the original signal is poor. Furthermore, color noise generated between the color signals cannot be dealt with.
- The present invention addresses the above problems, and models the noise amounts of the color signals and the luminance signal in correspondence not only with the signal level but also with factors that change dynamically, such as temperature and gain at the time of shooting. It is accordingly an object of the present invention to provide a signal processing system and a signal processing program that enable noise reduction processing optimized for the shooting conditions. A further object is to provide a signal processing system and a signal processing program that, by performing noise reduction processing independently for luminance noise and color noise, reduce both kinds of noise with high accuracy and generate a high-quality signal.
- In order to achieve the above objects, a signal processing system according to the present invention performs noise reduction processing on a signal from an image sensor having a color filter disposed on its front surface, and comprises: extraction means for extracting a local region consisting of an attention area and at least one neighboring area in the vicinity of the attention area; separation means for calculating a luminance signal and color difference signals for each of the attention area and the neighboring areas; selection means for selecting neighboring areas similar to the attention area; noise estimation means for estimating a noise amount based on the attention area and the neighboring areas selected by the selection means; and noise reduction means for reducing noise in the attention area based on the noise amount.
- Embodiments corresponding to the present invention are Embodiment 1 shown in Figs. 1 to 9 and Embodiment 2 shown in Figs. 10 to 15.
- The extraction means corresponds to the extraction unit 112 shown in Figs. 1, 8, and 10; the separation means corresponds to the Y/C separation unit 113 shown in Figs. 1, 8, and 10; the selection means corresponds to the selection unit 114 shown in Figs. 1, 3, 8, 10, 12, and 13; the noise estimation means corresponds to the noise estimation unit 115 shown in Figs. 1, 5, 8, 10, and 14; and the noise reduction means corresponds to the noise reduction unit 116 shown in Figs. 1, 7, 8, and 10.
- A preferred application example of the present invention is a signal processing system in which the extraction unit 112 extracts a local region consisting of an attention area to be subjected to noise reduction processing and at least one neighboring area in its vicinity, the Y/C separation unit 113 separates the signal into a luminance signal and color difference signals, the selection unit 114 selects neighboring areas similar to the attention area, the noise estimation unit 115 estimates the noise amount from the attention area and the selected neighboring areas, and the noise reduction unit 116 reduces the noise in the attention area.
- According to the present invention, neighboring areas similar to the attention area to be subjected to noise reduction processing are selected, the noise amount is estimated from the attention area and the selected neighboring areas, and noise reduction processing corresponding to the estimated noise amount is performed. It is therefore possible to estimate the noise amount with high accuracy, to reduce noise optimally over the entire screen, and to obtain a high-quality signal.
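As a rough illustration only, the sketch below walks through that flow for a single attention area; the Y/C conversion, the ±20% luminance test, and the noise estimate are simplified stand-ins and do not reproduce the patent's Equations (1) to (3).

```python
import numpy as np

def separate_yc(rgb):
    # Assumed luminance/color-difference conversion (illustrative, not Equation (1)).
    r, g, b = rgb
    y = (r + 2.0 * g + b) / 4.0
    return y, b - y, r - y            # (Y, Cb, Cr)

def process_region(target_rgb, neighbor_rgbs):
    ty, tcb, tcr = separate_yc(target_rgb)
    # Keep neighbours whose luminance lies within +/-20% of the target luminance.
    similar = [separate_yc(n) for n in neighbor_rgbs
               if abs(separate_yc(n)[0] - ty) <= 0.2 * ty]
    ys = np.array([ty] + [s[0] for s in similar])
    noise = ys.std()                  # toy noise estimate over the homogeneous set
    mean_y = ys.mean()
    # Toy reduction: pull the target luminance toward the local mean within the noise range.
    reduced_y = float(np.clip(ty, mean_y - noise, mean_y + noise))
    return reduced_y, tcb, tcr

print(process_region((120, 130, 125), [(118, 129, 124), (240, 20, 30), (121, 131, 126)]))
```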
- Preferably, the image sensor is a single-plate image sensor having an R (red), G (green), B (blue) Bayer-type primary color filter (hereinafter referred to as Bayer) disposed on its front surface, or a single-plate image sensor having a Cy (cyan), Mg (magenta), Ye (yellow), G (green) color-difference line-sequential complementary color filter disposed on its front surface.
- Examples corresponding to the present invention correspond to Example 1 shown in FIGS. 1 to 9 and Example 2 shown in FIGS. 10 to 15.
- A preferred application example of the present invention is a signal processing system using the Bayer-type primary color filter shown in Fig. 2A or the color-difference line-sequential complementary color filter shown in Fig. 11A.
- the noise reduction processing is performed in accordance with the Bayer type or color difference line sequential type color filter arrangement, so that high-speed processing is possible.
- Preferably, the attention area and the neighboring areas are areas that contain at least one set of the color filters necessary to calculate the luminance signal and the color difference signals.
- the embodiment corresponding to the present invention corresponds to the embodiment 1 shown in FIGS. 1 to 9 and the embodiment 2 shown in FIGS. 10 to 15.
- a preferred application example of the present invention is a signal processing system using a region of interest and a neighboring region shown in Figs. 2A, 2C, 2D and 11A.
- the present invention since it is possible to calculate the luminance signal and the color difference signal in each of the attention area where noise reduction processing is performed and the neighboring area where the noise amount is estimated, noise amount estimation using a wider range is possible. This makes it possible to improve the estimation accuracy.
- Preferably, the selection means comprises hue calculation means for calculating a hue signal for each of the attention area and the neighboring areas, similarity determination means for determining the similarity between the attention area and the neighboring areas based on at least one of the luminance signal and the hue signal, and neighboring area selection means for selecting the neighboring areas based on the similarity.
- the embodiment corresponding to the present invention corresponds to Embodiment 1 shown in FIGS. 1 to 9 and Embodiment 2 shown in FIGS. 10 to 15.
- The hue calculation means corresponds to the hue calculation unit 203 shown in Figs. 3, 12, and 13; the similarity determination means corresponds to the similarity determination unit 206 shown in Figs. 3, 12, and 13; and the neighboring area selection means corresponds to the neighboring area selection unit 207 shown in Figs. 3, 12, and 13.
- A preferable application example of the present invention is a signal processing system in which the hue calculation unit 203 calculates the hue signals of the attention area and the neighboring areas, the similarity determination unit 206 determines the similarity between the attention area and the neighboring areas based on at least one of the luminance signal and the hue signal, and the neighboring area selection unit 207 extracts neighboring areas similar to the attention area.
- According to the present invention, since neighboring areas similar to the attention area are extracted based on at least one of the luminance signal and the hue signal, the noise amount can be estimated from a homogeneous region, and the estimation accuracy is improved. In addition, the luminance signal and the hue signal are easy to calculate, so a high-speed, low-cost system can be provided.
- Preferably, the selection means comprises hue calculation means for calculating a hue signal for each of the attention area and the neighboring areas, edge calculation means for calculating an edge signal for each of the attention area and the neighboring areas, similarity determination means for determining the similarity between the attention area and the neighboring areas based on at least one of the luminance signal, the hue signal, and the edge signal, and neighboring area selection means for selecting the neighboring areas based on the similarity.
- the hue calculating means is the hue calculating section 203 shown in FIG. 12
- the edge calculating means is the edge calculating section 600 shown in FIG. 12
- the similarity determining means is the similarity determining section 206 shown in FIG. 12
- the region selection means corresponds to the neighborhood region selection unit 207 shown in FIG. 12.
- A preferred application example of the present invention is a signal processing system in which the hue calculation unit 203 calculates the hue signals of the attention area and the neighboring areas, the edge calculation unit 600 calculates the edge signals, the similarity determination unit 206 determines the similarity between the attention area and the neighboring areas based on at least one of the luminance signal, the hue signal, and the edge signal, and the neighboring area selection unit 207 extracts neighboring areas similar to the attention area.
- the present invention since a neighboring region similar to the region of interest is extracted based on at least one of the luminance signal, the hue signal, and the edge signal, it is possible to estimate the amount of noise from a homogeneous region. Accuracy is improved. In addition, calculation of the luminance signal, hue signal, and edge signal is easy, and a high-speed and low-cost system can be provided.
- Preferably, the selection means comprises hue calculation means for calculating a hue signal for each of the attention area and the neighboring areas, frequency calculation means for calculating a frequency signal for each of the attention area and the neighboring areas, similarity determination means for determining the similarity between the attention area and the neighboring areas based on at least one of the luminance signal, the hue signal, and the frequency signal, and neighboring area selection means for selecting the neighboring areas based on the similarity.
- The embodiment corresponding to the present invention is Embodiment 2 shown in Figs. 10 to 15.
- the hue calculation means is the hue calculation section 203 shown in FIG. 13
- the frequency calculation means is the DCT conversion section 700 shown in FIG. 13
- the similarity judgment means is the similarity judgment section 206 shown in FIG. 13
- the area selection means corresponds to the neighborhood area selection unit 207 shown in FIG. 13.
- A preferred application example of the present invention is a signal processing system in which the hue calculation unit 203 calculates the hue signals of the attention area and the neighboring areas, the DCT conversion unit 700 calculates the frequency signals, the similarity determination unit 206 determines the similarity between the attention area and the neighboring areas based on at least one of the luminance signal, the hue signal, and the frequency signal, and the neighboring area selection unit 207 extracts neighboring areas similar to the attention area.
- the present invention since a neighboring region similar to the region of interest is extracted based on at least one of the luminance signal, the hue signal, and the frequency signal, it is possible to estimate the amount of noise from a homogeneous region. Accuracy is improved. In addition, selection based on frequency signals can make similarity verification more accurate.
- the selection means includes control means for controlling the neighboring regions used in the noise estimation means and the noise reduction means to be different.
- the embodiment corresponding to the present invention corresponds to the embodiment 1 shown in FIGS. 1 to 9 and the embodiment 2 shown in FIGS. 10 to 15.
- The control means corresponds to the control unit 119 shown in Figs. 1, 8, and 10.
- A preferred application example of the present invention is a signal processing system in which the control unit 119 controls the extraction unit 112 and the selection unit 114 so that the neighboring areas used by the noise estimation unit 115 differ from those used by the noise reduction unit 116.
- According to the present invention, the neighboring region used by the noise estimation means is controlled to be small and the neighboring region used by the noise reduction means is controlled to be large, so the noise estimation process gains accuracy by estimating from a narrow, homogeneous region, while the noise reduction process gains effectiveness by reducing over a wide region.
- the area size suitable for each process can be set, and higher quality signals can be obtained.
- the selection means has a removal means for removing predetermined minute fluctuations from the signals of the attention area and the vicinity area.
- the embodiment corresponding to the present invention corresponds to Embodiment 1 shown in FIGS. 1 to 9 and Embodiment 2 shown in FIGS. 10 to 15.
- the removing means corresponds to the minute fluctuation removing unit 200 shown in FIG. 3, FIG. 12, and FIG.
- a preferred application example of the present invention is a signal processing system in which a minute fluctuation removing unit 200 removes minute fluctuations in a region of interest and a neighboring area.
- According to the present invention, the hue signal is obtained after minute fluctuations have been removed from the signal, so the stability of the hue signal is improved and the neighboring areas can be extracted with higher accuracy.
- the selecting means includes coefficient calculating means for calculating a weighting coefficient for the neighboring region based on the similarity.
- Examples corresponding to the present invention correspond to Example 1 shown in Figs. 1 to 9 and Example 2 shown in Figs.
- the coefficient calculation means corresponds to the coefficient calculation unit 208 shown in FIG. 3, FIG. 12, and FIG.
- a preferred application example of the present invention is a signal processing system in which the coefficient calculation unit 208 calculates a weighting coefficient based on the similarity between the attention area and the neighboring area.
- According to the present invention, since the weighting coefficient is calculated based on the similarity of each neighboring area, the similarity to the attention area can be exploited in finer gradations, and the noise amount can be estimated with high accuracy.
- Preferably, the noise estimation means includes at least one of color noise estimation means for estimating a color noise amount from the attention area and the neighboring areas selected by the selection means, and luminance noise estimation means for estimating a luminance noise amount from the attention area and the neighboring areas selected by the selection means.
- the embodiment corresponding to the present invention corresponds to Embodiment 1 shown in FIGS. 1 to 9 and Embodiment 2 shown in FIGS. 10 to 15.
- The color noise estimation means and the luminance noise estimation means both correspond to the noise estimation unit 115 shown in Figs. 1, 5, 8, 10, and 14.
- a preferred application example of the present invention is a signal processing system in which at least one of a color noise amount and a luminance noise amount is estimated by the noise estimation unit 115.
- the estimation accuracy can be improved by estimating the color noise amount and the luminance noise amount independently.
- Preferably, the color noise estimation means includes collecting means for collecting information relating to the temperature value of the image sensor and the gain value for the signal, assigning means for assigning a standard value for information that cannot be obtained by the collecting means, average color difference calculation means for calculating an average color difference value from the attention area and the neighboring areas selected by the selection means, and color noise amount calculation means for obtaining the color noise amount based on the information from the collecting means or the assigning means and the average color difference value.
- the embodiment corresponding to the present invention corresponds to the embodiment 1 shown in FIGS. 1 to 9 and the embodiment 2 shown in FIGS. 10 to 15.
- The collecting means corresponds to the temperature sensor 121 and the control unit 119 shown in Figs. 1 and 10 and to the gain calculation unit 302 shown in Figs. 5 and 14; the average color difference calculation means corresponds to the average calculation unit 301 shown in Figs. 5 and 14; and the color noise amount calculation means corresponds to the parameter ROM 304, parameter selection unit 305, interpolation unit 306, and correction unit 307 shown in Fig. 5 and to the lookup table unit 800 shown in Fig. 14.
- A preferred application example of the present invention is a signal processing system in which information used for noise amount estimation is collected by the temperature sensor 121, the control unit 119, and the gain calculation unit 302, a standard value is set by the standard value assigning unit 303 for information that cannot be obtained, the average calculation unit 301 calculates the average color difference value from the attention area and the neighboring areas, and the color noise amount is obtained by the parameter ROM 304, parameter selection unit 305, interpolation unit 306, and correction unit 307, or by the lookup table unit 800.
- According to the present invention, various kinds of information related to the noise amount are obtained dynamically for each shooting, standard values are set for information that cannot be obtained, and the color noise amount is calculated from this information. Therefore, by adapting dynamically to conditions that differ from shooting to shooting, the color noise amount can be estimated with high accuracy. Even when the necessary information cannot be obtained, the color noise amount can still be estimated, and a stable noise reduction effect is obtained.
- Preferably, the luminance noise estimation means includes collecting means for collecting information relating to the temperature value of the image sensor and the gain value for the signal, assigning means for assigning a standard value for information that cannot be obtained by the collecting means, average luminance calculation means for calculating an average luminance value from the attention area and the neighboring areas selected by the selection means, and luminance noise amount calculation means for obtaining the luminance noise amount based on the information from the collecting means or the assigning means and the average luminance value.
- the embodiment corresponding to the present invention corresponds to the embodiment 1 shown in FIGS. 1 to 9 and the embodiment 2 shown in FIGS. 10 to 15.
- The collecting means corresponds to the temperature sensor 121 and the control unit 119 shown in Figs. 1 and 10 and to the gain calculation unit 302 shown in Figs. 5 and 14; the assigning means corresponds to the standard value assigning unit 303 shown in Figs. 5 and 14; the average luminance calculation means corresponds to the average calculation unit 301 shown in Figs. 5 and 14; and the luminance noise amount calculation means corresponds to the parameter ROM 304, parameter selection unit 305, interpolation unit 306, and correction unit 307 shown in Fig. 5 and to the lookup table unit 800 shown in Fig. 14.
- A preferred application example of the present invention is a signal processing system in which information used for noise amount estimation is collected by the temperature sensor 121, the control unit 119, and the gain calculation unit 302, a standard value is set by the standard value assigning unit 303 for information that cannot be obtained, the average calculation unit 301 calculates the average luminance value from the attention area and the neighboring areas, and the luminance noise amount is obtained by the parameter ROM 304, parameter selection unit 305, interpolation unit 306, and correction unit 307, or by the lookup table unit 800.
- the present invention various information related to the amount of noise is dynamically obtained for each shooting, and a standard value is set for information that cannot be obtained, and the luminance noise amount is calculated from the information. Therefore, it is possible to estimate the amount of luminance noise with high accuracy by dynamically adapting to different conditions for each shooting. Even when necessary information cannot be obtained, the luminance noise amount can be estimated, and a stable noise reduction effect can be obtained.
- the collecting means has a temperature sensor for measuring the temperature value of the imaging element.
- the embodiment corresponding to the present invention corresponds to the embodiment 1 shown in FIGS. 1 to 9 and the embodiment 2 shown in FIGS. 10 to 15.
- The temperature sensor corresponds to the temperature sensor 121 shown in Figs. 1 and 10.
- a preferred application example of the present invention is a signal processing system that measures the temperature of the CCD 103 from the temperature sensor 121 in real time.
- According to the present invention, the temperature of the image sensor at the time of shooting is measured and used as noise amount estimation information, so the estimation adapts dynamically to temperature changes at the time of shooting and the noise amount can be estimated with high accuracy.
- the collecting means includes gain calculating means for obtaining the gain value based on at least one information of ISO sensitivity, exposure information, and white balance information.
- Examples corresponding to the present invention correspond to Example 1 shown in FIGS. 1 to 9 and Example 2 shown in FIGS. 10 to 15.
- The gain calculation means corresponds to the gain calculation unit 302 shown in Figs. 5 and 14 and to the control unit 119.
- A preferred application example of the present invention is a signal processing system in which the control unit 119 transfers the ISO sensitivity, exposure information, white balance information, and the like, and the gain calculation unit 302 calculates the total gain amount at the time of shooting.
- According to the present invention, the gain amount at the time of shooting is obtained from the ISO sensitivity, exposure information, and white balance information and is used as noise amount estimation information, so the estimation adapts dynamically to gain changes at the time of shooting and the noise amount can be estimated with high accuracy.
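A hedged sketch of one plausible way to combine these pieces of information into a total gain follows; the multiplicative combination and the parameter names are assumptions for illustration, not the computation actually performed by the gain calculation unit 302.

```python
# Assumed combination: sensor gain from ISO, times exposure gain, times the
# largest white-balance gain (all names and formulas are illustrative).
def total_gain(iso, base_iso=100, exposure_gain=1.0, wb_gains=(1.0, 1.0, 1.0)):
    sensor_gain = iso / base_iso
    return sensor_gain * exposure_gain * max(wb_gains)

print(total_gain(iso=400, exposure_gain=1.25, wb_gains=(1.8, 1.0, 1.4)))
```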
- Preferably, the color noise amount calculation means includes recording means for recording at least one set of a parameter group consisting of a reference color noise model and correction coefficients corresponding to predetermined hues, parameter selection means for selecting the necessary parameters from the parameter group based on the information from the collecting means or the assigning means and the average color difference value, interpolation means for obtaining a reference color noise amount by interpolation based on the average color difference value and the reference color noise model in the parameter group selected by the parameter selection means, and correction means for obtaining the color noise amount by correcting the reference color noise amount based on the correction coefficient in the parameter group selected by the parameter selection means.
- The embodiment corresponding to the present invention is Embodiment 1 shown in Figs. 1 to 9.
- the recording means is the parameter ROM 304 shown in FIG. 5
- the parameter selection means is the parameter selection section 305 shown in FIG. 5
- the interpolation means is the interpolation section 306 shown in FIG. 5
- the correction means corresponds to the correction unit 307 shown in Fig. 5.
- A preferred application example of the present invention is a signal processing system in which the coefficients of the reference color noise model and the correction coefficients, measured in advance, are recorded in the parameter ROM 304, the parameter selection unit 305 selects the necessary coefficients, the interpolation unit 306 obtains the reference color noise amount based on the reference color noise model, and the correction unit 307 corrects it based on the correction coefficient to obtain the color noise amount.
- the color noise amount is obtained by performing the interpolation and correction processing based on the reference color noise model, it is possible to estimate the noise amount with high accuracy. Also, interpolation and correction processing can be easily implemented, and a low-cost system can be provided.
- the reference color noise model is composed of a plurality of coordinate point data consisting of color noise amounts for color difference values.
- a preferred application example of the present invention is a signal processing system using a reference color noise model composed of a plurality of coordinate point data shown in FIG. 6B.
- the reference color noise model is composed of a plurality of coordinate point data, it is possible to achieve low cost with a small amount of memory required for the model.
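The sketch below illustrates the coordinate-point idea under stated assumptions: a few (signal value, noise amount) points stand in for the reference color noise model, the reference noise amount is obtained by linear interpolation, and a correction coefficient scales it; the point values and the multiplicative correction are illustrative, not the patent's actual parameters.

```python
import bisect

# Placeholder coordinate points (signal value, noise amount) for the reference model.
REFERENCE_POINTS = [(0, 2.0), (64, 3.5), (128, 5.0), (192, 7.5), (255, 11.0)]

def reference_noise(value, points=REFERENCE_POINTS):
    # Linear interpolation between the two coordinate points bracketing `value`.
    xs = [p[0] for p in points]
    i = min(max(bisect.bisect_right(xs, value), 1), len(points) - 1)
    (x0, n0), (x1, n1) = points[i - 1], points[i]
    t = (value - x0) / float(x1 - x0)
    return n0 + t * (n1 - n0)

def color_noise(avg_color_diff, correction_coeff):
    # Correction coefficient assumed multiplicative (selected per temperature/gain/hue).
    return reference_noise(avg_color_diff) * correction_coeff

print(round(color_noise(avg_color_diff=100, correction_coeff=1.3), 3))
```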
- Preferably, the color noise amount calculation means includes lookup table means for obtaining the color noise amount with the information from the collecting means or the assigning means and the average color difference value as inputs.
- An embodiment corresponding to the present invention is Embodiment 2 shown in Figs. 10 to 15.
- the lookup table means corresponds to the lookup table section 800 shown in FIG.
- a preferred application example of the present invention is a signal processing system that obtains the amount of color noise in lookup table unit 800.
- According to the present invention, the color noise amount is obtained from a lookup table, so high-speed processing is possible.
- Preferably, the luminance noise amount calculation means includes recording means for recording a parameter group consisting of a reference luminance noise model and correction coefficients, parameter selection means for selecting the necessary parameters from the parameter group based on the information from the collecting means or the assigning means and the average luminance value, interpolation means for obtaining a reference luminance noise amount by interpolation based on the average luminance value and the reference luminance noise model in the parameter group selected by the parameter selection means, and correction means for obtaining the luminance noise amount by correcting the reference luminance noise amount based on the correction coefficient in the parameter group selected by the parameter selection means.
- The embodiment corresponding to the present invention is Embodiment 1 shown in Figs. 1 to 9.
- the recording means is the parameter ROM 304 shown in FIG. 5
- the parameter selection means is the parameter selection section 305 shown in FIG. 5
- the interpolation means is the interpolation section 306 shown in FIG. 5
- the correction means corresponds to the correction unit 307 shown in Fig. 5.
- A preferred application example of the present invention is a signal processing system in which the coefficients of the reference luminance noise model and the correction coefficients, measured in advance, are recorded in the parameter ROM 304, the parameter selection unit 305 selects the coefficients and correction coefficients of the reference luminance noise model, the interpolation unit 306 calculates the reference luminance noise amount based on the reference luminance noise model, and the correction unit 307 corrects it based on the correction coefficient to obtain the luminance noise amount.
- the luminance noise amount is obtained by performing interpolation and correction processing based on the reference luminance noise model, so that the noise amount can be estimated with high accuracy.
- Interpolation and correction processing can be easily implemented, and a low-cost system can be provided.
- the reference luminance noise model is composed of a plurality of coordinate point data including luminance noise amounts with respect to luminance values.
- The embodiment corresponding to the present invention is Embodiment 1 shown in Figs. 1 to 9.
- a preferred application example of the present invention is a signal processing system using a reference luminance noise model including a plurality of coordinate point data shown in FIG. 6B.
- the reference luminance noise model is composed of a plurality of coordinate point data, it is possible to reduce costs by reducing the amount of memory required for the model.
- the luminance noise amount calculating means includes lookup table means for receiving the information from the collecting means or the assigning means and the average luminance value as input and calculating the luminance noise amount.
- The embodiment corresponding to the present invention is Embodiment 2 shown in Figs. 10 to 15.
- the lookup table means corresponds to the lookup table section 800 shown in FIG.
- a preferred application example of the present invention is a signal processing system in which the look-up table unit 800 determines the luminance noise amount.
- the luminance noise amount is calculated from the lookup table, high-speed processing is possible.
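As a hedged sketch of the lookup-table variant, the noise amount below is read from a small table indexed by quantized temperature, gain, and average signal value; the table contents, axis quantization, and nearest-neighbour lookup are placeholders rather than the patent's actual table.

```python
import numpy as np

TEMPS = np.array([20.0, 40.0, 60.0])          # degrees C (placeholder axis)
GAINS = np.array([1.0, 2.0, 4.0])             # placeholder axis
LEVELS = np.arange(0, 256, 32, dtype=float)   # placeholder axis
LUT = np.random.default_rng(0).uniform(1.0, 10.0, (TEMPS.size, GAINS.size, LEVELS.size))

def lut_noise(temp, gain, avg_level):
    # Nearest table entry along each axis.
    ti = int(np.abs(TEMPS - temp).argmin())
    gi = int(np.abs(GAINS - gain).argmin())
    li = int(np.abs(LEVELS - avg_level).argmin())
    return LUT[ti, gi, li]

print(lut_noise(temp=35.0, gain=2.0, avg_level=120.0))
```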
- Preferably, the noise reduction means includes at least one of color noise reduction means for reducing color noise from the attention area based on the noise amount, and luminance noise reduction means for reducing luminance noise from the attention area based on the noise amount.
- Examples corresponding to the present invention correspond to Example 1 shown in FIGS. 1 to 9 and Example 2 shown in FIGS. 10 to 15.
- The color noise reduction means and the luminance noise reduction means both correspond to the noise reduction unit 116 shown in Figs. 1, 7, 8, and 10.
- a preferred application example of the present invention is a signal processing system in which at least one of color noise or luminance noise is reduced by the noise reduction unit 116.
- each reduction accuracy can be improved by independently reducing the amount of color noise and the amount of luminance noise.
- Preferably, the color noise reduction means includes setting means for setting a noise range in the attention area based on the color noise amount from the noise estimation means, first smoothing means for performing smoothing when the color difference signal of the attention area belongs to the noise range, and second smoothing means for performing correction when the color difference signal of the attention area does not belong to the noise range.
- The embodiment corresponding to the present invention is Embodiment 1 shown in Figs. 1 to 9.
- The setting means corresponds to the range setting unit 400 shown in Fig. 7, the first smoothing means corresponds to the first smoothing unit 402 shown in Fig. 7, and the second smoothing means corresponds to the second smoothing unit 403 shown in Fig. 7.
- A preferred application example of the present invention is a signal processing system in which the first smoothing unit 402 smooths the color difference signal of an attention area determined to belong to the noise range, and the second smoothing unit 403 corrects the color difference signal of an attention area determined not to belong to the noise range.
- the smoothing process is performed on the color difference signal of the attention area determined to belong to the noise range, and the correction process is performed on the color difference signal of the attention area determined to not belong.
- the generation of discontinuities associated with noise reduction processing is prevented, and a high-quality signal can be obtained.
- Preferably, the luminance noise reduction means includes setting means for setting a noise range in the attention area based on the luminance noise amount from the noise estimation means, first smoothing means for performing smoothing when the luminance signal of the attention area belongs to the noise range, and second smoothing means for performing correction when the luminance signal of the attention area does not belong to the noise range.
- The embodiment corresponding to the present invention is Embodiment 1 shown in Figs. 1 to 9.
- The setting means corresponds to the range setting unit 400 shown in Fig. 7, the first smoothing means corresponds to the first smoothing unit 402 shown in Fig. 7, and the second smoothing means corresponds to the second smoothing unit 403 shown in Fig. 7.
- A preferred application example of the present invention is a signal processing system in which the first smoothing unit 402 smooths the luminance signal of an attention area determined to belong to the noise range, and the second smoothing unit 403 corrects the luminance signal of an attention area determined not to belong to the noise range.
- the smoothing process is performed on the luminance signal of the attention area determined to belong to the noise range, and the correction process is performed on the luminance signal of the attention area determined not to belong. It is possible to prevent discontinuity due to the luminance noise reduction processing and to obtain a high-quality signal.
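A hedged sketch of this two-branch behaviour is given below: a value whose distance from the local average lies inside the noise range is smoothed toward the average, while a value outside the range is only corrected by half the noise amount so that edges are preserved. The exact formulas are assumptions in the spirit of the description, not the patent's own equations.

```python
def reduce_value(p, local_avg, noise):
    half = noise / 2.0
    if abs(p - local_avg) <= half:
        return local_avg              # first smoothing: inside the noise range
    if p > local_avg + half:
        return p - half               # second smoothing: correct from above
    return p + half                   # second smoothing: correct from below

print(reduce_value(130.0, 128.0, 8.0), reduce_value(180.0, 128.0, 8.0))
```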
- The signal processing program according to the present invention corresponds to each of the signal processing systems described above; it causes a computer to execute the corresponding processing and can obtain the same effects.
- FIG. 1 is a configuration diagram of a signal processing system according to Embodiment 1 of the present invention.
- FIG. 2A is an explanatory diagram regarding a local region in a Bayer type color filter.
- FIG. 2B is an explanatory diagram regarding a local region in a Bayer type color filter.
- FIG. 2C is an explanatory diagram regarding a local region in a Bayer type color filter.
- FIG. 2D is an explanatory diagram regarding a local region in a Bayer type color filter.
- FIG. 3 is a configuration diagram of the selection unit in FIG. 1.
- FIG. 4A is an explanatory diagram regarding hue classification based on a spectral gradient.
- FIG. 4B is an explanatory diagram regarding hue classification based on a spectral gradient.
- FIG. 4C is an explanatory diagram regarding hue classification based on a spectral gradient.
- FIG. 4D is an explanatory diagram regarding hue classification based on a spectral gradient.
- FIG. 5 is a block diagram of the noise estimation unit of FIG. 1.
- FIG. 6A is an explanatory diagram regarding estimation of noise amount.
- FIG. 6B is an explanatory diagram regarding estimation of noise amount.
- FIG. 6C is an explanatory diagram regarding estimation of noise amount.
- FIG. 6D is an explanatory diagram regarding noise amount estimation.
- FIG. 7 is a block diagram of the noise reduction unit of FIG. 1.
- FIG. 8 is a configuration diagram of a signal processing system according to another embodiment of the first embodiment of the present invention.
- FIG. 9 is a flowchart of noise reduction processing according to the first embodiment.
- FIG. 10 is a configuration diagram of a signal processing system according to a second embodiment of the present invention.
- FIG. 11A is an explanatory diagram regarding a local region in a color difference line sequential color filter.
- FIG. 11B is an explanatory diagram regarding a local region in a color difference line sequential color filter.
- FIG. 11C is an explanatory diagram regarding a local region in a color difference line sequential color filter.
- FIG. 12 is a configuration diagram of the selection unit in FIG. 10.
- FIG. 13 is a configuration diagram of another configuration selection unit according to the second embodiment.
- FIG. 14 is a block diagram of the noise estimation unit in FIG. 10.
- FIG. 15 is a flowchart of noise reduction processing according to the second embodiment of the present invention.
- FIG. 1 shows a configuration diagram of the signal processing system according to the first embodiment of the present invention
- FIGS. 2A to 2D are explanatory diagrams regarding a local region in a Bayer-type color filter
- FIG. 3 is a configuration diagram of the selection unit in FIG. 1, and FIGS. 4A to 4D are explanatory diagrams regarding hue classification based on spectral gradients
- FIG. 5 is a configuration diagram of the noise estimation unit of FIG. 1
- FIGS. 6A to 6D are explanatory diagrams regarding noise amount estimation
- FIG. 7 is a configuration diagram of the noise reduction unit of FIG. 1, and FIG. 8 is a configuration diagram of another configuration of the first embodiment
- FIG. 9 is a flowchart of noise reduction processing in the first embodiment.
- FIG. 1 is a configuration diagram of Embodiment 1 of the present invention.
- The image captured through the lens system 100, the aperture 101, the low-pass filter 102, and the single-plate CCD 103 is sampled by a correlated double sampling circuit (hereinafter abbreviated as CDS) 104, amplified by a gain control amplifier (hereinafter abbreviated as Gain) 105, and converted to a digital signal by an analog-to-digital converter (hereinafter abbreviated as A/D) 106.
- The signal from the A/D 106 is transferred to the extraction unit 112 via the buffer 107.
- The buffer 107 is also connected to a pre-white balance (hereinafter abbreviated as PreWB) unit 108, a photometric evaluation unit 109, and an in-focus detection unit 110.
- The PreWB unit 108 is connected to the Gain 105, the photometric evaluation unit 109 is connected to the aperture 101, the CCD 103, and the Gain 105, and the in-focus detection unit 110 is connected to the AF motor 111.
- the signal from the extraction unit 112 is connected to the Y / C separation unit 113 and the selection unit 114.
- the Y / C separation unit 113 is connected to the selection unit 114, and the selection unit 114 is connected to the noise estimation unit 115 and the noise reduction unit 116.
- the noise reduction unit 116 is connected to an output unit 118 such as a memory card via a signal processing unit 117.
- A control unit 119 such as a microcomputer is bidirectionally connected to the CDS 104, the Gain 105, the A/D 106, the PreWB unit 108, the photometric evaluation unit 109, the in-focus detection unit 110, the extraction unit 112, the Y/C separation unit 113, the selection unit 114, the noise estimation unit 115, the noise reduction unit 116, the signal processing unit 117, and the output unit 118.
- An external I / F unit 120 having an interface for switching between a power switch, a shutter button, and various modes at the time of shooting is also connected to the control unit 119 bidirectionally. Further, a signal from a temperature sensor 121 arranged in the vicinity of the CCD 103 is connected to the control unit 119.
- Next, the signal flow is described.
- Pressing the shutter button halfway via the external I/F unit 120 enters the pre-shooting mode.
- a video signal photographed through the lens system 100, the aperture 101, the low-pass filter 102, and the CCD 103 is read out as an analog signal by the CDS 104 by known correlated double sampling.
- the CCD 103 is assumed to be a single-plate CCD having a Bayer-type primary color filter on the front surface.
- FIG. 2A shows the configuration of a Bayer color filter.
- the Bayer type uses 2 X 2 pixels as a basic unit, and red (R) and blue (B) are arranged one pixel at a time, and green (G) is arranged at two pixels.
- The analog signal is amplified by a predetermined amount by the Gain 105, converted to a digital signal with 12-bit gradation by the A/D 106, and transferred to the buffer 107.
- the video signal in the buffer 107 is transferred to the PreWB unit 108, the photometric evaluation unit 109, and the in-focus point detection unit 110.
- The PreWB unit 108 calculates simple white balance coefficients by integrating, for each color signal, signals of a predetermined luminance level in the video signal; the coefficients are transferred to the Gain 105, and white balance is performed by multiplying each color signal by a different gain.
- The photometric evaluation unit 109 obtains the brightness level in the signal, taking into account the set ISO sensitivity, the shutter speed at the limit of camera shake, and the like, and controls the aperture 101, the shutter speed of the CCD 103, the gain of the Gain 105, and so on so as to obtain proper exposure.
- the focus detection unit 110 detects the edge intensity in the signal and controls the AF motor 111 so as to maximize the edge intensity, thereby obtaining a focus signal.
- full shooting is performed by fully pressing the shutter button via the external I / F unit 120, and the video signal is transferred to the buffer 107 in the same manner as the pre-shooting.
- The actual shooting is performed based on the white balance coefficients obtained by the PreWB unit 108, the exposure conditions obtained by the photometric evaluation unit 109, and the focusing conditions obtained by the in-focus detection unit 110, and these shooting conditions are transferred to the control unit 119.
- The video signal in the buffer 107 is transferred to the extraction unit 112. Based on the control of the control unit 119, the extraction unit 112 sequentially extracts local regions each consisting of an attention area and neighboring areas as shown in Fig. 2A.
- Based on the control of the control unit 119, the Y/C separation unit 113 calculates the luminance signal Y and the color difference signals Cb and Cr for each of the attention area and the neighboring areas in the local region. In this embodiment, an RGB primary color filter is assumed, and the luminance signal and the color difference signals are calculated based on Equation (1).
- the calculated luminance signal and color difference signal are transferred to the selection unit 114.
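Equation (1) itself is not reproduced in this text, so the sketch below uses a commonly assumed conversion for a 2 × 2 Bayer unit (one R, two G, one B): luminance from a weighted RGB sum and color differences as B − Y and R − Y. The patent's actual Equation (1) may differ.

```python
def yc_from_bayer_unit(r, g1, g2, b):
    g = (g1 + g2) / 2.0               # the two G pixels of the 2x2 unit are averaged
    y = (r + 2.0 * g + b) / 4.0       # assumed luminance definition
    return y, b - y, r - y            # (Y, Cb, Cr)

print(yc_from_bayer_unit(100, 140, 142, 90))
```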
- the selection unit 114 uses the local region from the extraction unit 112 and the luminance signal and color difference signal from the Y / C separation unit 113 to select a neighboring region similar to the attention region.
- the region of interest, the selected neighboring region, and the corresponding luminance signal and color difference signal are transferred to the noise estimation unit 115 and the noise reduction unit 116. Also, a weighting factor for the selected neighborhood region is calculated and transferred to the noise estimation unit 115.
- Based on the control of the control unit 119, the noise estimation unit 115 estimates the noise amount based on the attention area from the extraction unit 112, the neighboring areas selected by the selection unit 114, the luminance signal, the color difference signals, the weighting coefficients, and other information at the time of shooting, and transfers the result to the noise reduction unit 116.
- Based on the control of the control unit 119, the noise reduction unit 116 performs noise reduction processing on the attention area based on the attention area from the extraction unit 112, the luminance signal, the color difference signals, and the noise amount from the noise estimation unit 115, and transfers the processed signal to the signal processing unit 117.
- the processing in the extraction unit 112, Y / C separation unit 113, selection unit 114, noise estimation unit 115, and noise reduction unit 116 is performed synchronously in units of local regions based on the control of the control unit 119.
- the signal processing unit 117 Based on the control of the control unit 119, the signal processing unit 117 performs well-known enhancement processing, compression processing, and the like on the video signal after noise reduction, and transfers the video signal to the output unit 118.
- the output unit 118 records and saves the signal on a memory card or the like.
- FIGS. 2A to 2D are explanatory diagrams regarding a local region in a Bayer color filter.
- Figure 2A shows the configuration of a 6 × 6 pixel local region,
- Figure 2B shows the separation into luminance/color difference signals,
- Figure 2C shows another form of 6 × 6 pixel local region, and
- Figure 2D shows another form of 10 × 10 pixel local region.
- FIG. 2A shows the configuration of the local region in the present example.
- The attention area is 2 × 2 pixels, eight neighboring areas of 2 × 2 pixels each are arranged around it so as to enclose the attention area, and the local region as a whole is 6 × 6 pixels.
- The extraction unit 112 extracts the local regions in steps of 2 rows and 2 columns, so that successive local regions overlap and the attention areas cover the entire signal.
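The sketch below illustrates such an extraction under stated assumptions: 6 × 6 local regions are taken so that the central 2 × 2 attention area advances by 2 rows and 2 columns; the edge-replication padding at the image border is an illustrative choice, not something specified by the text.

```python
import numpy as np

def extract_local_regions(cfa, region=6, step=2):
    pad = (region - step) // 2
    padded = np.pad(cfa, pad, mode="edge")   # assumed border handling
    h, w = cfa.shape
    for y in range(0, h, step):
        for x in range(0, w, step):
            yield padded[y:y + region, x:x + region]

cfa = np.arange(64, dtype=float).reshape(8, 8)
print(sum(1 for _ in extract_local_regions(cfa)))   # 16 overlapping local regions
```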
- FIG. 2B shows the luminance signal and the color difference signal calculated based on the equation (1) for each region of interest and neighboring regions.
- the luminance signal and color difference signal of the attention area are represented by Y, Cb, Cr
- the luminance signal Y and the color difference signals Cb and Cr are calculated using equation (2).
- FIG. 2C shows another configuration of a local area of 6 ⁇ 6 pixel size, and the neighboring areas are arranged so that the neighboring areas overlap by one row and one column.
- Fig. 2D shows another configuration, a local region of 10 × 10 pixels; the neighboring areas consist of four 2 × 2 pixel areas and four 3 × 3 pixel areas, sparsely arranged within the local region.
- The attention area and the neighboring areas can be configured in any form as long as they contain at least one set of the R, G, and B pixels needed to calculate the luminance signal and the color difference signals.
- Fig. 3 shows an example of the configuration of the selection unit 114, which includes a minute fluctuation removal unit 200, a buffer 1 (201), a gradient calculation unit 202, a hue calculation unit 203, a hue class ROM 204, a buffer 2 (205), a similarity determination unit 206, a neighboring area selection unit 207, and a coefficient calculation unit 208.
- The extraction unit 112 is connected to the hue calculation unit 203 via the minute fluctuation removal unit 200, the buffer 1 (201), and the gradient calculation unit 202.
- the hue class ROM 204 is connected to the hue calculation unit 203.
- The Y/C separation unit 113 is connected to the similarity determination unit 206 and the neighboring area selection unit 207 via the buffer 2 (205).
- Hue calculation unit 203 is connected to similarity determination unit 206, and similarity determination unit 206 is connected to neighborhood region selection unit 207 and coefficient calculation unit 208.
- the neighborhood region selection unit 207 is connected to the noise estimation unit 115 and the noise reduction unit 116.
- the coefficient calculation unit 208 is connected to the noise estimation unit 115.
- the control unit 119 is bidirectionally connected to the minute fluctuation removing unit 200, the gradient calculating unit 202, the hue calculating unit 203, the similarity determining unit 206, the neighborhood region selecting unit 207, and the coefficient calculating unit 208.
- the local region from the extraction unit 112 is transferred to the minute fluctuation removing unit 200 based on the control of the control unit 119, and a predetermined minute fluctuation component is removed. This is done by removing the lower bits of the video signal.
- In this example, the A/D 106 is assumed to digitize with 12-bit gradation; by shifting out the lower 4 bits, the minute fluctuation component is removed and the signal is converted to an 8-bit signal, which is transferred to the buffer 1 (201).
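The bit-shift step described above can be sketched directly:

```python
def remove_minute_fluctuation(value_12bit):
    # Shift out the lower 4 bits: 12-bit value -> 8-bit value, discarding
    # minute fluctuations before hue classification.
    return value_12bit >> 4

print(remove_minute_fluctuation(0x0ABC))   # 2748 -> 171
```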
- Based on the control of the control unit 119, the gradient calculation unit 202, the hue calculation unit 203, and the hue class ROM 204 obtain the hue class from the spectral gradient and transfer it to the similarity determination unit 206.
- FIGS. 4A to 4D are explanatory diagrams regarding hue classification based on the spectral gradient.
- 4A shows the input image
- FIG. 4B shows the hue classification based on the spectral gradient
- FIG. 4C shows the CCD output signal
- FIG. 4D shows the result of the hue classification.
- Fig. 4A shows an example of the input image.
- the upper area A is white and the lower area B is red.
- Fig. 4B is a plot of signal values (I) for the R, G, and B spectra in the A and B regions.
- In area B, which is red, the R signal is dominant.
- FIG. 4C shows an image when the input image of FIG. 4A is captured by the Bayer-type single-plate CCD shown in FIG. 2A and the focus area and the vicinity area described above are set.
- When obtaining the spectral gradient in the attention area and the neighboring areas, each area contains two G signals; this is handled by calculating their average value and using it as the G signal.
- Fig. 4D shows a state in which 0 to 12 classes are assigned in the attention area and neighboring area units as described above.
- the obtained image is output to the similarity determination unit 206.
- the gradient calculation unit 202 obtains the magnitude relationship between the RGB signals in units of the attention area and the neighboring area, and transfers this to the hue calculation unit 203.
- the hue calculation unit 203 obtains 13 hue classes based on the magnitude relationship between the RGB signals from the gradient calculation unit 202 and the information on the hue class from the hue class ROM 204, and transfers these to the similarity determination unit 206.
- the hue class ROM 204 stores information on the spectral gradient shown in Table 1 and 13 hue classes.
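Table 1 itself is not reproduced in this text, so the sketch below only illustrates how thirteen classes can arise from the magnitude relation of the R, G, and B signals (6 strict orderings, 6 orderings with one tie, and 1 with all equal); the mapping of patterns to class numbers 0 to 12 is an assumption for illustration.

```python
def hue_class(r, g, b, tol=0):
    def cmp(a, b2):
        if abs(a - b2) <= tol:
            return 0
        return 1 if a > b2 else -1
    key = (cmp(r, g), cmp(g, b), cmp(r, b))
    # The 13 geometrically possible sign patterns for three compared values.
    patterns = [(0, 0, 0),
                (1, 1, 1), (1, -1, 1), (1, -1, -1),
                (-1, 1, 1), (-1, 1, -1), (-1, -1, -1),
                (0, 1, 1), (0, -1, -1), (1, 0, 1),
                (-1, 0, -1), (1, -1, 0), (-1, 1, 0)]
    return patterns.index(key)

print(hue_class(200, 120, 40))   # R > G > B -> class 1 in this illustrative mapping
```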
- the luminance signal and the color difference signal from the Y / C separation unit 113 are stored in the buffer 2205.
- The similarity determination unit 206 reads the luminance signals of the attention area and the neighboring areas from the buffer 2 (205).
- The similarity determination unit 206 determines the similarity between the attention area and each neighboring area based on the hue class from the hue calculation unit 203 and the luminance signal. A neighboring area that belongs to the same hue class as the attention area and whose luminance signal Yi lies within ±20% of the luminance signal Y of the attention area is determined to have high similarity, and the determination result is transferred to the neighborhood region selecting unit 207 and the coefficient calculating unit 208.
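- A sketch of this similarity test, assuming the ±20% tolerance is taken relative to the luminance Y of the attention area (function and variable names are illustrative):

```python
def is_similar(hue_class_i: int, y_i: float,
               hue_class_ref: int, y_ref: float,
               tol: float = 0.20) -> bool:
    """A neighboring area is judged similar when it has the same hue class as the
    attention area and its luminance Yi lies within +/-20% of the luminance Y
    of the attention area."""
    same_hue = hue_class_i == hue_class_ref
    close_luma = abs(y_i - y_ref) <= tol * y_ref
    return same_hue and close_luma

# Attention area: hue class 3, luminance Y = 120
print(is_similar(3, 130, 3, 120))  # True  (within +/-20%)
print(is_similar(3, 160, 3, 120))  # False (outside +/-20%)
```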
- Based on the control of the control unit 119, the neighborhood region selection unit 207 reads from the buffer 2205 the luminance signals Yi′ and color difference signals Cbi′ and Cri′ of the neighboring areas determined by the similarity determination unit 206 to have high similarity, and transfers them to the noise estimation unit 115. The luminance signal Y and the color difference signals Cb and Cr of the attention area are also read from the buffer 2205 and transferred to the noise estimation unit 115 and the noise reduction unit 116.
- The coefficient calculation unit 208 calculates, based on equation (3), the weighting coefficient Wi′ for each neighboring area determined to have high similarity.
- the calculated weight coefficient Wi ′ is transferred to the noise estimation unit 115.
- FIG. 5 shows an example of the configuration of the noise estimation unit 115.
- the selection unit 114 is connected to the buffer 300 and the average calculation unit 301.
- the buffer 300 is connected to the average calculator 301 and the average calculator 301 is connected to the parameter selector 305.
- The gain calculation unit 302, the standard value assigning unit 303, and the parameter ROM 304 are connected to the parameter selection unit 305.
- the parameter selection unit 305 is connected to the interpolation unit 306 and the correction unit 307.
- the interpolation unit 306 is connected to the correction unit 307.
- the correction unit 307 is connected to the noise reduction unit 116.
- the control unit 119 is connected bidirectionally to an average calculation unit 301, a gain calculation unit 302, a standard value assigning unit 303, a parameter selection unit 305, an interpolation unit 306, and a correction unit 307.
- the luminance signal and the color difference signal of the attention area and the vicinity area determined to have high similarity from the selection unit 114 are stored in the buffer 300.
- the weighting coefficient related to the neighborhood area determined to have a high degree of similarity is transferred to the average calculation unit 301.
- Based on the control of the control unit 119, the average calculation unit 301 reads the luminance signal and the color difference signals from the buffer 300 and, using the weighting coefficients, calculates the average values AVy, AVcb, AVcr of the luminance signal and the color difference signals for the local region.
- The average values of the luminance signal and the color difference signals are transferred to the parameter selection unit 305.
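- Equation (4) is not reproduced in this text; the sketch below assumes it is an ordinary normalized weighted average over the attention area and the selected neighboring areas, using the weighting coefficients Wi′ (the numerical values are illustrative):

```python
import numpy as np

def weighted_average(values: np.ndarray, weights: np.ndarray) -> float:
    """Weighted mean of the luminance (or Cb, Cr) values of the attention area
    and the neighboring areas judged similar, standing in for equation (4)."""
    weights = np.asarray(weights, dtype=float)
    return float(np.sum(values * weights) / np.sum(weights))

y_values = np.array([120.0, 118.0, 125.0, 122.0])  # attention area + 3 neighbors
w        = np.array([1.0, 0.8, 0.6, 0.8])          # illustrative weights Wi'
print(weighted_average(y_values, w))                # average luminance AVy
```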
- Gain calculating section 302 obtains the amplification amount in Gain 105 based on the ISO sensitivity, exposure conditions, and white balance coefficient information transferred from control section 119, and transfers the gain to parameter selection section 305.
- the control unit 119 obtains the temperature information of the CCD 103 from the temperature sensor 121 and transfers this to the parameter selection unit 305.
- the parameter selection unit 305 estimates the amount of noise based on the average value of the luminance signal and the color difference signal from the average calculation unit 301, the gain information from the gain calculation unit 302, and the temperature information from the control unit 119.
- FIG. 6A to FIG. 6D are explanatory diagrams relating to the estimation of the noise amount.
- Fig. 6A shows the relationship between noise level and signal level
- Fig. 6B shows a simplified noise model
- Fig. 6C shows how to calculate the amount of noise from a simplified noise model
- Fig. 6D shows the six hue directions used for the color noise models.
- N_s = α_s·L² + β_s·L + γ_s    (5)
- where α_s, β_s, and γ_s are constant terms.
- the amount of noise varies not only with the signal level but also with the temperature and gain of the element.
- As an example, Figure 6A plots the noise amount for three ISO sensitivities 100, 200, and 400 related to the gain, that is, for gains of 1×, 2×, and 4×.
- With respect to the temperature t, the average noise amounts at three environment temperatures of 20, 50, and 80 °C are shown.
- Each curve has the form shown in Eq. (5), but the coefficient varies depending on the ISO sensitivity related to the gain.
- Taking the gain g and the temperature t into account as well, the model becomes N_s = α_sgt·L² + β_sgt·L + γ_sgt    (6)
- where α_sgt, β_sgt, and γ_sgt are constant terms determined by the signal s, gain g, and temperature t.
- the model is simplified as shown in Fig. 6B.
- the model that gives the maximum amount of noise is selected as the reference noise model, which is approximated by a predetermined number of broken lines.
- the inflection point of the broken line is expressed by coordinate data (Ln, Nn) consisting of the signal level L and the noise amount N.
- n indicates the number of inflection points.
- A correction coefficient ksgt for deriving the other noise models from the reference noise model is also prepared.
- The correction coefficient ksgt is calculated by the least squares method between each noise model and the reference noise model, and the other noise models are derived from the reference noise model by multiplying by ksgt.
- Figure 6C shows how the noise amount is calculated from the simplified noise model shown in Figure 6B. Suppose the noise amount Ns corresponding to a given signal level l, signal s, gain g, and temperature t is to be obtained. First, the section of the reference noise model to which the signal level l belongs is searched for; here it is assumed to belong to the section between (Ln, Nn) and (Ln+1, Nn+1). The reference noise amount N in the reference noise model is obtained by the linear interpolation of equation (7).
- The noise amount Ns is then obtained by multiplying the reference noise amount N by the correction coefficient ksgt, as in equation (8).
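- A sketch of this estimation, assuming equation (7) is plain linear interpolation between the inflection points and equation (8) multiplies by the correction coefficient ksgt; the inflection-point data below are illustrative, not values from the patent:

```python
import bisect

# Illustrative inflection points (Ln, Nn) of the reference noise model
REF_MODEL = [(0, 2.0), (64, 3.5), (128, 5.0), (192, 7.0), (255, 10.0)]

def estimate_noise(level: float, k_sgt: float) -> float:
    """Reference noise amount N by linear interpolation over the broken-line
    model (equation (7)), scaled by the correction coefficient ksgt for the
    current signal, gain, and temperature (equation (8))."""
    levels = [ln for ln, _ in REF_MODEL]
    i = max(1, min(bisect.bisect_right(levels, level), len(REF_MODEL) - 1))
    (l0, n0), (l1, n1) = REF_MODEL[i - 1], REF_MODEL[i]
    n_ref = n0 + (n1 - n0) * (level - l0) / (l1 - l0)  # equation (7)
    return k_sgt * n_ref                               # equation (8)

print(estimate_noise(level=100.0, k_sgt=0.5))
```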
- The reference noise model described above can be divided into a reference luminance noise model for the luminance signal and a reference color noise model for the color difference signals, but both have basically the same configuration.
- However, the amount of color noise in the color difference signals Cb and Cr varies depending on the hue direction.
- For this reason, the reference color noise model is not prepared simply as two models for Cb and Cr; instead, reference color noise models are set for each of the six hues R (red), G (green), B (blue), Cy (cyan), Mg (magenta), and Ye (yellow).
- The inflection point coordinate data (Ln, Nn) and the correction coefficients ksgt of the reference noise models for the luminance and color difference signals are recorded in the parameter ROM 304.
- The parameter selection unit 305 sets the signal level l from the average values AVy, AVcb, AVcr of the luminance signal and the color difference signals from the average calculation unit 301,
- sets the gain g from the gain information from the gain calculation unit 302,
- and sets the temperature t from the temperature information from the control unit 119.
- For the color difference signals, the hue signal H is obtained from the average values AVcb and AVcr based on equation (9), the hue closest to H among the six hues R, G, B, Cy, Mg, and Ye is selected, and the corresponding reference color noise models Cb_H and Cr_H are set.
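- Equation (9) is not legible in this text; the sketch below assumes the hue signal H is the angle of the (AVcb, AVcr) vector and simply selects the nearest of the six reference hues (the reference angles are illustrative):

```python
import math

# Illustrative reference hue angles (degrees) for the six color noise models
REFERENCE_HUES = {"R": 100.0, "Ye": 160.0, "G": 225.0,
                  "Cy": 280.0, "B": 340.0, "Mg": 40.0}

def nearest_reference_hue(av_cb: float, av_cr: float) -> str:
    """Hue signal H from the average color difference values (assumed here to be
    the (Cb, Cr) angle, standing in for equation (9)); the closest of the six
    reference hues R, G, B, Cy, Mg, Ye is then selected."""
    h = math.degrees(math.atan2(av_cr, av_cb)) % 360.0

    def angular_distance(name: str) -> float:
        d = abs(h - REFERENCE_HUES[name]) % 360.0
        return min(d, 360.0 - d)

    return min(REFERENCE_HUES, key=angular_distance)

print(nearest_reference_hue(10.0, 40.0))  # reference hue nearest to H
```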
- Next, the coordinate data (Ln, Nn) and (Ln+1, Nn+1) of the section to which the signal level l belongs are searched for in the parameter ROM 304 and transferred to the interpolation unit 306, and the correction coefficient ksgt is searched for in the parameter ROM 304 and transferred to the correction unit 307. Based on the control of the control unit 119, the interpolation unit 306 calculates the reference noise amount N according to equation (7) from the signal level l from the parameter selection unit 305 and the coordinate data (Ln, Nn) and (Ln+1, Nn+1) of the section, and transfers it to the correction unit 307.
- Based on the control of the control unit 119, the correction unit 307 calculates the noise amount Ns according to equation (8) from the correction coefficient ksgt from the parameter selection unit 305 and the reference noise amount N from the interpolation unit 306. The noise amount Ns and the average values AVy, AVcb, AVcr of the luminance signal and the color difference signals are transferred to the noise reduction unit 116.
- Note that it is not necessary to obtain information such as the temperature t and the gain g for each image; a configuration in which arbitrary values are recorded in the standard value assigning unit 303 and the corresponding calculation steps are omitted is also possible, whereby high-speed processing and power saving can be realized.
- Also, although the hues in the six directions shown in FIG. 6D are used for the reference color noise models, the configuration need not be limited to this; for example, an important memory color such as skin color may be used in addition, and the configuration can be chosen freely.
- FIG. 7 shows an example of the configuration of the noise reduction unit 116, which includes a range setting unit 400, a switching unit 401, a first smoothing unit 402, and a second smoothing unit 403.
- Noise estimation unit 115 is connected to range setting unit 400, and range setting unit 400 is connected to switching unit 401, first smoothing unit 402, and second smoothing unit 403.
- the selection unit 114 is connected to the switching unit 401, and the switching unit 401 is connected to the first smoothing unit 402 and the second smoothing unit 403.
- the first smoothing unit 402 and the second smoothing unit 403 are connected to the signal processing unit 117.
- the control unit 119 is bi-directionally connected to the range setting unit 400, the switching unit 401, the first smoothing unit 402, and the second smoothing unit 403.
- The noise estimation unit 115 transfers the average values AVy, AVcb, AVcr of the luminance signal and the color difference signals and the corresponding noise amounts to the range setting unit 400.
- Based on the control of the control unit 119, the range setting unit 400 sets the upper limit Us and the lower limit Ds of the allowable range for the noise amounts of the luminance signal and the color difference signals, as shown in equation (10):
- U_Y = AV_Y + N_Y/2,  D_Y = AV_Y − N_Y/2
- U_Cb = AV_Cb + N_Cb/2,  D_Cb = AV_Cb − N_Cb/2
- U_Cr = AV_Cr + N_Cr/2,  D_Cr = AV_Cr − N_Cr/2    (10)
- the allowable ranges Us and Ds are transferred to the switching unit 401.
- The range setting unit 400 also transfers the average values AVy, AVcb, AVcr of the luminance signal and the color difference signals and the noise amounts to the first smoothing unit 402 and the second smoothing unit 403.
- the switching unit 401 reads the luminance signal Y and the color difference signals Cb and Cr of the region of interest from the selection unit 114 based on the control of the control unit 119.
- The switching unit 401 transfers the luminance signal Y and the color difference signals Cb and Cr of the attention area to the first smoothing unit 402 when they belong to the allowable range, and to the second smoothing unit 403 otherwise.
- The first smoothing unit 402 processes the luminance signal Y and the color difference signals Cb and Cr of the attention area from the switching unit 401 using the average values from the range setting unit 400, as shown in equation (11).
- Based on equation (2), the processing of equation (11) can be rewritten as shown in equation (12).
- The processing of equation (12) means that the attention area, which had been handled in terms of the luminance signal Y and the color difference signals Cb and Cr, is returned to the original RGB signals.
- The second smoothing unit 403 corrects the luminance signal Y and the color difference signals Cb and Cr of the attention area from the switching unit 401, using the average values AVy, AVcb, AVcr and the noise amounts from the range setting unit 400.
- When a signal is above the allowable range, it is corrected using the corresponding average value and half the corresponding noise amount as shown in equation (13), and the result is converted back to the RGB signals of the attention area as shown in equation (14).
- When a signal is below the allowable range, it is corrected in the same manner as shown in equation (15) and converted back to the RGB signals as shown in equation (16).
- The processing of equation (14) or (16) thus also means that the attention area, which had been handled in terms of the luminance signal Y and the color difference signals Cb and Cr, is returned to the original RGB signals.
- the RGB signal of equation (14) or equation (16) is transferred to the signal processing unit 117.
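- Equations (11) to (16) are not fully legible in this text; the sketch below assumes the first smoothing replaces a value inside the allowable range with its local average and the second smoothing pulls a value outside the range back to the range boundary, which is one plausible reading of the description above:

```python
def reduce_noise(value: float, av: float, noise: float) -> float:
    """Allowable range per equation (10): [av - noise/2, av + noise/2].
    Inside the range  -> first smoothing: replace the value with the average.
    Outside the range -> second smoothing: correct the value to the boundary
    (an assumed clamp standing in for equations (13)-(16))."""
    lower, upper = av - noise / 2.0, av + noise / 2.0
    if lower <= value <= upper:
        return av                              # first smoothing
    return upper if value > upper else lower   # second smoothing

# Luminance example: average AVy = 120, estimated noise amount Ny = 8
print(reduce_noise(122.0, 120.0, 8.0))  # 120.0 (within range, smoothed)
print(reduce_noise(140.0, 120.0, 8.0))  # 124.0 (above range, corrected)
```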
- the estimation accuracy of each can be improved by estimating the color noise amount and the luminance noise amount independently. Since a model is used to calculate the amount of noise, it is possible to estimate the amount of noise with high accuracy. Also, interpolation and correction processing based on the reference model can be easily implemented, and a low-cost system can be provided.
- the amount of memory required for the model can be reduced and the cost can be reduced.
- In addition, because the noise reduction process sets an allowable range based on the noise amount, it preserves the original signal well and performs the reduction without producing discontinuities. Furthermore, since the signal after noise reduction processing is output in the same form as the original signal, compatibility with conventional processing systems is maintained and various system combinations are possible.
- the noise amount estimation and the noise reduction processing have been performed using all the selected neighboring regions, but it is not necessary to be limited to such a configuration.
- For example, the noise amount estimation may exclude the neighboring areas in the diagonal directions of the attention area so as to improve accuracy by estimating over a comparatively narrow area, while the noise reduction process may use all of the selected neighboring areas over a comparatively wide area to increase the smoothing ability; such configurations can be chosen freely.
- In the above embodiment, the configuration is integrated with the image pickup section consisting of the lens system 100, aperture 101, low-pass filter 102, CCD 103, CDS 104, Gain 105, A/D 106, PreWB unit 108, photometric evaluation unit 109, focus detection unit 110, AF motor 111, and temperature sensor 121, but the configuration need not be limited to this.
- For example, a video signal picked up by a separate image pickup section and recorded on a recording medium such as a memory card in unprocessed raw data form, with additional information such as the image pickup conditions recorded in the header part, can also be processed.
- FIG. 8 shows a configuration in which the image-pickup components such as the AF motor 111 and the temperature sensor 121 are omitted from the configuration shown in FIG. 1, and an input unit 500 and a header information analysis unit 501 are added.
- the basic configuration is the same as in Fig. 1, and the same name and number are assigned to the same configuration. Only different parts will be described below.
- the input unit 500 is connected to the buffer 107 and the header information analysis unit 501.
- the control unit 119 is bi-directionally connected to the input unit 500 and the header information analysis unit 501.
- Through the external I/F unit 120, such as a mouse or keyboard, the signal and header information stored on a recording medium such as a memory card are read in via the input unit 500.
- the signal from the input unit 500 is transferred to the buffer 107, and the header information is transferred to the header information analysis unit 501.
- The header information analysis unit 501 extracts the information at the time of shooting from the header information and transfers it to the control unit 119.
- the subsequent processing is the same as in Figure 1.
- FIG. 9 shows a flow relating to software processing of noise reduction processing.
- step S1 read the signal and header information such as temperature and gain.
- step S2 a local region composed of a region of interest and a neighboring region as shown in FIG. 2A is extracted.
- step S3 the luminance signal and the color difference signal are separated as shown in equation (1).
- step S4 the attention area and the neighboring area in the local area are classified into hue classes as shown in Table 1.
- step S5 based on the hue class information from step S4 and the luminance information from step S3, the similarity between each neighboring area and the attention area is determined.
- step S6 the weighting coefficient shown in equation (3) is calculated.
- step S7 the luminance signal and color difference signal of the focus area and the neighboring area where the similarity is determined to be high based on the similarity from step S5 are selected from step S3.
- step S8 the average value of the luminance signal and the color difference signal shown in equation (4) is calculated.
- step S9 information such as temperature and gain is set from the read header information. If the required parameters do not exist in the header information, a predetermined standard value is assigned.
- step S10 the coordinate data and correction coefficient of the reference noise model are read.
- step S11 the reference noise amount is obtained by the interpolation process shown in equation (7).
- step S12 the amount of noise is obtained by the correction process shown in equation (8).
- step S13 it is determined whether or not the luminance signal and the color difference signals of the region of interest belong to the allowable range shown in equation (10). If they do, the process branches to step S14; if not, it branches to step S15.
- step S14 the processing shown in equation (12) is performed.
- step S15 the processing shown in equations (14) and (16) is performed.
- step S16 it is determined whether or not all local regions are completed. If not completed, the process branches to step S2, and if completed, the process branches to step S17.
- step S17 known enhancement processing and compression processing are performed.
- step S18 the processed signal is output and the process ends.
- FIG. 10 is a configuration diagram of the signal processing system according to the second embodiment of the present invention.
- FIGS. 11A to 11C are explanatory diagrams regarding local regions in the color difference line sequential color filter
- FIG. 12 is a block diagram of the selection unit of FIG.
- FIG. 13 is a configuration diagram of a selection unit having another configuration
- FIG. 14 is a configuration diagram of the noise estimation unit of FIG. 10
- FIG. 15 is a flowchart of the noise reduction processing of the second embodiment.
- FIG. 10 is a configuration diagram of Embodiment 2 of the present invention.
- the connection from the extraction unit 112 to the selection unit 114 in the first embodiment of the present invention is eliminated.
- the basic configuration is the same as in the first embodiment, and the same name and number are assigned to the same configuration.
- FIG. 11A shows the configuration of an 8 × 8 pixel local region in the color difference line sequential type color filter
- FIG. 11B shows separation into luminance Z color difference signals
- FIG. 11C shows edge component extraction.
- the color difference line sequential method uses 2 X 2 pixels as a basic unit, and cyan (Cy), magenta (Mg), yellow (Ye), and green (G) are arranged one by one. However, the positions of Mg and G are reversed for each line.
- the video signal in the buffer 107 is transferred to the extraction unit 112.
- Based on the control of the control unit 119, the extraction unit 112 sequentially extracts local regions of 8 × 8 pixels, each containing a 4 × 4 pixel attention area, as shown in FIG. 11A, and transfers them to the Y/C separation unit 113. In this case, the extraction unit 112 extracts the local regions so that they overlap in units of 2 rows and 2 columns and the attention areas cover the entire signal.
- the Y / C separation unit 113 calculates the luminance signal Y and the color difference signals Cb and Cr from the neighboring region and the region of interest in units of 2 ⁇ 2 pixels based on the equation (17).
- The luminance and color difference signals are thus obtained in units of 2 × 2 pixels for the 4 × 4 pixel attention area and each neighboring area.
- FIG. 11B shows the luminance and color difference signals calculated on the basis of equation (17) for the attention area and the neighboring areas.
- These are transferred to the selection unit 114.
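- Equation (17) is not reproduced legibly here; the sketch below uses the standard luminance/color-difference relations for one Cy/Ye/Mg/G 2 × 2 unit, as an assumption about what equation (17) computes:

```python
def ycc_from_complementary(cy: float, ye: float, mg: float, g: float):
    """Luminance and color difference signals from one 2x2 unit of the color
    difference line sequential filter (standard textbook relations, assumed
    here in place of equation (17))."""
    y  = (cy + ye + mg + g) / 4.0    # luminance
    cr = (mg + ye) - (g + cy)        # R-like color difference
    cb = (mg + cy) - (g + ye)        # B-like color difference
    return y, cb, cr

print(ycc_from_complementary(cy=110.0, ye=140.0, mg=130.0, g=100.0))
```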
- Based on the control of the control unit 119, the selection unit 114 uses the luminance and color difference signals from the Y/C separation unit 113 to select the neighboring areas that are similar to the attention area.
- The attention area, the selected neighboring areas, and the corresponding luminance and color difference signals are transferred to the noise estimation unit 115 and the noise reduction unit 116, and a weighting coefficient for each selected neighboring area is calculated and also transferred to the noise estimation unit 115.
- Based on the control of the control unit 119, the noise estimation unit 115 estimates the noise amount from the attention area from the extraction unit 112, the selected neighboring areas, the luminance and color difference signals, the weighting coefficients, and other information at the time of shooting, and transfers it to the noise reduction unit 116. Based on the control of the control unit 119, the noise reduction unit 116 performs noise reduction processing on the attention area using the attention area from the extraction unit 112, the luminance and color difference signals, and the noise amount from the noise estimation unit 115, and transfers the processed attention area to the signal processing unit 117.
- the processing in the extraction unit 112, Y / C separation unit 113, selection unit 114, noise estimation unit 115, and noise reduction unit 116 is performed in synchronization with each local region based on the control of the control unit 119.
- Based on the control of the control unit 119, the signal processing unit 117 performs known enhancement processing, compression processing, and the like on the video signal after noise reduction, and transfers the result to the output unit 118.
- the output unit 118 records and saves the signal on a memory card or the like.
- FIG. 12 shows an example of the configuration of the selection unit 114 of FIG. 10.
- An edge calculation unit 600 is added to the selection unit 114 shown in FIG. 3 of the first embodiment, and the gradient calculation unit 202 and the hue class ROM 204 are removed.
- the basic configuration is the same as the selection unit 114 shown in FIG. 3, and the same name and number are assigned to the same configuration. Only the differences will be described below.
- the Y / C separation unit 113 is connected to the minute fluctuation removal unit 200.
- the minute fluctuation removing unit 200 is connected to the hue calculating unit 203 via the buffer 1201.
- the buffer 2205 is connected to the edge calculation unit 600, and the edge calculation unit 600 is connected to the similarity determination unit 206.
- the control unit 119 is connected to the edge calculation unit 600 in both directions.
- the luminance signal and the color difference signal in the attention area and the neighboring area from the Y / C separation unit 113 are transferred to the minute fluctuation removal unit 200 and the buffer 2205.
- the minute fluctuation removing unit 200 removes minute fluctuation components by shifting the lower bits of the color difference signal, and transfers them to the buffer 1201.
- The hue calculation unit 203 calculates the hue signal H based on equation (9) from the color difference signals of the attention area and the neighboring areas in the buffer 1201. As shown in FIG. 11B, nine hue signals are obtained for the attention area and for each neighboring area; by averaging them, the hue signal of each area is obtained.
- the calculated hue signal is transferred to the similarity determination unit 206.
- the edge calculation unit 600 reads the luminance signals of the attention region and the neighboring region from the buffer 2205 based on the control of the control unit 119.
- the edge intensity value E is calculated by applying the 3 X 3 Laplacian operator shown in Eq. (18) to the luminance signal in each region.
- Since the attention area and each neighboring area have nine luminance signals as shown in FIG. 11B, the edge intensity values are obtained as shown in FIG. 11C, one value being calculated for each area.
- the calculated edge strength value is transferred to the similarity determination unit 206.
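- Equation (18) is only referenced here, so the kernel below is the common 3 × 3 Laplacian, used as an assumption; one edge intensity value is produced per 3 × 3 block of luminance values, as in FIG. 11C:

```python
import numpy as np

# A common 3x3 Laplacian operator (assumed in place of equation (18))
LAPLACIAN = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)

def edge_intensity(y_block: np.ndarray) -> float:
    """Edge intensity value E of one area: the 3x3 Laplacian applied to the
    nine luminance values of that area (one value per area)."""
    return float(abs(np.sum(LAPLACIAN * y_block)))

flat = np.full((3, 3), 100.0)                    # flat area -> E = 0
edge = np.array([[100, 100, 100],
                 [100, 100, 100],
                 [200, 200, 200]], dtype=float)  # luminance step -> large E
print(edge_intensity(flat), edge_intensity(edge))
```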
- The similarity determination unit 206 reads the luminance signals of the attention area and the neighboring areas from the buffer 2205.
- The similarity determination unit 206 determines the similarity between the attention area and each neighboring area based on the hue signal from the hue calculation unit 203, the edge intensity value from the edge calculation unit 600, and the luminance signal. A neighboring area that satisfies all of the following conditions is determined to have high similarity: its hue signal Hi lies within ±25% of the hue signal H of the attention area, its edge intensity value Ei lies within ±20% of the edge intensity value E of the attention area, and its luminance signal Yi lies within ±20% of the luminance signal Y of the attention area. The determination result is transferred to the neighboring region selecting unit 207 and the coefficient calculating unit 208.
- the subsequent processing is the same as that of the first embodiment of the present invention shown in FIG.
- In the above, luminance, hue, and edge intensity are used to determine the similarity between the attention area and the neighboring areas, but the configuration need not be limited to this.
- For example, frequency information can also be used; FIG. 13 shows a configuration in which the edge calculation unit 600 in FIG. 12 is replaced with a DCT conversion unit 700.
- The basic configuration is the same as the selection unit 114 shown in FIG. 12, and the same names and numbers are assigned to the same components. Only the differing parts are described below.
- The buffer 2205 is connected to the DCT conversion unit 700, and the DCT conversion unit 700 is connected to the similarity determination unit 206.
- The control unit 119 is bidirectionally connected to the DCT conversion unit 700.
- The DCT conversion unit 700 reads the luminance signals of the attention area and the neighboring areas from the buffer 2205, applies a known DCT conversion to the luminance signals of each area, and transfers the converted frequency signals to the similarity determination unit 206. The similarity determination unit 206 then determines the similarity between the attention area and each neighboring area based on the hue signal from the hue calculation unit 203, the frequency signals from the DCT conversion unit 700, and the luminance signal.
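- As a sketch of such a frequency-based comparison, assuming a standard 2-D DCT of the nine luminance values of each area and a simple difference of the non-DC coefficients (the patent's exact transform size and comparison rule are not specified in this text):

```python
import numpy as np

def dct2(block: np.ndarray) -> np.ndarray:
    """Orthonormal 2-D DCT-II of a small luminance block."""
    n = block.shape[0]
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    scale = np.full(n, np.sqrt(2.0 / n))
    scale[0] = np.sqrt(1.0 / n)
    d = basis * scale[:, None]
    return d @ block @ d.T

def frequency_difference(y_ref: np.ndarray, y_i: np.ndarray) -> float:
    """Smaller value = more similar AC (non-DC) frequency content."""
    f_ref, f_i = dct2(y_ref), dct2(y_i)
    f_ref[0, 0] = f_i[0, 0] = 0.0  # ignore the DC term (mean luminance)
    return float(np.sum(np.abs(f_ref - f_i)))

y_a = np.array([[100, 102, 101], [99, 100, 101], [100, 98, 100]], dtype=float)
y_b = y_a + 1.0  # same structure, slightly brighter
print(frequency_difference(y_a, y_b))  # close to 0: similar frequency content
```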
- FIG. 14 shows an example of the configuration of the noise estimation unit 115 of FIG. 10. A look-up table unit 800 is added to the noise estimation unit 115 of the first embodiment shown in FIG. 5, and the parameter ROM 304, the parameter selection unit 305, the interpolation unit 306, and the correction unit 307 are omitted.
- the basic configuration is equivalent to the noise estimation unit 115 shown in FIG. 5, and the same name and number are assigned to the same configuration. Only different parts will be described below.
- Average calculating section 301, gain calculating section 302, and standard value assigning section 303 are connected to look-up table section 800.
- Look-up table unit 800 is connected to noise reduction unit 116.
- the control unit 119 is bidirectionally connected to the look-up table unit 800.
- Based on the control of the control unit 119, the average calculation unit 301 reads the luminance signal and the color difference signals from the buffer 300 and, using the weighting coefficients, calculates the average values AVy, AVcb, AVcr of the luminance signal and the color difference signals for the local region.
- The average values of the luminance signal and the color difference signals are transferred to the look-up table unit 800.
- the gain calculation unit 302 obtains the amplification amount in Gain 105 based on the ISO sensitivity and exposure conditions transferred from the control unit 119 and information on the white balance coefficient, and transfers the gain to the lookup table unit 800. Further, the control unit 119 obtains the temperature information of the CCD 103 from the temperature sensor 121 and transfers it to the lookup table unit 800.
- Lookup table section 800 estimates the amount of noise based on the average value of the luminance signal and the color difference signal from average calculation section 301, gain information from gain calculation section 302, and temperature information from control section 119.
- the look-up table unit 800 is a look-up table that records the relationship among temperature, signal value level, gain, shutter speed, and noise amount, and is constructed by the same method as in the first embodiment.
- the amount of noise obtained by look-up table unit 800 is transferred to noise reduction unit 116.
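- A minimal sketch of such a look-up, assuming a table indexed by quantized signal level, gain, and temperature; the table contents and bin edges are illustrative, and the actual table also includes the shutter speed:

```python
import numpy as np

# Illustrative 3-D table: 4 signal-level bins x 3 gain settings x 3 temperatures
NOISE_LUT = np.random.default_rng(0).uniform(1.0, 12.0, size=(4, 3, 3))
LEVEL_EDGES = np.array([64, 128, 192])   # 0..255 split into 4 bins
GAINS = np.array([1.0, 2.0, 4.0])        # ISO 100 / 200 / 400
TEMPS = np.array([20.0, 50.0, 80.0])     # degrees C

def lookup_noise(level: float, gain: float, temp: float) -> float:
    """Noise amount read from the table using the signal-level bin and the
    nearest gain/temperature entries (no interpolation, for brevity)."""
    li = int(np.searchsorted(LEVEL_EDGES, level))
    gi = int(np.argmin(np.abs(GAINS - gain)))
    ti = int(np.argmin(np.abs(TEMPS - temp)))
    return float(NOISE_LUT[li, gi, ti])

print(lookup_noise(level=100.0, gain=2.0, temp=50.0))
```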
- the standard value assigning unit 303 has a function of giving a standard value when any of the parameters is omitted.
- By estimating the color noise amount and the luminance noise amount independently, the estimation accuracy of each can be improved. Since a look-up table is used to calculate the noise amount, the noise amount can be estimated at high speed. In addition, since the noise reduction process sets an allowable range based on the noise amount, the reduction preserves the original signal well and prevents the occurrence of discontinuities.
- the signal after noise reduction processing is output as the original signal, compatibility with the conventional processing system is maintained, and various system combinations are possible.
- Since the luminance and color difference signals are obtained in accordance with the color filter arrangement of the color difference line sequential method, high-speed processing is possible.
- In the above, a single-plate CCD with complementary-color, color difference line sequential filters has been described as an example, but the present invention is not limited to this.
- The present invention can be similarly applied to the primary-color Bayer type shown in the first embodiment, and also to two-plate and three-plate CCDs.
- Alternatively, the signal from the CCD 103 can be output as unprocessed raw data, the temperature, gain, shutter speed, and the like at the time of shooting can be output from the control unit 119 as header information, and the data can be processed separately by software.
- FIG. 15 shows a flow relating to software processing of noise reduction processing. Note that the same step numbers are assigned to the same processing steps as the noise reduction processing flow in the first embodiment of the present invention shown in FIG.
- step S1 header information such as signal, temperature, and gain is read.
- step S2 a local region composed of a region of interest and a neighboring region as shown in FIG. 11A is extracted.
- step S3 the luminance signal and the color difference signal are separated as shown in equation (17).
- step S4 a hue signal is calculated from the attention area and the neighboring area in the local area based on the equation (9).
- step S20 the edge intensity value is calculated by applying the Laplacian operator shown in equation (18).
- step S5 based on the hue information from step S4, the luminance information from step S3, and the edge information from step S20, the similarity between each neighboring region and the region of interest is determined.
- step S6 the weighting coefficient shown in equation (3) is calculated.
- step S7 the luminance signal and the color difference signal of the attention area and the neighboring area where the similarity is determined to be high based on the similarity from step S5 are selected from step S3.
- step S8 the average values of the luminance signal and the color difference signals shown in equation (4) are calculated.
- step S9 information such as temperature and gain is set from the read header information. If the necessary parameters do not exist in the header information, a predetermined standard value is assigned.
- step S21 the amount of noise is calculated using a lookup table.
- step S13 it is determined whether or not the luminance signal and the color difference signals of the region of interest belong to the allowable range shown in equation (10). If they do, the process branches to step S14; if not, it branches to step S15.
- step S14 the processing shown in equation (11) is performed.
- step S15 the processing shown in equations (13) and (15) is performed.
- step S16 it is determined whether or not all local regions are completed. If not completed, the process branches to step S2, and if completed, the process branches to step S17.
- step S17 known enhancement processing and compression processing are performed.
- step S18 the processed signal is output and the process ends.
- As described above, the noise amounts of the color signals and the luminance signal are modeled in accordance not only with the signal level but also with factors that change dynamically, such as the temperature and gain at the time of shooting, which makes it possible to perform noise reduction processing optimized for the shooting conditions.
- noise reduction processing is performed independently for luminance noise and color noise, so that both noises can be reduced with high accuracy and a high-quality signal can be generated.
- The present invention can be widely used in apparatuses, such as imaging apparatuses and image reading apparatuses, that need to reduce with high accuracy the random noise of the color signals and the luminance signal caused by the image sensor system.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Processing Of Color Television Signals (AREA)
- Image Processing (AREA)
- Color Television Image Signal Generators (AREA)
- Facsimile Image Signal Circuits (AREA)
- Image Input (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP05757820A EP1764738A1 (en) | 2004-07-07 | 2005-07-06 | Signal processing system and signal processing program |
US11/649,924 US20070132864A1 (en) | 2004-07-07 | 2007-01-03 | Signal processing system and signal processing program |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004-201091 | 2004-07-07 | ||
JP2004201091A JP2006023959A (ja) | 2004-07-07 | 2004-07-07 | 信号処理システム及び信号処理プログラム |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/649,924 Continuation US20070132864A1 (en) | 2004-07-07 | 2007-01-03 | Signal processing system and signal processing program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006004151A1 true WO2006004151A1 (ja) | 2006-01-12 |
Family
ID=35782951
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/012478 WO2006004151A1 (ja) | 2004-07-07 | 2005-07-06 | 信号処理システム及び信号処理プログラム |
Country Status (4)
Country | Link |
---|---|
US (1) | US20070132864A1 (ja) |
EP (1) | EP1764738A1 (ja) |
JP (1) | JP2006023959A (ja) |
WO (1) | WO2006004151A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007043325A1 (ja) * | 2005-10-12 | 2007-04-19 | Olympus Corporation | 画像処理システム、画像処理プログラム |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7583303B2 (en) * | 2005-01-31 | 2009-09-01 | Sony Corporation | Imaging device element |
JP4265546B2 (ja) * | 2005-01-31 | 2009-05-20 | ソニー株式会社 | 撮像装置、画像処理装置および画像処理方法 |
EP1976268A1 (en) * | 2005-12-28 | 2008-10-01 | Olympus Corporation | Imaging system and image processing program |
JP4959237B2 (ja) | 2006-06-22 | 2012-06-20 | オリンパス株式会社 | 撮像システム及び撮像プログラム |
JP4653059B2 (ja) | 2006-11-10 | 2011-03-16 | オリンパス株式会社 | 撮像システム、画像処理プログラム |
JP4523008B2 (ja) | 2007-01-10 | 2010-08-11 | 学校法人神奈川大学 | 画像処理装置および撮像装置 |
JP5052189B2 (ja) | 2007-04-13 | 2012-10-17 | オリンパス株式会社 | 映像処理装置及び映像処理プログラム |
JP5165300B2 (ja) * | 2007-07-23 | 2013-03-21 | オリンパス株式会社 | 映像処理装置および映像処理プログラム |
JP5012315B2 (ja) * | 2007-08-20 | 2012-08-29 | セイコーエプソン株式会社 | 画像処理装置 |
JP2009188822A (ja) * | 2008-02-07 | 2009-08-20 | Olympus Corp | 画像処理装置及び画像処理プログラム |
JP2010147568A (ja) * | 2008-12-16 | 2010-07-01 | Olympus Corp | 画像処理装置、画像処理方法、および、画像処理プログラム |
JP5197423B2 (ja) * | 2009-02-18 | 2013-05-15 | オリンパス株式会社 | 画像処理装置 |
US9204113B1 (en) | 2010-06-28 | 2015-12-01 | Ambarella, Inc. | Method and/or apparatus for implementing high dynamic range image processing in a video processing system |
JP6071419B2 (ja) * | 2012-10-25 | 2017-02-01 | キヤノン株式会社 | 画像処理装置及び画像処理方法 |
JP6349614B2 (ja) * | 2015-05-15 | 2018-07-04 | エスゼット ディージェイアイ テクノロジー カンパニー リミテッドSz Dji Technology Co.,Ltd | 画像処理方法及び画像処理システム |
US9977986B2 (en) * | 2015-11-19 | 2018-05-22 | Streamax Technology Co, Ltd. | Method and apparatus for switching a region of interest |
WO2021225030A1 (ja) * | 2020-05-08 | 2021-11-11 | ソニーセミコンダクタソリューションズ株式会社 | 電子機器及び撮像装置 |
KR20220090920A (ko) * | 2020-12-23 | 2022-06-30 | 삼성전자주식회사 | 이미지 센서, 이미지 센서의 동작 방법 및 이를 포함하는 이미지 센싱 장치 |
JP2022137916A (ja) * | 2021-03-09 | 2022-09-22 | キヤノン株式会社 | 画像処理装置、画像形成システム、画像処理方法及びプログラム |
KR20220145694A (ko) * | 2021-04-22 | 2022-10-31 | 에스케이하이닉스 주식회사 | 이미지 센싱 장치 및 그의 동작방법 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001157057A (ja) * | 1999-11-30 | 2001-06-08 | Konica Corp | 画像読取装置 |
JP2001175843A (ja) * | 1999-12-15 | 2001-06-29 | Canon Inc | 画像処理方法、装置および記憶媒体 |
JP2004128985A (ja) * | 2002-10-03 | 2004-04-22 | Olympus Corp | 撮像システム、再生システム、撮像プログラム、再生プログラム |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6980326B2 (en) * | 1999-12-15 | 2005-12-27 | Canon Kabushiki Kaisha | Image processing method and apparatus for color correction of an image |
-
2004
- 2004-07-07 JP JP2004201091A patent/JP2006023959A/ja active Pending
-
2005
- 2005-07-06 EP EP05757820A patent/EP1764738A1/en not_active Withdrawn
- 2005-07-06 WO PCT/JP2005/012478 patent/WO2006004151A1/ja not_active Application Discontinuation
-
2007
- 2007-01-03 US US11/649,924 patent/US20070132864A1/en not_active Abandoned
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001157057A (ja) * | 1999-11-30 | 2001-06-08 | Konica Corp | 画像読取装置 |
JP2001175843A (ja) * | 1999-12-15 | 2001-06-29 | Canon Inc | 画像処理方法、装置および記憶媒体 |
JP2004128985A (ja) * | 2002-10-03 | 2004-04-22 | Olympus Corp | 撮像システム、再生システム、撮像プログラム、再生プログラム |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007043325A1 (ja) * | 2005-10-12 | 2007-04-19 | Olympus Corporation | 画像処理システム、画像処理プログラム |
JP2007110338A (ja) * | 2005-10-12 | 2007-04-26 | Olympus Corp | 画像処理システム、画像処理プログラム |
US8019174B2 (en) | 2005-10-12 | 2011-09-13 | Olympus Corporation | Image processing system, image processing method, and image processing program product |
Also Published As
Publication number | Publication date |
---|---|
EP1764738A1 (en) | 2007-03-21 |
US20070132864A1 (en) | 2007-06-14 |
JP2006023959A (ja) | 2006-01-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2006004151A1 (ja) | 信号処理システム及び信号処理プログラム | |
JP4465002B2 (ja) | ノイズ低減システム、ノイズ低減プログラム及び撮像システム。 | |
JP3899118B2 (ja) | 撮像システム、画像処理プログラム | |
JP4547223B2 (ja) | 撮像システム、ノイズ低減処理装置及び撮像処理プログラム | |
JP3934597B2 (ja) | 撮像システムおよび画像処理プログラム | |
JP3762725B2 (ja) | 撮像システムおよび画像処理プログラム | |
JP4054184B2 (ja) | 欠陥画素補正装置 | |
JP4427001B2 (ja) | 画像処理装置、画像処理プログラム | |
JP4979595B2 (ja) | 撮像システム、画像処理方法、画像処理プログラム | |
JP4660342B2 (ja) | 画像処理システム、画像処理プログラム | |
JP5165300B2 (ja) | 映像処理装置および映像処理プログラム | |
WO2005104531A1 (ja) | 映像信号処理装置と映像信号処理プログラムおよび映像信号記録媒体 | |
US7916187B2 (en) | Image processing apparatus, image processing method, and program | |
WO2008056565A1 (fr) | Système d'analyse d'image et programme de traitement d'image | |
WO2007049418A1 (ja) | 画像処理システム、画像処理プログラム | |
JP5052189B2 (ja) | 映像処理装置及び映像処理プログラム | |
WO2005099356A2 (ja) | 撮像装置 | |
JP4916341B2 (ja) | 画像処理装置及び画像処理プログラム | |
JP2004266323A (ja) | 撮像システム、画像処理プログラム | |
JP2009100207A (ja) | ノイズ低減システム、ノイズ低減プログラム及び撮像システム | |
JP2009027615A (ja) | 映像処理装置および映像処理プログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2005757820 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 11649924 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWW | Wipo information: withdrawn in national office |
Country of ref document: DE |
|
WWP | Wipo information: published in national office |
Ref document number: 2005757820 Country of ref document: EP |
|
WWP | Wipo information: published in national office |
Ref document number: 11649924 Country of ref document: US |