WO2005081542A1 - Image processing method - Google Patents
Image processing method
- Publication number
- WO2005081542A1 PCT/JP2004/001889 JP2004001889W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- value
- pixel
- image signal
- image
- color
- Prior art date
Links
- 238000003672 processing method Methods 0.000 title claims abstract description 32
- 230000002093 peripheral effect Effects 0.000 claims abstract description 10
- 238000003384 imaging method Methods 0.000 claims description 60
- 238000004364 calculation method Methods 0.000 claims description 48
- 238000000034 method Methods 0.000 claims description 37
- 238000001514 detection method Methods 0.000 claims description 16
- 230000009467 reduction Effects 0.000 description 42
- 238000010586 diagram Methods 0.000 description 19
- 230000008569 process Effects 0.000 description 17
- 230000000694 effects Effects 0.000 description 11
- 238000006243 chemical reaction Methods 0.000 description 9
- 239000003086 colorant Substances 0.000 description 7
- 238000011946 reduction process Methods 0.000 description 6
- 238000003708 edge detection Methods 0.000 description 4
- 238000000605 extraction Methods 0.000 description 4
- 230000006870 function Effects 0.000 description 4
- 230000003287 optical effect Effects 0.000 description 4
- 238000000926 separation method Methods 0.000 description 3
- 230000008859 change Effects 0.000 description 2
- 230000006835 compression Effects 0.000 description 2
- 238000007906 compression Methods 0.000 description 2
- 230000007423 decrease Effects 0.000 description 2
- 238000009792 diffusion process Methods 0.000 description 2
- 230000005540 biological transmission Effects 0.000 description 1
- 230000000295 complement effect Effects 0.000 description 1
- 239000000284 extract Substances 0.000 description 1
- 230000035945 sensitivity Effects 0.000 description 1
- 230000003595 spectral effect Effects 0.000 description 1
- 230000007480 spreading Effects 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/843—Demosaicing, e.g. interpolating colour pixel values
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/135—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on four or more different wavelength filter elements
- H04N25/136—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on four or more different wavelength filter elements using complementary colours
Definitions
- the present invention relates to an image processing method for removing noise included in an image signal captured by an image sensor.
- a multi-plate imaging apparatus has three two-dimensional image sensors, such as CCDs, to each of which a color filter for one of the three primary colors, the R, G, and B components, is separately attached.
- the image processing apparatus splits the optical image of the subject incident from the optical system with a spectral prism or the like in a single shot, and makes the split optical images incident on the three two-dimensional imaging elements.
- a multi-plate imaging method is used to obtain full-color image information for the screen.
- each of the pixels arranged two-dimensionally has either a color filter for the R component, a color filter for the G component, or a color filter for the B component.
- a single-chip imaging method is used in which the color signals missing at each pixel are obtained by calculation using the color signals of the peripheral pixels of the pixel concerned, yielding pseudo full-color image information for each pixel.
- since the single-chip imaging method uses one image sensor and fewer optical components than the multi-chip imaging method, the device can be configured small and at low cost, and it is widely used in consumer digital still cameras and digital video cameras.
- the single-chip imaging method generates the non-captured color signals by performing color signal interpolation on the image information captured by a single-chip imaging element to which primary color filters are attached, thereby obtaining a full-color image.
- the R, G, and B signals of each pixel generated by the color interpolation processing are finally converted into luminance and color difference signals for screen display and for image compression processing such as JPEG/MPEG.
- filter processing for purposes such as noise reduction and contour enhancement is performed on the image composed of the luminance and color difference signals.
- a luminance / chrominance separation circuit performs a conversion process to a luminance / chrominance signal
- the luminance / chrominance separation circuit performs the above-described color interpolation processing and then performs conversion processing to a luminance / chrominance signal.
- noise is reduced by applying noise reduction processing, such as so-called coring, to the luminance/chrominance signal converted by the luminance/chrominance separation circuit.
- conventionally, noise reduction processing is performed when color interpolation processing is applied to an image captured by a single-chip image sensor: a predetermined low-pass filter is applied after the luminance signals of all pixels have been generated, so the diffusion of noise caused by the color interpolation processing cannot be prevented.
- an additional line buffer is required to construct a pixel window for noise reduction processing on the luminance signal after color interpolation processing.
- the noise level detected between adjacent pixels of different imaging colors differs particularly at color edges.
- the continuity of the noise reduction processing in the screen is lost, and the sense of stability of the image quality is lost.
- the noise level value of the processed adjacent same-color pixel is used recursively, but it is not effective in reducing the random noise generated independently of the adjacent pixel.
- a region with a high noise level is determined by the fuzzy function to be a significant edge as image information, and noise reduction processing is not performed there; as a result, noise generated in pixels adjacent to the edge cannot be reduced, and is instead emphasized by the contour enhancement processing generally used in the later stage.
- Patent Document 1: Japanese Patent No. 2787781
- Patent Document 2: Japanese Patent Publication No. 2001-177767
- Patent Document 3: Japanese Patent Application Laid-Open No. 2003-87798
- Patent Document 4: Japanese Patent Application Laid-Open No. 2000-15053
- since the conventional image processing method is configured as described above, the noise superimposed on the image signal when the image sensor performs photoelectric conversion, and the noise superimposed on the analog signal after photoelectric conversion by the analog signal processing circuit, are diffused around the target pixel by the color interpolation processing; therefore, as the number of pixels of the image sensor increases and the light receiving area per element decreases, sensitivity declines, and there is a problem that the relatively increased noise cannot be reduced sufficiently.
- noise can be reduced to some extent by applying coring to luminance noise and low-pass processing to color noise, but in an actual captured image the noise is not spot noise; rather, random noise occurs over the entire image surface. The noise components diffused by the color interpolation process therefore overlap each other and the original image signal is buried in the noise, making it difficult to remove the luminance noise and the color noise after conversion into the luminance/chrominance signal.
- the present invention has been made to solve the above-described problems, and an object thereof is to obtain an image processing method capable of sufficiently reducing the noise superimposed on a target pixel without diffusing the noise to peripheral pixels.
Disclosure of the Invention
- An image processing method according to the present invention includes: an edge intensity value calculation step of calculating an edge intensity value near a target pixel from the feature values of minute areas calculated in a feature value calculation step; a filter value calculation step of calculating a low-pass filter value of the target pixel from the image signal values of peripheral pixels; and an image signal value correction step of correcting the image signal value of the target pixel using the edge intensity value calculated in the edge intensity value calculation step and the low-pass filter value calculated in the filter value calculation step.
- FIG. 1 is a configuration diagram showing an image processing apparatus to which an image processing method according to Embodiment 1 of the present invention is applied.
- FIG. 2 is a flowchart showing an image processing method according to Embodiment 1 of the present invention.
- FIG. 3 is an explanatory diagram showing the arrangement of color filters in a single-plate image sensor.
- FIG. 4 is an explanatory diagram showing linear interpolation of a G signal.
- FIG. 5 is an explanatory diagram showing linear interpolation of the B signal.
- FIG. 6 is an explanatory diagram showing linear interpolation of the R signal.
- FIG. 7 is an explanatory diagram showing a pixel window and the like.
- FIG. 8 is an explanatory diagram showing a feature value window.
- FIG. 9 is an explanatory diagram showing weighting coefficients.
- FIG. 10 is an explanatory diagram showing an edge intensity correction curve.
- FIG. 11 is an explanatory diagram showing a feature value window.
- FIG. 12 is an explanatory diagram showing a luminance signal, a color difference signal, and the like.
- FIG. 13 is an explanatory diagram showing a luminance signal, a color difference signal, and the like.
- FIG. 14 is a flowchart showing an image processing method according to Embodiment 3 of the present invention.
- FIG. 15 is an explanatory diagram showing a G component distribution of peripheral pixels.
- FIG. 16 is a configuration diagram showing an image processing apparatus to which the image processing method according to Embodiment 4 of the present invention is applied.
- FIG. 17 is a flowchart showing an image processing method according to Embodiment 4 of the present invention.
- FIG. 18 is an explanatory diagram showing a binarization distribution.
- FIG. 19 is an explanatory diagram showing a noise reduction process and the like according to the fourth embodiment.
- FIG. 1 is a configuration diagram showing an image processing apparatus to which an image processing method according to Embodiment 1 of the present invention is applied.
- an image data input unit 1 inputs an image signal value, which is an imaging color signal obtained by an imaging device in which a color filter for one of the three primary colors is arranged for each of the pixels arranged two-dimensionally.
- the area cutout unit 2 cuts out a predetermined area centered on a target pixel on which noise reduction processing is performed from an imaging area of the image sensor, and extracts an imaging color signal of the predetermined area.
- the feature value calculation unit 3 calculates a feature value of a minute area in the predetermined area from the imaging color signal of the predetermined area extracted by the area extraction unit 2.
- the feature value calculation unit 3 uses the imaging color signals output from the R component color filter, the G component color filter, and the B component color filter corresponding to the minute area in the predetermined area, and Calculate the feature value of the region.
- the edge intensity value calculation unit 4 calculates an edge intensity value in the vicinity of the pixel of interest from the feature value of the minute area calculated by the feature value calculation unit 3.
- the edge intensity value correction unit 5 corrects the edge intensity value calculated by the edge intensity value calculation unit 4 according to an edge intensity correction curve.
- the filter value calculation unit 6 calculates a low-pass filter value of the target pixel from the imaging color signals of peripheral pixels having the same color component as the target pixel.
- the image signal value correction unit 7 corrects the image signal value of the target pixel by weighting and adding the image signal value of the pixel of interest and the low-pass filter value, using the edge intensity value calculated by the edge intensity value calculation unit 4 and the edge intensity value corrected by the edge intensity value correction unit 5.
- FIG. 2 is a flowchart showing an image processing method according to Embodiment 1 of the present invention.
- the image processing apparatus uses a single-chip image sensor in which color filters of three primary colors of R, G, and B are arranged in a Bayer type as shown in FIG.
- the R, G, and B signals in FIG. 3 are imaging color signals sampled at each pixel position of the photoelectric conversion element, where R indicates red (R signal), G indicates green (G signal), and B indicates blue (B signal).
- a non-imaging color signal is generated by performing color interpolation processing using the imaging color signals, which are the imaging result of a single-chip imaging device to which primary color filters are attached, to obtain a full-color image.
- the procedure will be briefly described.
- the image processing apparatus performs a noise reduction process described later and then performs a color interpolation process to obtain a full-color image.
- an imaging color signal of the G signal exists only at the position shown in FIG. 4
- the G signal level of a center pixel where no G signal is present is calculated from the average value of the G signals of the four pixels above, below, left, and right, and interpolation is performed to obtain G signals for all pixels.
- the missing B signals are generated by averaging the nearest B pixels: from the left and right B pixels for a pixel between two B pixels in a row, from the upper and lower B pixels for a pixel between two B pixels in a column, and from the four surrounding B pixels for a center pixel; interpolating in this way yields B signals for all pixels.
- the R signal for all pixels is obtained in the same manner as the B signal.
- R, G, and B signals can be obtained in all pixels.
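The linear interpolation of the G signal described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the function name and the boolean-mask interface are assumptions for the example.

```python
import numpy as np

def interpolate_g(bayer, g_mask):
    """Fill in missing G samples by averaging the four G neighbours
    (above, below, left, right), as in the linear interpolation
    described above.

    bayer : 2-D array of raw sensor values
    g_mask: 2-D boolean array, True where the pixel carries a G sample
    """
    g = np.where(g_mask, bayer, 0.0).astype(float)
    out = g.copy()
    h, w = bayer.shape
    # Border pixels are left unfilled in this simple sketch.
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not g_mask[y, x]:
                # In a Bayer pattern the four axial neighbours of a
                # non-G pixel are all G pixels.
                out[y, x] = (g[y - 1, x] + g[y + 1, x] +
                             g[y, x - 1] + g[y, x + 1]) / 4.0
    return out
```

The B and R planes would be filled analogously, using the nearest same-color neighbours as described above.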
- the image data input unit 1 inputs an image signal value which is an imaging color signal of each pixel imaged by a single-chip imaging device.
- the area cutout unit 2 executes noise reduction processing from the image pickup area of the image pickup element as shown in FIG. 7 (a).
- a predetermined area hereinafter, referred to as a pixel window
- 5 ⁇ 5 pixels centering on the pixel P (2, 2) is cut out, and an imaging color signal of a pixel in the pixel window is output.
- FIGS. 7(b), (c), and (d) are explanatory diagrams showing the pixel arrangements in the pixel windows actually cut out; three cases exist, in which the pixel of interest is a G component, an R component, or a B component.
- the feature value calculation unit 3 calculates the feature value of the minute region in the pixel window.
- the feature value calculation unit 3 defines, as a minute area including the R component, the G component, and the B component, an area including the four pixels P(i, j), P(i+1, j), P(i, j+1), and P(i+1, j+1) (where 0 ≤ i ≤ 3, 0 ≤ j ≤ 3). Then, for each minute area, the feature value calculation unit 3 calculates the feature value D(i, j) by substituting the imaging color signals of the pixels constituting the minute area into the following equation (1) (step ST1).
- D(i, j) = (P(i, j) + P(i+1, j) + P(i, j+1) + P(i+1, j+1)) / 4 ... (1)
- for example, when the pixels that make up the minute area are P(0, 0), P(1, 0), P(0, 1), and P(1, 1), the feature value of the minute area is D(0, 0); when they are P(2, 2), P(3, 2), P(2, 3), and P(3, 3), the feature value is D(2, 2).
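The feature value calculation of equation (1) amounts to 2×2 block averaging over the 5×5 pixel window, producing a 4×4 feature value window D(i, j). A minimal sketch (the function name is an assumption for the example):

```python
import numpy as np

def feature_window(pixel_window):
    """Compute the 4x4 feature-value window D(i, j) from a 5x5 pixel
    window P, per equation (1): each D(i, j) is the mean of the 2x2
    minute area whose top-left pixel is P(i, j)."""
    p = np.asarray(pixel_window, dtype=float)
    d = np.empty((4, 4))
    for i in range(4):
        for j in range(4):
            d[i, j] = (p[i, j] + p[i + 1, j] +
                       p[i, j + 1] + p[i + 1, j + 1]) / 4.0
    return d
```

Because every 2×2 minute area in a Bayer mosaic contains one R, one B, and two G samples, each D(i, j) mixes all three color components, as the text requires.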
- the edge intensity value calculation unit 4 calculates an edge intensity value near the pixel of interest (hereinafter referred to as the input edge intensity value Ed1) from the feature value of each minute area (step ST2).
- the edge intensity value calculation unit 4 multiplies each feature value in the feature value window of FIG. 8 by a weighting coefficient as shown in FIG. 9, and then adds the multiplication results as in the following equation (2) to calculate the input edge intensity value Ed1 in the feature value window.
- the edge intensity value correction unit 5 corrects the input edge intensity value Ed1 according to a predetermined edge intensity correction curve (see FIG. 10), and outputs the corrected value as the output edge intensity value Kout (step ST3).
- the edge intensity value correction unit 5 calculates the output edge intensity value Kout by substituting the input edge intensity value Ed1 into a function f representing the edge intensity correction curve, as shown in the following equation (3).
- the edge intensity correction curve may be subdivided according to the input edge intensity value Ed1, and each section of the curve may be linearly approximated. This makes it possible to replace the second-order-or-higher calculation of the output edge intensity value Kout with linear expressions, which reduces the circuit scale when this image processing method is implemented by an electronic circuit. When the method is realized by a program, the effect of increasing the calculation speed is obtained.
- alternatively, the output edge intensity values Kout corresponding to input edge intensity values Ed1 may be stored in advance in a memory such as a look-up table, and the edge intensity value correction unit 5 may simply look up the Kout corresponding to the given Ed1.
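The piecewise-linear variant can be sketched as follows. The breakpoints below are hypothetical (the actual curve of FIG. 10 is not reproduced in this text); they are chosen only to illustrate the mechanism, and happen to reproduce the worked example Ed1 = 24 → Kout = 6 given later.

```python
import numpy as np

# Hypothetical breakpoints approximating an edge-intensity correction
# curve like FIG. 10 (the real curve is not reproduced in the text).
# Each segment is linear, so a second-order-or-higher curve is replaced
# by cheap linear interpolation, as suggested above.
ED_BREAKS = [0.0, 16.0, 32.0, 64.0]   # input edge intensity Ed1
KOUT_VALS = [0.0,  2.0, 10.0, 64.0]   # output edge intensity Kout

def correct_edge_intensity(ed1):
    """Piecewise-linear approximation of Kout = f(Ed1)."""
    return float(np.interp(ed1, ED_BREAKS, KOUT_VALS))
```

A full look-up table (one entry per possible Ed1) would trade memory for even less computation, as the text notes.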
- the image signal value correction unit 7 corrects the image signal value of the target pixel P(2, 2) by weighting and adding the image signal value, which is the imaging color signal of the pixel of interest P(2, 2), and the low-pass filter value Plpf calculated by the filter value calculation unit 6, using the input edge intensity value Ed1 and the output edge intensity value Kout calculated by the edge intensity value correction unit 5, as shown in the following equation (5).
- the feature value calculation unit 3 generates a feature value window D (i, j) as shown in FIG. 11 by calculating Expression (1).
- the edge intensity value calculation unit 4 calculates the input edge intensity value Ed1 in the feature value window by calculating equation (2).
- the edge intensity value correction unit 5 calculates the output edge intensity value K out by calculating Expression (3).
- when the edge intensity correction curve of FIG. 10 is used, an input edge intensity value Ed1 of "24" gives an output edge intensity value Kout of "6".
- the filter value calculation unit 6 calculates the low-pass filter value Plpf of the pixel of interest by calculating equation (4).
- the image signal value corrector 7 corrects the image signal value of the target pixel P (2, 2) by calculating Expression (5).
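Equation (5) itself is not reproduced in this extract. The following is one plausible form of the weighted addition, assuming the ratio Kout/Ed1 acts as an edge-preservation weight; the function name and the exact formula are assumptions for illustration only.

```python
def correct_pixel(p_center, p_lpf, ed1, k_out):
    """Hypothetical form of the weighted addition of equation (5):
    the stronger the (corrected) edge, the more of the original pixel
    survives; flat areas lean on the low-pass filter value Plpf."""
    if ed1 <= 0:
        # No edge information: fall back to the filtered value.
        return p_lpf
    w = min(k_out / ed1, 1.0)      # edge-preservation weight in [0, 1]
    return w * p_center + (1.0 - w) * p_lpf
```

With the worked example above (Ed1 = 24, Kout = 6) this blend keeps one quarter of the original pixel value, so strong smoothing is applied only where the corrected edge intensity is small.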
- the imaging result of the imaging device is as shown in FIG.
- Y represents a luminance signal
- Cb and Cr represent color difference signals.
- 128 is added to the color difference signals so that they are all positive, in order to facilitate the subsequent calculations.
- the luminance and chrominance signals are generated according to equation (6)
- the luminance signal is as shown in Fig. 12 (e)
- the chrominance signal is as shown in Fig. 12 (f) and (g).
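Equation (6) is not reproduced in this extract; the conversion it describes is most likely the standard JPEG/BT.601 form with the +128 chroma offset mentioned above, which can be sketched as:

```python
def rgb_to_ycbcr(r, g, b):
    """Standard JPEG/BT.601 luminance-chrominance conversion with the
    +128 chroma offset mentioned above. This is the usual form such a
    conversion takes; the patent's own equation (6) may differ."""
    y  =  0.299   * r + 0.587   * g + 0.114   * b
    cb = -0.16874 * r - 0.33126 * g + 0.5     * b + 128.0
    cr =  0.5     * r - 0.41869 * g - 0.08131 * b + 128.0
    return y, cb, cr
```

For any neutral gray (r = g = b) the chroma terms cancel, so Cb and Cr both sit at the 128 offset.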
- when the color interpolation processing is performed before the noise reduction processing as in the conventional example, that is, when the imaging result of FIG. 3 is input to the color interpolation processing, the R, G, and B color components are generated by interpolation as shown in (c), (d), and (e).
- as described above, there are provided the edge intensity value calculation unit 4 that calculates the edge intensity value near the pixel of interest from the feature values of the minute areas calculated by the feature value calculation unit 3, the edge intensity value correction unit 5 that corrects the edge intensity value calculated by the edge intensity value calculation unit 4 according to an edge intensity correction curve, and the low-pass filter value calculation unit 6 that calculates the low-pass filter value from the image signal values of peripheral pixels having the same color component as the pixel of interest. Since the image signal value of the target pixel and the low-pass filter value are weighted and added using the edge intensity values before and after correction to correct the image signal value of the target pixel, the noise superimposed on the target pixel can be sufficiently reduced without being diffused to the surrounding pixels.
- the method of calculating the feature value of the minute area by equation (1) has been described, but a feature value calculation method including all of the R component, the G component, and the B component may be selected as appropriate. That is, the method is not limited to equation (1) as long as it includes all the color components, and a certain effect can still be obtained.
- in particular, when the periphery of the target pixel includes a chromatic color edge, this is effective in maintaining the continuity of the edge detection result with adjacent pixels of other colors and, consequently, the continuity of the noise removal effect.
- a parameter corresponding to the edge intensity value, other than the one described in Embodiment 1 above, may be used alone or in combination to increase the degree of freedom in noise detection and improve detection accuracy.
- for example, the first derivative and the second derivative may be calculated for each pixel of the same color component in the pixel window, so that the edge distribution can be estimated per color and reflected, for example, by adding it to the edge intensity value Ed1.
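As a sketch of this per-color derivative estimation (an illustration only, not the patent's formula), the first and second differences of same-color samples along one direction of the window can be computed as:

```python
def color_derivatives(samples):
    """First and second differences of same-colour samples taken at
    equal spacing along one direction of the pixel window, as a simple
    way to estimate the edge distribution per colour component."""
    first = [b - a for a, b in zip(samples, samples[1:])]
    second = [b - a for a, b in zip(first, first[1:])]
    return first, second
```

A large first difference suggests an edge in that color component; a sign change in the second difference locates it, and either quantity could be folded into Ed1 as the text suggests.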
- for example, when the target pixel is at an R pixel position, EdR is the input edge intensity value of the R color component, EdG is that of the G color component, EdB is that of the B color component, and Ed1 is the input edge intensity value in the feature value window calculated from equation (2).
- a 5×5 pixel window centered on the pixel of interest has been shown as an example, but the present invention is not limited to this: 7×7 pixels, 7×5 pixels, and so on can be set arbitrarily, and the shape of the feature value window in FIG. 8 and each calculation formula can be changed accordingly.
- the calculation may be made only from the pixels located in the horizontal and vertical directions when viewed from the pixel of interest P (2, 2) in FIG. 7 (a).
- the pixel values in the horizontal, vertical and oblique directions may be weighted and added.
- when equation (5) is calculated to obtain the correction value P'(2, 2) of the image signal value of the target pixel, the component of the target pixel P(2, 2) does not become 0. Alternatively, the pixel of interest P(2, 2) may be weighted and added to the low-pass filter value Plpf in advance; this can be varied in various ways depending on the purpose of the noise reduction processing characteristic of the imaging result.
- the weighted addition of the image signal value of the pixel of interest P(2, 2) and the low-pass filter value Plpf has been described, but this is not a limitation. For example, in the calculation of equation (5), instead of the pixel of interest P(2, 2) itself, the image signal value of a pixel close to P(2, 2) that has been passed through a low-pass filter weaker than the one producing Plpf may be weighted and added to the low-pass filter value Plpf.
- the color interpolation process is performed after the noise reduction process is performed.
- the noise reduction process in the first embodiment forms a pixel window centered on a target pixel; therefore, if it is realized by an electronic circuit, a line buffer of at least several lines is required.
- FIG. 14 is a flow chart showing an image processing method according to Embodiment 3 of the present invention, and shows a method of performing noise reduction processing when performing color interpolation processing.
- the linear interpolation method is the simplest color interpolation method, but a typical image processing device generally uses a more complicated method with improved image quality; here, the color interpolation method disclosed in Japanese Patent Application Laid-Open No. 2003-134365 is described as an example.
- in that method, based on the amount of change in a minute area of a reference color (color signal) different from the color component to be generated by interpolation, an offset amount is determined as the estimated amount of change of the color to be generated, thereby faithfully reproducing edges and realizing high-resolution color interpolation; a configuration for performing first- to third-stage processing is disclosed.
- in the first stage, the G component is generated at pixels whose imaging result is the R component (hereinafter referred to as R pixel positions) and at pixels whose imaging result is the B component (B pixel positions); in the second stage, the R and B components are generated at pixels whose imaging result is the G component (G pixel positions); in the third stage, the B component is generated at R pixel positions and the R component at B pixel positions.
- the color interpolation processing unit acts as the area cutout unit 2 in FIG. 1 and cuts out a 5×5 pixel window centered on the pixel of interest.
- when the target pixel is located at an R pixel position or a B pixel position (step ST11), the color interpolation processing unit performs the G component generation processing, which is the first stage of the color interpolation processing (step ST12).
- when the color interpolation processing unit performs the generation processing of the G component at an R pixel position or a B pixel position, it first performs noise reduction of the R color component or the B color component of the pixel in the same manner as the noise reduction processing in the first embodiment (the image processing method of FIG. 2) (step ST13).
- the color interpolation processing unit performs noise reduction processing on the image signal value of the target pixel (the original image signal value of the image sensor) in the same manner as the noise reduction process in the first embodiment (the image processing method of FIG. 2) (step ST14).
- the noise reduction of the G component generated in step ST12 differs from the noise reduction of the G component at a G pixel position only in the calculation formula of the low-pass filter value Plpf; that is, with the G component generated in step ST12 denoted Pg(2, 2) and the G component distribution of the surrounding pixels as shown in the figure, the low-pass filter value Plpf is calculated by, for example, the following equation (8).
- after performing the noise reduction processing of the G component as described above, the color interpolation processing unit generates, as the second stage, the R and B components at the G pixel position (step ST15).
- the color interpolation processing unit performs, as a third step, generation of a B component at the R pixel position and generation of an R component at the B pixel position (step ST16).
- the imaging result of the image sensor is sequentially scanned, and the color interpolation processing including the noise reduction processing shown in FIG. 14 is performed over the entire screen; it is therefore possible to obtain an RGB full-color image in which noise is favorably reduced without being diffused. Furthermore, by converting the output signal of the color interpolation processing into a luminance/chrominance signal as needed, a luminance/chrominance signal with effectively reduced noise can be obtained.
- the color interpolation processing including the noise reduction processing has been described using the color interpolation processing disclosed in Japanese Patent Application Laid-Open No. 2003-134345, but the same effect can be obtained, by sharing the line buffer, even if the noise reduction processing is incorporated into the conventional linear interpolation method or into other color interpolation processing that uses the imaging results of multiple lines. The noise reduction processing may also be arranged in other image processing using a plurality of lines before color interpolation processing is performed on the imaging result of the imaging element.
- FIG. 16 is a configuration diagram showing an image processing apparatus to which the image processing method according to Embodiment 4 of the present invention is applied.
- the binarization unit 11 calculates an average value of the feature values of the minute regions calculated by the feature value calculation unit 3, compares the average value with each feature value, and binarizes each feature value.
- The contour line segment detection unit 12 performs pattern matching between the distribution of the feature values in the pixel window binarized by the binarization unit 11 and a predetermined binarization distribution, and thereby detects a contour line segment.
- When a contour line segment is detected, the image signal value correction unit 13 corrects the image signal value of the pixel of interest using the image signal values of a plurality of pixels, including the pixel of interest, aligned in the same direction as the contour line segment.
- When no contour line segment is detected, the edge intensity value calculation unit 4 calculates the input edge intensity value Ed1, and the corrected image signal value is output from the image signal value correction unit 7; when a contour line segment is detected, the input edge intensity value Ed1 is not calculated.
- FIG. 17 is a flowchart showing an image processing method according to Embodiment 4 of the present invention.
- When the feature value calculation unit 3 receives the imaging color signals of the pixels in the pixel window from the area extraction unit 2, it calculates the feature value D(i, j) of each small area in the window.
- The binarization unit 11 then calculates the average value Dave of the feature values D(i, j), as shown in the following equation (9) (step ST21).
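Equation (9) is not reproduced in this text, so the following sketch only illustrates the idea of steps ST21 and ST22: average the feature values and binarize each one against that average. Using the plain arithmetic mean and a >= comparison is an assumption.

```python
import numpy as np

def binarize_features(d):
    """Binarize a window of feature values D(i, j) against their average
    Dave (steps ST21-ST22).  The mean and the >= comparison stand in for
    equation (9), which is not reproduced here."""
    d = np.asarray(d, dtype=float)
    dave = d.mean()                     # step ST21: average value Dave
    return (d >= dave).astype(np.uint8)  # step ST22: binarize each value
```

For a window holding a bright vertical column against a dark background, only the column is mapped to 1.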
- The binarization unit 11 binarizes each feature value by comparing it with the average value Dave (step ST22). The contour line segment detection unit 12 then performs pattern matching between the distribution of the binarized feature values in the pixel window and a predetermined binarization distribution (see FIG. 18), and thereby detects a contour line segment (step ST23).
- That is, the contour line segment detection unit 12 performs pattern matching between the distribution of the binarized values in the pixel window and the predetermined binarization distribution; if they match, it is determined that a contour line segment exists, i.e., that the pixel of interest lies on a straight edge or is adjacent to such an edge. If they do not match, it is determined that no contour line segment exists (step ST24).
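The matching decision of step ST24 can be sketched as an exact comparison between the binarized window and a reference pattern. FIG. 18 is not reproduced, so the vertical-edge pattern below is an assumption used only for illustration.

```python
import numpy as np

# Hypothetical predetermined binarization distribution for a vertical
# contour line segment (stand-in for FIG. 18).
VERTICAL_PATTERN = np.array([[0, 1, 0],
                             [0, 1, 0],
                             [0, 1, 0]], dtype=np.uint8)

def matches_contour(binarized_window, pattern=VERTICAL_PATTERN):
    """Step ST24: the window matches the predetermined distribution only
    when every binarized value agrees with the pattern."""
    return bool(np.array_equal(binarized_window, pattern))
```

A window that reproduces the pattern exactly is judged to contain a contour line segment; any other distribution is not.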
- When no contour line segment is detected, the edge intensity value calculation unit 4, the edge intensity value correction unit 5, the filter value calculation unit 6, and the image signal value correction unit 7 execute their processing to calculate a corrected image signal value.
- When a contour line segment is detected, the image signal value correction unit 13 corrects the image signal value of the pixel of interest using the image signal values of a plurality of pixels, including the pixel of interest, aligned in the same direction as the contour line segment (step ST25).
- FIG. 19 shows an example in which an imaging result containing noise adjacent to a contour line is processed.
- FIG. 19(a) shows the pixel window captured by a single-chip image sensor; the pixel of interest P(2, 2) is assumed to be at a G pixel position.
- FIG. 19(b) shows the pixel values actually captured with the pixel arrangement of FIG. 19(a): the level "255" is distributed over a background of signal level "0", and the levels "3" and "20" are distributed in the adjacent row. That is, in FIG. 19(b), a straight vertical line has been captured, and noise of level "8" has occurred in the central pixel on which the noise reduction processing is performed.
- FIG. 19(c) shows the pixel window of FIG. 19(b) converted into feature values D(i, j) using equation (1).
- Since the preset binarization distribution is the distribution shown in FIG. 18, pattern matching between the binarization result of FIG. 19(d) and the binarization distribution of FIG. 18 detects that the pixel in question is adjacent to an edge with vertical directionality.
- If the image signal value correction unit 13 applies a low-pass filter in the vertical direction, that is, along the line forming the edge, as in the following equation, the noise component adjacent to the contour line can be reduced satisfactorily.
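The equation referred to is not reproduced in this text; the sketch below only illustrates the principle of filtering along the edge direction so that the edge itself is preserved. The 1-2-1 weights are an assumption.

```python
def vertical_lowpass(above, center, below):
    """Directional correction sketch: when the detected contour runs
    vertically, low-pass filter only along the column containing the
    pixel of interest, leaving the edge across the columns intact.
    The 1-2-1 weights are an assumption, not the equation in the text."""
    return (above + 2 * center + below) / 4.0
```

A uniform column is left unchanged, while an isolated noise value of level 8 between two background pixels of level 0 is pulled down toward the background.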
- FIG. 19(e) shows the G component distribution obtained when the color interpolation method disclosed in Japanese Patent Application Laid-Open No. 2003-134523 is carried out, and FIG. 19(f) shows the G component distribution obtained when the noise reduction processing for pixels adjacent to an edge according to Embodiment 4 is applied.
- Conventional edge detection means detect a strong edge carrying image information and do not reduce it; on the contrary, contour enhancement processing emphasizes it. In contrast, the present method makes it possible to effectively reduce noise adjacent to such an edge.
- Embodiment 5.
- In Embodiment 4 described above, the value calculated by equation (1) is used as the feature value for pattern matching. Since this feature value mixes all of the R, G, and B color components, chromatic edges can also be detected effectively, and the same effect can be obtained.
- However, the present invention is not limited to this; the pixel window and the feature value window may each be of any size.
- In Embodiment 4, pattern matching against a preset binarization pattern is performed in order to detect a contour line segment.
- However, the present invention is not limited to this. That is, the contour line direction may instead be detected using the output value of a first-order or second-order differential filter whose filter coefficients have directionality.
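The alternative above can be sketched as follows: apply directional first-order differential kernels and pick the direction with the strongest response. The specific Prewitt-style coefficients are an assumption; the patent does not fix them.

```python
import numpy as np

# Directional first-order differential kernels (Prewitt-style values are
# an assumption used only for illustration).
KERNELS = {
    "vertical":   np.array([[-1, 0, 1]] * 3),    # responds to vertical edges
    "horizontal": np.array([[-1, 0, 1]] * 3).T,  # responds to horizontal edges
}

def contour_direction(window):
    """Return the direction whose differential-filter response magnitude
    is largest over the 3x3 window."""
    window = np.asarray(window, dtype=float)
    responses = {d: abs((k * window).sum()) for d, k in KERNELS.items()}
    return max(responses, key=responses.get)
```

A bright right-hand column triggers the vertical kernel, a bright bottom row the horizontal one.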
- In Embodiment 4, either the noise reduction processing result of Embodiment 1 or the noise reduction processing result for pixels adjacent to an edge is output. Alternatively, a weighted average of the noise reduction result of Embodiment 1 and the noise reduction result for pixels adjacent to an edge may be taken, so that a result reflecting both processes is output.
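The weighted-average alternative can be sketched in one line; the weight w, and whether it is fixed or adapted per pixel, is an assumption not specified in the text.

```python
def blend_results(p_embodiment1, p_edge_adjacent, w=0.5):
    """Weighted average of the two noise reduction results: the
    Embodiment 1 result and the edge-adjacent result.  The weight w
    (here a fixed 0.5 by default) is an assumption."""
    return w * p_embodiment1 + (1.0 - w) * p_edge_adjacent
```

With w = 1.0 the Embodiment 1 result is used alone; with w = 0.0 only the edge-adjacent result is used.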
- Further, the present invention is not limited to this; the noise reduction processing for pixels adjacent to an edge may also be used alone.
- The noise reduction processing for pixels adjacent to an edge in Embodiment 4 may also be arranged within the color interpolation processing of Embodiment 3 or within other image processing. In that case, it may be arranged for use together with the noise reduction processing of Embodiment 1, or arranged alone.
- In the embodiments above, the case where the imaging result of a single-chip image sensor is used has been described, but the present invention is not limited to this. That is, a luminance signal can also be used as the feature value.
- When the luminance signal is generated from the imaging result of a single-chip image sensor, noise diffusion caused by the color interpolation processing cannot be suppressed; however, a certain noise reduction effect can still be obtained by suppressing the peaks of the diffused noise.
- The present invention is applicable to all imaging devices that include an image reading unit or an image transmission unit in which noise occurs, such as facsimile machines, copying machines, and television receivers.
- Noise occurring at color edges can be effectively reduced by performing the noise reduction processing of Embodiment 4 on the color difference signal.
- In the embodiments above, a primary-color filter with the arrangement shown in FIG. 3 has been described as an example, but the present invention is not limited to this; another arrangement or a complementary-color filter may be used. Further, the same effect can be obtained even when the image sensor uses an arrangement other than a square array, for example a honeycomb arrangement.
- As described above, the image processing method according to the present invention is suitable for removing noise contained in an image signal captured by a consumer digital still camera, digital video camera, or the like equipped with a two-dimensional image sensor such as a CCD.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Color Television Image Signal Generators (AREA)
- Image Processing (AREA)
- Facsimile Image Signal Circuits (AREA)
Abstract
Description
Claims
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US10/553,424 US7970231B2 (en) | 2004-02-19 | 2004-02-19 | Image processing method |
PCT/JP2004/001889 WO2005081542A1 (ja) | 2004-02-19 | 2004-02-19 | 画像処理方法 |
JP2006519060A JP4668185B2 (ja) | 2004-02-19 | 2004-02-19 | 画像処理方法 |
EP04712669.3A EP1729523B1 (en) | 2004-02-19 | 2004-02-19 | Image processing method |
CN200480027185XA CN1857008B (zh) | 2004-02-19 | 2004-02-19 | 图像处理方法 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2004/001889 WO2005081542A1 (ja) | 2004-02-19 | 2004-02-19 | 画像処理方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005081542A1 true WO2005081542A1 (ja) | 2005-09-01 |
Family
ID=34878940
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/001889 WO2005081542A1 (ja) | 2004-02-19 | 2004-02-19 | 画像処理方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US7970231B2 (ja) |
EP (1) | EP1729523B1 (ja) |
JP (1) | JP4668185B2 (ja) |
CN (1) | CN1857008B (ja) |
WO (1) | WO2005081542A1 (ja) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008005462A (ja) * | 2006-05-22 | 2008-01-10 | Fujitsu Ltd | 画像処理装置 |
WO2008133145A1 (ja) * | 2007-04-18 | 2008-11-06 | Rosnes Corporation | 固体撮像装置 |
WO2009008430A1 (ja) * | 2007-07-10 | 2009-01-15 | Olympus Corporation | 画像処理装置、画像処理プログラム及び撮像装置 |
US8477210B2 (en) | 2008-11-21 | 2013-07-02 | Mitsubishi Electric Corporation | Image processing device and image processing method |
US8971660B2 (en) | 2009-03-16 | 2015-03-03 | Ricoh Company, Ltd. | Noise reduction device, noise reduction method, noise reduction program, and recording medium |
Families Citing this family (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006042267A (ja) * | 2004-07-30 | 2006-02-09 | Canon Inc | 画像処理方法、画像処理装置、およびプログラム |
JP4677753B2 (ja) * | 2004-10-01 | 2011-04-27 | 株式会社ニコン | 動画像処理装置及び方法 |
TWI336595B (en) * | 2005-05-19 | 2011-01-21 | Mstar Semiconductor Inc | Noise reduction method |
JP4979595B2 (ja) * | 2005-12-28 | 2012-07-18 | オリンパス株式会社 | 撮像システム、画像処理方法、画像処理プログラム |
JP4284628B2 (ja) * | 2006-12-15 | 2009-06-24 | ソニー株式会社 | 撮像装置、画像処理装置、画像処理方法、画像処理方法のプログラム及び画像処理方法のプログラムを記録した記録媒体 |
US8194984B2 (en) * | 2007-03-05 | 2012-06-05 | Fujitsu Limited | Image processing system that removes noise contained in image data |
CN104702926B (zh) * | 2007-04-11 | 2017-05-17 | Red.Com 公司 | 摄像机 |
US8237830B2 (en) | 2007-04-11 | 2012-08-07 | Red.Com, Inc. | Video camera |
JP4925198B2 (ja) * | 2007-05-01 | 2012-04-25 | 富士フイルム株式会社 | 信号処理装置および方法、ノイズ低減装置および方法並びにプログラム |
US20090092338A1 (en) * | 2007-10-05 | 2009-04-09 | Jeffrey Matthew Achong | Method And Apparatus For Determining The Direction of Color Dependency Interpolating In Order To Generate Missing Colors In A Color Filter Array |
JP4525787B2 (ja) * | 2008-04-09 | 2010-08-18 | 富士ゼロックス株式会社 | 画像抽出装置、及び画像抽出プログラム |
JP2012191465A (ja) * | 2011-03-11 | 2012-10-04 | Sony Corp | 画像処理装置、および画像処理方法、並びにプログラム |
WO2012153736A1 (ja) * | 2011-05-12 | 2012-11-15 | オリンパスメディカルシステムズ株式会社 | 内視鏡システム |
JP2013165476A (ja) * | 2011-11-11 | 2013-08-22 | Mitsubishi Electric Corp | 画像処理装置、画像処理方法、画像表示装置、プログラム及び記録媒体 |
JP5880121B2 (ja) * | 2012-02-21 | 2016-03-08 | 株式会社リコー | 画像処理装置 |
WO2014127153A1 (en) | 2013-02-14 | 2014-08-21 | Red. Com, Inc. | Video camera |
US9514515B2 (en) * | 2013-03-08 | 2016-12-06 | Sharp Kabushiki Kaisha | Image processing device, image processing method, image processing program, and image display device |
TWI634543B (zh) * | 2017-06-26 | 2018-09-01 | 友達光電股份有限公司 | 驅動裝置與驅動方法 |
KR102620350B1 (ko) | 2017-07-05 | 2024-01-02 | 레드.컴, 엘엘씨 | 전자 디바이스에서의 비디오 이미지 데이터 처리 |
JP7142772B2 (ja) * | 2019-05-16 | 2022-09-27 | 三菱電機株式会社 | 画像処理装置及び方法、並びに画像読み取り装置、並びにプログラム及び記録媒体 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10243407A (ja) * | 1997-02-27 | 1998-09-11 | Olympus Optical Co Ltd | 画像信号処理装置及び画像入力処理装置 |
JPH11215515A (ja) * | 1998-01-27 | 1999-08-06 | Eastman Kodak Japan Ltd | 画像センサのライン毎ノイズ除去装置及び方法 |
JP2000341702A (ja) * | 1999-05-26 | 2000-12-08 | Fuji Photo Film Co Ltd | 信号生成方法および装置並びに記録媒体 |
Family Cites Families (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2787781B2 (ja) | 1990-03-28 | 1998-08-20 | 富士写真フイルム株式会社 | デジタル電子スチルカメラ |
US5200841A (en) * | 1990-05-25 | 1993-04-06 | Nikon Corporation | Apparatus for binarizing images |
JP3249605B2 (ja) * | 1992-11-25 | 2002-01-21 | イーストマン・コダックジャパン株式会社 | 書類辺縁検出装置 |
JPH07236060A (ja) | 1994-02-22 | 1995-09-05 | Nikon Corp | 画像処理装置 |
US6148115A (en) * | 1996-11-08 | 2000-11-14 | Sony Corporation | Image processing apparatus and image processing method |
KR100230391B1 (ko) * | 1996-11-29 | 1999-11-15 | 윤종용 | 휘도신호의 윤곽성분을 적응적으로 보정하는방법 및 회로 |
US6229578B1 (en) * | 1997-12-08 | 2001-05-08 | Intel Corporation | Edge-detection based noise removal algorithm |
JP4118397B2 (ja) * | 1998-07-01 | 2008-07-16 | イーストマン コダック カンパニー | 固体カラー撮像デバイスのノイズ除去方法 |
JP4281929B2 (ja) | 1998-11-18 | 2009-06-17 | カシオ計算機株式会社 | エッジ強調装置及びエッジ強調方法 |
DE60040786D1 (de) * | 1999-08-05 | 2008-12-24 | Sanyo Electric Co | Bildinterpolationsverfahren |
US6377313B1 (en) * | 1999-09-02 | 2002-04-23 | Techwell, Inc. | Sharpness enhancement circuit for video signals |
JP4253095B2 (ja) | 1999-12-15 | 2009-04-08 | 富士フイルム株式会社 | 画像データ・フィルタリング装置および方法 |
JP4599672B2 (ja) * | 1999-12-21 | 2010-12-15 | 株式会社ニコン | 補間処理装置および補間処理プログラムを記録した記録媒体 |
US6961476B2 (en) * | 2001-07-27 | 2005-11-01 | 3M Innovative Properties Company | Autothresholding of noisy images |
EP1289309B1 (en) | 2001-08-31 | 2010-04-21 | STMicroelectronics Srl | Noise filter for Bayer pattern image data |
JP2003087809A (ja) | 2001-09-11 | 2003-03-20 | Acutelogic Corp | 画像処理装置及び画像処理方法 |
KR100396898B1 (ko) * | 2001-09-13 | 2003-09-02 | 삼성전자주식회사 | 이미지센서 출력데이터 처리장치 및 처리방법 |
JP2003134523A (ja) | 2001-10-25 | 2003-05-09 | Mitsubishi Electric Corp | 撮像装置及び撮像方法 |
US6904169B2 (en) | 2001-11-13 | 2005-06-07 | Nokia Corporation | Method and system for improving color images |
US7023487B1 (en) * | 2002-01-25 | 2006-04-04 | Silicon Image, Inc. | Deinterlacing of video sources via image feature edge detection |
JP3915563B2 (ja) * | 2002-03-19 | 2007-05-16 | 富士ゼロックス株式会社 | 画像処理装置および画像処理プログラム |
JP2003304549A (ja) * | 2002-04-11 | 2003-10-24 | Olympus Optical Co Ltd | カメラ及び画像信号処理システム |
-
2004
- 2004-02-19 EP EP04712669.3A patent/EP1729523B1/en not_active Expired - Lifetime
- 2004-02-19 US US10/553,424 patent/US7970231B2/en not_active Expired - Fee Related
- 2004-02-19 WO PCT/JP2004/001889 patent/WO2005081542A1/ja not_active Application Discontinuation
- 2004-02-19 CN CN200480027185XA patent/CN1857008B/zh not_active Expired - Fee Related
- 2004-02-19 JP JP2006519060A patent/JP4668185B2/ja not_active Expired - Fee Related
Non-Patent Citations (1)
Title |
---|
See also references of EP1729523A4 |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008005462A (ja) * | 2006-05-22 | 2008-01-10 | Fujitsu Ltd | 画像処理装置 |
WO2008133145A1 (ja) * | 2007-04-18 | 2008-11-06 | Rosnes Corporation | 固体撮像装置 |
JP4446259B2 (ja) * | 2007-04-18 | 2010-04-07 | 株式会社 Rosnes | 固体撮像装置 |
JPWO2008133145A1 (ja) * | 2007-04-18 | 2010-07-22 | 株式会社 Rosnes | 固体撮像装置 |
WO2009008430A1 (ja) * | 2007-07-10 | 2009-01-15 | Olympus Corporation | 画像処理装置、画像処理プログラム及び撮像装置 |
US8724920B2 (en) | 2007-07-10 | 2014-05-13 | Olympus Corporation | Image processing device, program recording medium, and image acquisition apparatus |
US8477210B2 (en) | 2008-11-21 | 2013-07-02 | Mitsubishi Electric Corporation | Image processing device and image processing method |
US8971660B2 (en) | 2009-03-16 | 2015-03-03 | Ricoh Company, Ltd. | Noise reduction device, noise reduction method, noise reduction program, and recording medium |
Also Published As
Publication number | Publication date |
---|---|
US20060232690A1 (en) | 2006-10-19 |
CN1857008B (zh) | 2010-05-05 |
EP1729523B1 (en) | 2014-04-09 |
EP1729523A4 (en) | 2009-10-21 |
JP4668185B2 (ja) | 2011-04-13 |
US7970231B2 (en) | 2011-06-28 |
EP1729523A1 (en) | 2006-12-06 |
CN1857008A (zh) | 2006-11-01 |
JPWO2005081542A1 (ja) | 2007-08-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2005081542A1 (ja) | 画像処理方法 | |
US7860334B2 (en) | Adaptive image filter for filtering image information | |
KR100786931B1 (ko) | 화상 신호 처리 장치 및 화상 신호 처리 방법 | |
JP4054184B2 (ja) | 欠陥画素補正装置 | |
US7065246B2 (en) | Image processing apparatus | |
KR20090087811A (ko) | 촬상 장치, 화상 처리 장치, 화상 처리 방법, 화상 처리방법의 프로그램 및 화상 처리 방법의 프로그램을 기록한기록 매체 | |
JP2002077645A (ja) | 画像処理装置 | |
TW200838285A (en) | Image processing apparatus, image capturing apparatus, image processing method in these apparatuses, and program allowing computer to execute the method | |
US7982787B2 (en) | Image apparatus and method and program for producing interpolation signal | |
EP2360929B1 (en) | Image processing device | |
US7609300B2 (en) | Method and system of eliminating color noises caused by an interpolation | |
KR100700017B1 (ko) | 조정 가능한 임계값을 이용한 컬러 보간 장치 | |
JP2003348382A (ja) | 輪郭強調回路 | |
JP5494249B2 (ja) | 画像処理装置、撮像装置及び画像処理プログラム | |
JP4197821B2 (ja) | 画像処理装置 | |
US10348984B2 (en) | Image pickup device and image pickup method which performs diagonal pixel offset and corrects a reduced modulation depth in a diagonal direction | |
US20240147080A1 (en) | Image processing apparatus and method, and image capturing apparatus | |
JP2009017583A (ja) | 画像処理装置 | |
TWI280062B (en) | Signal separation apparatus applied in image transmission system and related method | |
KR100784158B1 (ko) | 에지 향상을 위한 컬러 보간 장치 | |
JP4176023B2 (ja) | 信号処理回路 | |
JP2005286678A (ja) | 画像信号処理回路 | |
JP3035007B2 (ja) | 画像読取装置 | |
JPH08279902A (ja) | 撮像装置 | |
JP2001257885A (ja) | 画像処理装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200480027185.X Country of ref document: CN |
|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2006519060 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2006232690 Country of ref document: US Ref document number: 10553424 Country of ref document: US |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2004712669 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: DE |
|
WWP | Wipo information: published in national office |
Ref document number: 10553424 Country of ref document: US |
|
WWP | Wipo information: published in national office |
Ref document number: 2004712669 Country of ref document: EP |