WO2014208434A1 - Image processing device, image processing method, and image processing program - Google Patents
Image processing device, image processing method, and image processing program
- Publication number
- WO2014208434A1 (PCT/JP2014/066233)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- component
- frequency
- processing
- image processing
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/10—Image enhancement or restoration using non-spatial domain filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/10—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/646—Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20048—Transform domain processing
- G06T2207/20064—Wavelet transform [DWT]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20172—Image enhancement details
- G06T2207/20182—Noise reduction or smoothing in the temporal domain; Spatio-temporal filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2209/00—Details of colour television systems
- H04N2209/04—Picture signal generators
- H04N2209/041—Picture signal generators using solid-state devices
- H04N2209/042—Picture signal generators using solid-state devices having a single pick-up sensor
- H04N2209/045—Picture signal generators using solid-state devices having a single pick-up sensor using mosaic colour filter
- H04N2209/046—Colour interpolation to calculate the missing colour values
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/10—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
- H04N23/12—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with one sensor only
Definitions
- the present invention relates to an image processing apparatus, and more particularly to an image processing apparatus, an image processing method, and an image processing program for performing image processing on a Bayer image.
- Patent Document 1 first performs an interpolation process on a Bayer image including R pixels, G pixels, and B pixels, generating an RGB image in which every pixel has all of the R, G, and B components. After a luminance component and a color difference component are generated from the RGB image, different image processing is performed for each component. Processing the luminance component and the color components differently in this way is preferable because, for example, the noise characteristics differ for each component, so appropriate noise removal requires processing tailored to the component.
- in Patent Document 1, wavelet transformation is applied to each of the luminance component and the color components to generate components in a plurality of different frequency bands, and noise removal processing is then performed on the components in each frequency band. This makes it possible to remove noise appropriately while taking into account the difference in noise characteristics between low-frequency and high-frequency components.
- however, to perform image processing that exploits the difference in characteristics between the color components and the luminance component of an image, each pixel must first be interpolated in this way. Image processing then has to handle a larger amount of data than the original Bayer image, so the memory capacity required to store the data increases and the processing takes more time.
- An object of the present invention is to provide an image processing apparatus, an image processing method, and an image processing program that process a relatively small amount of data while performing image processing according to the difference between color components and luminance components in an image.
- the image processing apparatus comprises: image acquisition means for acquiring a Bayer image according to a Bayer array in which rows of alternating R (red) and G (green) pixels and rows of alternating G (green) and B (blue) pixels are arranged alternately in the vertical direction; first frequency decomposition means for generating a plurality of component images by separating the Bayer image, in each of the vertical and horizontal directions, into low-frequency components indicating luminance information and high-frequency components indicating color information; and image processing means for executing, on the component images after frequency decomposition by the first frequency decomposition means, at least one of a feature amount calculation process for calculating a feature amount relating to the image, a correction process, and a pair of processes consisting of an information compression process and an information expansion process, such that the high-frequency components and the low-frequency components are processed differently.
- the image processing method includes: an image acquisition step of acquiring a Bayer image according to a Bayer array in which rows of alternating R (red) and G (green) pixels and rows of alternating G (green) and B (blue) pixels are arranged alternately in the vertical direction; a frequency decomposition step of generating a plurality of component images by separating the Bayer image, in the vertical and horizontal directions, into low-frequency components indicating luminance information and high-frequency components indicating color information; an image processing step of executing, on the component images after frequency decomposition, at least one of a feature amount calculation process for calculating a feature amount relating to the image, a correction process, and a pair of processes consisting of an information compression process and an information expansion process, such that the high-frequency components and the low-frequency components are processed differently; and a decomposed-image synthesis step of regenerating the original image from the processed component images.
- the program according to the first aspect causes a computer to execute: an image acquisition step of acquiring a Bayer image according to a Bayer array in which rows of alternating R (red) and G (green) pixels and rows of alternating G (green) and B (blue) pixels are arranged alternately in the vertical direction; a frequency decomposition step of generating a plurality of component images by separating the Bayer image, in the vertical and horizontal directions, into low-frequency components indicating luminance information and high-frequency components indicating color information; an image processing step of executing, on the component images after frequency decomposition, at least one of a feature amount calculation process for calculating a feature amount relating to the image, a correction process, and a pair of processes consisting of an information compression process and an information expansion process, such that the high-frequency components and the low-frequency components are processed differently; and a decomposed-image synthesis step of regenerating the original image from the processed component images.
- the program according to the present invention can be recorded and distributed on a computer-readable recording medium such as a magnetic recording medium (e.g., a flexible disk), an optical recording medium (e.g., a DVD-ROM), a hard disk, or a USB memory, or it can be distributed by downloading via the Internet.
- in another aspect, the image processing apparatus comprises: image acquisition means for acquiring a Bayer image according to a Bayer array in which rows of alternating R (red) and G (green) pixels and rows of alternating G (green) and B (blue) pixels are arranged alternately in the vertical direction; frequency decomposition means for generating a plurality of component images by separating the Bayer image, in the vertical and horizontal directions, into low-frequency components indicating luminance information and high-frequency components indicating color information; image processing means for executing, based on the component images after frequency decomposition, at least one of a feature amount calculation process for calculating a feature amount relating to the image, a correction process for the component images, and an information compression process for the component images, such that the high-frequency components and the low-frequency components are processed differently; and output means for outputting at least the output of the image processing means.
- the present invention is based on the finding that, among the plurality of frequency band components obtained by performing frequency decomposition on a Bayer image, the low-frequency components indicate luminance information and the high-frequency components indicate color information. Based on this finding, the present invention performs frequency decomposition on the Bayer image to separate the low-frequency components indicating luminance information from the high-frequency components indicating color information, and then performs different image processing on each, thereby realizing image processing suited to the color components and the luminance components respectively. This makes appropriate, component-dependent image processing possible; moreover, because each component is obtained by applying frequency decomposition directly to the Bayer image, the amount of data to be processed during image processing is smaller than when pixel interpolation is performed first.
- the image processing apparatus regenerates the original image by combining the component images after performing the image processing.
- the frequency decomposition by the first frequency decomposition means in the first aspect exploits the fact that, when applied directly to the Bayer image, it decomposes the image data into color components and luminance components.
- conventional frequency decomposition differs from the present invention in that it is performed after interpolating the pixels of the Bayer image, and in that the subsequent image processing attends only to whether a component is low frequency or high frequency.
- alternatively, the image processing apparatus may output the calculated feature amount as it is after performing the image processing, without synthesizing the component images.
- FIG. 1 is a block diagram illustrating the configuration of an imaging apparatus according to a first embodiment of the present invention. FIG. 2 is a conceptual diagram showing the Bayer array.
- FIG. 4A is a graph conceptually showing the frequency distribution of the color component and the luminance component in the subject image before being sampled by the image sensor.
- FIG. 4B is a graph conceptually showing the frequency distribution of the color component and the luminance component in the subject image after being sampled by the image sensor.
- FIG. 4C is a graph conceptually showing the frequency distribution of the component image generated by the one-level discrete wavelet transform.
- FIG. 4D is a graph conceptually showing the frequency distribution of the component image generated by subjecting each component image of FIG. 4C to further two-level discrete wavelet transform.
- FIG. 5A is a graph conceptually showing the frequency distribution in the horizontal direction and the vertical direction in each component image.
- FIG. 5B is a graph conceptually showing the frequency distribution of the color component and the luminance component included in each component image. FIG. 6 is a conceptual diagram showing the relationship between the component images generated by the frequency decomposition.
- FIG. 8 is a block diagram illustrating a circuit configuration for frequency-decomposing W and U images and a circuit configuration of a correction processing unit for performing filter processing on component images such as WHH1 and UHH2 generated by frequency decomposition.
- FIG. 8 is a block diagram showing a circuit configuration for frequency-decomposing V and Y images and a circuit configuration of a correction processing unit for performing filter processing on component images such as VHH1 and YHH2 generated by frequency decomposition.
- FIG. 9 is a block diagram showing a circuit configuration for generating a W ′ image and a U ′ image by combining component images such as WHH 1 ′ and ULH 2 ′ subjected to correction processing in the decomposed image combining unit in FIG. 8.
- FIG. 9 is a block diagram illustrating a circuit configuration for generating a V′ image and a Y′ image by combining component images such as VHH1′ and YLH2′ that have been subjected to correction processing in the decomposed image combining unit of FIG. 8. A further block diagram shows the configuration of a second embodiment according to another embodiment of the present invention.
- the imaging apparatus 1 includes an imaging optical system 2, an imaging element 3 (image acquisition means), and an image processing unit 100 (image processing means).
- the imaging optical system 2 has various lenses, guides light from a subject to the imaging device 3 and forms an image on the imaging device 3.
- the imaging device 3 converts the formed subject image into an electrical signal by photoelectric conversion, and converts the electrical signal into a digital image signal and outputs the digital image signal.
- the image processing unit 100 generates image data corresponding to the subject image by performing predetermined signal processing on the digital image signal output from the imaging device 3.
- the image data generated by the image processing unit 100 is output as an image signal related to an image to be displayed on a display that displays an image, or is output to a computer-readable recording medium.
- the imaging device 3 and the image processing unit 100 will be described in more detail.
- the image sensor 3 includes color filters arranged in a Bayer array, photoelectric conversion elements that output analog signals corresponding to the intensity of the light received through each color filter, and an AD converter that converts the analog signals from the photoelectric conversion elements into digital signals. As shown in FIG. 2, the Bayer array consists of rows in which R (red) and G (green) are alternately arranged in the horizontal direction and rows in which G (green) and B (blue) are alternately arranged in the horizontal direction, the two kinds of rows alternating in the vertical direction. In FIG. 2, G belonging to the same row as R is denoted Gr, and G belonging to the same row as B is denoted Gb.
- the image sensor 3 outputs an image signal indicating an image in which pixels are arranged according to the Bayer array (hereinafter referred to as a Bayer image).
- the image processing unit 100 has two frequency decomposing units that decompose the image signal from the image sensor 3 into a high frequency component and a low frequency component. These frequency decomposition units 110 and 120 (first and second frequency decomposition means) both decompose the image signal according to the discrete wavelet transform.
- the CDF 5/3 wavelet is used.
- a low-frequency component is generated by a low-pass filter having 5 taps (5 pixels per dimension)
- a high-frequency component is generated by a high-pass filter having 3 taps (3 pixels per dimension).
- CDF 9/7 or the like may be employed instead.
- any reversible multi-resolution transform, such as the Haar wavelet, may be used.
- the filter coefficients in the one-dimensional case of the CDF 5/3 wavelet are as follows. Low-pass filter: [-1/8, 2/8, 6/8, 2/8, -1/8]; High-pass filter: [-1/2, 1, -1/2]
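As an illustrative sketch (not part of the patent text), the one-dimensional CDF 5/3 analysis step can be written in numpy; the function name and the symmetric (reflect) boundary extension are assumptions:

```python
import numpy as np

# One-dimensional CDF 5/3 analysis filters: the low-pass has the
# standard 5 symmetric taps, the high-pass has 3 taps.
LOW_PASS = np.array([-1, 2, 6, 2, -1]) / 8.0
HIGH_PASS = np.array([-1, 2, -1]) / 2.0

def analyze_1d(signal):
    """Split a 1-D signal into low- and high-frequency bands.

    Symmetric boundary extension is assumed; every second filtered
    sample is kept, as in a one-level discrete wavelet transform.
    """
    ext_l = np.pad(signal, len(LOW_PASS) // 2, mode="reflect")
    ext_h = np.pad(signal, len(HIGH_PASS) // 2, mode="reflect")
    low = np.convolve(ext_l, LOW_PASS, mode="valid")[::2]
    high = np.convolve(ext_h, HIGH_PASS, mode="valid")[::2]
    return low, high
```

Because the low-pass taps sum to 1 and the high-pass taps sum to 0, a flat (constant) signal passes through the low band unchanged and produces a zero high band.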
- the filter shown in FIG. 3 is a low-pass filter in both the horizontal and vertical directions of the image.
- the numerical values arranged in the matrix are the weights at the respective pixel positions in the horizontal and vertical directions. “/64” indicates that each weight divided by 64 gives the filter coefficient at that position.
- in the descriptions of FIGS. 3B to 3D below, the filter in FIG. 3A is denoted as LL.
- FIG. 3 (b) is a high-pass filter for the horizontal direction of the image and a low-pass filter for the vertical direction.
- this filter is denoted as HL.
- the filter shown in FIG. 3C is a low-pass filter for the horizontal direction of the image and a high-pass filter for the vertical direction.
- this filter is denoted as LH.
- the filter in FIG. 3D is a high-pass filter in both the horizontal and vertical directions of the image.
- this filter is denoted as HH.
- each filter coefficient included in the filter is multiplied by a pixel corresponding to the position, and the sum of all the multiplication results is used as a pixel value after filtering.
- FIGS. 3A to 3D show an example in which these filter processes are performed on the Bayer image.
- each pixel included in the Bayer image is multiplied by a filter coefficient corresponding to the position of the pixel.
- Such a filter operation is performed for each of the LL filter, the HL filter, the LH filter, and the HH filter every two pixels in the horizontal direction and the vertical direction in the image.
- four types of component images composed of the pixel values after the filter processing are obtained as the filter processing results.
- a component image obtained by filter processing using an LL filter is referred to as an LL image.
- component images obtained by filter processing using an HL filter, an LH filter, and an HH filter are defined as an HL image, an LH image, and an HH image.
- the LL image is a component image corresponding to the low frequency component of the original image in both the horizontal direction and the vertical direction.
- the HL image is an image corresponding to the high frequency component of the original image in the horizontal direction and the low frequency component of the original image in the vertical direction.
- the LH image is an image corresponding to the low frequency component of the original image in the horizontal direction and the high frequency component of the original image in the vertical direction.
- the HH image is a component image corresponding to the high frequency component of the original image in both the horizontal direction and the vertical direction.
- one might expect the result of frequency decomposition of the image signal obtained from the image sensor to be simply a frequency decomposition of the luminance component of the image.
- when the decomposition is applied directly to a Bayer image, however, the resulting component images show properties different from a mere luminance decomposition. This will be described below.
- FIG. 4A is an example of the frequency distribution of the luminance component and the color component in the subject image before entering the image sensor 3.
- FIG. 4B is an example of the frequency distribution of the luminance component and the color component in the image signal after being sampled by the image sensor 3.
- fs is a sampling frequency.
- the luminance component and the color component in the subject image before entering the image sensor 3 are orthogonally modulated in the image sensor 3 by the color filters arranged according to the Bayer array. Thereby, the luminance component is arranged on the low frequency side and the color component is arranged on the high frequency side.
- the frequency decomposition of this embodiment is thus first performed on an image in which the luminance component and the color component are arranged as shown in FIG. 4B.
- the Bayer image shown in FIG. 4B is decomposed into a high-frequency component and a low-frequency component in the horizontal direction by filter processing using an LL filter and an HL filter, as shown in FIG. 4C. Therefore, the LL image includes many luminance components, and the HL image includes many color components.
- performing frequency decomposition on a Bayer image is equivalent to demodulating a luminance component and a color component that are orthogonally modulated by a color filter in a Bayer array.
- FIG. 5A shows the frequency distribution of the LL image, the HL image, the LH image, and the HH image with respect to the horizontal direction and the vertical direction.
- FIG. 5B shows the distribution of the luminance component and the color component in the LL image, the HL image, the LH image, and the HH image with respect to the horizontal frequency and the vertical frequency, as in FIG. Show.
- the LL image mainly includes a luminance component
- each of the HL, LH, and HH images mainly includes a color component. That is, the LL image indicates luminance information in the original image, and each of the HL, LH, and HH images indicates color information in the original image.
- the frequency resolving unit 110 performs one-level (one-layer) discrete wavelet transform on the image signal related to the Bayer image output from the image sensor 3 using the filter coefficient shown in FIG. That is, the frequency resolving unit 110 generates four component images of LL, HL, LH, and HH from the Bayer image signal.
- each of the Y, U, V, and W pixels incorporates the R, G, and B pixels of the original image, as shown in the following expressions.
- the Y image mainly represents the luminance component of the image. When the subject includes many high-frequency color components, these components are mixed in the Y image.
- the U image mainly represents the color component of the image. When the subject includes many high-frequency luminance components in the horizontal direction, these components are mixed in the U image.
- the V image mainly represents the color component of the image. When the subject includes many high-frequency luminance components in the vertical direction, these components are mixed in the V image.
- the W image mainly represents the color component of the image. When the subject includes many high-frequency color components, these components are mixed in the W image.
- Y = (2G + R + B) / 4
- U = (R - B) / 2
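The Y and U expressions above can be evaluated per 2x2 Bayer cell (R Gr / Gb B). This sketch is illustrative only: V and W are omitted because their formulas are not listed in this excerpt, and taking G as the mean of Gr and Gb is an assumption:

```python
import numpy as np

def bayer_to_yu(bayer):
    """Compute Y and U values for each 2x2 Bayer cell (R Gr / Gb B).

    Implements the two formulas given in the text:
        Y = (2G + R + B) / 4   (2G taken here as Gr + Gb, an assumption)
        U = (R - B) / 2
    """
    r = bayer[0::2, 0::2]
    gr = bayer[0::2, 1::2]
    gb = bayer[1::2, 0::2]
    b = bayer[1::2, 1::2]
    y = (gr + gb + r + b) / 4.0
    u = (r - b) / 2.0
    return y, u
```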
- the frequency decomposition unit 120 performs a 2-level (2-hierarchy) discrete wavelet transform on the Y, U, V, and W component images generated by the frequency decomposition unit 110. That is, as shown in FIG. 6, the frequency decomposition unit 120 decomposes each of the Y, U, V, and W component images into LL, HL, LH, and HH component images, and then further decomposes the LL image thus generated into LL, HL, LH, and HH component images.
- the component images generated by the first decomposition by the frequency decomposition unit 120 are denoted as YHL1, YLH1, YHH1, and the like.
- component images generated by the second decomposition by the frequency decomposition unit 120 are denoted as YLL2, YHL2, YLH2, YHH2, and the like.
- Y, U, V, and W of the first character indicate which of the Y, U, V, and W component images each component image is derived from.
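The two-level structure and naming convention (YHL1 etc. from the first pass, YLL2 etc. from the second pass on the LL band) can be illustrated with a simple Haar-style split standing in for the CDF 5/3 filters; the function names are illustrative:

```python
import numpy as np

def haar_level(img):
    """One Haar-style decomposition level computed on 2x2 blocks."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return {
        "LL": (a + b + c + d) / 4.0,
        "HL": (a - b + c - d) / 4.0,
        "LH": (a + b - c - d) / 4.0,
        "HH": (a - b - c + d) / 4.0,
    }

def two_level(y_image):
    """Decompose a Y image twice, re-decomposing only the LL band.

    Naming follows the text: the first pass yields YHL1, YLH1, YHH1;
    the second pass on the LL band yields YLL2, YHL2, YLH2, YHH2.
    """
    level1 = haar_level(y_image)
    level2 = haar_level(level1["LL"])
    bands = {"YHL1": level1["HL"], "YLH1": level1["LH"], "YHH1": level1["HH"]}
    bands.update({"Y" + name + "2": comp for name, comp in level2.items()})
    return bands
```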
- FIG. 4D shows a frequency distribution in the horizontal direction of each component image generated by the frequency decomposition unit 120.
- the Y image and the U image shown in FIG. 4C mainly contain a luminance component and a color component, respectively, but high-frequency components of the original subject image may be mixed into them.
- in contrast, the ULL2 image contains no high-frequency luminance component and shows the low-frequency color component of the original subject image itself.
- the YLL2 image does not include a high-frequency color component and shows a low-frequency luminance component itself in the original subject.
- the image processing unit 100 includes a correction processing unit 130 that performs correction processing on each component image generated by the frequency decomposition unit 120.
- the noise component is reduced by performing filter processing on each component image.
- a bilateral filter is used as the edge preserving filter.
- let Ω be a window of pixels around the pixel of interest p.
- the bilateral filter operates by calculating a weighted average of all the pixels q belonging to Ω.
- the weight assigned to each pixel is determined by two terms: a weight based on distance and a weight based on a difference in pixel value from the target pixel.
- the pixel of interest is denoted p, and each pixel belonging to Ω is denoted q.
- Δ(p, q) is the distance between the pixel p and the pixel q
- the weight based on the distance is expressed, for example with a Gaussian, as: ws(p, q) = exp(-Δ(p, q)² / (2σs²))
- Df (p, q) is the difference in pixel value between the pixel p and the pixel q
- the weight due to the difference in pixel value from the pixel of interest is expressed as follows.
- the weight of each pixel is expressed as the product of the weight based on the distance and the weight based on the difference between the pixel values as follows.
- the bilateral filter is applied by the following calculation. Note that u ′ is a pixel value after filtering.
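As a concrete illustration, the weighted-average operation above can be sketched in NumPy as follows. The kernel radius, σ values, and edge padding are illustrative assumptions, not values from the patent:

```python
import numpy as np

def bilateral_filter(img, radius=1, sigma_s=1.0, sigma_r=10.0):
    """Weighted average over the (2*radius+1)^2 neighborhood of each
    target pixel p, with weight = (distance weight) * (pixel-value
    difference weight), in the spirit of Equations 4 to 6."""
    h, w = img.shape
    padded = np.pad(img.astype(float), radius, mode='edge')
    dy, dx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w_s = np.exp(-(dx ** 2 + dy ** 2) / (2 * sigma_s ** 2))  # distance weight
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            diff = patch - img[y, x]                   # Df(p, q)
            w_r = np.exp(-diff ** 2 / (2 * sigma_r ** 2))
            c = w_s * w_r                              # combined coefficient
            out[y, x] = (c * patch).sum() / c.sum()    # normalized average
    return out
```

On a uniform region the coefficients normalize to one and the pixel is unchanged; near an edge, pixels on the far side receive small range weights, so the edge is preserved while noise is averaged away.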
- the correction processing unit 130 includes a filter coefficient acquisition unit 131 (filter acquisition unit) that acquires filter coefficients for filter processing using a bilateral filter,
- and a filter processing unit 132 (filter processing unit) that performs filter processing based on the filter coefficients acquired by the filter coefficient acquisition unit 131.
- the filter coefficient acquisition unit 131 acquires filter coefficients as follows.
- This embodiment is mainly intended to reduce noise in low frequency components (such as YLL2). Accordingly, the reference range of the filter (the size of the filter kernel) can be set to about 3 ⁇ 3 to 5 ⁇ 5. Since YLL2 images and the like are down-sampled by undergoing frequency decomposition, even if the kernel size is small, it is possible to filter to a low frequency.
- regarding the weight w_s according to the distance, the filter coefficient acquisition unit 131 holds the horizontal 3 × vertical 3 weights w_s1 and w_s2 shown in FIGS. 7(a) and 7(b),
- or the horizontal 5 × vertical 5 weights w_s1 and w_s2 shown in FIGS. 7(c) and 7(d).
- FIG. 7(a) to FIG. 7(d) show the filter coefficient corresponding to each pixel position, as in FIG. Setting the weights w_s1 and w_s2 to integers reduces the amount of calculation.
- the filter coefficient group obtained based on w s1 corresponds to the first coefficient group in the present invention.
- the filter coefficient group obtained based on w s2 corresponds to the second coefficient group in the present invention.
- w_s1 (first weight) has a value that decreases as the distance from the target pixel increases (that is, it monotonically decreases with respect to the distance), and is used when filter processing is performed on the YLL2 image and the like indicating luminance information.
- w s2 (second weight) is used for the component images of ULL2, VLL2, and WLL2 indicating color information.
- w_s2 is 1 regardless of the pixel position, and does not depend on the distance.
- with such a flat weight, the noise reduction ability of the filter increases,
- while the resolution in the image after the filter processing is lowered.
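As an illustration of such integer weight tables (the actual values of FIG. 7 are not reproduced here; these are assumed stand-ins):

```python
import numpy as np

# w_s1 decays with distance from the center and is used for luminance
# components (YLL2 etc.); w_s2 is flat, i.e. independent of distance,
# and is used for color components (ULL2, VLL2, WLL2).  Integer weights
# keep the hardware multipliers cheap.
w_s1 = np.array([[1, 2, 1],
                 [2, 4, 2],
                 [1, 2, 1]])
w_s2 = np.ones((3, 3), dtype=int)
```

The flatter w_s2 averages neighbors more strongly, which improves noise reduction at the cost of resolution, exactly the trade-off noted above.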
- the filter coefficient acquisition unit 131 holds the value of σ²R for the weight w_R based on the pixel value difference.
- σR is set in advance according to, for example, the characteristics of the imaging optical system 2 and the imaging element 3 of the imaging device 1.
- the setting value acquisition method is as follows.
- the imaging optical system 2 and the imaging element 3 generate a Bayer image signal relating to an object having uniform brightness as a subject.
- the frequency resolving units 110 and 120 perform the above-described frequency decomposition on the Bayer image signal. Thereby, each component image of YLL1, YHL2, YLH2, YHH2, ULL1,... Is obtained.
- the standard deviation σN of the pixel values is calculated for each component image.
- σR is obtained by multiplying σN by an appropriate coefficient kN, as σR = kN × σN.
- the magnitude of kN is adjusted in consideration of the balance between image degradation after processing and noise suppression capability. For example, the smaller kN is, the lower the noise suppression capability, while the processed image is less likely to deteriorate. Conversely, the larger kN is, the higher the noise suppression capability, while the processed image is more likely to deteriorate.
- image degradation here means, for example, degradation of resolution for luminance information, and strong color bleeding for color information.
- kN is set by exploiting the fact that human vision is less sensitive to changes in color resolution (color blur and the like) than to changes in luminance resolution. For example, kN is decreased for the YHL1 image and the like indicating high-frequency luminance components, and increased for the ULL2 and VLL2 component images and the like indicating low-frequency color components.
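The flat-field calibration described above can be sketched as follows; the per-band k_N values are illustrative assumptions, not the patent's values:

```python
import numpy as np

def calibrate_sigma_r(component_images, k_n):
    """Shoot a uniformly bright object, frequency-decompose it, and take
    the standard deviation sigma_N of each component image as its noise
    level; then sigma_R = k_N * sigma_N per band."""
    return {name: k_n[name] * float(np.std(img))
            for name, img in component_images.items()}

# Illustrative k_N choices: small for high-frequency luminance bands,
# where blur is visible, and large for low-frequency color bands, where
# human vision tolerates color blur.
k_n = {'YHL1': 0.5, 'ULL2': 3.0, 'VLL2': 3.0}
```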
- the filter coefficient acquisition unit 131 holds σ²R, set in advance for each image component such as YLL2 and YHL1, in association with that image component.
- in summary, the filter coefficient acquisition unit 131 holds w_s1 as the weight used for the YLL2 image and the like indicating luminance information, w_s2 as the weight used for the ULL2 image and the like indicating color information, and σ²R for each image component.
- the filter coefficient acquisition unit 131 calculates, for each component image, the filter coefficients c(p, q) used in the filter calculation shown in Equation 6 above, based on Equations 4 to 6, using σ²R, w_s1, and w_s2.
- the filter processing unit 132 performs filter processing on each image component based on Equation 6 using the filter coefficient acquired by the filter coefficient acquisition unit 131.
- the image processing unit 100 includes decomposed image combining units 140 and 150 (first and second decomposed image combining means) that synthesize all the component images subjected to the correction processing by the correction processing unit 130 and regenerate the Bayer image.
- the decomposed image composition unit 140 reproduces each component image of Y, U, V, and W by performing inverse transformation of the discrete wavelet transform performed by the frequency decomposition unit 120 on each component image such as YLL2 and YHL1.
- the decomposed image composition unit 150 applies the inverse transform of the discrete wavelet transform performed by the frequency decomposition unit 110 to the Y, U, V, and W component images generated by the decomposed image composition unit 140, thereby regenerating the Bayer image.
- the frequency resolution unit 110 and the like are constructed as a circuit that processes an image using a line buffer.
- the circuits constituting the frequency decomposition units 110 and 120 include a horizontal DWT unit 11 that decomposes an input image into a high-frequency component and a low-frequency component in the horizontal direction,
- and a vertical DWT unit 12 that decomposes an input image into a high-frequency component and a low-frequency component in the vertical direction.
- a circuit configuration including one horizontal DWT unit 11 and two vertical DWT units 12 is provided for one level of discrete wavelet transform.
- the frequency decomposition unit 110 performs the one-level discrete wavelet transform as described above. Therefore, a one-stage circuit configuration is provided as the frequency decomposition unit 110, as shown in FIG. 8.
- s (m) is the pixel value of the mth pixel in the horizontal direction.
- the vertical DWT unit 12 performs the same calculation on the input image in the vertical direction as the horizontal DWT unit 11 using Equations 9 and 10. In this case, s (m) is the pixel value of the mth pixel in the vertical direction.
- the horizontal DWT unit 11 in the frequency decomposition unit 110 decomposes the Bayer image into an H component and an L component in the horizontal direction.
- one vertical DWT unit 12 decomposes the H component from the horizontal DWT unit 11 into a high-frequency component and a low-frequency component in the vertical direction. Thereby, a W (HH) image and a U (HL) image are generated.
- the other vertical DWT unit 12 decomposes the L component from the horizontal DWT unit 11 into a high-frequency component and a low-frequency component in the vertical direction. Thereby, a V (LH) image and a Y (LL) image are generated.
- the discrete wavelet transform performed on the Bayer image based on Equations 9 and 10 above is equivalent to the discrete wavelet transform shown in FIG.
- the frequency decomposition unit 120 performs the two-level discrete wavelet transform on the images as described above. Therefore, as shown in FIGS. 9 and 10, two stages of a circuit configuration including one horizontal DWT unit 11 and two vertical DWT units 12 are provided as the frequency decomposition unit 120 for each of the Y, U, V, and W component images. For example, for the W image, as shown in FIG. 9, the first-stage horizontal DWT unit 11 decomposes the W image into an H component and an L component in the horizontal direction. Then, the two vertical DWT units 12 decompose the H component and the L component, respectively, into a high-frequency component and a low-frequency component in the vertical direction.
- each component image of WHH1, WHL1, WLH1, and WLL1 is generated.
- the former three are output to the correction processing unit 130, and only the remaining WLL1 image is output to the second-stage horizontal DWT unit 11.
- the WLL1 image is decomposed into component images of WHH2, WHL2, WLH2, and WLL2 by the horizontal DWT unit 11 and the vertical DWT unit 12 in the second stage. These component images are output to the correction processing unit 130.
- Each component image is input to each filter unit 13 provided as the correction processing unit 130.
- the filter unit 13 is provided for each component image, and each has the functions of both the filter coefficient acquisition unit 131 and the filter processing unit 132.
- the filter unit 13 holds the weight w_s1 or w_s2 and the σ²R value corresponding to each component image.
- the filter unit 13 acquires a filter coefficient based on the input component image and performs a filtering process on the component image.
- the component image that has been subjected to the filter process (correction process) is output to the decomposed image composition unit 140.
- each component image after the correction processing is denoted as WHH1′, ULL2′, and the like.
- the circuits constituting the decomposed image combining units 140 and 150 include a horizontal IDWT unit 21 that combines two component images consisting of a high-frequency component and a low-frequency component in the horizontal direction, and a vertical IDWT unit 22 that combines two component images consisting of a high-frequency component and a low-frequency component in the vertical direction.
- the horizontal IDWT unit 21 synthesizes component images by performing the inverse transform of the transform performed by the horizontal DWT unit 11.
- the vertical IDWT unit 22 synthesizes component images by performing the inverse transform of the transform performed by the vertical DWT unit 12.
- a circuit configuration including one horizontal IDWT unit 21 and two vertical IDWT units 22 is provided for one-level discrete wavelet inverse transform.
- the decomposed image composition unit 140 synthesizes the component images such as WHH1′ and UHL2′ and generates the Y′, U′, V′, and W′ component images. For this purpose, as shown in FIGS. 11 and 12, a two-stage circuit configuration corresponding to the frequency decomposition unit 120 is provided as the decomposed image composition unit 140. Note that Y′ and the like denote component images after the correction processing.
- the circuit configuration for generating the W′ image is as follows. As shown in FIG. 11, the one first-stage vertical IDWT unit 22, to which the WHH2′ image and the WHL2′ image are input, combines them to generate an H component. The other first-stage vertical IDWT unit 22, to which the WLH2′ image and the WLL2′ image are input, combines them to generate an L component. The H component and the L component generated by the first-stage vertical IDWT units 22 are input to the first-stage horizontal IDWT unit 21, which combines them to generate a WLL1′ image.
- the H component and the L component generated by the second-stage vertical IDWT units 22 are input to the second-stage horizontal IDWT unit 21.
- the horizontal IDWT unit 21 combines these to generate a W′ image.
- the other component images of Y ′, U ′, and V ′ are similarly generated.
- the decomposed image combining unit 150 combines the Y′ image, the U′ image, the V′ image, and the W′ image, and regenerates one Bayer image. Therefore, a one-stage circuit configuration corresponding to the frequency decomposition unit 110 is provided as the decomposed image combining unit 150, as shown in FIG.
- one vertical IDWT unit 22 combines the W′ image and the U′ image to generate an H component.
- the other vertical IDWT unit 22 combines the V′ image and the Y′ image to generate an L component.
- the horizontal IDWT unit 21 combines the H component and the L component generated by the vertical IDWT units 22 and regenerates the Bayer image.
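The synthesis path can be sketched as the exact inverse of one separable analysis level. As before, a Haar step stands in for the document's 5/3 filters; the forward step is repeated here so the round trip can be checked:

```python
import numpy as np

def dwt_level(img):
    """Forward one-level separable step (Haar stand-in):
    horizontal low/high split, then vertical."""
    a = img.astype(float)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2
    hi = (a[:, 0::2] - a[:, 1::2]) / 2
    return ((lo[0::2] + lo[1::2]) / 2,   # LL
            (hi[0::2] + hi[1::2]) / 2,   # HL
            (lo[0::2] - lo[1::2]) / 2,   # LH
            (hi[0::2] - hi[1::2]) / 2)   # HH

def idwt_level(ll, hl, lh, hh):
    """Inverse step: undo the vertical split of the L and H planes,
    then interleave columns to undo the horizontal split."""
    lo = np.empty((ll.shape[0] * 2, ll.shape[1]))
    lo[0::2], lo[1::2] = ll + lh, ll - lh
    hi = np.empty_like(lo)
    hi[0::2], hi[1::2] = hl + hh, hl - hh
    out = np.empty((lo.shape[0], lo.shape[1] * 2))
    out[:, 0::2], out[:, 1::2] = lo + hi, lo - hi
    return out
```

Because each analysis step is invertible, synthesizing uncorrected bands reproduces the input exactly; with corrected bands it yields the corrected Bayer image.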
- when the correction processing unit 130 performs filter processing on an image as the correction processing, different filter coefficients are used for the high-frequency components indicating color information and the low-frequency components indicating luminance information among the component images.
- for example, as the weight based on the distance between pixels, w_s1 is used for the YLL2 image and the like indicating luminance information, and w_s2 is used for the ULL2 image and the like indicating color information. Further, as the weight based on the difference between pixel values, different σR values are used for the component images indicating luminance information and the component images indicating color information.
- filter processing is applied to each component image using these coefficients.
- the frequency resolving units 110 and 120 directly perform frequency decomposition by discrete wavelet transform on the Bayer image. Therefore, for example, the amount of data to be processed can be reduced as compared with the case where the image is subjected to color interpolation.
- the frequency decomposition unit 110 and the frequency decomposition unit 120 perform the discrete wavelet transform at each level using the same filter coefficients. That is, the frequency decomposition units 110 and 120 have the same frequency characteristics. Accordingly, the decomposed image composition units 140 and 150 also have the same frequency characteristics. As a result, as shown in FIGS. 8 to 12, each stage of the frequency decomposition units 110 and 120 can be constructed with the same circuit configuration, and the decomposed image synthesizing units 140 and 150 can likewise be constructed with the same circuit configuration.
- it is also possible to realize at least a part or all of the frequency decomposition units 110 and 120 and the decomposed image synthesis units 140 and 150 not by hardware alone but by a combination of hardware and software. In this case, for example, a CPU executes a program according to a predetermined discrete wavelet transform algorithm. Since the frequency decomposition units 110 and 120 and the decomposed image composition units 140 and 150 have the same frequency characteristics, the programs performing each level of the discrete wavelet transform can be constructed according to the same algorithm.
- the imaging apparatus 201 includes an imaging optical system 2, an imaging element 3, a frequency resolution unit 110, a frequency resolution unit 120, and an image processing unit 240.
- Signal processing or image processing by the imaging optical system 2, the imaging device 3, the frequency resolving unit 110, and the frequency resolving unit 120 is the same as that in the first embodiment.
- the image processing unit 240 includes a correction processing unit 130 and a compression processing unit 210.
- the correction processing of the correction processing unit 130 is the same as that in the first embodiment.
- the compression processing unit 210 performs compression processing on each component image after the correction processing unit 130 performs correction processing.
- the compression process is performed according to various compression methods related to digital data.
- the compression processing unit 210 preferably performs different compression processing for the high-frequency components and the low-frequency components of the component images. For example, a lossless compression method may be used for the YLL2 image, while, taking into account the characteristic of human vision that it is not sensitive to degradation of color resolution, a lossy compression method with a high compression ratio may be used for the ULL2 image and the like.
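A sketch of such band-dependent compression. Here zlib serves as the lossless codec and coarse quantization before zlib as a stand-in for a lossy, high-ratio codec; both are illustrative choices, not the patent's methods:

```python
import zlib
import numpy as np

def compress_band(img, lossless):
    """YLL2-style path: raw bytes through zlib (fully reversible).
    Color-band path: coarse quantization first (irreversible but
    higher ratio) -- a stand-in for a real lossy codec."""
    data = np.asarray(img, dtype=np.uint8)
    if not lossless:
        data = (data // 16) * 16    # keep only 16 levels
    return zlib.compress(data.tobytes())

def decompress_band(blob, shape):
    """Paired decompression: zlib inflate, then reshape."""
    return np.frombuffer(zlib.decompress(blob), dtype=np.uint8).reshape(shape)
```

The lossless path round-trips exactly; the lossy path recovers only the quantized values, which is acceptable for color bands where human vision tolerates the loss.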
- the imaging apparatus 201 outputs the compressed data compressed by the compression processing unit 210 as it is.
- the compressed data is transmitted to the image display device 202 via the network 203 such as the Internet.
- the image display device 202 includes an expansion processing unit 221, a decomposed image combining unit 140, a decomposed image combining unit 150, a color interpolation unit 222, and a display 223.
- the decompression processing unit 221 performs decompression processing on the compressed data transmitted from the imaging device 201.
- the decompression process is a process for restoring the compressed data to the data before compression according to the decompression system that is paired with the compression system used in the compression process.
- the data restored by the decompression processing unit 221 is data related to a component image such as a YHH1 ′ image.
- the component images indicated by these data are combined by the decomposed image combining units 140 and 150, and a Bayer image is regenerated.
- the color interpolation unit 222 generates an image having R, G, and B component pixel values for each pixel by interpolating the pixel values of the Bayer image.
- the display 223 displays the image generated by the color interpolation unit 222.
- the image captured by the imaging device 201 is transmitted to the image display device 202 as component images, which reduces the amount of transmitted data. Moreover, since the component images are compressed before transmission, the amount of transmitted data is reduced further.
- the image processing unit 240 may be provided with only the compression processing unit 210, without the correction processing unit 130.
- the image display device 202 may be provided with the correction processing unit 130, and the correction processing unit 130 may perform correction processing on the data restored by the decompression processing unit 221.
- the compressed data may be output to another device via some other wired / wireless interface instead of the network 203, or may be output to a computer-readable recording medium.
- the image processing unit 100 or 240 is provided with the correction processing unit 130 and the compression processing unit 210.
- a feature amount calculation unit described below may be provided in the image processing unit.
- the feature amount calculation unit calculates the feature amount of the image based on the component images such as YHH1 ′ and VLL2 ′.
- the image feature amount is, for example, a parameter used for various types of image processing. Such parameters include parameters used for correction processing and filter processing for enhancing the edges of an image.
- the image feature amount may be a statistical value obtained by performing various statistical calculations based on the component images. σN in the above-described embodiment is an example of such a statistical value. It is preferable that the feature amount calculation unit calculates the feature amount such that different processing is performed for the high-frequency components and the low-frequency components of the component images.
- for example, one feature amount is calculated based on the ULL2 image and the like, but not based on the YLL2 image.
- another feature amount is calculated based on the YLL2 image, but not based on the other component images.
- the calculation result by the feature amount calculation unit may be used for image processing in the imaging apparatus itself, or may be output to another apparatus or to a recording medium.
- the image processing unit 100 may perform the image processing on the component images generated by the frequency decomposition unit 110. In this case, it is only necessary to perform different image processing on the high-frequency components (the U image and the like) and the low-frequency component (the Y image) among the component images, in accordance with the difference between color information and luminance information.
- the frequency resolving unit 120 performs two-level (two-layer) discrete wavelet transform.
- the frequency resolving unit 120 may perform discrete wavelet transform of one level or three levels or more.
- although the frequency decomposition units 110 and 120 may have the same frequency characteristics, they may also have different frequency characteristics.
- the weight used for the filter coefficient in the correction processing differs depending on whether the component image is a low frequency component or a high frequency component.
- w_s1 is used for the YLL2 image,
- while w_s2 is used for the ULL2 image and the like.
- a combination of these two types of weights at a predetermined ratio may be used.
- the correction processing unit 130 acquires the filter coefficients based on the preset w_s1, w_s2, and σR.
- the filter coefficient may be further adjusted based on the shooting conditions and the like. For example, when the amount of noise included in an image changes depending on shooting conditions such as ISO sensitivity and temperature at the time of shooting, the correction processing unit 130 may adjust the filter coefficient according to the shooting conditions.
- imaging conditions such as ISO sensitivity may be input from outside the imaging apparatus; alternatively, when a control unit that controls the imaging optical system 2 and the imaging element 3 is provided in the imaging apparatus, the required imaging conditions may be output from the control unit to the correction processing unit 130.
- the correction processing unit 130 performs the filtering process once on the component image.
- the filtering process may be performed a plurality of times on the component image.
- although a bilateral filter is employed here, other filters such as a trilateral filter may be employed.
- the present invention is implemented as an imaging apparatus including the imaging optical system 2 and the imaging element 3.
- the present invention may be implemented as an image processing apparatus that performs image processing on a Bayer image generated in another imaging apparatus including an imaging optical system, an imaging element, and the like.
- the imaging optical system 2 and the imaging element 3 may not be provided in the image processing apparatus.
- Bayer images generated by other devices may be input to the image processing apparatus via a network or various wired/wireless interfaces, or via a recording medium.
- in this case, the interface that acquires the Bayer image from the outside and the unit that reads data from the recording medium correspond to the image acquisition means of the present invention.
- the compression processing unit 210 and the decompression processing unit 221 in the second embodiment described above may be provided in the image processing unit 100 in the first embodiment.
- by providing a functional unit that compresses and decompresses data, data can be temporarily stored in compressed form, so that the capacity of the storage unit that temporarily stores data can be reduced.
- the above-described embodiments and modifications include, as the image processing, at least one of correction processing, a pair of information compression and decompression processes, and feature amount calculation processing. Any configuration falls within the scope of the present invention as long as at least one of these processes is image processing that differs between the high-frequency components and the low-frequency components.
- the three-dimensional noise reduction is a process for removing noise based on a comparison between frames in the case of a moving image including a plurality of frames of continuous captured images.
- the required buffer amount and bandwidth can be greatly reduced. For example, they can be reduced to 1/3 compared with RGB 4:4:4 or YCbCr 4:4:4, and to 2/3 compared with YCbCr 4:2:0.
- the circuit scale can be suppressed.
- in the image processing, only the low-frequency components (the ULL2, VLL2, and WLL2 images) among the component images indicating color information may be corrected with a gain that cancels the color shading. Thereby, the shading correction effect for the entire image can be secured efficiently.
- the circuit scale can be suppressed.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Image Processing (AREA)
- Color Television Image Signal Generators (AREA)
- Studio Devices (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Abstract
Description
An imaging apparatus 1 according to a first embodiment of the present invention will be described with reference to the drawings. As shown in FIG. 1, the imaging apparatus 1 includes an imaging optical system 2, an imaging element 3 (image acquisition means), and an image processing unit 100 (image processing means). The imaging optical system 2 includes various lenses, guides light from a subject to the imaging element 3, and forms an image on the imaging element 3. The imaging element 3 converts the formed subject image into an electric signal by photoelectric conversion, converts that electric signal into a digital image signal, and outputs it. The image processing unit 100 generates image data corresponding to the subject image by applying predetermined signal processing to the digital image signal output from the imaging element 3. The image data generated by the image processing unit 100 is output, as an image signal of the image to be displayed, to a display that displays images, or is output to a computer-readable recording medium. The imaging element 3 and the image processing unit 100 will be described in more detail below.
Low-pass filter: [-1/8, 2/8, 6/8, 2/8, -1/8]
High-pass filter: [-1/2, 1, -1/2]
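A one-dimensional sketch of the analysis step with these filter banks. The low-pass taps are taken as the standard 5/3 analysis filter [-1/8, 2/8, 6/8, 2/8, -1/8] (the printed listing appears to drop one 2/8 tap); symmetric border extension and the downsampling phase are assumptions:

```python
import numpy as np

LOW_PASS = np.array([-1/8, 2/8, 6/8, 2/8, -1/8])   # 5-tap analysis low-pass
HIGH_PASS = np.array([-1/2, 1, -1/2])              # 3-tap analysis high-pass

def analyze_1d(s):
    """One horizontal decomposition of a row: filter with each bank
    (symmetric border extension), then keep every other output sample."""
    padded = np.pad(np.asarray(s, dtype=float), 2, mode='reflect')
    lo = np.convolve(padded, LOW_PASS, mode='valid')          # length n
    hi = np.convolve(padded, HIGH_PASS, mode='valid')[1:-1]   # align to n
    return lo[0::2], hi[1::2]   # low band at even, high band at odd phase
```

On a constant signal the low band reproduces the signal (the low-pass taps sum to 1) and the high band is zero (the high-pass taps sum to 0), so a flat region carries no high-frequency energy.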
Y=(2G+R+B)/4
U=(R-B)/2
V=(B-R)/2
W=2G-(R+B)
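Under the assumption of an RGGB 2 × 2 cell layout (the text does not fix this orientation) and treating G as the mean of the two green samples (a simplification: in the actual transform the two greens enter the four outputs at different positions), the four formulas above can be sketched per cell:

```python
import numpy as np

def bayer_to_yuvw(bayer):
    """Split an RGGB Bayer image into quarter-size Y, U, V, W planes
    using the formulas Y=(2G+R+B)/4, U=(R-B)/2, V=(B-R)/2, W=2G-(R+B)."""
    r = bayer[0::2, 0::2].astype(float)
    g = (bayer[0::2, 1::2].astype(float) + bayer[1::2, 0::2]) / 2
    b = bayer[1::2, 1::2].astype(float)
    y = (2 * g + r + b) / 4     # low-frequency luminance (LL)
    u = (r - b) / 2             # color difference (HL)
    v = (b - r) / 2             # color difference (LH)
    w = 2 * g - (r + b)         # color / green imbalance (HH)
    return y, u, v, w
```

For a gray cell (R = G = B) the color planes U, V, and W are all zero and Y equals the gray level, which is why these planes separate luminance from color information.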
Ω=Ωp(N)
Next, circuit configurations according to one example of the frequency decomposition units 110 and 120 and the decomposed image synthesizing units 140 and 150 will be described with reference to FIGS. 8 to 12. In this example, the frequency decomposition unit 110 and the like are constructed as circuits that process an image using line buffers. As shown in FIGS. 8 to 10, the circuits constituting the frequency decomposition units 110 and 120 include a horizontal DWT unit 11 that decomposes an input image into a high-frequency component and a low-frequency component in the horizontal direction, and a vertical DWT unit 12 that decomposes an input image into a high-frequency component and a low-frequency component in the vertical direction. For one level of the discrete wavelet transform, a circuit configuration consisting of one horizontal DWT unit 11 and two vertical DWT units 12 is provided. The frequency decomposition unit 110 performs the one-level discrete wavelet transform as described above. Therefore, as shown in FIG. 8, a one-stage circuit configuration is provided as the frequency decomposition unit 110.
Next, a second embodiment, which is another embodiment of the present invention, will be described. In the second embodiment, components common to the first embodiment are given the same reference numerals, and their description is omitted as appropriate. As shown in FIG. 13, an imaging apparatus 201 according to the second embodiment includes an imaging optical system 2, an imaging element 3, a frequency decomposition unit 110, a frequency decomposition unit 120, and an image processing unit 240. Signal processing or image processing by the imaging optical system 2, the imaging element 3, the frequency decomposition unit 110, and the frequency decomposition unit 120 is the same as in the first embodiment. The image processing unit 240 includes a correction processing unit 130 and a compression processing unit 210. The correction processing of the correction processing unit 130 is the same as in the first embodiment. The compression processing unit 210 applies compression processing to each component image after the correction processing unit 130 has applied the correction processing. The compression processing is performed according to various compression methods for digital data. The compression processing unit 210 preferably performs different compression processing for the high-frequency components and the low-frequency components of the component images. For example, a lossless compression method is used for the YLL2 image, while a lossy compression method with a high compression ratio may be used for the ULL2 image and the like, taking into account the characteristic of human vision that it is not sensitive to degradation of color resolution.
Modifications of the above-described embodiments will be described below. In the above-described embodiments, the image processing unit 100 or 240 is provided with the correction processing unit 130 and the compression processing unit 210. In addition to, or instead of, these, a feature amount calculation unit described below may be provided in the image processing unit.
Claims (10)
- image acquisition means for acquiring a Bayer image according to a Bayer array in which rows in which R (red) pixels and G (green) pixels are alternately arranged in the horizontal direction and rows in which G (green) pixels and B (blue) pixels are alternately arranged in the horizontal direction are alternately arranged in the vertical direction;
first frequency decomposition means for generating a plurality of component images by separating the Bayer image, in each of the vertical direction and the horizontal direction, into low-frequency components indicating luminance information and high-frequency components indicating color information;
image processing means for executing at least one of feature amount calculation processing that calculates a feature amount of the image based on the component images after frequency decomposition by the first frequency decomposition means, correction processing on the component images after the frequency decomposition, and a pair of processes consisting of information compression processing and information decompression processing on the component images after the frequency decomposition, such that the processing differs between the high-frequency components and the low-frequency components; and
first decomposed image synthesizing means for synthesizing the component images after the image processing means has executed the processing, to regenerate one Bayer image;
an image processing apparatus comprising the above. - The image processing apparatus further comprising second frequency decomposition means for further decomposing each of the component images generated by the first frequency decomposition means into component images of a plurality of frequency bands by applying frequency decomposition processing of one hierarchy or a plurality of hierarchies in each of the vertical direction and the horizontal direction,
and second decomposed image synthesizing means for synthesizing the component images generated by the second frequency decomposition means,
wherein the image processing means executes the image processing on the component images generated by the second frequency decomposition means,
and the first and second decomposed image synthesizing means synthesize the component images after the image processing means has executed the image processing, to regenerate one Bayer image; the image processing apparatus according to claim 1. - The first frequency decomposition means and the second frequency decomposition means have the same frequency characteristics,
and the first decomposed image synthesizing means and the second decomposed image synthesizing means have the same frequency characteristics; the image processing apparatus according to claim 2. - The image processing apparatus according to claim 2 or 3, wherein each of the first and second frequency decomposition means decomposes the Bayer image into component images by discrete wavelet transform.
- the image processing means comprises:
filter acquisition means for acquiring first and second coefficient groups each including a plurality of filter coefficients to be multiplied by the pixel values of the pixels constituting the image before the image processing; and
filter processing means for filtering the low-frequency components using the first coefficient group acquired by the filter acquisition means, and filtering the high-frequency components using the second coefficient group acquired by the filter acquisition means,
wherein the first coefficient group is acquired based on a first weight that monotonically decreases with respect to the distance from the target pixel,
and the second coefficient group is acquired based on a second weight that either monotonically decreases with respect to the distance from the target pixel at a smaller rate of decrease than the first weight, or does not change with respect to the distance from the target pixel; the image processing apparatus according to any one of claims 1 to 4. - Each filter coefficient in the first coefficient group is the product of the first weight and a function wR1, expressed by Equation 1 below, of the difference Df1 in pixel value between the pixel at the position corresponding to the coefficient and the target pixel,
each filter coefficient in the second coefficient group is the product of the second weight and a function wR2, expressed by Equation 2 below, of the difference Df2 in pixel value between the pixel at the position corresponding to the coefficient and the target pixel,
and, when reference values indicating the variation of pixel values with respect to uniform brightness are denoted σN1 for the low-frequency components and σN2 for the high-frequency components, σR1 and σR2 in Equations 1 and 2 are expressed, using coefficients kN1 and kN2 (kN1 < kN2), as σR1 = kN1 * σN1 and σR2 = kN2 * σN2; the image processing apparatus according to claim 5.
- image acquisition means for acquiring a Bayer image according to a Bayer array in which rows in which R (red) pixels and G (green) pixels are alternately arranged in the horizontal direction and rows in which G (green) pixels and B (blue) pixels are alternately arranged in the horizontal direction are alternately arranged in the vertical direction;
frequency decomposition means for generating a plurality of component images by separating the Bayer image, in each of the vertical direction and the horizontal direction, into low-frequency components indicating luminance information and high-frequency components indicating color information;
image processing means for executing at least one of feature amount calculation processing that calculates a feature amount of the image based on the component images after frequency decomposition by the frequency decomposition means, correction processing on the component images after the frequency decomposition, and information compression processing on the component images after the frequency decomposition, such that the processing differs between the high-frequency components and the low-frequency components; and
output means for outputting at least one of the component images and the feature amount after the image processing means has executed the image processing; an image processing apparatus comprising the above. - The image processing apparatus according to claim 7, further comprising second frequency decomposition means for further decomposing each of the component images generated by the first frequency decomposition means into component images of a plurality of frequency bands by applying frequency decomposition processing of one hierarchy or a plurality of hierarchies in each of the vertical direction and the horizontal direction, wherein the image processing means executes the image processing on the component images generated by the second frequency decomposition means.
- an image acquisition step of acquiring a Bayer image according to a Bayer array in which rows in which R (red) pixels and G (green) pixels are alternately arranged in the horizontal direction and rows in which G (green) pixels and B (blue) pixels are alternately arranged in the horizontal direction are alternately arranged in the vertical direction;
a frequency decomposition step of generating a plurality of component images by separating the Bayer image, in each of the vertical direction and the horizontal direction, into low-frequency components indicating luminance information and high-frequency components indicating color information;
an image processing step of executing at least one of feature amount calculation processing that calculates a feature amount of the image based on the component images after frequency decomposition in the frequency decomposition step, correction processing on the component images after the frequency decomposition, and a pair of processes consisting of information compression processing and information decompression processing on the component images after the frequency decomposition, such that the processing differs between the high-frequency components and the low-frequency components; and
a decomposed image synthesis step of synthesizing the component images after the image processing step to regenerate one Bayer image; an image processing method comprising the above steps. - An image acquisition step of acquiring a Bayer image according to a Bayer array in which rows in which R (red) pixels and G (green) pixels are alternately arranged in the horizontal direction and rows in which G (green) pixels and B (blue) pixels are alternately arranged in the horizontal direction are alternately arranged in the vertical direction;
a frequency decomposition step of generating a plurality of component images by separating the Bayer image, in each of the vertical direction and the horizontal direction, into low-frequency components indicating luminance information and high-frequency components indicating color information;
an image processing step of executing at least one of feature amount calculation processing that calculates a feature amount of the image based on the component images after frequency decomposition in the frequency decomposition step, correction processing on the component images after the frequency decomposition, and a pair of processes consisting of information compression processing and information decompression processing on the component images after the frequency decomposition, such that the processing differs between the high-frequency components and the low-frequency components; and
a decomposed image synthesis step of synthesizing the component images after the image processing step to regenerate one Bayer image; an image processing program causing a computer to execute the above steps.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020167001240A KR101685885B1 (ko) | 2013-06-25 | 2014-06-19 | 화상 처리 장치, 화상 처리 방법 및 화상 처리 프로그램 |
US14/901,174 US9542730B2 (en) | 2013-06-25 | 2014-06-19 | Image processing device, image processing method, and image processing program |
CN201480035808.1A CN105340268B (zh) | 2013-06-25 | 2014-06-19 | 图像处理装置、图像处理方法以及图像处理程序 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013-132987 | 2013-06-25 | ||
JP2013132987A JP5668105B2 (ja) | 2013-06-25 | 2013-06-25 | 画像処理装置、画像処理方法及び画像処理プログラム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014208434A1 true WO2014208434A1 (ja) | 2014-12-31 |
Family
ID=52141774
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2014/066233 WO2014208434A1 (ja) | 2013-06-25 | 2014-06-19 | 画像処理装置、画像処理方法及び画像処理プログラム |
Country Status (5)
Country | Link |
---|---|
US (1) | US9542730B2 (ja) |
JP (1) | JP5668105B2 (ja) |
KR (1) | KR101685885B1 (ja) |
CN (1) | CN105340268B (ja) |
WO (1) | WO2014208434A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018093295A (ja) * | 2016-11-30 | 2018-06-14 | 日本放送協会 | コントラスト補正装置及びプログラム |
WO2022107417A1 (ja) * | 2020-11-20 | 2022-05-27 | 株式会社日立製作所 | 画像処理装置、画像処理方法および画像処理プログラム |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10163192B2 (en) * | 2015-10-27 | 2018-12-25 | Canon Kabushiki Kaisha | Image encoding apparatus and method of controlling the same |
CN106067164B (zh) * | 2016-05-26 | 2018-11-20 | 哈尔滨工业大学 | Color image contrast enhancement algorithm based on adaptive wavelet-domain processing |
CN106651815B (zh) * | 2017-01-18 | 2020-01-17 | 聚龙融创科技有限公司 | Method and device for processing Bayer-format video images |
US11593918B1 (en) * | 2017-05-16 | 2023-02-28 | Apple Inc. | Gradient-based noise reduction |
KR102349376B1 (ko) | 2017-11-03 | 2022-01-11 | 삼성전자주식회사 | Electronic device and image correction method thereof |
US11113796B2 (en) | 2018-02-09 | 2021-09-07 | Delta Electronics, Inc. | Image enhancement circuit and method thereof |
TWI707583B (zh) * | 2019-07-17 | 2020-10-11 | 瑞昱半導體股份有限公司 | Pixel channel imbalance compensation method and system for an image sensing circuit |
US20220188985A1 (en) * | 2020-12-11 | 2022-06-16 | Samsung Electronics Co., Ltd. | Method and apparatus for adaptive hybrid fusion |
KR20220096624A (ko) * | 2020-12-31 | 2022-07-07 | 엘지디스플레이 주식회사 | Display device |
CN113191210B (zh) * | 2021-04-09 | 2023-08-29 | 杭州海康威视数字技术股份有限公司 | Image processing method, apparatus, and device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008242696A (ja) * | 2007-03-27 | 2008-10-09 | Casio Comput Co Ltd | Image processing device and camera |
JP2011130112A (ja) * | 2009-12-16 | 2011-06-30 | Sony Corp | Display support device and imaging device |
JP2011239231A (ja) * | 2010-05-11 | 2011-11-24 | Canon Inc | Image processing device and control method of image processing device |
JP2013088680A (ja) * | 2011-10-19 | 2013-05-13 | Olympus Corp | Microscope system |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE602006021728D1 (de) | 2005-03-31 | 2011-06-16 | Nippon Kogaku Kk | Image processing method |
JP5163489B2 (ja) * | 2006-03-31 | 2013-03-13 | 株式会社ニコン | Image processing method, image processing program, and image processing device |
JP4182446B2 (ja) * | 2006-07-14 | 2008-11-19 | ソニー株式会社 | Information processing device and method, program, and recording medium |
US8009935B2 (en) * | 2007-07-30 | 2011-08-30 | Casio Computer Co., Ltd. | Pixel interpolation circuit, pixel interpolation method, and recording medium |
KR100891825B1 (ko) * | 2007-11-27 | 2009-04-07 | 삼성전기주식회사 | Apparatus and method for removing color noise from digital images |
US8280185B2 (en) * | 2008-06-27 | 2012-10-02 | Microsoft Corporation | Image denoising techniques |
CN101783940A (zh) * | 2009-05-11 | 2010-07-21 | 北京航空航天大学 | Joint source-channel coding method based on wavelet frame transform |
CN102646272A (zh) * | 2012-02-23 | 2012-08-22 | 南京信息工程大学 | Wavelet fusion method for meteorological satellite cloud images combining local variance and weighting |
CN102663426B (zh) * | 2012-03-29 | 2013-12-04 | 东南大学 | Face recognition method based on wavelet multi-scale analysis and local ternary patterns |
CN102708553A (zh) * | 2012-05-29 | 2012-10-03 | 飞依诺科技(苏州)有限公司 | Real-time ultrasound image speckle noise suppression method |
JP2014090359A (ja) * | 2012-10-31 | 2014-05-15 | Sony Corp | Image processing device, image processing method, and program |
2013
- 2013-06-25 JP JP2013132987A patent/JP5668105B2/ja active Active
2014
- 2014-06-19 CN CN201480035808.1A patent/CN105340268B/zh active Active
- 2014-06-19 US US14/901,174 patent/US9542730B2/en active Active
- 2014-06-19 KR KR1020167001240A patent/KR101685885B1/ko active IP Right Grant
- 2014-06-19 WO PCT/JP2014/066233 patent/WO2014208434A1/ja active Application Filing
Also Published As
Publication number | Publication date |
---|---|
US20160140696A1 (en) | 2016-05-19 |
JP2015008414A (ja) | 2015-01-15 |
KR20160011239A (ko) | 2016-01-29 |
KR101685885B1 (ko) | 2016-12-12 |
US9542730B2 (en) | 2017-01-10 |
JP5668105B2 (ja) | 2015-02-12 |
CN105340268B (zh) | 2017-06-16 |
CN105340268A (zh) | 2016-02-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5668105B2 (ja) | Image processing device, image processing method, and image processing program | |
US9792672B2 (en) | Video capture devices and methods | |
JP6352617B2 (ja) | Video processing device and method | |
CN109889800B (zh) | Image enhancement method and device, electronic device, and storage medium | |
KR101872172B1 (ko) | Image processing device, image processing method, recording medium, and program | |
TWI387312B (zh) | Image noise elimination method and image processing device | |
US11388355B2 (en) | Multispectral image processing system and method | |
WO2006068025A1 (ja) | Image processing method | |
JP7132364B2 (ja) | Method, apparatus, and system for multi-level bilateral noise filtering | |
WO2012147523A1 (ja) | Imaging device and image generation method | |
JP5765893B2 (ja) | Image processing device, imaging device, and image processing program | |
CN113168671A (zh) | Noise estimation | |
WO2014077245A1 (ja) | Noise removal system, noise removal method, and program | |
JP2009224901A (ja) | Image dynamic range compression method, image processing circuit, imaging device, and program | |
JP6807795B2 (ja) | Noise removal device and program | |
US20140118580A1 (en) | Image processing device, image processing method, and program | |
JP5654860B2 (ja) | Image processing device, control method thereof, and program | |
JP5682443B2 (ja) | Image processing device, image processing method, and image processing program | |
JP4866756B2 (ja) | Imaging device | |
JP6730916B2 (ja) | Contrast correction device and program | |
JP4339671B2 (ja) | Imaging device | |
JP2011043901A (ja) | Image processing device, image processing method, image processing program, and electronic apparatus | |
JP6417938B2 (ja) | Image processing device, image processing method, program, and recording medium | |
JP2016218502A (ja) | Image processing device, control method, and program | |
JP2014107611A (ja) | Image processing device and image processing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201480035808.1 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 14818519 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14901174 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 20167001240 Country of ref document: KR Kind code of ref document: A |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 14818519 Country of ref document: EP Kind code of ref document: A1 |