WO2013100032A1 - Image processing apparatus and method, and imaging apparatus - Google Patents
Image processing apparatus and method, and imaging apparatus
- Publication number
- WO2013100032A1 (PCT/JP2012/083837)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- color
- pixel
- filter
- pixels
- correction
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/68—Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits
- H04N9/69—Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits for modifying the colour signals by gamma correction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/843—Demosaicing, e.g. interpolating colour pixel values
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/88—Camera processing pipelines; Components thereof for processing colour signals for colour balance, e.g. white-balance circuits or colour temperature control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
Definitions
- the present invention relates to an image processing apparatus and method, and an imaging apparatus, and more particularly to a technique for eliminating the influence of color mixing between pixels of a mosaic image corresponding to a color filter array arranged in a single-plate imaging element.
- In a single-plate imaging element, color mixture occurs due to light leakage from adjacent pixels.
- Patent Documents 1 and 2 describe techniques for removing a color mixture component from a color signal containing that component.
- The signal processing apparatus described in Patent Document 1 sets correction parameters Ka, Kb, Kc, and Kd based on the signals of the four surrounding pixels adjacent to the correction target pixel on each side, and uses them to perform color mixture correction processing.
- Because the four correction parameters Ka, Kb, Kc, and Kd can be set independently, color mixture correction according to the directionality can be realized even when the color mixture from the surrounding pixels into the target pixel has directionality.
- In Patent Document 2, a coefficient table is stored in which a correction coefficient, relating to the signal component mixed from peripheral pixels into each pixel, is associated with each pixel position in a pixel array in which a plurality of pixels are arranged in the row and column directions.
- The imaging device described in Patent Document 2 reads out the corresponding correction coefficient from the coefficient table according to the position of the correction target pixel, and corrects the signal of the correction target pixel using the signals of its peripheral pixels and the correction coefficients.
- The invention described in Patent Document 1 is characterized in that the four correction parameters Ka, Kb, Kc, and Kd for the four surrounding pixels adjacent to the correction target pixel can be set independently. When color mixing occurs isotropically (when there is no directionality), the correction parameters Ka, Kb, Kc, and Kd can be set to the same value. These correction parameters can be controlled in real time from the outside (camera control unit). However, Patent Document 1 does not describe how the correction parameters Ka, Kb, Kc, and Kd are controlled with respect to the position of each correction target pixel.
- a coefficient table that stores a correction coefficient related to a signal component mixed from a peripheral pixel to each pixel in association with each pixel position in the sensor plane.
- An appropriate correction coefficient can be used for each pixel position.
- Japanese Patent Application Laid-Open No. 2004-228561 describes that a relational expression is stored instead of the coefficient table to reduce the data amount.
- However, an appropriate correction coefficient cannot be calculated when the variation of the correction coefficient within the sensor plane does not fit the specific relational expression.
- The present invention has been made in view of such circumstances, and an object thereof is to provide an image processing apparatus and method, and an imaging apparatus, that can minimize the data amount of color mixture rates stored in advance, regardless of the type of mosaic image (color filter array), and can perform color mixture correction satisfactorily.
- An image processing apparatus according to the present invention includes an image acquisition unit that acquires a mosaic image including pixels of a plurality of colors, a storage means that stores the color mixture rate from each peripheral pixel adjacent to a target pixel for color mixture correction of the mosaic image, the color mixture rate being stored in correspondence with a combination of a first parameter indicating the azimuth direction of the peripheral pixel and a second parameter indicating the color of the peripheral pixel, and a color mixture correction unit that removes, from the color signal of each pixel of the mosaic image acquired by the image acquisition unit, the color mixture component from surrounding pixels included in that color signal.
- The color mixture correction unit acquires the color signal of an arbitrary target pixel for color mixture correction and the color signals of its peripheral pixels, reads the corresponding color mixture rate from the storage means based on the azimuth direction and color of each peripheral pixel, and removes the color mixture component included in the color signal of the target pixel based on the color signals of the peripheral pixels adjacent to the target pixel and the color mixture rate read for each peripheral pixel.
- In other words, the color mixture component included in the color signal of an arbitrary target pixel is removed based on the color signals of the plurality of peripheral pixels adjacent to the target pixel for color mixture correction and the influence (color mixture rate) each of them exerts on that color signal.
- Here, the color mixture rate of each peripheral pixel is determined according to the azimuth direction in which the peripheral pixel lies relative to the target pixel (for example, the vertical and horizontal directions) and the color of the peripheral pixel. This is because the influence of color mixing depends on the azimuth direction and color of the surrounding pixels.
- the storage unit stores a color mixture ratio corresponding to a combination of a first parameter indicating the azimuth direction of the peripheral pixel and a second parameter indicating the color of the peripheral pixel.
- the corresponding color mixture rate is read from the storage unit based on the azimuth direction and color of the peripheral pixel.
- Accordingly, the storage means need only store as many color mixture rates as there are combinations of peripheral-pixel azimuth directions and peripheral-pixel colors, so the data amount of the color mixture rates stored in advance can be minimized.
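As a rough sketch of this table sizing (the direction names, color names, and rate values below are hypothetical, not the patent's measured data), a color mixture rate table keyed by the first and second parameters holds only directions × colors entries, independent of the mosaic pattern size:

```python
# Hypothetical color mixture rate table: one entry per combination of
# azimuth direction (first parameter) and peripheral-pixel color
# (second parameter). The numeric value 0.01 is a placeholder; a real
# sensor's rates would be measured in advance.
DIRECTIONS = ("up", "down", "left", "right")
COLORS = ("R", "G", "B")

mixture_rates = {
    (d, c): 0.01  # placeholder rate for illustration only
    for d in DIRECTIONS
    for c in COLORS
}

# The table size depends only on the parameter combinations, not on the
# basic array pattern size (e.g. 6 x 6) of the mosaic image.
assert len(mixture_rates) == len(DIRECTIONS) * len(COLORS)  # 12 entries
```

This is why the stored data amount stays small even for a complex color filter array: the table grows with the number of parameter combinations, not with the number of pixel positions.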
- The mosaic image preferably includes a pixel group of a basic array pattern of M × N pixels (M, N: integers of 2 or more, at least one being 3 or more), the basic array pattern being repeatedly arranged in the horizontal and vertical directions.
- Since the color mixture rates stored in advance in the storage means do not depend on the type of mosaic image, the larger the pixel size of the basic array pattern and the more complex the mosaic image, the greater the effect obtained.
- Preferably, the color mixture correction unit includes a parameter acquisition unit that acquires the first and second parameters of each peripheral pixel corresponding to the target pixel based on the position of an arbitrary target pixel on the mosaic image, and reads the corresponding color mixture rate from the storage unit based on the acquired first and second parameters. That is, once the position of the target pixel is specified, the first and second parameters of its peripheral pixels can also be acquired, and the color mixture rate corresponding to these parameters can be read from the storage unit.
- Preferably, the mosaic image is an image output from an image sensor having an element structure that shares an amplifier for each predetermined pixel group. In this case, the storage unit stores the color mixture rate corresponding to the combination of the first, second, and third parameters, where the third parameter is positional information indicating which pixel within the amplifier-sharing pixel group the target pixel for color mixture correction is. The parameter acquisition unit acquires the first, second, and third parameters for each peripheral pixel based on the position of an arbitrary target pixel on the mosaic image, and the color mixture correction unit reads the corresponding color mixture rate from the storage means for each peripheral pixel based on the acquired first, second, and third parameters.
- An image sensor having an element structure that shares an amplifier for each predetermined pixel group has a difference in output characteristics depending on the positional relationship between the amplifier and each pixel.
- The storage means stores color mixture rates that take this difference into account. That is, the storage unit uses positional information indicating the position of the target pixel (its own pixel position) within the amplifier-sharing pixel group as the third parameter, and stores the color mixture rate corresponding to the combination of the first, second, and third parameters.
- The color mixture correction unit reads the corresponding color mixture rate from the storage unit for each peripheral pixel of the target pixel based on the first, second, and third parameters acquired for the target pixel, and uses it for color mixture correction of the target pixel.
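A sketch of how adding the third parameter enlarges the lookup key (the 2 × 2 shared-amplifier group size, names, and rates here are assumptions for illustration, not the patent's configuration):

```python
# Extending the hypothetical lookup key with a third parameter: the
# position of the target pixel within the group of pixels sharing one
# amplifier. A 2x2 shared group is assumed here, giving 4 positions.
DIRECTIONS = ("up", "down", "left", "right")
COLORS = ("R", "G", "B")
AMP_POSITIONS = range(4)  # index within an assumed 2x2 shared group

rates_3param = {
    (d, c, p): 0.01  # placeholder rate
    for d in DIRECTIONS for c in COLORS for p in AMP_POSITIONS
}

def amp_position(x, y, group_w=2, group_h=2):
    """Third parameter: which pixel of the shared-amplifier group (x, y) is."""
    return (y % group_h) * group_w + (x % group_w)

# Table grows only by the number of in-group positions: 4 x 3 x 4 = 48.
assert len(rates_3param) == 48
assert amp_position(0, 0) == 0 and amp_position(1, 1) == 3
```

The third parameter multiplies the table size by the number of pixels in the shared group, which remains a small constant compared with storing a rate for every sensor pixel position.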
- Preferably, the storage unit stores a color mixture rate for each divided region when the entire region of the mosaic image is divided into a plurality of divided regions, and the color mixture correction unit reads the corresponding color mixture rate from the storage unit according to which of the plurality of divided regions the target pixel belongs to.
- the central portion and the peripheral portion of the mosaic image have different color mixing ratios because the incident angle of subject light to each pixel of the image sensor is different. Therefore, the entire area of the mosaic image is divided into a plurality of divided areas so that the color mixture rate can be changed for each divided area.
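The region selection step can be sketched as follows (the 8 × 8 division and the sensor dimensions are hypothetical examples, not values from the patent):

```python
# Sketch: mapping a pixel to its divided region so that a per-region
# color mixture rate table can be selected. An 8x8 division is assumed.
def region_index(x, y, width, height, nx=8, ny=8):
    """Return the index of the divided region containing pixel (x, y)."""
    rx = min(x * nx // width, nx - 1)
    ry = min(y * ny // height, ny - 1)
    return ry * nx + rx

# For a hypothetical 6000x4000 sensor divided 8x8:
assert region_index(0, 0, 6000, 4000) == 0        # top-left region
assert region_index(5999, 3999, 6000, 4000) == 63  # bottom-right region
```

The correction would then look up the mixture rate table stored for that region index, letting the center and periphery of the image use different rates to account for the varying incident angle.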
- Preferably, the color mixture correction unit calculates the color mixture component by multiplying the color signal of each peripheral pixel adjacent to the target pixel for color mixture correction by the color mixture rate read from the storage unit for that peripheral pixel's position, and subtracts the calculated color mixture component from the color signal of the target pixel.
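The multiply-and-subtract step just described can be sketched minimally as below (the signal values and rates are made-up numbers for illustration):

```python
# Minimal sketch of the color mixture correction: multiply each adjacent
# pixel's signal by its mixture rate and subtract the summed mixture
# component from the target pixel's signal.
def correct_pixel(target_signal, neighbors, rates):
    """neighbors: {(direction, color): signal}; rates: same keys -> rate."""
    mixed = sum(sig * rates[key] for key, sig in neighbors.items())
    return target_signal - mixed

# Hypothetical values: R neighbors above/below, B neighbors left/right.
neighbors = {("up", "R"): 100.0, ("down", "R"): 100.0,
             ("left", "B"): 200.0, ("right", "B"): 200.0}
rates = {("up", "R"): 0.02, ("down", "R"): 0.02,
         ("left", "B"): 0.01, ("right", "B"): 0.01}
# mixture component = 100*0.02*2 + 200*0.01*2 = 8.0
assert correct_pixel(500.0, neighbors, rates) == 492.0
```

Note how the rate lookup key is exactly the (direction, color) pair of the first and second parameters, so the same small table serves every target pixel.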
- Preferably, the apparatus further includes a white balance gain calculation unit that calculates a white balance gain based on the color signals of the pixels of the mosaic image from which the color mixture component has been removed by the color mixture correction unit, and a white balance correction unit that performs white balance correction on the color signals of the pixels of that mosaic image based on the calculated white balance gain. Since the WB gain for white balance (WB) correction is calculated from the mosaic image after color mixture correction, an appropriate WB gain that excludes the influence of color mixing can be calculated, and white balance correction can thereby be performed satisfactorily.
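As a simplified illustration of WB gain calculation on the corrected mosaic (a gray-world-style sketch under assumed channel averages; the patent's actual centroid-based light source estimation is more elaborate):

```python
# Sketch: compute WB gains that scale the R and B channel averages of
# the color-mixture-corrected mosaic to match the G channel average.
def wb_gains(r_avg, g_avg, b_avg):
    """Return (gain_r, gain_g, gain_b) equalizing R and B to G."""
    return g_avg / r_avg, 1.0, g_avg / b_avg

# Hypothetical post-correction channel averages:
gr, gg, gb = wb_gains(r_avg=80.0, g_avg=100.0, b_avg=125.0)
assert (gr, gg, gb) == (1.25, 1.0, 0.8)
# Applying the gains balances a neutral patch to equal channel levels:
assert (80.0 * gr, 100.0 * gg, 125.0 * gb) == (100.0, 100.0, 100.0)
```

Because the averages are taken after color mixture removal, the resulting gains are not biased by signal leaked in from differently colored neighbors.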
- An image processing method according to the present invention includes an image acquisition step of acquiring a mosaic image including pixels of a plurality of colors, and a color mixture correction step of removing, from the color signal of each pixel of the acquired mosaic image, the color mixture component from surrounding pixels included in that color signal, using a storage means that stores the color mixture rate from each peripheral pixel adjacent to a target pixel for color mixture correction of the mosaic image.
- In the color mixture correction step, the color signal of an arbitrary target pixel for color mixture correction and the color signals of its peripheral pixels are acquired, the corresponding color mixture rate is read from the storage means based on the azimuth direction and color of each peripheral pixel, and the color mixture component included in the color signal of the target pixel is removed.
- An imaging apparatus according to the present invention includes an imaging unit including an imaging optical system and an imaging element on which a subject image is formed via the imaging optical system, an image acquisition unit that acquires a mosaic image output from the imaging unit, and the image processing apparatus described above.
- Preferably, the imaging element is provided with color filters of a predetermined color filter array disposed over a plurality of pixels, each including a photoelectric conversion element, arranged in the horizontal and vertical directions.
- Preferably, the color filter array includes a predetermined basic array pattern in which are arranged a first filter corresponding to one or more first colors and second filters corresponding to two or more second colors whose contribution ratio for obtaining a luminance signal is lower than that of the first color; the basic array pattern is repeatedly arranged in the horizontal and vertical directions; and the basic array pattern is an array pattern corresponding to M × N pixels (M, N: integers of 2 or more, at least one being 3 or more).
- the mosaic image output from the image sensor having the color filter array becomes a mosaic image with a complex combination of colors.
- However, since the color mixture rates stored in advance in the storage unit do not depend on the type of mosaic image, there is no problem even if the pixel size of the basic array pattern is large.
- Preferably, one or more of the first filters are arranged in each horizontal, vertical, diagonally upper right, and diagonally lower right line of the color filter array; one or more of the second filters corresponding to each color of the second colors are arranged in each horizontal and vertical line of the color filter array within the basic array pattern; and the ratio of the number of pixels of the first color corresponding to the first filter is larger than the ratio of the number of pixels of each color of the second colors corresponding to the second filters.
- Since the first filter, corresponding to the first color that contributes most to obtaining the luminance signal, is arranged in each horizontal, vertical, diagonally upper right, and diagonally lower right line of the color filter array, the reproduction accuracy of the synchronization processing in the high-frequency region can be improved. Also, since one or more second filters corresponding to the two or more second colors other than the first color are arranged in each horizontal and vertical line of the color filter array within the basic array pattern, the generation of color moire (false color) can be reduced and high resolution can be achieved.
- With such a color filter array, processing can be performed according to the repeated pattern when the synchronization processing is performed in a subsequent stage. Furthermore, the ratio of the number of pixels of the first color is made different from the ratio of the number of pixels of each color of the second colors; in particular, since the ratio of the number of pixels of the first color, which contributes most to obtaining a luminance signal, is made larger, aliasing can be suppressed and high-frequency reproducibility is also good.
- the basic array pattern is a square array pattern corresponding to 3 ⁇ 3 pixels, and the first filter is disposed at the center and at the four corners.
- the first color is green (G)
- the second color is red (R) and blue (B)
- Preferably, the basic array pattern is a square array pattern corresponding to 6 × 6 pixels, and the color filter array is formed by alternately arranging, in the horizontal and vertical directions, a first array corresponding to 3 × 3 pixels, in which G filters are arranged at the center and four corners, R filters are arranged on the left and right of the central G filter, and B filters are arranged above and below it, and a second array, in which R filters are arranged above and below the central G filter and B filters are arranged on its left and right.
- Preferably, the imaging element has an element structure that shares an amplifier for each predetermined pixel group, and the predetermined pixel group includes K × L pixels (K ≤ M, L ≤ N; K and L are natural numbers).
- According to the present invention, color mixture rates corresponding to combinations of the plurality of parameters that affect the magnitude of the color mixture component included in the color signal of the target pixel for color mixture correction are stored in the storage unit in advance, and for an arbitrary target pixel, the color mixture rate of each adjacent peripheral pixel is read from the storage means and used for color mixture correction. Therefore, the data amount of the color mixture rates stored in advance can be minimized regardless of the type of mosaic image (color filter array), and the color mixture correction can be performed satisfactorily.
- FIG. 1 is a block diagram showing an embodiment of an imaging apparatus according to the present invention.
- The figure showing the color filter array arranged on the image pickup element
- The figure showing the basic array pattern shown in FIG. 2 divided into four blocks of 3 × 3 pixels
- A principal block diagram showing the internal configuration of the image processing unit shown in FIG. 1
- A diagram used to explain color mixture correction
- A block diagram showing an embodiment of the internal configuration of the color mixing correction unit shown in FIG. 4
- A flowchart showing an embodiment of the image processing method according to the present invention
- A graph showing the spectral sensitivity characteristics of an image pickup element provided with an R filter (red filter), G1 filter (first green filter), G2 filter (second green filter), and B filter (blue filter)
- A graph showing the spectral sensitivity characteristics of an image pickup element provided with an R filter, G filter, B filter, and W filter (transparent filter)
- FIG. 1 is a block diagram showing an embodiment of an imaging apparatus according to the present invention.
- The imaging apparatus 10 is a digital camera that records captured images in an internal memory (memory unit 26) or an external recording medium (not shown), and the operation of the entire apparatus is centrally controlled by a central processing unit (CPU) 12.
- the imaging apparatus 10 is provided with an operation unit 14 including a shutter button (shutter switch), a mode dial, a playback button, a MENU / OK key, a cross key, a zoom button, a BACK key, and the like.
- a signal from the operation unit 14 is input to the CPU 12, and the CPU 12 controls each circuit of the imaging apparatus 10 based on the input signal.
- The CPU 12 controls the lens unit 18, the shutter 20, and the image sensor 22 functioning as an image acquisition unit via the device control unit 16, and also performs shooting operation control, image processing control, image data recording/playback control, display control of the display unit 25, and the like.
- the lens unit 18 includes a focus lens, a zoom lens, a diaphragm, and the like.
- the light beam that has passed through the lens unit 18 and the shutter 20 is imaged on the light receiving surface of the image sensor 22.
- the image sensor 22 is a CMOS (Complementary Metal-Oxide Semiconductor) type, XY address type, or CCD (Charge Coupled Device) type color image sensor.
- a large number of light receiving elements (photodiodes) are two-dimensionally arranged on the light receiving surface of the image sensor 22.
- the subject image formed on the light receiving surface of each photodiode is converted into a signal voltage (or charge) of an amount corresponding to the amount of incident light.
- FIG. 2 is a diagram showing an embodiment of the image pickup device 22, and particularly shows a new color filter array arranged on the light receiving surface of the image pickup device 22.
- The color filter array of the image sensor 22 includes a basic array pattern P (the pattern indicated by the thick frame) composed of a square array pattern corresponding to M × N (6 × 6) pixels, and the basic array pattern P is repeatedly arranged in the horizontal and vertical directions. That is, in this color filter array, filters of the colors red (R), green (G), and blue (B) (R filters, G filters, and B filters) are arranged with a predetermined periodicity. Since the R, G, and B filters are arranged with a predetermined periodicity in this way, when performing image processing of the RGB RAW data (mosaic image) read from the image sensor 22, the processing can be performed according to the repeated pattern.
- In the color filter array shown in FIG. 2, one or more G filters, corresponding to the color that contributes most to obtaining the luminance signal, are arranged in each horizontal, vertical, diagonally upper right (NE), and diagonally lower right (NW) line.
- NE means the diagonally upper right direction, and NW means the diagonally lower right direction.
- the diagonally upper right and diagonally lower right directions are directions of 45 ° with respect to the horizontal direction.
- In the case of rectangular rather than square pixels, the diagonal direction is the direction of the diagonal of the rectangle, and its angle can vary according to the lengths of the long and short sides.
- Since the G filters corresponding to the luminance-system pixels are arranged in each horizontal, vertical, and diagonal (NE, NW) line of the color filter array, the reproduction accuracy of the synchronization processing in the high-frequency region can be improved regardless of the direction of the high frequency.
- In the color filter array shown in FIG. 2, one or more R filters and B filters, corresponding to the two or more colors other than G (in this embodiment, the R and B colors), are arranged in each horizontal and vertical line.
- the R filter and B filter are arranged in the horizontal and vertical lines of the color filter array, the occurrence of false colors (color moire) can be reduced. Thereby, an optical low-pass filter for reducing (suppressing) the occurrence of false colors can be omitted. Even when an optical low-pass filter is applied, a filter having a weak function of cutting high-frequency components for preventing the occurrence of false colors can be applied, and resolution can be maintained.
- In the basic array pattern P of the color filter array shown in FIG. 2, the numbers of R, G, and B pixels corresponding to the R, G, and B filters are 8, 20, and 8, respectively. That is, the ratio of the numbers of R, G, and B pixels is 2:5:2, and the ratio of G pixels, which contribute most to obtaining the luminance signal, is larger than the ratios of the R and B pixels of the other colors.
- As described above, the ratio of the number of G pixels differs from the ratios of the numbers of R and B pixels; in particular, since the ratio of G pixels, which contribute most to obtaining the luminance signal, is larger than the ratios of the R and B pixels, aliasing during the synchronization processing can be suppressed and high-frequency reproducibility can be improved.
- FIG. 3 shows the basic array pattern P shown in FIG. 2 divided into four blocks of 3 × 3 pixels.
- The basic array pattern P can also be understood as an array in which a 3 × 3 pixel A array surrounded by a solid frame and a 3 × 3 pixel B array surrounded by a broken frame are alternately arranged in the horizontal and vertical directions.
- In both the A array and the B array, G filters are arranged at the four corners and at the center, that is, on both diagonal lines.
- In the A array, the R filters are arranged in the horizontal direction and the B filters in the vertical direction with the central G filter interposed between them.
- In the B array, the B filters are arranged in the horizontal direction and the R filters in the vertical direction with the central G filter interposed between them. That is, the A array and the B array have the positional relationship of the R and B filters reversed, but are otherwise the same.
- By alternately arranging the A and B arrays in the horizontal and vertical directions, the G filters at the four corners of the A and B arrays form a square array of G filters corresponding to 2 × 2 pixels.
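The A/B construction above can be sketched and checked as follows (the list-of-lists encoding is an illustrative assumption; the filter placements follow the description of the A and B arrays):

```python
# 3x3 A array: G at center and corners, R left/right of the central G
# (horizontal), B above/below it (vertical).
A = [["G", "B", "G"],
     ["R", "G", "R"],
     ["G", "B", "G"]]
# 3x3 B array: same as A with the R and B positions swapped.
B = [["G", "R", "G"],
     ["B", "G", "B"],
     ["G", "R", "G"]]

# 6x6 basic array pattern P: A and B alternate horizontally/vertically.
P = [A[r] + B[r] for r in range(3)] + [B[r] + A[r] for r in range(3)]

flat = [c for row in P for c in row]
# Pixel-count ratio R:G:B = 8:20:8, i.e. 2:5:2 as stated in the text.
assert (flat.count("R"), flat.count("G"), flat.count("B")) == (8, 20, 8)
# Where four corner G filters of adjacent A/B blocks meet, a 2x2 square
# of G filters is formed.
assert {P[2][2], P[2][3], P[3][2], P[3][3]} == {"G"}
```

Running the checks confirms both the 2:5:2 pixel ratio and the 2 × 2 G square described in the text.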
- the signal charge accumulated in the image pickup device 22 having the above configuration is read out as a voltage signal corresponding to the signal charge based on a read signal applied from the device control unit 16.
- the voltage signal read from the image sensor 22 is applied to the A / D converter 24, where it is sequentially converted into digital R, G, and B signals corresponding to the color filter array, and temporarily stored in the memory unit 26. Saved.
- the memory unit 26 includes SDRAM (Synchronous Dynamic Random Access Memory) that is a volatile memory, EEPROM (Electrically Erasable Programmable Read-Only Memory; storage means) that is a rewritable nonvolatile memory, and the like.
- the SDRAM is used as a work area when the CPU 12 executes a program, and as a storage area that temporarily holds a digital image signal that has been captured and acquired.
- the EEPROM stores a camera control program including an image processing program, pixel defect information of the image sensor 22, various parameters and tables used for image processing including color mixture correction, and the like.
- The image processing unit 28 performs predetermined signal processing such as color mixture correction, white balance correction, gamma correction, synchronization processing (demosaic processing), and RGB/YC conversion on the digital image signal once stored in the memory unit 26.
- the synchronization process is a process for calculating all color information for each pixel from a mosaic image corresponding to the color filter array of the single-plate color image sensor, and is also referred to as a color interpolation process or a demosaicing process. For example, in the case of an image sensor made up of three color filters of RGB, this is a process of calculating color information for all RGB for each pixel from a mosaic image made of RGB.
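As a hedged, toy-scale illustration of what synchronization (demosaicing) computes, the sketch below estimates each pixel's missing colors as the mean of same-color samples in a 3 × 3 window; this is not the patent's algorithm, and practical demosaicing for the 6 × 6 array is far more sophisticated:

```python
# Toy synchronization (demosaic) sketch: any color not sampled at a
# pixel is estimated as the mean of that color's samples in the 3x3
# neighborhood. Purely illustrative.
def demosaic(mosaic, cfa):
    """mosaic: 2D signal values; cfa: 2D color names. Per-pixel RGB dicts."""
    h, w = len(mosaic), len(mosaic[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            full = {}
            for color in ("R", "G", "B"):
                samples = [mosaic[j][i]
                           for j in range(max(0, y - 1), min(h, y + 2))
                           for i in range(max(0, x - 1), min(w, x + 2))
                           if cfa[j][i] == color]
                full[color] = sum(samples) / len(samples) if samples else 0.0
            row.append(full)
        out.append(row)
    return out

cfa = [["G", "R"], ["B", "G"]]           # tiny 2x2 mosaic for illustration
mosaic = [[10.0, 20.0], [30.0, 40.0]]
rgb = demosaic(mosaic, cfa)
assert rgb[0][0] == {"R": 20.0, "G": 25.0, "B": 30.0}
```

The point of the example is only the definition: every output pixel ends up with all color information, interpolated from the single-color samples of the mosaic.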
- the details of the image processing apparatus (image processing unit 28) according to the present invention will be described later.
- the image data processed by the image processing unit 28 is encoded into data for image display by the encoder 30 and is output to the display unit 25 provided on the back of the camera via the driver 32. As a result, the subject image is continuously displayed on the display screen of the display unit 25.
- When the shutter button is half-pressed, the CPU 12 starts an AF (automatic focus) operation and an AE (automatic exposure adjustment) operation, moves the focus lens of the lens unit 18 in the optical axis direction via the device control unit 16, and controls the focus lens to the in-focus position.
- The CPU 12 calculates the brightness of the subject (shooting Ev value) based on the image data output from the A/D converter 24 when the shutter button is half-pressed, and determines the exposure conditions (F value, shutter speed) based on this shooting Ev value.
- When the shutter button is fully pressed, the aperture is controlled according to the determined exposure conditions, and the charge accumulation time in the shutter 20 and the image sensor 22 is controlled to perform the main imaging.
- Image data of an RGB mosaic image (image corresponding to the color filter array shown in FIG. 2) read from the image sensor 22 during the main imaging and A / D converted by the A / D converter 24 is stored in the memory unit 26. Temporarily stored.
- The image data temporarily stored in the memory unit 26 is appropriately read out by the image processing unit 28, where predetermined signal processing including color mixture correction, white balance correction, gamma correction, synchronization processing (color interpolation processing), and RGB/YC conversion is performed.
- RGB / YC converted image data (YC data) is compressed according to a predetermined compression format (for example, JPEG (Joint Photographic Experts Group) method).
- the compressed image data is recorded in an internal memory or an external memory in a predetermined image file (for example, an Exif (Exchangeable image file format) file) format.
- FIG. 4 is a principal block diagram showing the internal configuration of the image processing unit 28 shown in FIG.
- The image processing unit 28 mainly includes a color mixing correction unit (color mixing correction means) 100, a white balance (WB) correction unit (white balance correction means) 200, a signal processing unit 300 that performs signal processing such as gamma correction, synchronization processing, and RGB/YC conversion, an RGB integration unit 400, and a white balance (WB) gain calculation unit (white balance gain calculation means) 500.
- the raw data (mosaic image) in the color filter array output from the image sensor 22 at the time of shooting is temporarily stored in the memory unit 26.
- the image processing unit 28 acquires a mosaic image (RGB color signal) from the memory unit 26.
- The acquired RGB color signals are input dot-sequentially to the color mixing correction unit 100.
- the color mixing correction unit 100 removes color mixing components from peripheral pixels included in the color signal of the target pixel for color mixing correction that is input dot-sequentially. Details of the color mixing correction unit 100 will be described later.
- the color signal of each pixel of the mosaic image from which the color mixture component has been removed by the color mixture correction unit 100 is added to the WB correction unit 200 and also to the RGB integration unit 400.
- The RGB integration unit 400 calculates an integrated average value of each of the RGB color signals for each divided region obtained by dividing one screen into 8 × 8 or 16 × 16 regions, and calculates the ratios (R/G, B/G) of these integrated average values. For example, when one screen is divided into 64 divided regions of 8 × 8, 64 pieces of color information (R/G, B/G) are calculated.
- The WB gain calculation unit 500 calculates the WB gain based on the color information (R/G, B/G) for each divided region input from the RGB integration unit 400. Specifically, the centroid position of the distribution of the color information of the 64 divided regions in the color space with R/G and B/G axis coordinates is calculated, and the color temperature of the ambient light is estimated from the color information indicated by the centroid position. Instead of the color temperature, the light source type having the color information indicated by the centroid position, for example, blue sky, shade, clear sky, fluorescent lamp (daylight, neutral white, white, warm white), tungsten, low tungsten, and the like, may be obtained to estimate the light source type at the time of shooting (see Japanese Patent Application Laid-Open No. 2007-53499), or the color temperature may be estimated from the estimated light source type.
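As a concrete illustration of the integration and centroid-based estimation described above, the following Python sketch computes per-region (R/G, B/G) color information and picks the nearest prepared gain pair. The grid size, the gain-table keys, and the gain values are illustrative assumptions, not values from the patent; a real implementation first maps the centroid to a color temperature or light source type and then looks up the prepared gains.

```python
from statistics import mean

def integrate_regions(r, g, b, grid=8):
    """Average each channel over grid x grid divided regions and return
    the per-region color information (R/G, B/G).  r, g, b are equally
    sized 2-D lists of channel values (a simplification: a real mosaic
    image has only one color per pixel site)."""
    h, w = len(g), len(g[0])
    rh, rw = h // grid, w // grid
    info = []
    for j in range(grid):
        for i in range(grid):
            cells = [(y, x) for y in range(j * rh, (j + 1) * rh)
                             for x in range(i * rw, (i + 1) * rw)]
            rs = mean(r[y][x] for y, x in cells)
            gs = mean(g[y][x] for y, x in cells)
            bs = mean(b[y][x] for y, x in cells)
            info.append((rs / gs, bs / gs))
    return info

def wb_gains(info, gain_table):
    """Estimate the ambient light from the centroid of the (R/G, B/G)
    distribution and return the nearest prepared (R gain, B gain)."""
    cx = mean(p[0] for p in info)
    cy = mean(p[1] for p in info)
    key = min(gain_table, key=lambda k: (k[0] - cx) ** 2 + (k[1] - cy) ** 2)
    return gain_table[key]
```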
- a WB gain for each RGB or RB for performing an appropriate white balance correction corresponding to the color temperature or light source type of the ambient light is prepared in advance.
- the WB gain calculation unit 500 reads out the corresponding WB gain for each RGB or RB based on the estimated color temperature or light source type of the ambient light, and outputs the read WB gain to the WB correction unit 200.
- the WB correction unit 200 performs white balance correction by applying the WB gain for each color input from the WB gain calculation unit 500 to each of the R, G, and B color signals input from the color mixture correction unit 100.
- The R, G, and B color signals output from the WB correction unit 200 are added to the signal processing unit 300, where gamma correction, synchronization processing associated with the color filter array of the image sensor 22, and signal processing such as RGB/YC conversion are performed, and the processed luminance signal Y and color difference signals Cr and Cb are output.
- the luminance data Y and the color difference data Cr and Cb output from the image processing unit 28 are compressed and then recorded in an internal memory or an external memory.
- FIG. 5 shows the G pixel (the target pixel for color mixing correction) corresponding to the upper-right G filter of the 2 × 2 four G filters in the color filter array shown in FIGS. 2 and 3, and the peripheral pixels adjacent to this target pixel (own pixel) above, below, to the left, and to the right (the upper pixel (B pixel), lower pixel (G pixel), left pixel (G pixel), and right pixel (R pixel)).
- In this case, the colors of the pixels adjacent to the target pixel in the up, down, left, and right directions are B, G, G, and R, respectively.
- For the 9 pixels of the 3 × 3 A array and the 9 pixels of the B array, whichever pixel is taken as the target pixel, the color combination of the four pixels adjacent to it above, below, left, and right is different.
- the influence of the color mixture from the surrounding pixels on the own pixel differs depending on the azimuth direction (up / down / left / right) of the surrounding pixel and the color (RGB) of the surrounding pixel.
- As the basic array pattern becomes larger, the number of color arrangement combinations further increases, and the data amount of the color mixing ratios increases.
- The image sensor 22 of this embodiment is a CMOS image sensor, and as shown in FIG. 6, pixel-sharing amplifiers A are embedded in the CMOS substrate, with K × L (2 × 2) pixels sharing one amplifier A. Owing to this element structure of the image sensor 22, the output level differs depending on the position 1 to 4 of the pixel (own pixel) relative to the amplifier A (the upper-left, upper-right, lower-left, or lower-right position with respect to the amplifier A).
- The memory unit 26 stores the correction table shown in FIG. 7. In this correction table, the four azimuth directions (up, down, left, right) of the peripheral pixels relative to the target pixel (own pixel) are taken as the first parameter P1; the three colors (RGB) of the peripheral pixels as the second parameter P2; and the position of the own pixel among the 2 × 2 pixels sharing the amplifier A (positions 1 to 4 in FIG. 6) as the third parameter P3. A total of 48 color mixing ratios, A1 to A12, B1 to B12, C1 to C12, and D1 to D12, corresponding to the combinations of these parameters P1 to P3, are stored in association with the parameters P1 to P3.
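The 48-entry table can be pictured as a lookup keyed by the three parameters: 4 directions × 3 colors × 4 amplifier positions. The sketch below is hypothetical: the ratio values are placeholders generated by a formula purely for illustration, whereas a real table holds ratios measured for the specific sensor.

```python
# Hypothetical correction table: 4 directions x 3 colors x 4 amplifier
# positions = 48 color mixing ratios.
DIRECTIONS = ("up", "down", "left", "right")   # first parameter P1
COLORS = ("R", "G", "B")                       # second parameter P2
POSITIONS = (1, 2, 3, 4)                       # third parameter P3

def build_table(base=0.01):
    """Build a dict keyed by (P1, P2, P3) -> color mixing ratio.
    The values are illustrative only; real ratios are measured per
    sensor (and, per FIG. 8, per divided region of the image)."""
    table = {}
    for d, p1 in enumerate(DIRECTIONS):
        for c, p2 in enumerate(COLORS):
            for p3 in POSITIONS:
                table[(p1, p2, p3)] = base * (1 + d) * (1 + c) / p3
    return table
```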
- Since the incident angle of the subject light on each pixel of the image sensor 22 differs between the central portion and the peripheral portion of the mosaic image, the color mixing ratio also differs. Therefore, as shown in FIG. 8, the entire region of the mosaic image is divided into, for example, 8 × 8 divided regions, and a correction table as shown in FIG. 7 is stored in the memory unit 26 for each of the divided regions [0][0] to [7][7].
- FIG. 9 is a block diagram showing an embodiment of the internal configuration of the color mixing correction unit 100 shown in FIG.
- The color mixing correction unit 100 includes a delay processing unit 110, a subtractor 112, multipliers 114 to 120, an adder 122, a parameter acquisition unit (parameter acquisition means) 130, and a color mixing ratio setting unit 140.
- The mosaic image (RGB color signals) acquired via the image sensor 22 is input dot-sequentially to the delay processing unit 110.
- the delay processor 110 includes 1H (horizontal line) line memories 110a to 110c.
- The dot-sequentially input RGB color signals are sequentially shifted through the line memories 110a to 110c at one-pixel processing intervals. If the color signal at the position indicated by hatching in the line memory 110b is the color signal of the target pixel for color mixing correction, the color signals at the same position in the line memories 110a and 110c are the color signals of the upper pixel and the lower pixel, respectively.
- the color signals at the left and right positions indicated by the diagonal lines in the line memory 110b are the color signals of the left pixel and the right pixel, respectively.
- That is, the delay processing unit 110 appropriately delays the dot-sequentially input RGB color signals so that the color signals of the color mixing correction target pixel and of its four peripheral pixels (the upper pixel, lower pixel, left pixel, and right pixel) are output at the same time.
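In software, the role of the three line memories, holding the previous, current, and next lines so that the target pixel and its four neighbors are available simultaneously, can be sketched as follows. Border handling by clamping is an assumption of this sketch; the patent does not specify edge behavior.

```python
def neighbors(mosaic, x, y):
    """Return the color signals of the target pixel and of its four
    peripheral pixels (up, down, left, right), clamping coordinates at
    the image border.  This mimics the three 1H line memories 110a-110c,
    which keep three consecutive lines available at once."""
    h, w = len(mosaic), len(mosaic[0])
    def at(px, py):
        px = min(max(px, 0), w - 1)   # border clamping (an assumption)
        py = min(max(py, 0), h - 1)
        return mosaic[py][px]
    return at(x, y), at(x, y - 1), at(x, y + 1), at(x - 1, y), at(x + 1, y)
```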
- the color signal of the target pixel output from the delay processing unit 110 is added to the subtractor 112, and the color signals of the upper pixel, the lower pixel, the left pixel, and the right pixel are added to the multipliers 114 to 120, respectively.
- Information indicating the position (x, y) of the target pixel in the mosaic image output from the delay processing unit 110 is added to the parameter acquisition unit 130.
- The parameter acquisition unit 130 acquires the first to fourth parameters P1 to P4 based on the information indicating the position (x, y) of the target pixel.
- Information indicating the position (x, y) of the target pixel can be acquired from the CPU 12 or the image processing unit 28 that instructs signal processing for each pixel of the mosaic image.
- From the position (x, y) of the target pixel, the third parameter P3 indicating the position of the target pixel (own pixel) (positions 1 to 4 in FIG. 6) and the fourth parameter P4 indicating the divided region to which the own pixel belongs can be determined, as can the colors of the peripheral pixels (the upper pixel, lower pixel, left pixel, and right pixel).
- The parameter acquisition unit 130 determines the first to fourth parameters P1 to P4 based on the information on the position (x, y) of the target pixel in the mosaic image as described above, and outputs them to the color mixing ratio setting unit 140. Note that four sets of the first parameter P1 and the second parameter P2 are output, one for each azimuth direction of the peripheral pixels.
- Based on the first to fourth parameters P1 to P4 input from the parameter acquisition unit 130, the color mixing ratio setting unit 140 reads the corresponding four color mixing ratios A to D from the memory unit 26 and adds these ratios A to D to the other inputs of the multipliers 114 to 120, respectively. That is, the color mixing ratio setting unit 140 selects the correction table corresponding to the divided region to which the target pixel belongs based on the fourth parameter P4, and reads from the selected correction table the four color mixing ratios A to D (see FIG. 7) for the azimuth directions of the peripheral pixels based on the first to third parameters P1 to P3.
- The multipliers 114 to 120 multiply the input color signals of the upper pixel, lower pixel, left pixel, and right pixel by the color mixing ratios A to D, respectively, and output the products to the adder 122.
- the adder 122 adds the four multiplied values that are input, and adds the added value to the other input of the subtractor 112. This added value corresponds to the color mixture component included in the color signal of the target pixel for color mixture correction.
- The color signal of the target pixel for color mixing correction is added to one input of the subtractor 112, and the subtractor 112 subtracts the added value (color mixing component) input from the adder 122 from the color signal of the target pixel, thereby outputting the color signal of the target pixel from which the color mixing component has been removed (i.e., the color-mixing-corrected signal).
- the calculation by the subtractor 112, the multipliers 114 to 120, and the adder 122 can be expressed by the following equations.
- [Formula 1] Color signal after correction = color signal before correction − (upper pixel × color mixing ratio A + lower pixel × color mixing ratio B + left pixel × color mixing ratio C + right pixel × color mixing ratio D)
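Expressed in Python, the subtractor/multiplier/adder network evaluates exactly this expression:

```python
def mix_correct(target, up, down, left, right, a, b, c, d):
    """[Formula 1]: subtract the estimated color mixing component (the
    weighted sum of the four peripheral pixels) from the target pixel."""
    return target - (up * a + down * b + left * c + right * d)
```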
- the color signal subjected to the color mixture correction by the color mixture correction unit 100 as described above is output to the subsequent WB correction unit 200 and the RGB integration unit 400 (FIG. 4).
- FIG. 10 is a flowchart showing an embodiment of the image processing method according to the present invention.
- the color mixing correction unit 100 first sets the position (x, y) of the target pixel for color mixing correction to an initial value (0, 0) before starting the color mixing correction (step S10).
- the color signal (pixel value) of the target pixel (x, y) and the color signals (pixel values) of the peripheral pixels above, below, left, and right of the target pixel (x, y) are acquired (step S12).
- The parameter acquisition unit 130 acquires the first to fourth parameters P1 to P4 as described above, based on the position (x, y) of the target pixel (step S14).
- The color mixing ratio setting unit 140 reads the corresponding color mixing ratios A to D from the memory unit 26 based on the first to fourth parameters P1 to P4 acquired by the parameter acquisition unit 130 (step S16).
- In step S18, based on the pixel value of the target pixel and the pixel values of the peripheral pixels acquired in step S12 and on the color mixing ratios A to D read in step S16, the arithmetic processing shown in [Formula 1] is performed, and color mixing correction that removes the color mixing component from the pixel value of the target pixel is carried out.
- In step S20, it is determined whether or not the color mixing correction of all target pixels has been completed. If it has not been completed (“No”), the process proceeds to step S22.
- In step S22, the position (x, y) of the target pixel is moved by one pixel; when the position (x, y) of the target pixel reaches the end of a horizontal line, it is returned to the opposite end and moved by one pixel in the vertical direction. The process then returns to step S12, and the processes from step S12 to step S20 are repeatedly executed.
- If it is determined in step S20 that the color mixing correction has been completed for all target pixels (“Yes”), the color mixing correction process is terminated.
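Steps S10 to S22 amount to a raster scan that applies [Formula 1] at every pixel. Below is a minimal sketch, assuming border clamping and a caller-supplied `ratios(x, y)` lookup standing in for the parameter acquisition and table read of steps S14 to S16:

```python
def correct_image(mosaic, ratios):
    """Raster-scan color mixing correction (steps S10-S22).
    `ratios(x, y)` returns the four mixing ratios (A, B, C, D) for the
    pixel at (x, y); coordinates outside the image are clamped."""
    h, w = len(mosaic), len(mosaic[0])
    def at(px, py):  # border clamping (an assumption of this sketch)
        return mosaic[min(max(py, 0), h - 1)][min(max(px, 0), w - 1)]
    out = [row[:] for row in mosaic]
    for y in range(h):                # S10: start at (0, 0); S22: advance
        for x in range(w):
            a, b, c, d = ratios(x, y)                  # S14-S16
            mix = (at(x, y - 1) * a + at(x, y + 1) * b
                   + at(x - 1, y) * c + at(x + 1, y) * d)
            out[y][x] = at(x, y) - mix                 # S18: [Formula 1]
    return out
```

Note that the neighbor values are taken from the uncorrected input image, matching the hardware structure in which the delay lines carry the raw sensor signals.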
- The present invention can be applied not only to the mosaic image of the color filter array shown in FIG. 2 but also to mosaic images of various color filter arrays, and may also be applied to a mosaic image of only the A array or the B array shown in FIG. 3.
- the size N ⁇ M of the basic array pattern to which the present invention is applied is preferably 5 ⁇ 5 or more, more preferably 10 ⁇ 10 or less. It should be noted that the present invention can be applied even when RGB pixels are randomly arranged without having a basic arrangement pattern. In this case, the present invention can be applied without changing the hardware for color mixture correction.
- In the case of an image sensor that does not share an amplifier among a pixel group, the third parameter P3 indicating the position of the own pixel with respect to the amplifier is not necessary; likewise, when the color mixing ratio does not differ between the central portion and the peripheral portion of the mosaic image, the fourth parameter P4 indicating the divided region is not necessary.
- In each of the above embodiments, green (G) is adopted as the first color and red (R) and blue (B) as the second colors; however, the colors that can be used in the color filter are not limited to these. For example, a filter that satisfies any of the following conditions (1) to (4) may be used instead of the G filter or in place of part of the G filter.
- Condition (1) is that the contribution rate for obtaining the luminance signal is 50% or more.
- This 50% contribution rate is a value determined in order to distinguish the first color (the G color, etc.) from the second colors (the R and B colors, etc.) according to each of the above embodiments; it is set so that the “first color” includes colors whose contribution rate for obtaining luminance data is relatively higher than that of the R color, B color, and the like.
- the G color has a higher contribution rate for obtaining the luminance (Y) signal (luminance data) than the R and B colors. That is, the contribution ratio of the R color and the B color is lower than that of the G color.
- the above-described image processing unit 28 generates a luminance signal (Y signal) from an RGB pixel signal having all RGB color information for each pixel according to the following equation (1).
- the following formula (1) is a formula generally used for generating a Y signal in the color image sensor 22.
- As shown in the following formula (1), the contribution rate of the G color to the luminance signal is 60%, higher than that of the R color (contribution rate 30%) and the B color (contribution rate 10%). Therefore, the G color is the color that contributes most to the luminance signal among the three primary colors.
- Y = 0.3R + 0.6G + 0.1B … Formula (1). Since the contribution rate of the G color is 60% as shown in the above formula (1), condition (1) is satisfied. The contribution rates of colors other than the G color can also be obtained by experiment or simulation. Therefore, a filter of a color other than G whose contribution rate is 50% or more can also be used as the first filter in each of the above embodiments.
- the color having a contribution rate of less than 50% is the second color (R color, B color, etc.) in each of the above embodiments, and the filter having this color is the second filter in each of the above embodiments.
- Condition (2) is that the peak of the transmittance of the filter is in the range of wavelengths from 480 nm to 570 nm.
- a value measured with a spectrophotometer is used as the transmittance of the filter.
- This wavelength range is a range determined in order to distinguish the first color (the G color, etc.) from the second colors (the R and B colors, etc.) according to each of the above embodiments; it is set so as to exclude the transmittance peaks of the R color, B color, and the like, whose contribution rate described above is relatively low, and to include the peaks of the G color and the like, whose contribution rate is relatively high.
- a filter having a transmittance peak within a wavelength range of 480 nm to 570 nm can be used as the first filter.
- a filter whose transmittance peak is outside the wavelength range of 480 nm to 570 nm is the second filter (R filter, B filter) according to each of the above embodiments.
- Condition (3) is that the transmittance within the wavelength range of 500 nm to 560 nm is higher than the transmittance of the second filter (R filter or B filter). Also in this condition (3), the value measured with a spectrophotometer, for example, is used as the transmittance of the filter.
- the wavelength range of the condition (3) is also a range determined to distinguish the first color (G color, etc.) and the second color (R, B color, etc.) according to the above embodiments.
- It is a range in which the transmittance of a filter of a color whose contribution rate described above is relatively higher than that of the R color, B color, and the like is higher than the transmittance of the R and B filters. Therefore, a filter whose transmittance is relatively high within the wavelength range of 500 nm to 560 nm can be used as the first filter, and a filter whose transmittance is relatively low can be used as the second filter.
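Conditions (2) and (3) can be checked mechanically given sampled transmittance curves. The curves and wavelength grid in the sketch below are assumptions for illustration; in practice the values come from spectrophotometer measurements:

```python
def satisfies_conditions_2_3(t_first, t_r, t_b, wavelengths):
    """Check conditions (2) and (3) for a candidate first filter:
    (2) its transmittance peak lies in 480-570 nm, and
    (3) its transmittance exceeds that of the R and B filters in
    500-560 nm.  All curves are sampled at `wavelengths` (nm)."""
    peak_wl = wavelengths[max(range(len(t_first)), key=lambda i: t_first[i])]
    cond2 = 480 <= peak_wl <= 570
    band = [i for i, wl in enumerate(wavelengths) if 500 <= wl <= 560]
    cond3 = all(t_first[i] > t_r[i] and t_first[i] > t_b[i] for i in band)
    return cond2 and cond3
```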
- Condition (4) is to use, as the first filter, filters of two or more colors that include the color contributing most to the luminance signal among the three primary colors (for example, the G color of RGB) and a color different from the three primary colors. In this case, filters corresponding to colors other than the colors of the first filter are the second filters.
- the G color G filter as the first filter is not limited to one type.
- a plurality of types of G filters may be used as the first filter. That is, the G filter of the color filter (basic arrangement pattern) according to each of the above embodiments may be appropriately replaced with the first G filter G1 or the second G filter G2.
- the first G filter G1 transmits G light in the first wavelength band
- the second G filter G2 transmits G light in the second wavelength band having a high correlation with the first G filter G1 (see FIG. 11).
- As the first G filter G1, an existing G filter (for example, the G filter G of the first embodiment) can be used, and as the second G filter G2, a filter having a high correlation with the first G filter G1 can be used.
- the peak value of the spectral sensitivity curve of the light receiving element in which the second G filter G2 is disposed is, for example, in the wavelength range of 500 nm to 535 nm (near the peak value of the spectral sensitivity curve of the light receiving element in which the existing G filter is disposed).
- For example, the method described in Japanese Patent Application Laid-Open No. 2003-284084 can be used as a method of determining the four-color (R, G1, G2, B) color filters.
- By setting the colors of the image acquired by the color image sensor to four types and thereby increasing the acquired color information, colors can be represented more accurately than when only three types of colors (RGB) are acquired.
- Since the transmittances of the first and second G filters G1 and G2 are basically the same as the transmittance of the G filter G of the first embodiment, their contribution rate for obtaining the luminance signal is higher than 50%. Accordingly, the first and second G filters G1 and G2 satisfy the above condition (1).
- The transmittance peak of each of the G filters G1 and G2 (the sensitivity peak of each G pixel) is within the wavelength range of 480 nm to 570 nm, and the transmittances of the G filters G1 and G2 are higher than the transmittances of the R and B filters within the wavelength range of 500 nm to 560 nm. For this reason, each of the G filters G1 and G2 also satisfies the above conditions (2) and (3).
- The arrangement of the G filters G1 and G2 may be changed as appropriate. Moreover, the number of types of G filters may be increased to three or more.
- <Transparent filter (W filter)>
- In the above embodiments, a color filter mainly composed of color filters corresponding to the RGB colors has been shown; however, some of these color filters may be transparent filters W (white pixels). In particular, it is preferable to arrange the transparent filters W in place of part of the first filter (the G filter G). By replacing part of the G pixels with white pixels in this way, deterioration of color reproducibility can be suppressed even when the pixel size is miniaturized. The transparent filter W is a filter of a transparent color (first color).
- the transparent filter W is a filter that can transmit light corresponding to the wavelength range of visible light, and has a transmittance of 50% or more for each color of RGB, for example. Since the transmittance of the transparent filter W is higher than that of the G filter G, the contribution rate for obtaining the luminance signal is also higher than that of the G color (60%), and satisfies the above condition (1).
- the transmittance peak of the transparent filter W (the peak of the sensitivity of the white pixel) is in the wavelength range of 480 nm to 570 nm. Further, the transmittance of the transparent filter W is higher than the transmittance of the RB filters R and B within the wavelength range of 500 nm to 560 nm. For this reason, the transparent filter W also satisfies the above-described conditions (2) and (3). Note that the G filter G also satisfies the above-described conditions (1) to (3) as with the transparent filter W.
- Since the transparent filter W satisfies the above-described conditions (1) to (3), it can be used as the first filter in each of the above embodiments.
- In the color filter array, part of the G filter G corresponding to the G color, which contributes most to the luminance signal among the three primary colors of RGB, is replaced with the transparent filter W, so the above condition (4) is also satisfied.
- <E filter>
- In the above embodiments, a color filter mainly composed of color filters corresponding to the RGB colors has been shown; however, some of these color filters may be other color filters, for example, a filter E (emerald pixel) corresponding to an emerald (E) color. In particular, it is preferable to arrange the emerald filter E in place of part of the first filter (the G filter G). By using such a four-color filter array in which part of the G filter G is replaced with the E filter, reproduction of the high-frequency components of luminance is improved, jagginess is reduced, and the sense of resolution can be improved.
- The transmittance peak of the emerald filter E (the sensitivity peak of the E pixel) is within the range of 480 nm to 570 nm. Further, the transmittance of the emerald filter E is higher than the transmittances of the R and B filters within the wavelength range of 500 nm to 560 nm. For this reason, the emerald filter E satisfies the above conditions (2) and (3). Further, in the color filter array, part of the G filter G corresponding to the G color, which contributes most to the luminance signal among the three primary colors of RGB, is replaced with the emerald filter E, so the above condition (4) is also satisfied.
- Note that the emerald filter E may have its transmittance peak on the shorter wavelength side of the G filter G, or in some cases on the longer wavelength side of the G filter G (in which case it appears slightly yellowish). From among such filters, one that satisfies the above conditions, for example an emerald filter E that satisfies condition (1), can be selected.
- In each of the above embodiments, a color filter array composed of the primary color (RGB) color filters has been described; however, the present invention can also be applied to a color filter array of four complementary color filters in which G is added to C (cyan), M (magenta), and Y (yellow), the complementary colors of the primary colors RGB. In this case as well, a color filter satisfying any one of the above conditions (1) to (4) is used as the first filter according to each of the above embodiments, and the other color filters are used as the second filters.
- Each color filter array of each of the above embodiments includes a basic array pattern in which the color filters of each color are two-dimensionally arrayed in the horizontal direction (H) and the vertical direction (V), this basic array pattern being repeatedly arranged in the horizontal direction (H) and the vertical direction (V); however, the present invention is not limited to this.
- For example, a color filter may be configured by an array pattern in which a so-called honeycomb basic array pattern, obtained by rotating the basic array pattern of each of the above embodiments by 45° about the optical axis, is repeatedly arranged in the oblique directions (NE, NW).
- Reference signs: 10 ... imaging device; 12 ... central processing unit (CPU); 14 ... operation unit; 18 ... lens unit; 22 ... image sensor; 26 ... memory unit; 28 ... image processing unit; 100 ... color mixing correction unit; 110 ... delay processing unit; 112 ... subtractor; 114 to 120 ... multipliers; 122 ... adder; 130 ... parameter acquisition unit; 140 ... color mixing ratio setting unit; 200 ... WB correction unit; 400 ... RGB integration unit; 500 ... WB gain calculation unit
Claims (14)
- 複数の色の画素を含むモザイク画像を取得する画像取得手段と、
前記モザイク画像の混色補正の対象画素に隣接する各周辺画素からの混色率を記憶する記憶手段であって、前記周辺画素の方位方向を示す第1のパラメータと、前記周辺画素の色を示す第2のパラメータとの組合せに対応する混色率を記憶する記憶手段と、
前記画像取得手段により取得したモザイク画像の各画素の色信号から、該色信号に含まれる周辺画素からの混色成分を除去する混色補正手段と、を備え、
前記混色補正手段は、任意の対象画素の混色補正時に、該対象画素の色信号及びその周辺画素の色信号を取得するとともに、前記周辺画素の方位方向及び色に基づいて前記記憶手段から対応する混色率を読み出し、混色補正の対象画素の色信号及び該対象画素に隣接する各周辺画素の色信号と、前記読み出した各周辺画素の混色率とに基づいて前記任意の対象画素に含まれる混色成分を除去する画像処理装置。 - 前記モザイク画像は、M×N(M,N:2以上の整数で、少なくとも一方は3以上)画素からなる基本配列パターンの画素群を含み、該基本配列パターンの画素群が水平及び垂直方向に繰り返して配置された画像である請求項1に記載の画像処理装置。
- 前記混色補正手段は、前記モザイク画像上の任意の対象画素の位置に基づいて、該対象画素に対応する各周辺画素の第1、第2のパラメータを取得するパラメータ取得手段を含み、前記取得した第1、第2のパラメータに基づいて前記記憶手段から対応する混色率を読み出す請求項1又は2に記載の画像処理装置。
- 前記モザイク画像は、所定の画素群毎にアンプを共有する素子構造を有する撮像素子から出力される画像であり、
前記記憶手段は、前記混色補正の対象画素が、前記アンプを共有する画素群内のいずれの位置の画素かを示す位置情報を第3のパラメータとし、前記第1、第2及び第3のパラメータの組合せに対応する混色率を記憶し、
前記パラメータ取得手段は、前記モザイク画像上の任意の対象画素の位置に基づいて、前記周辺画素毎に第1、第2及び第3のパラメータを取得し、
前記混色補正手段は、前記取得した第1、第2及び第3のパラメータに基づいて、前記周辺画素毎に前記記憶手段から対応する混色率を読み出す請求項3に記載の画像処理装置。 - 前記記憶手段は、前記モザイク画像の全領域を複数の分割領域に分割したときの分割領域毎に前記混色率を記憶し、
前記混色補正手段は、前記対象画素の位置が前記複数の分割領域のうちのいずれの分割領域に含まれているかに応じて前記記憶手段から対応する混色率を読み出す請求項1から4のいずれか1項に記載の画像処理装置。 - 前記混色補正手段は、混色補正の対象画素に隣接する各周辺画素の色信号と前記記憶手段から読み出した前記周辺画素の位置毎に設定された混色率とを積和演算して混色成分を算出し、前記算出した混色成分を前記対象画素の色信号から減算する請求項1から5のいずれか1項に記載の画像処理装置。
- 前記混色補正手段により混色成分が除去された前記モザイク画像の各画素の色信号に基づいてホワイトバランスゲインを算出するホワイトバランスゲイン算出手段と、
前記ホワイトバランスゲイン算出手段により算出されたホワイトバランスゲインに基づいて、前記混色補正手段により混色成分が除去された前記モザイク画像の各画素の色信号をホワイトバランス補正するホワイトバランス補正手段と、
を更に備えた請求項1から6のいずれか1項に記載の画像処理装置。 - 複数の色の画素を含むモザイク画像を取得する画像取得工程と、
前記モザイク画像の混色補正の対象画素に隣接する各周辺画素からの混色率を記憶する記憶手段であって、前記周辺画素の方位方向を示す第1のパラメータと、前記周辺画素の色を示す第2のパラメータとの組合せに対応する混色率を記憶する記憶手段を準備する工程と、
前記画像取得工程により取得したモザイク画像の各画素の色信号から、該色信号に含まれる周辺画素からの混色成分を除去する混色補正工程と、を含み、
前記混色補正工程は、任意の対象画素の混色補正時に、該対象画素の色信号及びその周辺画素の色信号を取得するとともに、前記周辺画素の方位方向及び色に基づいて前記記憶手段から対応する混色率を読み出し、混色補正の対象画素の色信号及び該対象画素に隣接する各周辺画素の色信号と、前記読み出した各周辺画素の混色率とに基づいて前記任意の対象画素に含まれる混色成分を除去する画像処理方法。 - 撮影光学系と該撮影光学系を介して被写体像が結像される撮像素子とを含む撮像手段と、
前記撮像手段から出力されるモザイク画像を取得する前記画像取得手段と、
請求項1から7のいずれか1項に記載の画像処理装置と、
を備えた撮像装置。 - 前記撮像素子は、水平方向及び垂直方向に配列された光電変換素子からなる複数の画素上に、所定のカラーフィルタ配列のカラーフィルタが配設され、
前記カラーフィルタ配列は、1色以上の第1の色に対応する第1のフィルタと、輝度信号を得るための寄与率が前記第1の色よりも低い2色以上の第2の色に対応する第2のフィルタとが配列された所定の基本配列パターンを含み、該基本配列パターンが水平及び垂直方向に繰り返して配置され、
前記基本配列パターンは、M×N(M,N:2以上の整数で、少なくとも一方は3以上)画素に対応する配列パターンである請求項9に記載の撮像装置。 - 前記第1のフィルタは、前記カラーフィルタ配列の水平、垂直、斜め右上、及び斜め右下方向の各ライン内に1つ以上配置され、
前記第2の色の各色に対応する前記第2のフィルタは、前記基本配列パターン内に前記カラーフィルタ配列の水平、及び垂直方向の各ライン内に1つ以上配置され、
前記第1のフィルタに対応する第1の色の画素数の比率は、前記第2のフィルタに対応する第2の色の各色の画素数の比率よりも大きい請求項10に記載の撮像装置。 - 前記基本配列パターンは、3×3画素に対応する正方配列パターンであり、中心と4隅に前記第1のフィルタが配置されている請求項10に記載の撮像装置。
- The first color is green (G), and the second colors are red (R) and blue (B),
the predetermined basic array pattern is a square array pattern corresponding to 6 × 6 pixels, and
the filter array is configured by alternately arraying, in the horizontal and vertical directions, a first array corresponding to 3 × 3 pixels, in which G filters are arranged at the center and the four corners, B filters are arranged above and below the central G filter, and R filters are arranged to its left and right, and a second array corresponding to 3 × 3 pixels, in which G filters are arranged at the center and the four corners, R filters are arranged above and below the central G filter, and B filters are arranged to its left and right; the imaging device according to claim 10. - The imaging element has an element structure in which an amplifier is shared by each predetermined pixel group, and
the predetermined pixel group has a size of K × L pixels (K ≤ M, L ≤ N; K, L: natural numbers); the imaging device according to claim 10.
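The 6 × 6 basic array pattern described above (two 3 × 3 sub-arrays with G filters at the center and four corners, tiled alternately in the horizontal and vertical directions) can be built concretely. The layout follows the claim wording; the function and variable names are purely illustrative.

```python
# First 3x3 sub-array: B above/below the central G, R to its left/right.
SUB_A = [["G", "B", "G"],
         ["R", "G", "R"],
         ["G", "B", "G"]]
# Second 3x3 sub-array: R above/below the central G, B to its left/right.
SUB_B = [["G", "R", "G"],
         ["B", "G", "B"],
         ["G", "R", "G"]]

def basic_pattern():
    """Build the 6x6 basic array pattern by tiling the two 3x3
    sub-arrays alternately in both the horizontal and vertical
    directions (checkerboard of sub-arrays)."""
    pattern = [[None] * 6 for _ in range(6)]
    for by in range(2):          # block row
        for bx in range(2):      # block column
            sub = SUB_A if (by + bx) % 2 == 0 else SUB_B
            for y in range(3):
                for x in range(3):
                    pattern[by * 3 + y][bx * 3 + x] = sub[y][x]
    return pattern

P = basic_pattern()
```

Note that the resulting pattern exhibits the properties of the earlier claims: every horizontal and vertical line of the 6 × 6 block contains R, G, and B, and G pixels (20 of 36) outnumber R and B pixels (8 each).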
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013551778A JP5600813B2 (ja) | 2011-12-28 | 2012-12-27 | Image processing device and imaging device |
CN201280065020.6A CN104041021B (zh) | 2011-12-28 | 2012-12-27 | Image processing device and method, and imaging device |
RU2014127286/07A RU2548166C1 (ru) | 2011-12-28 | 2012-12-27 | Image processing device, method, and image forming device |
US14/317,694 US9160999B2 (en) | 2011-12-28 | 2014-06-27 | Image processing device and imaging device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011289367 | 2011-12-28 | ||
JP2011-289367 | 2011-12-28 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/317,694 Continuation US9160999B2 (en) | 2011-12-28 | 2014-06-27 | Image processing device and imaging device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013100032A1 true WO2013100032A1 (ja) | 2013-07-04 |
Family
ID=48697511
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/083837 WO2013100032A1 (ja) | 2011-12-28 | 2012-12-27 | Image processing device and method, and imaging device |
Country Status (5)
Country | Link |
---|---|
US (1) | US9160999B2 (ja) |
JP (1) | JP5600813B2 (ja) |
CN (1) | CN104041021B (ja) |
RU (1) | RU2548166C1 (ja) |
WO (1) | WO2013100032A1 (ja) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6478579B2 (ja) * | 2014-11-20 | 2019-03-06 | Canon Inc | Imaging unit, imaging device, and image processing system |
US20170084650A1 (en) * | 2015-09-22 | 2017-03-23 | Qualcomm Incorporated | Color filter sensors |
CN108476309A (zh) * | 2016-01-08 | 2018-08-31 | Olympus Corp | Image processing device, image processing method, and program |
JP7037346B2 (ja) * | 2017-03-10 | 2022-03-16 | Canon Inc | Image processing device, image processing method, program, and storage medium |
CN114495824B (zh) * | 2022-01-26 | 2023-04-04 | 苇创微电子(上海)有限公司 | Correction method, system, storage medium, and processor for color fringing at image and text edges on OLED displays |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0823543A (ja) * | 1994-07-07 | 1996-01-23 | Canon Inc | Imaging device |
JPH10271519A (ja) * | 1997-03-26 | 1998-10-09 | Sony Corp | Solid-state imaging device |
JP2011029379A (ja) * | 2009-07-24 | 2011-02-10 | Sony Corp | Solid-state imaging device, method of manufacturing the same, and camera |
JP2011234231A (ja) * | 2010-04-28 | 2011-11-17 | Canon Inc | Image processing device, control method therefor, and imaging device |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003284084A (ja) | 2002-03-20 | 2003-10-03 | Sony Corp | Image processing device and method, and method of manufacturing an image processing device |
JP4479457B2 (ja) * | 2004-05-27 | 2010-06-09 | Sony Corp | Image processing device, image processing method, and computer program |
JP2006041687A (ja) * | 2004-07-23 | 2006-02-09 | Olympus Corp | Image processing device, image processing method, image processing program, electronic camera, and scanner |
JP2007053499A (ja) | 2005-08-16 | 2007-03-01 | Fujifilm Holdings Corp | White balance control device and imaging device |
JP4389865B2 (ja) | 2005-11-17 | 2009-12-24 | Sony Corp | Signal processing device and signal processing method for a solid-state imaging element, and imaging device |
JP4063306B1 (ja) * | 2006-09-13 | 2008-03-19 | Sony Corp | Image processing device, image processing method, and program |
US20100230583A1 (en) * | 2008-11-06 | 2010-09-16 | Sony Corporation | Solid state image pickup device, method of manufacturing the same, image pickup device, and electronic device |
JP5254762B2 (ja) | 2008-11-28 | 2013-08-07 | Canon Inc | Imaging device, imaging system, and signal correction method in an imaging device |
- 2012
  - 2012-12-27 JP JP2013551778A patent/JP5600813B2/ja active Active
  - 2012-12-27 RU RU2014127286/07A patent/RU2548166C1/ru not_active IP Right Cessation
  - 2012-12-27 CN CN201280065020.6A patent/CN104041021B/zh active Active
  - 2012-12-27 WO PCT/JP2012/083837 patent/WO2013100032A1/ja active Application Filing
- 2014
  - 2014-06-27 US US14/317,694 patent/US9160999B2/en active Active
Also Published As
Publication number | Publication date |
---|---|
US9160999B2 (en) | 2015-10-13 |
RU2548166C1 (ru) | 2015-04-20 |
CN104041021A (zh) | 2014-09-10 |
US20140307121A1 (en) | 2014-10-16 |
CN104041021B (zh) | 2017-05-17 |
JP5600813B2 (ja) | 2014-10-01 |
JPWO2013100032A1 (ja) | 2015-05-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5872408B2 (ja) | Color imaging device and image processing method | |
JP5600814B2 (ja) | Image processing device and method, and imaging device | |
JP5519083B2 (ja) | Image processing device, method, program, and imaging device | |
US8780239B2 (en) | Image pickup apparatus and signal value correction method | |
WO2012117583A1 (ja) | Color imaging device | |
JP5603506B2 (ja) | Imaging device and image processing method | |
JP2013081222A (ja) | Image sensor with improved light sensitivity | |
WO2011132618A1 (ja) | Imaging device, captured-image processing method, and captured-image processing program | |
JP5600813B2 (ja) | Image processing device and imaging device | |
JP5621053B2 (ja) | Image processing device, method, program, and imaging device | |
JP2007336387A (ja) | Imaging device and signal processing method | |
US8922684B2 (en) | Imaging device, control method for imaging device, and storage medium storing a control program for imaging device | |
JP2009147762A (ja) | Image processing device, image processing method, and program | |
JP2006279389A (ja) | Solid-state imaging device and signal processing method therefor | |
JP5624227B2 (ja) | Imaging device, control method for imaging device, and control program | |
JP2000253412A (ja) | Imaging element and imaging device | |
WO2013100096A1 (ja) | Imaging device, control method for imaging device, and control program | |
JP5968021B2 (ja) | Imaging device and control method for imaging device | |
JP2020027979A (ja) | Imaging device and imaging method |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 12861852; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2013551778; Country of ref document: JP; Kind code of ref document: A |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 2014127286; Country of ref document: RU; Kind code of ref document: A |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 12861852; Country of ref document: EP; Kind code of ref document: A1 |