WO2011162155A1 - Image capturing device - Google Patents

Image capturing device Download PDF

Info

Publication number
WO2011162155A1
WO2011162155A1 (PCT/JP2011/063794)
Authority
WO
WIPO (PCT)
Prior art keywords
unit
unit pixel
signal
visible light
color
Prior art date
Application number
PCT/JP2011/063794
Other languages
French (fr)
Japanese (ja)
Inventor
清 高
Original Assignee
Konica Minolta Opto, Inc. (コニカミノルタオプト株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Konica Minolta Opto, Inc. (コニカミノルタオプト株式会社)
Priority to JP2012521442A priority Critical patent/JPWO2011162155A1/en
Publication of WO2011162155A1 publication Critical patent/WO2011162155A1/en

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10 Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/131 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements including elements passing infrared wavelengths
    • H04N25/133 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements including elements passing panchromatic light, e.g. filters passing white light
    • H04N25/135 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on four or more different wavelength filter elements

Definitions

  • the present invention relates to an imaging apparatus capable of performing image processing on original image data obtained by a solid-state imaging device.
  • a night vision video support system that irradiates near-infrared illumination light in front of a vehicle or the like and images a subject with a camera having sensitivity to near-infrared light.
  • Conventional night vision video support systems require high-quality images, but because the near-infrared wavelength component is dominant in the subject light, sufficient color information cannot be obtained; a monochrome image may therefore be generated, and driver visibility may deteriorate because the image differs from the actual subject.
  • techniques as shown in Patent Documents 1 and 2 have been developed.
  • In Patent Document 1, the level value of the visible-light luminance signal and the level value of the near-infrared luminance signal are compared to determine whether each block belongs to the visible region or the infrared region; for a block determined to be in the visible region, a color image is generated, while for a block determined to be in the infrared region, a monochrome image is generated based on the near-infrared luminance signal in that block.
  • However, the technique of Patent Document 1 has the problem that driver visibility is lowered because color and monochrome areas are mixed in the same image, which differs from the actual subject image.
  • In Patent Document 2, openings that transmit both visible light and infrared light and non-openings that transmit visible light but block infrared light are arranged so as to be integrated on the solid-state imaging device.
  • The wide-wavelength-region component including visible light and infrared light that passes through the openings is detected by wide-wavelength-region pixels, and a luminance signal is generated from the detection signal; each of the R, G, and B color components that passes through the non-openings is detected by the corresponding color pixel.
  • A color difference signal is generated from the detected color signals.
  • the original color of the subject may not be reflected correctly and may be buried in noise.
  • Since the luminance signal tends to increase in value when infrared light is included, the color difference signal is processed so as to increase accordingly.
  • As a result, the color becomes overly bright; that is, noise may be emphasized.
  • the color information of the subject is buried in the emphasized noise and may not be correctly identified.
  • An object of the present invention is to provide an imaging apparatus capable of reproducing the original subject color and obtaining an image with little discomfort even at night when infrared light is dominant.
  • The imaging apparatus of the present invention includes a solid-state image sensor in which at least three types of pixels having different spectral sensitivities are arranged, at least one type of which has sensitivity in the infrared region, so that a visible light component and an infrared light component can be detected, and color image generation means that, when a subject is imaged by the solid-state image sensor, generates a color image based on luminance information of the visible light component obtained from the solid-state image sensor and infrared light information based on the infrared light component.
  • According to this configuration, a color image is generated by weighting a color difference signal or the like based on the luminance information of the visible light component obtained from the solid-state imaging device and the infrared light information based on the infrared light component, so that the original subject color can be reproduced and a color image with less discomfort can be obtained.
  • the “luminance information of the visible light component” is, for example, a luminance signal value in a visible light signal
  • the “infrared light information” is, for example, an infrared light signal value.
  • The imaging apparatus includes a solid-state imaging device in which a plurality of unit pixel units capable of detecting a visible light component and an infrared light component from a subject are two-dimensionally arranged, and color image generation means that, when a subject is imaged by the solid-state imaging device, replaces the color difference signal of the visible light component of a unit pixel unit of interest whose detected visible-light luminance signal is equal to or less than a predetermined value with a nominal color difference signal derived from the color difference signals of the visible light component detected by proximity unit pixel units adjacent to the unit pixel unit of interest, and generates a color image using the nominal color difference signal.
  • According to this configuration, when a subject is imaged with the solid-state imaging element, the color difference signal of the visible light component of the unit pixel unit of interest whose detected visible-light luminance signal is equal to or less than the predetermined value is replaced with the nominal color difference signal derived from the color difference signals of the visible light component detected by the adjacent proximity unit pixel units, and a color image is generated using the nominal color difference signal. Thus, even when color information cannot be obtained from the unit pixel unit of interest, the original subject color can be reproduced and a color image with little discomfort can be obtained.
  • the proximity unit pixel unit means a unit pixel unit adjacent to at least the target unit pixel unit.
  • The visible light component being equal to or less than a predetermined value means, for example, a case where the luminance signal of the visible light component is lower than the infrared light signal.
  • the proximity unit pixel unit is a unit pixel unit arranged around the unit pixel unit of interest.
  • the color information which does not largely deviate from the original color information of the unit pixel unit of interest can be obtained.
  • The color image generation unit may weight the color difference signals of the plurality of proximity unit pixel units according to the distance between the unit pixel unit of interest and each proximity unit pixel unit.
  • It is preferable to derive the nominal color difference signal by weighting each color difference signal of the visible light component detected in the proximity unit pixel units according to the degree of association between the unit pixel unit of interest and each proximity unit pixel unit, and taking a weighted average. Weighted averaging according to the degree of association suppresses the contribution of proximity unit pixel units with low relevance and improves color reproducibility. "Weighting according to the degree of association between the unit pixel unit of interest and a proximity unit pixel unit" means, for example, weighting according to the distance between them.
  • As for the weighting coefficient, it is preferable to increase it as the distance between the unit pixel unit of interest and the proximity unit pixel unit becomes shorter. A proximity unit pixel unit farther from the unit pixel unit of interest is more likely to have a different color, so it is desirable to reduce its contribution.
  • Preferably, the color image generation unit changes the weighting coefficient according to the difference or ratio between the visible-light luminance signal and the infrared light signal detected by the same proximity unit pixel unit.
  • Preferably, the color image generation unit increases the weighting coefficient when the visible-light luminance signal is greater than or equal to the infrared light signal, and decreases it when the visible-light luminance signal is smaller than the infrared light signal. This is because when the visible-light luminance signal is larger, the image signal from that unit pixel unit is more likely to carry the original color information.
  • As extreme settings of the weighting coefficient, it is preferable to set it to 1 when the visible-light luminance signal is larger, and to 0 when it is smaller than the infrared light signal.
  • Preferably, the proximity unit pixel units are located within a predetermined range centered on the unit pixel unit of interest, and when the number of unit pixel units within the predetermined range whose visible-light luminance signal is equal to or greater than the infrared light signal is less than a predetermined number, the color image generation unit expands the predetermined range. This is effective even when little visible light is detected within the predetermined range.
  • the predetermined number is, for example, the number of the unit pixel units in which the luminance signal in the visible light component is smaller than the infrared light signal.
  • Preferably, the color image generation unit divides the imaging surface of the solid-state imaging device into a plurality of blocks and calculates, for each block, the average value of the visible-light luminance signals, the average value of the color difference signals, and the average value of the infrared light signals detected from the unit pixel units.
  • Instead of the color difference signals detected by the proximity unit pixel units, the nominal color difference signal of the unit pixel unit of interest is derived by weighting the average color difference signals of the blocks surrounding the block containing the unit pixel unit of interest according to the distance between the unit pixel unit of interest and each surrounding block, and taking a weighted average. Because block averages are used, generation of an extremely colored image can be suppressed.
  • Preferably, the weighted averaging of the average color difference signals of the surrounding blocks, in place of the color difference signals detected by the proximity unit pixel units, is performed only when the average value of the visible-light luminance signal of the block containing the unit pixel unit of interest is lower than the average value of the infrared light signal of the same block. This is because if the average visible-light luminance signal of the block is equal to or higher than the average infrared light signal of the same block, sufficient color information can most likely be obtained within that block.
  • With the image pickup apparatus of the present invention, it is possible to reproduce the original subject color and obtain an image with little discomfort even at night, when infrared light is dominant.
  • FIG. 1 is a block diagram of an imaging apparatus 1 according to the present embodiment.
  • FIG. 2 is a diagram showing the arrangement of pixels of the solid-state imaging element 3.
  • FIG. 3 is a diagram showing the spectral transmission characteristics of the Ye, R, and IR filters, with transmittance (sensitivity) on the vertical axis and wavelength (nm) on the horizontal axis.
  • FIG. 4 is a block diagram showing the detailed configuration of the image processing unit 4.
  • FIG. 5 is a diagram showing a cut-out part of the imaging surface of the solid-state image sensor 3.
  • FIGS. 6, 7, and 8 are graphs showing examples of k(x, y), S(x, y), and L(x, y), respectively.
  • FIGS. 9 to 11 are diagrams showing a cut-out part of the pixels of the solid-state imaging device 3 and the division of the imaging surface into blocks.
  • As shown in FIG. 1, the imaging apparatus 1 includes a lens 2, a solid-state imaging device (hereinafter also referred to as an image sensor) 3, an image processing unit 4, a control unit 5, a near-infrared light emitting unit 6 that irradiates the subject with near-infrared light when the subject brightness is low, such as at night, and a drive unit 7 that drives the near-infrared light emitting unit 6 according to a control signal from the control unit 5.
  • the imaging device 1 is mounted on a vehicle, for example, and is used for imaging a subject around the vehicle, but the usage is not limited thereto.
  • the lens 2 is composed of an optical lens system that captures an optical image of a subject and guides it to the image sensor 3.
  • As the optical lens system, for example, a zoom lens, a focus lens, and other fixed lens blocks arranged in series along the optical axis L of the optical image of the subject can be employed.
  • the lens 2 includes a diaphragm (not shown) for adjusting the amount of transmitted light, a shutter (not shown), and the like, and the driving of the diaphragm and the shutter is controlled under the control of the control unit 5.
  • The image pickup device 3 includes a light receiving unit composed of PDs (photodiodes), an output circuit that outputs the signals photoelectrically converted by the light receiving unit, and a drive circuit that drives the image pickup device 3, and generates original image data whose level corresponds to the amount of received light.
  • As the image sensor 3, various imaging sensors such as a CMOS image sensor, a VMIS image sensor, and a CCD image sensor can be employed.
  • The image sensor 3 receives an optical image of a subject: pixels having color filters convert and output visible color image components, and pixels having an infrared filter convert and output an infrared image component.
  • A luminance image component, which includes a visible luminance image component and an infrared image component, is converted and output by pixels provided with no filter; however, the present invention is not limited to this configuration.
  • the image processing unit 4 includes an arithmetic circuit and a memory used as a work area for the arithmetic circuit, A / D converts the original image data output from the image sensor 3 to convert it into a digital signal, and performs image processing to be described later. After execution, for example, the data is output to a memory or a display device (not shown).
  • the control unit 5 includes a CPU and a memory that stores a program executed by the CPU, and controls the entire imaging apparatus 1 in response to an external control signal.
  • FIG. 2 is a diagram showing an arrangement of pixels of the solid-state image sensor 3.
  • As shown in FIG. 2, the image pickup device 3 has unit pixel units 31 arranged two-dimensionally, each including a Ye pixel, an R pixel, an IR pixel, and a W pixel whose sensitive wavelength band spans the visible and infrared wavelength regions.
  • “Ye” pixel means a pixel having a “Ye” filter, and so on.
  • Within each unit pixel unit 31, the R pixel is arranged in the first row, first column; the W pixel in the first row, second column; the IR pixel in the second row, first column; and the Ye pixel in the second row, second column, so that the R pixels, IR pixels, W pixels, and Ye pixels are arranged in a staggered manner.
  • The R pixels, IR pixels, W pixels, and Ye pixels may also be arranged in a staggered manner in a different pattern.
  • Since the Ye pixel includes a Ye filter, it outputs an image component Ye, the visible Ye color image component, together with an infrared image component. Since the R pixel includes an R filter, it outputs an image component R, the visible R color image component, together with an infrared image component. Since the IR pixel includes an IR filter, it outputs an image component IR, which is an infrared image component. Since the W pixel includes no filter, it outputs an image component W, a luminance image component that includes the visible luminance image component and the image component IR.
  • FIG. 3 is a diagram showing the spectral transmission characteristics of the Ye, R, and IR filters, where the vertical axis shows the transmittance (sensitivity) and the horizontal axis shows the wavelength (nm).
  • a graph indicated by a dotted line shows the spectral sensitivity characteristics of the pixel in a state where the filter is removed. It can be seen that this spectral sensitivity characteristic has a peak near 600 nm and changes in a convex curve.
  • the visible wavelength region is 400 nm to 700 nm
  • the infrared wavelength region is 700 nm to 1100 nm
  • the sensitive wavelength band is 400 nm to 1100 nm.
  • the light in the visible wavelength region is the visible light component
  • the light in the infrared wavelength region is the infrared light component.
  • the Ye filter has a characteristic of transmitting light in the sensitive wavelength band excluding the blue region in the visible wavelength region. Therefore, the Ye filter mainly transmits yellow light and infrared light.
  • the R filter has a characteristic of transmitting light in a sensitive wavelength band excluding a blue region and a green region in the visible wavelength region. Therefore, the R filter mainly transmits red light and infrared light.
  • the IR filter has a characteristic of transmitting light in a sensitive wavelength band excluding the visible wavelength region, that is, in the infrared wavelength band.
  • W indicates a case where no filter is provided, and all light in the sensitive wavelength band of the pixel is transmitted.
  • A combination of Ye, M (magenta) + IR, and C (cyan) + IR filters can also be used instead of Ye, R, and IR (where the M + IR filter blocks only green light and the C + IR filter blocks only red light).
  • However, R, IR, and Ye pixels can be given steep spectral transmission characteristics, better than those of the M + IR and C + IR filters, for example. Each of the M + IR and C + IR filters must shield only the green region or only the red region, a partial region in the center of the sensitive wavelength band, and it is difficult to give such filters steep spectral transmission characteristics like those of the R, IR, and Ye filters.
  • Consequently, the M + IR and C + IR filters cannot extract the RGB image components with high accuracy even after the arithmetic processing. Therefore, configuring the image sensor 3 with R pixels, IR pixels, Ye pixels, and W pixels improves the performance of the image sensor 3.
  • FIG. 4 is a block diagram showing a detailed configuration of the image processing unit 4.
  • the image processing unit 4 that is a color image generation unit includes a color interpolation unit 41, a color signal generation unit 42, a color space conversion unit 43, and an RGB color signal generation unit 44.
  • The color interpolation unit 41 performs interpolation processing to fill in the missing pixel data of each of the image components Ye, R, IR, and W output from the image sensor 3, converting each component into image data having the same number of pixels as the image sensor 3.
  • The missing pixel data arises in the image components Ye, R, IR, and W because the R pixels, IR pixels, W pixels, and Ye pixels are arranged in a staggered manner.
  • the interpolation process for example, a linear interpolation process may be employed.
  • The color signal generation unit 42 combines the image components Ye, R, IR (hereinafter the image component IR is also referred to as the infrared light signal), and W interpolated by the color interpolation unit 41 according to the following equations (1a), (1b), and (1c) to generate color signals dR, dG, and dB (RGB color signals).
  • dR = R - IR   (1a)
  • dG = Ye - R   (1b)
  • dB = W - Ye   (1c)
  • The color space conversion unit 43 converts the color signals dR, dG, and dB into a color space containing a visible-light luminance signal Y and color difference signals Cb and Cr, as shown in equations (2a), (2b), and (2c).
  • the color difference signal Cb indicates a blue color difference signal
  • the color difference signal Cr indicates a red color difference signal.
  • Y = 0.3 dR + 0.6 dG + 0.1 dB   (2a)
  • Cb = -0.17 dR - 0.33 dG + 0.5 dB   (2b)
  • Cr = 0.5 dR - 0.42 dG - 0.08 dB   (2c)
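The two derivation steps above can be sketched in Python (a minimal illustration with scalar per-pixel values; the function names are assumptions, not from the patent):

```python
# Sketch of equations (1a)-(1c) and (2a)-(2c) for one unit pixel unit.
# Inputs are the interpolated image components Ye, R, IR, and W.

def rgb_color_signals(ye, r, ir, w):
    """Equations (1a)-(1c): derive RGB color signals from Ye/R/IR/W."""
    dR = r - ir   # (1a): R pixel carries red + infrared, so subtract IR
    dG = ye - r   # (1b): Ye carries yellow (red + green) + IR; Ye - R leaves green
    dB = w - ye   # (1c): W carries all light; W - Ye leaves blue (IR cancels)
    return dR, dG, dB

def to_ycbcr(dR, dG, dB):
    """Equations (2a)-(2c): convert to luminance and color difference signals."""
    y = 0.3 * dR + 0.6 * dG + 0.1 * dB        # (2a) visible-light luminance
    cb = -0.17 * dR - 0.33 * dG + 0.5 * dB    # (2b) blue color difference
    cr = 0.5 * dR - 0.42 * dG - 0.08 * dB     # (2c) red color difference
    return y, cb, cr
```

Note that for an achromatic input (dR = dG = dB) the Cb and Cr coefficients sum to zero, so both color differences vanish, as expected of a luminance/chrominance decomposition.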
  • The color space conversion unit 43 compares, for each image signal output from a unit pixel unit 31, the infrared light signal Ir obtained from the image component IR with the visible-light luminance signal Y. When Ir ≤ Y, the weighting coefficient k(x, y) used in the following equations (3a) and (3b) is increased (to 1 in the extreme case); when Ir > Y, k(x, y) is decreased (to 0 in the extreme case), and the value is then obtained by a weighted average as follows. Note that it is also possible to simply calculate Y - Ir and determine the weighting coefficient k(x, y) accordingly, without comparing the magnitudes of the infrared light signal and the visible-light luminance signal.
  • FIG. 5 is a diagram showing a part of the imaging surface of the solid-state imaging device 3 cut out.
  • The unit pixel portion of interest TG is at the center, with the surrounding unit pixel portions 31 arranged in a 5 × 5 area around it.
  • the color space conversion unit 43 creates a color difference signal of the target unit pixel unit TG based on the color difference signal of the surrounding unit pixel unit 31.
  • the created color difference signal is referred to as a nominal color difference signal, and is distinguished from the color difference signals actually detected from the target unit pixel unit TG (Cr, Cb in the equations (2c) and (2b)).
  • Based on the following equations (3a) and (3b), the color difference signals Cb(x0, y0) and Cr(x0, y0) of the target unit pixel unit TG are obtained by a weighted average of the color differences Cb(x, y) and Cr(x, y) of the surrounding unit pixel units 31:
  • Cb(x0, y0) = Σ k(x, y) · a(x, y) · Cb(x, y)   (3a)
  • Cr(x0, y0) = Σ k(x, y) · a(x, y) · Cr(x, y)   (3b)
  • x0 and y0 are the coordinates of the target unit pixel portion TG.
  • x and y are the coordinates of a surrounding unit pixel portion 31.
  • k(x, y) is a value determined by the ratio of the visible-light luminance signal Y(x, y) to the infrared light signal Ir(x, y), as shown in equation (4).
  • When the luminance signal is larger than the infrared light signal, it is preferable to increase k(x, y), because the contribution of visible light increases. Since a weighted average is taken, the weights are normalized so that they sum to 1 (the same applies to equations (9a) and (9b) below).
  • FIG. 6 shows an example of k (x, y).
  • k(x, y) = f(Y(x, y), Ir(x, y))   (4)
  • a(x, y) is a weighting coefficient, given for example by equation (5).
  • a(x, y) = S(x, y) × L(x, y)   (5)
  • S(x, y) is a value determined by the distance from the target unit pixel unit TG to the surrounding unit pixel unit 31, as shown in equations (6a) and (6b); the longer the distance, the lower the value should be.
  • FIG. 7 shows an example of S (x, y).
  • S(x, y) = f(R(x, y))   (6a)
  • R(x, y) = √((x - x0)² + (y - y0)²)   (6b)
  • L(x, y) is determined by the visible-light luminance signal Y(x, y) of the surrounding unit pixel unit 31, as shown in equation (7); the larger the visible-light luminance signal, the higher the value should be.
  • FIG. 8 shows an example of L (x, y).
  • L(x, y) = f(Y(x, y))   (7)
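Equations (3a)–(7) can be combined into one short sketch. The concrete forms chosen for f in equations (4), (6a), and (7) — a hard visible/infrared switch for k, an inverse-distance falloff for S, and the raw luminance for L — are illustrative assumptions, as are all names:

```python
import math

def nominal_color_diff(target_xy, neighbors):
    """Weighted average of equations (3a)/(3b) over surrounding unit pixel units.

    neighbors: list of dicts with keys x, y (coordinates), Y (visible
    luminance), Ir (infrared signal), Cb, Cr (color differences)."""
    x0, y0 = target_xy
    weights, cbs, crs = [], [], []
    for p in neighbors:
        k = 1.0 if p["Y"] >= p["Ir"] else 0.0        # (4), extreme 0/1 form
        dist = math.hypot(p["x"] - x0, p["y"] - y0)  # (6b) Euclidean distance
        s = 1.0 / (1.0 + dist)                       # (6a): lower when farther
        l = p["Y"]                                   # (7): higher with luminance
        weights.append(k * s * l)                    # k(x,y) * a(x,y), a = S*L (5)
        cbs.append(p["Cb"])
        crs.append(p["Cr"])
    total = sum(weights)
    if total == 0.0:                                 # no visible neighbors at all
        return 0.0, 0.0
    # Normalize so the effective weights sum to 1, as the text requires.
    cb0 = sum(w * cb for w, cb in zip(weights, cbs)) / total
    cr0 = sum(w * cr for w, cr in zip(weights, crs)) / total
    return cb0, cr0
```

Infrared-dominant neighbors (Ir > Y) get k = 0 and drop out of the average entirely, which matches the extreme-case behavior described above.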
  • The RGB color signal generation unit 44 calculates the display luminance Yadd for the target unit pixel unit TG as shown in equation (8a), and then calculates the display color difference signals Crm and Cbm from equations (8b) and (8c) based on the color difference signals Cr and Cb obtained by equations (3a) and (3b).
  • Yadd = Y + R + G + B   (8a)
  • Crm = Cr × Yadd / Y   (8b)
  • Cbm = Cb × Yadd / Y   (8c)
  • The RGB color signal generation unit 44 then performs the inverse conversion of equations (2a), (2b), and (2c), thereby calculating the RGB color signals dR′, dG′, and dB′ from the luminance signal Yadd and the color difference signals Crm and Cbm.
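A sketch of equations (8a)–(8c) followed by the inverse conversion of (2a)–(2c), treating the latter as a 3 × 3 linear system solved numerically (the names and the use of a generic linear solver are illustrative assumptions; Y is assumed nonzero):

```python
import numpy as np

# The forward matrix of equations (2a)-(2c): [Y, Cb, Cr]^T = M @ [dR, dG, dB]^T
M = np.array([[0.30,  0.60,  0.10],   # (2a) Y
              [-0.17, -0.33, 0.50],   # (2b) Cb
              [0.50,  -0.42, -0.08]]) # (2c) Cr

def display_rgb(y, cb, cr, r, g, b):
    """Equations (8a)-(8c), then invert (2a)-(2c) to get dR', dG', dB'."""
    yadd = y + r + g + b       # (8a) display luminance
    crm = cr * yadd / y        # (8b) scaled red color difference
    cbm = cb * yadd / y        # (8c) scaled blue color difference
    # Inverse conversion: solve M @ [dR', dG', dB'] = [Yadd, Cbm, Crm]
    dRp, dGp, dBp = np.linalg.solve(M, np.array([yadd, cbm, crm]))
    return dRp, dGp, dBp
```

With R = G = B = 0 the display signals equal the input Y/Cb/Cr, so the round trip through `M` and `np.linalg.solve` recovers the original dR, dG, dB.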
  • The color signal generation unit 42, acting as a visible pixel / infrared pixel determination unit, compares the visible-light luminance signal Y output from an arbitrary unit pixel unit 31 with the infrared light signal Ir.
  • When Ir > Y, the unit pixel unit 31 is determined to be an infrared pixel.
  • When Ir ≤ Y, the unit pixel unit 31 is determined to be a visible pixel.
  • FIG. 9 shows an example of the determination result.
  • Based on the visible pixel / infrared pixel determination result, when the target unit pixel unit TG is a visible pixel, the color signal generation unit 42 uses the luminance signal and the color difference signals Cb and Cr calculated by equations (2a), (2b), and (2c) as the values of that unit pixel unit 31.
  • When the target unit pixel unit TG is an infrared pixel, a weighted average using the color difference signals of the surrounding unit pixel units 31 can be performed, as in the embodiment described above, to obtain the nominal color difference signal of the target unit pixel unit TG. Thereby, the amount of calculation required for image processing can be reduced.
  • However, if the number of visible pixels in the block is smaller than a predetermined number (for example, the number of infrared pixels), sufficient visible light information for forming the color difference signal of the target unit pixel unit TG may not be obtained.
  • In that case, the block is expanded; for example, within a block of 7 × 7 or 9 × 9 unit pixel units 31 arranged around the target unit pixel unit TG, the visible pixel / infrared pixel determination by the color signal generation unit 42 is performed, and the color difference signal of the target unit pixel unit TG is calculated based on the color difference signals of the surrounding unit pixel units 31.
  • The expansion of the block may be repeated until the number of visible pixels exceeds the number of infrared pixels in the visible / infrared pixel determination by the color signal generation unit 42. Securing the number of unit pixel units 31 necessary for calculating the color difference signal in this way maintains the chromaticity accuracy.
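The block-expansion loop described above can be sketched as follows, assuming the visible/infrared determination has already produced a boolean grid (the grid encoding, function name, and size cap are assumptions for illustration):

```python
def expand_window(is_visible, cx, cy, start=2, max_half=4):
    """Grow the window around the target unit pixel (5x5 -> 7x7 -> 9x9 ...)
    until visible pixels outnumber infrared pixels.

    is_visible: 2-D list of booleans (True = visible pixel, i.e. Ir <= Y).
    cx, cy: column/row of the target unit pixel.
    Returns the half-width of the first qualifying window, or max_half if
    no window within the cap has more visible than infrared pixels."""
    h, w = len(is_visible), len(is_visible[0])
    for half in range(start, max_half + 1):   # half=2 is 5x5, half=3 is 7x7, ...
        vis = inf = 0
        for y in range(max(0, cy - half), min(h, cy + half + 1)):
            for x in range(max(0, cx - half), min(w, cx + half + 1)):
                if is_visible[y][x]:
                    vis += 1
                else:
                    inf += 1
        if vis > inf:
            return half
    return max_half
```

A real implementation would then run the weighted average of equations (3a)/(3b) over the returned window; capping the growth (here at 9 × 9) keeps the cost bounded when almost no visible light is present.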
  • The imaging surface of the solid-state imaging device 3 may also be divided into blocks BL each containing a predetermined number (for example, 10 × 10) of unit pixel units 31. For each block BL, the average value Yi of the visible-light luminance signal and the average value Iri of the infrared light signal are calculated and compared; when Iri ≤ Yi, the weighting coefficient ki of the block BL is increased, and when Iri > Yi, ki is decreased.
  • A color difference signal is generated based on the color signals R, G, and B of each unit pixel unit 31, and the average color difference signals Cbi and Cri are calculated for each block BL.
  • When Iri > Yi, ki = 0 and the block is excluded from the calculation.
  • The color difference signals Cb(x0, y0) and Cr(x0, y0) can then be generated by taking the weighted average of the average color difference signals Cbi and Cri of the blocks BL surrounding the block of interest.
  • i indicates a block number.
  • the peripheral block is a block adjacent to the block including the target unit pixel portion TG.
  • Cb(x0, y0) = Σ ki · Si · Cbi   (9a)
  • Cr(x0, y0) = Σ ki · Si · Cri   (9b)
  • The weighting coefficient Si is expressed by equation (11), where R represents the distance (for example, R1 to R5 in FIG. 10) from the target unit pixel portion to the center of a surrounding block BL. It is preferable that Si decreases as the distance increases.
  • Si = f(R)   (11)
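Equations (9a)–(11) can be sketched as a weighted average over block averages, with ki set to 0 for infrared-dominant blocks. The inverse-distance form of Si and all names are illustrative assumptions:

```python
import math

def block_color_diff(target_xy, blocks):
    """Equations (9a)/(9b) over surrounding-block averages.

    blocks: list of dicts with keys cx, cy (block center coordinates) and the
    block averages Y, Ir, Cb, Cr. Returns nominal Cb(x0, y0), Cr(x0, y0)."""
    x0, y0 = target_xy
    num_cb = num_cr = den = 0.0
    for blk in blocks:
        k = 1.0 if blk["Y"] >= blk["Ir"] else 0.0      # Ir_i > Y_i -> k_i = 0
        r = math.hypot(blk["cx"] - x0, blk["cy"] - y0)  # distance R to block center
        s = 1.0 / (1.0 + r)                             # (11): S_i lower when farther
        num_cb += k * s * blk["Cb"]                     # (9a) numerator terms
        num_cr += k * s * blk["Cr"]                     # (9b) numerator terms
        den += k * s
    if den == 0.0:                                      # all blocks infrared-dominant
        return 0.0, 0.0
    return num_cb / den, num_cr / den                   # normalized weighted average
```

Operating on block averages rather than individual unit pixel units trades spatial resolution of the chroma estimate for a much smaller number of terms in the sum, which is the calculation saving the text claims.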
  • The imaging surface of the solid-state imaging device 3 may also be divided into blocks BL of a predetermined size (for example, 10 × 10 unit pixel units), and for each block BL the average value Yi of the visible-light luminance signal and the average value Iri of the infrared light signal are calculated and compared.
  • When Iri ≤ Yi, the block BL is determined to be a visible block in which visible light is dominant; when Iri > Yi, the block BL is determined to be an infrared block in which infrared light is dominant (see FIG. 11).
  • a color difference signal is generated according to equations (2b) and (2c) based on the color signals R, G, and B output from the unit pixel unit 31 in the block.
  • the color difference signals Cb(x0, y0) and Cr(x0, y0) of the target unit pixel portion TG in the block can be generated as the weighted average of the average color difference signals Cb_i and Cr_i of the visible blocks in its neighborhood. Thereby, the amount of calculation required for image processing can be reduced.
  • the present invention can be applied to in-vehicle cameras, surveillance cameras, etc., but the application is not limited thereto.
  • the number of pixels in each block is arbitrary, and the vertical and horizontal counts do not have to match.
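The block-level procedure in the notes above, equations (9a), (9b), and (11), can be sketched as follows. This is an illustrative reading rather than the patent's implementation: k_i takes the extreme 0/1 values, and f(R) = 1/R stands in for the unspecified decreasing function of equation (11).

```python
def block_color_difference(blocks):
    """Weighted average of per-block mean color differences, eqs (9a)/(9b).

    `blocks` is a list of dicts with keys Yavg (mean visible luminance),
    Iravg (mean infrared signal), Cb, Cr (mean color differences), and R
    (distance from the target unit pixel portion to the block center).
    k_i is 1 for visible-dominant blocks (Ir_i <= Y_i) and 0 otherwise;
    S_i = 1/R is one possible monotonically decreasing f(R) from (11).
    The products k_i * S_i are normalized so the weights sum to 1.
    """
    weights = [(1.0 if b["Iravg"] <= b["Yavg"] else 0.0) / b["R"] for b in blocks]
    total = sum(weights)
    if total == 0:
        return 0.0, 0.0   # no visible block nearby; placeholder value
    cb = sum(w * b["Cb"] for w, b in zip(weights, blocks)) / total
    cr = sum(w * b["Cr"] for w, b in zip(weights, blocks)) / total
    return cb, cr
```

Infrared-dominant blocks drop out entirely (k_i = 0), so only visible blocks contribute, with nearer blocks weighted more heavily.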

Abstract

Disclosed is an image capturing device capable of reproducing the original color of a subject even at night, when infrared light is dominant, and obtaining an image with little sense of discomfort. If a target unit pixel portion is an infrared pixel portion, a color space conversion unit creates color difference signals for the target unit pixel portion on the basis of the image signals of the surrounding unit pixel portions. More specifically, according to the following equations ((3a), (3b)), the color difference signals Cb(x0, y0), Cr(x0, y0) of the target unit pixel portion are obtained as the weighted average of the color differences Cb(x, y), Cr(x, y) of the unit pixel portions surrounding it, where x0, y0 are the coordinates of the target unit pixel portion and x, y are the coordinates of a surrounding unit pixel portion. Cb(x0, y0) = Σk(x, y)·a(x, y)·Cb(x, y) (3a) Cr(x0, y0) = Σk(x, y)·a(x, y)·Cr(x, y) (3b)

Description

Imaging device
 The present invention relates to an imaging apparatus capable of performing image processing on original image data obtained by a solid-state imaging device.
 In recent years, night vision support systems have been developed that irradiate near-infrared illumination light ahead of a vehicle or the like and image the subject with a camera sensitive to near-infrared light. A conventional night vision support system is required to produce high-quality images, but because the near-infrared wavelength component dominates the subject light, sufficient color information cannot be obtained; a monochrome image may therefore be generated, and the driver's visibility may deteriorate because the image differs from the actual subject. Techniques such as those shown in Patent Documents 1 and 2 have been developed to address this.
Patent Document 1: JP 2006-148690 A
Patent Document 2: JP 2007-264628 A
 According to the technique of Patent Document 1, the level of the visible-light luminance signal is compared with the level of the near-infrared luminance signal to determine whether a block belongs to the visible region or the infrared region; for a block determined to be in the visible region, a color image is generated based on the color signals R, G, and B, and for a block determined to be in the infrared region, a monochrome image is generated based on the near-infrared luminance signal in that block. However, with the technique of Patent Document 1, color and monochrome regions coexist in the same image, which differs from the actual subject, so the problem remains that the driver's visibility is lowered.
 On the other hand, according to the technique of Patent Document 2, an infrared-cut filter layer, provided with openings transparent to both visible and infrared light and non-opening portions transparent to visible light but opaque to infrared light, and a color filter group that separates the visible region into R, G, and B components are arranged integrally on the solid-state imaging device. Wide-wavelength components including visible and infrared light that pass through the openings are detected by wide-wavelength pixels, and a luminance signal is generated from their detection signals, while the R, G, and B color components passing through the non-opening portions are detected by the respective color pixels, and color difference signals are generated from those color signals.
 Here, since the amount of visible light from the subject is very small in the infrared region, the subject's original color may not be reflected correctly and may be buried in noise. Meanwhile, the luminance signal in the infrared region tends to take larger values when infrared light is included, so the color difference signals are processed to become correspondingly large; even for a subject whose color is barely visible under visible light, the colors in the processed image become vivid, that is, the noise is emphasized. As a result, in night photography and the like, the subject's color information may be buried in the emphasized noise and may not be correctly identified.
 An object of the present invention is to provide an imaging apparatus capable of reproducing the original color of a subject and obtaining an image with little sense of discomfort even at night, when infrared light is dominant.
 An imaging apparatus of the present invention comprises: a solid-state imaging device in which at least three types of pixels having different spectral sensitivities are arranged, at least one type of which is sensitive in the infrared region, the device being capable of detecting a visible light component and an infrared light component; and color image generation means for generating a color image, when a subject is imaged by the solid-state imaging device, based on luminance information of the visible light component obtained from the solid-state imaging device and infrared light information based on the infrared light component.
 According to the present invention, when a subject is imaged by the solid-state imaging device, the color image generation means generates a color image based on the luminance information of the visible light component obtained from the solid-state imaging device and the infrared light information based on the infrared light component, for example by weighting color difference signals, so that the original color of the subject can be reproduced and a color image with little sense of discomfort can be obtained. The "luminance information of the visible light component" is, for example, a luminance signal value of the visible light signal, and the "infrared light information" is, for example, an infrared light signal value.
 An imaging apparatus of the present invention comprises: a solid-state imaging device in which a plurality of pixels capable of detecting a visible light component and an infrared light component from a subject are two-dimensionally arranged; and color image generation means which, when a subject is imaged by the solid-state imaging device, replaces the color difference signal of the visible light component of a target unit pixel portion whose detected visible-light luminance signal is equal to or less than a predetermined value with a nominal color difference signal derived from the color difference signals of the visible light component detected by neighboring unit pixel portions close to the target unit pixel portion, and generates a color image using the nominal color difference signal.
 According to the present invention, when a subject is imaged by the solid-state imaging device, the color image generation means replaces the color difference signal of the visible light component of a target unit pixel portion whose detected visible-light luminance signal is equal to or less than a predetermined value with a nominal color difference signal derived from the color difference signals of the visible light component detected by neighboring unit pixel portions, and generates a color image using the nominal color difference signal; therefore, even when color information cannot be obtained from the target unit pixel portion, the original color of the subject can be reproduced and a color image with little sense of discomfort can be obtained. A neighboring unit pixel portion means a unit pixel portion at least adjacent to the target unit pixel portion. The visible light component being equal to or less than a predetermined value means, for example, that the luminance signal of the visible light component falls below the infrared light signal.
 According to one aspect of the present invention, the neighboring unit pixel portions are preferably unit pixel portions arranged around the target unit pixel portion. This makes it possible to obtain color information that does not deviate greatly from the original color information of the target unit pixel portion.
 According to one aspect of the present invention, when there are a plurality of neighboring unit pixel portions, the color image generation means preferably derives the nominal color difference signal by weighting each of the color difference signals of the visible light component detected by the plurality of neighboring unit pixel portions according to the degree of association between the target unit pixel portion and each neighboring unit pixel portion, and taking a weighted average. Weighting according to the degree of association suppresses the contribution of weakly related neighboring unit pixel portions and further improves color reproducibility. "Weighting according to the degree of association between the target unit pixel portion and the neighboring unit pixel portion" means, for example, weighting according to the distance between them.
 According to one aspect of the present invention, it is preferable to increase the weighting coefficient as the distance between the target unit pixel portion and the neighboring unit pixel portion becomes shorter. A neighboring unit pixel portion farther from the target unit pixel portion is more likely to have a different color, so its contribution should be reduced.
 According to one aspect of the present invention, the color image generation means preferably changes the weighting coefficient according to the difference or ratio between the luminance signal of the visible light component and the infrared light signal detected by the same neighboring unit pixel portion.
 According to one aspect of the present invention, the color image generation means preferably increases the weighting coefficient when the luminance signal of the visible light component is equal to or greater than the infrared light signal, and decreases it when the luminance signal of the visible light component is smaller than the infrared light signal. This is because, when the luminance signal of the visible light component is larger, the image signal from that unit pixel portion is more likely to carry the original color information.
 According to one aspect of the present invention, it is preferable to set the weighting coefficient to 1 when the luminance signal of the visible light component is larger, and to 0 when the infrared light signal is larger.
 According to one aspect of the present invention, the neighboring unit pixel portions are located within a predetermined range centered on the target unit pixel portion, and the color image generation means expands the predetermined range when the number of unit pixel portions within the range whose visible-light luminance signal is equal to or greater than the infrared light signal falls below a predetermined number; this is effective even when little visible light is detected within the range. Here, the predetermined number is, for example, the number of unit pixel portions whose visible-light luminance signal is smaller than the infrared light signal.
 According to one aspect of the present invention, the color image generation means divides the imaging surface of the solid-state imaging device into a plurality of blocks, obtains, for each block, the average of the visible-light luminance signals, the average of the color difference signals, and the average of the infrared light signals detected by the unit pixel portions, and derives the nominal color difference signal of the target unit pixel portion by weighting the average color difference signals of the surrounding blocks, instead of the color difference signals detected by the neighboring unit pixel portions, according to the distance between the target unit pixel portion and the blocks surrounding the block containing it, and taking a weighted average. Since block averages can be used, generation of an extremely colored image can be suppressed.
 According to one aspect of the present invention, the color image generation means preferably takes the weighted average of the average color difference signals of the surrounding blocks, instead of the color difference signals detected by the neighboring unit pixel portions, only when the average visible-light luminance signal of the block containing the target unit pixel portion falls below the average infrared light signal of the same block. This is because, if the average visible-light luminance signal of the block is equal to or greater than the average infrared light signal of the same block, there is a high possibility that sufficient color information can be obtained within that block.
 According to the imaging apparatus of the present invention, the original color of a subject can be reproduced and an image with little sense of discomfort can be obtained even at night, when infrared light is dominant.
FIG. 1 is a block diagram of an imaging apparatus 1 according to the present embodiment.
FIG. 2 is a diagram showing the pixel arrangement of a solid-state imaging device 3.
FIG. 3 is a diagram showing the spectral transmission characteristics of the Ye, R, and IR filters, where the vertical axis shows transmittance (sensitivity) and the horizontal axis shows wavelength (nm).
FIG. 4 is a block diagram showing the detailed configuration of an image processing unit 4.
FIG. 5 is a diagram showing a cut-out part of the imaging surface of the solid-state imaging device 3.
FIG. 6 is a graph showing an example of k(x, y).
FIG. 7 is a graph showing an example of S(x, y).
FIG. 8 is a graph showing an example of L(x, y).
FIG. 9 is a diagram showing a cut-out part of the pixels of the solid-state imaging device 3.
FIG. 10 is a diagram showing the imaging surface of the solid-state imaging device 3 divided into blocks BL of a predetermined number of pixels (for example, 10 × 10 unit pixel portions).
FIG. 11 is a diagram showing the imaging surface of the solid-state imaging device 3 divided into blocks BL of a predetermined number of pixels (for example, 10 × 10 unit pixel portions).
 Hereinafter, an imaging apparatus 1 according to an embodiment of the present invention will be described. FIG. 1 is a block diagram of the imaging apparatus 1 according to the present embodiment. As shown in FIG. 1, the imaging apparatus 1 includes a lens 2, a solid-state imaging device (hereinafter also referred to as an image sensor) 3, an image processing unit 4, a control unit 5, a near-infrared light emitting unit 6 that irradiates near-infrared light toward the subject when the subject brightness is low, such as at night, and a drive unit 7 that drives the near-infrared light emitting unit 6 according to a control signal from the control unit 5. The imaging apparatus 1 is mounted on a vehicle, for example, and used to image subjects around the vehicle, but its applications are not limited thereto.
 The lens 2 is composed of an optical lens system that captures an optical image of the subject and guides it to the image sensor 3. As the optical lens system, for example, a zoom lens, a focus lens, and other fixed lens blocks arranged in series along the optical axis L of the subject's optical image can be employed. The lens 2 also includes a diaphragm (not shown) for adjusting the amount of transmitted light, a shutter (not shown), and the like, and the driving of the diaphragm and shutter is controlled under the control of the control unit 5.
 The image sensor 3 includes a light receiving unit composed of PDs (photodiodes), an output circuit that outputs the signals photoelectrically converted by the light receiving unit, and a drive circuit that drives the image sensor 3, and generates original image data whose levels correspond to the amount of light. As the image sensor 3, various imaging sensors such as a CMOS image sensor, a VMIS image sensor, and a CCD image sensor can be employed.
 In the present embodiment, the image sensor 3 receives the optical image of the subject and converts and outputs visible color image components with pixels having color filters, infrared image components with pixels having infrared filters, and luminance image components including both a visible luminance image component and an infrared image component with pixels having no filter; however, the configuration is not limited to this.
 The image processing unit 4 includes an arithmetic circuit and a memory used as its work area; it A/D-converts the original image data output from the image sensor 3 into digital signals, executes the image processing described later, and outputs the result to, for example, a memory or display device (not shown).
 The control unit 5 includes a CPU and a memory storing the programs executed by the CPU, and performs overall control of the imaging apparatus 1 in response to external control signals.
 FIG. 2 is a diagram showing the pixel arrangement of the solid-state imaging device 3. As shown in FIG. 2, unit pixel units 31, each including a Ye pixel, an R pixel, an IR pixel, and a W pixel whose sensitive wavelength band covers the visible and infrared wavelength regions, are arranged in a matrix on the image sensor 3. Here, for example, a "Ye" pixel means a pixel having a "Ye" filter, and likewise below.
 As shown in FIG. 2, in the unit pixel unit 31, the R pixel is placed in the first row, first column, the IR pixel in the second row, first column, the W pixel in the first row, second column, and the Ye pixel in the second row, second column; thus the R, IR, W, and Ye pixels are arranged in a staggered pattern. However, this is only an example, and the R, IR, W, and Ye pixels may be staggered in other patterns.
 Since the Ye pixel has a Ye filter, it outputs an image component Ye, which is the visible color image component of Ye, together with an infrared image component. Since the R pixel has an R filter, it outputs an image component R, which is the visible color image component of R, together with an infrared image component. Since the IR pixel has an IR filter, it outputs an image component IR, which is an infrared image component. Since the W pixel has no filter, it outputs an image component W, which is a luminance image component including the visible luminance image component and the image component IR.
 FIG. 3 is a diagram showing the spectral transmission characteristics of the Ye, R, and IR filters, where the vertical axis shows transmittance (sensitivity) and the horizontal axis shows wavelength (nm). The dotted graph shows the spectral sensitivity characteristic of a pixel with the filter removed; this characteristic peaks near 600 nm and varies along an upward-convex curve. In FIG. 3, 400 nm to 700 nm is the visible wavelength region, 700 nm to 1100 nm is the infrared wavelength region, and 400 nm to 1100 nm is the sensitive wavelength band. Light in the visible wavelength region is the visible light component, and light in the infrared wavelength region is the infrared light component.
 As shown in FIG. 3, the Ye filter has the characteristic of transmitting light in the sensitive wavelength band excluding the blue region of the visible wavelength region. The Ye filter therefore mainly transmits yellow light and infrared light.
 The R filter has the characteristic of transmitting light in the sensitive wavelength band excluding the blue and green regions of the visible wavelength region. The R filter therefore mainly transmits red light and infrared light.
 The IR filter has the characteristic of transmitting light in the sensitive wavelength band excluding the visible wavelength region, that is, in the infrared wavelength band. W indicates the case where no filter is provided, and all light in the pixel's sensitive wavelength band is transmitted.
 Similar characteristics can also be realized with Ye, M (magenta) + IR, and C (cyan) + IR instead of Ye, R, and IR (where M + IR blocks only green and C + IR blocks only red). However, the R, IR, and Ye pixels can have steep spectral transmission characteristics, which are better than those of, for example, the M + IR and C + IR filters. That is, the M + IR and C + IR filters each have the characteristic of blocking only the green region or the red region, a partial central band of the sensitive wavelength band, and it is difficult to give such filters the steep spectral transmission characteristics of the R, IR, and Ye filters. Consequently, the RGB image components cannot be extracted accurately by computation from the M + IR and C + IR filters. Therefore, configuring the image sensor 3 with R, IR, Ye, and W pixels improves the performance of the image sensor 3.
 FIG. 4 is a block diagram showing the detailed configuration of the image processing unit 4. The image processing unit 4, which is the color image generation means, includes a color interpolation unit 41, a color signal generation unit 42, a color space conversion unit 43, and an RGB color signal generation unit 44.
 The color interpolation unit 41 applies interpolation processing to each of the image components Ye, R, IR, and W output from the image sensor 3 to fill in the missing pixel data, converting each of them into image data with the same number of pixels as the image sensor 3. Missing pixel data occurs in the image components Ye, R, IR, and W because the R, IR, W, and Ye pixels are arranged in a staggered pattern. As the interpolation processing, for example, linear interpolation may be employed.
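As a rough sketch of this step, the linear interpolation mentioned above can be done by averaging the available horizontal and vertical neighbors of each missing sample. The kernel below is an illustrative assumption, not the patent's exact method:

```python
def interpolate_missing(grid):
    """Fill None entries of a 2-D component grid with the average of the
    available horizontal/vertical neighbors (simple linear interpolation)."""
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]  # leave measured samples untouched
    for y in range(h):
        for x in range(w):
            if grid[y][x] is None:
                vals = [grid[ny][nx]
                        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] is not None]
                out[y][x] = sum(vals) / len(vals) if vals else 0
    return out
```

Applied to each of the four component planes in turn, this yields four full-resolution planes, as the text requires.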
 The color signal generation unit 42 combines the image component Ye, the image component R, the image component IR (hereinafter referred to as the infrared light signal), and the image component W, which have been interpolated by the color interpolation unit 41, according to the following equations (1a), (1b), and (1c) to generate color signals dR, dG, and dB (RGB color signals).
 dR = R − IR   (1a)
 dG = Ye − R   (1b)
 dB = W − Ye   (1c)
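Equations (1a)–(1c) are simple per-pixel subtractions; a minimal sketch (the input values in the comment are illustrative):

```python
def generate_color_signals(ye, r, ir, w):
    """Combine interpolated image components into RGB color signals
    per equations (1a)-(1c)."""
    dR = r - ir   # (1a): red + infrared, minus infrared
    dG = ye - r   # (1b): yellow (red + green) minus red
    dB = w - ye   # (1c): white (full band) minus yellow
    return dR, dG, dB

# e.g. generate_color_signals(150, 100, 40, 200) -> (60, 50, 50)
```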
 The color space conversion unit 43 converts the color signals dR, dG, and dB into a color space comprising a visible-light luminance signal Y and color difference signals Cb and Cr (an example of color difference signals), as shown in equations (2a), (2b), and (2c). Here, Cb denotes the blue color difference signal and Cr denotes the red color difference signal.
 Y = 0.3dR + 0.6dG + 0.1dB   (2a)
 Cb = −0.17dR − 0.33dG + 0.5dB   (2b)
 Cr = 0.5dR − 0.42dG − 0.08dB   (2c)
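The conversion of equations (2a)–(2c) is a fixed linear map; a minimal sketch with the coefficients copied from the text:

```python
def to_ycbcr(dR, dG, dB):
    """Convert RGB color signals into a visible-light luminance signal Y
    and color difference signals Cb, Cr per equations (2a)-(2c)."""
    y  = 0.3 * dR + 0.6 * dG + 0.1 * dB     # (2a)
    cb = -0.17 * dR - 0.33 * dG + 0.5 * dB  # (2b)
    cr = 0.5 * dR - 0.42 * dG - 0.08 * dB   # (2c)
    return y, cb, cr
```

Note that the coefficients are close to, but not identical with, the standard BT.601 luma/chroma weights.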
 For each image signal output by the unit pixel unit 31, the color space conversion unit 43 also compares the infrared light signal Ir obtained from the image component IR with the visible-light luminance signal Y. When Ir ≤ Y, the weighting coefficient k(x, y) used in equations (3a) and (3b) below is made large (1 in the extreme case); when Ir > Y, it is made small (0 in the extreme case), and a value is then obtained by the weighted average described below. Alternatively, instead of comparing the magnitudes of the infrared light signal and the visible-light luminance signal, Y − Ir may simply be calculated and the weighting coefficient k(x, y) determined accordingly.
FIG. 5 shows a cut-out portion of the imaging surface of the solid-state imaging device 3. In FIG. 5, the target unit pixel portion TG lies at the center of a 5 × 5 arrangement of unit pixel portions 31. The color space conversion unit 43 creates a color difference signal for the target unit pixel portion TG based on the color difference signals of the surrounding unit pixel portions 31. The created color difference signal is referred to as a nominal color difference signal, to distinguish it from the color difference signals actually detected at the target unit pixel portion TG (Cr and Cb of equations (2c) and (2b)). More specifically, the color difference signals Cb(x0, y0) and Cr(x0, y0) of the target unit pixel portion TG are obtained, using the following equations (3a) and (3b), as weighted averages of the color differences Cb(x, y) and Cr(x, y) of the surrounding unit pixel portions 31, where x0 and y0 are the coordinates of the target unit pixel portion TG and x and y are the coordinates of a surrounding unit pixel portion 31.
Cb(x0, y0) = Σ k(x, y)·a(x, y)·Cb(x, y)   (3a)
Cr(x0, y0) = Σ k(x, y)·a(x, y)·Cr(x, y)   (3b)
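The weighted averages of equations (3a) and (3b) can be sketched as follows; the dictionary-based layout (coordinates mapping to (Cb, Cr) pairs and to weights) is an illustrative assumption, not the patent's data representation:

```python
def nominal_color_difference(neighbors, k, a):
    """Equations (3a)/(3b): nominal color difference of the target unit
    pixel portion TG as a weighted average over surrounding pixels.
    neighbors: {(x, y): (cb, cr)}; k, a: {(x, y): weight}."""
    cb0 = sum(k[p] * a[p] * neighbors[p][0] for p in neighbors)  # (3a)
    cr0 = sum(k[p] * a[p] * neighbors[p][1] for p in neighbors)  # (3b)
    return cb0, cr0
```

As stated in the text, the products k(x, y)·a(x, y) are assumed to be normalized so that the result is a true weighted average.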
As shown in equation (4), k(x, y) is a value determined by the ratio of the visible light luminance signal Y(x, y) to the infrared light signal Ir(x, y); increasing k(x, y) as the luminance signal grows relative to the infrared light signal is preferable, since it raises the contribution of visible light. Because a weighted average is taken, the k(x, y) values sum to 1 (the same holds for equations (9a) and (9b) below). FIG. 6 shows an example of k(x, y).
k(x, y) = f(Y(x, y), Ir(x, y))   (4)
a(x, y) is a weighting coefficient, given for example by equation (5).
a(x, y) = S(x, y)·L(x, y)   (5)
Here, as shown in equations (6a) and (6b), S(x, y) is a value determined by the distance from the target unit pixel portion TG to the peripheral unit pixel portion 31; preferably, its value decreases as that distance increases. FIG. 7 shows an example of S(x, y).
S(x, y) = f(R(x, y))   (6a)
R(x, y) = √((x − x0)² + (y − y0)²)   (6b)
L(x, y), on the other hand, is determined by the visible light luminance signal Y(x, y) of the peripheral unit pixel portion 31, as shown in equation (7); preferably, its value increases as the visible light luminance signal increases. FIG. 8 shows an example of L(x, y).
L(x, y) = f(Y(x, y))   (7)
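Equations (4), (6a)/(6b), and (7) specify only the monotone behavior of k, S, and L, not their exact shapes. The following sketch shows one plausible choice for each; the particular ramps (a luminance ratio, a Gaussian distance falloff, a clipped linear luminance weight) are our illustrative assumptions:

```python
import math

def k_weight(y, ir):
    # eq. (4): larger when the visible luminance dominates the infrared signal
    return 1.0 if ir <= 0 else min(1.0, max(0.0, y / (y + ir)))

def s_weight(x, y, x0, y0, sigma=2.0):
    # eqs. (6a)/(6b): decreases with distance R from the target pixel TG
    r = math.hypot(x - x0, y - y0)
    return math.exp(-(r * r) / (2.0 * sigma * sigma))

def l_weight(y, y_max=255.0):
    # eq. (7): increases with the visible light luminance signal
    return min(1.0, y / y_max)
```

Any functions with the same monotonicity (such as the examples in FIGS. 6 to 8) would serve equally well.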
Further, the RGB color signal generation unit 44 calculates the display luminance Yadd for the target unit pixel portion TG as shown in equation (8a), and, based on the color difference signals Cr and Cb obtained from equations (3b) and (3a), calculates matching display color difference signals Crm and Cbm from equations (8b) and (8c).
Yadd = Y + R + G + B (8a)
Crm = Cr × Yadd / Y (8b)
Cbm = Cb × Yadd / Y (8c)
The RGB color signal generation unit 44 then calculates the RGB color signals dR′, dG′, and dB′ from the luminance signal Yadd and the color difference signals Crm and Cbm by inverting equations (2a), (2b), and (2c).
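The display-signal step of equations (8a) to (8c) and the inversion of equations (2a) to (2c) can be sketched as follows. Reading equation (8a) literally, with R, G, B taken as the pixel's color signals, is our assumption; the inverse is computed numerically from the forward coefficients by Cramer's rule rather than from hard-coded inverse coefficients:

```python
M = [[0.3, 0.6, 0.1],        # eq. (2a) row
     [-0.17, -0.33, 0.5],    # eq. (2b) row
     [0.5, -0.42, -0.08]]    # eq. (2c) row

def display_signals(y, cb, cr, r, g, b):
    yadd = y + r + g + b     # (8a) display luminance
    crm = cr * yadd / y      # (8b) rescale red chroma to match Yadd
    cbm = cb * yadd / y      # (8c) rescale blue chroma to match Yadd
    return yadd, crm, cbm

def forward_2abc(dr, dg, db):
    # Equations (2a)-(2c) as a matrix product.
    return tuple(row[0] * dr + row[1] * dg + row[2] * db for row in M)

def _det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def invert_2abc(y, cb, cr):
    # Solve M @ (dR', dG', dB') = (Y, Cb, Cr) by Cramer's rule.
    v = [y, cb, cr]
    d = _det3(M)
    out = []
    for col in range(3):
        mc = [row[:] for row in M]
        for i in range(3):
            mc[i][col] = v[i]
        out.append(_det3(mc) / d)
    return tuple(out)
```

Feeding Yadd, Cbm, and Crm into `invert_2abc` yields the display signals dR′, dG′, dB′.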
Next, a modification of this embodiment will be described. In the embodiment described above, the color signal generation unit 42, serving as a visible pixel/infrared pixel determination unit, compares the visible light luminance signal Y output from any given unit pixel portion 31 with the infrared light signal Ir: when Ir > Y, that unit pixel portion 31 is determined to be an infrared pixel, and when Ir ≤ Y, it is determined to be a visible pixel. Since the result of this determination may be affected by pixel-to-pixel variation, a process for correcting such variation is performed before the determination. FIG. 9 shows an example of the determination result.
When the visible pixel/infrared pixel determination finds that the target unit pixel portion TG is a visible pixel, the color signal generation unit 42, unlike in the embodiment described above, uses the luminance signal and the color difference signals Cb and Cr calculated by equations (2a), (2b), and (2c) as the values of that unit pixel portion 31. When the determination finds that the target unit pixel portion TG is an infrared pixel, its nominal color difference signal can be obtained, as in the embodiment described above, by a weighted average of the color difference signals of the surrounding unit pixel portions 31. This reduces the amount of computation required for image processing.
Next, another modification of this embodiment will be described. In, for example, a block of 5 × 5 unit pixel portions 31 around the target unit pixel portion TG, if the visible pixel/infrared pixel determination by the color signal generation unit 42 finds fewer visible pixels than a predetermined number (for example, the number of infrared pixels), sufficient visible light information may not be available to form the color difference signal of the target unit pixel portion TG. In such a case, the block is expanded: the color signal generation unit 42 performs the visible pixel/infrared pixel determination in, for example, a block of 7 × 7 or 9 × 9 unit pixel portions 31 around the target unit pixel portion TG, and the color difference signal of the target unit pixel portion TG is calculated based on the color difference signals of the surrounding unit pixel portions 31. The block may be expanded until the determination by the color signal generation unit 42 finds more visible pixels than infrared pixels. Securing the number of unit pixel portions 31 needed to calculate the color difference signal in this way maintains chromatic accuracy.
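The block-expansion procedure above can be sketched as follows; the data layout (a dict keyed by pixel coordinates holding (Y, Ir) pairs), the function names, and the fixed 9 × 9 upper bound are illustrative assumptions, not part of the disclosure:

```python
def classify(y, ir):
    # Visible pixel when Ir <= Y, infrared pixel otherwise.
    return "visible" if ir <= y else "infrared"

def expand_window(image, x0, y0, start=5, max_size=9):
    """Grow the neighborhood 5x5 -> 7x7 -> 9x9 around (x0, y0) until
    visible pixels outnumber infrared ones; return the chosen size."""
    size = start
    while size <= max_size:
        half = size // 2
        vis = inf = 0
        for dx in range(-half, half + 1):
            for dy in range(-half, half + 1):
                p = image.get((x0 + dx, y0 + dy))
                if p is None:          # off the imaging surface
                    continue
                if classify(*p) == "visible":
                    vis += 1
                else:
                    inf += 1
        if vis > inf:
            return size
        size += 2
    return max_size  # give up expanding at the upper bound
```

The nominal color difference of TG would then be averaged over the returned window.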
Next, another modification of this embodiment will be described. Here, as shown in FIG. 10, the imaging surface of the solid-state imaging device 3 is divided into blocks BL each containing a predetermined number (for example, 10 × 10) of unit pixel portions 31. For each block BL, the average value Yi of the visible light luminance signal and the average value Iri of the infrared light signal are calculated and compared: when Iri ≤ Yi, the weighting coefficient ki of that block BL is increased, and when Iri > Yi, it is decreased. Next, a color difference signal is generated from the color signals R, G, and B of each unit pixel portion 31. After the color difference signals of all the unit pixel portions 31 have been calculated, the average color difference signals Cbi and Cri of each block BL are calculated. In the example of FIG. 10, blocks with Iri > Yi are given ki = 0 and excluded from the calculation.
For the target unit pixel portion TG in the block BL of interest, the color difference signals Cb(x0, y0) and Cr(x0, y0) can be generated, as shown in equations (9a) and (9b), as weighted averages of the average color difference signals Cbi and Cri of the peripheral blocks BL around the block of interest, where i is the block number. A peripheral block is a block adjacent to the block containing the target unit pixel portion TG.
Cb(x0, y0) = Σ ki·Si·Cbi   (9a)
Cr(x0, y0) = Σ ki·Si·Cri   (9b)
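The per-block statistics and the block-level weighted average of equations (9a) and (9b) can be sketched as follows; the hard 0/1 choice of ki mirrors the example of FIG. 10, and the list-based data layout is our assumption:

```python
def block_stats(pixels):
    """Per-block averages (Y_i, Ir_i) over (Y, Ir) pairs, and the block
    weighting coefficient k_i (hard 0/1 choice as in the FIG. 10 example)."""
    y_i = sum(y for y, _ in pixels) / len(pixels)
    ir_i = sum(ir for _, ir in pixels) / len(pixels)
    k_i = 1.0 if ir_i <= y_i else 0.0
    return y_i, ir_i, k_i

def block_nominal_cbcr(blocks, weights):
    """Equations (9a)/(9b): weighted average of per-block mean color
    differences. blocks: list of (Cb_i, Cr_i); weights: list of k_i * S_i,
    assumed normalized to sum to 1."""
    cb0 = sum(w * cb for w, (cb, _) in zip(weights, blocks))  # (9a)
    cr0 = sum(w * cr for w, (_, cr) in zip(weights, blocks))  # (9b)
    return cb0, cr0
```

A distance-based Si per equation (11) would scale each weight before normalization.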
As shown in equation (10), ki is a value determined by comparing the average value Yi of the visible light luminance signal with the average value Iri of the infrared light signal for each block BL; preferably, ki is made larger the more the visible light luminance signal exceeds the infrared light signal.
ki = f(Yi, Iri)   (10)
The weighting coefficient Si is expressed by equation (11), where R is the distance from the target unit pixel portion to the center of a peripheral block BL (for example, R1 to R5 in FIG. 10); preferably, Si decreases as this distance increases.
Si = f(R)   (11)
Next, another modification of this embodiment will be described. As before, the imaging surface of the solid-state imaging device 3 is divided into a predetermined number of blocks BL (for example, of 10 × 10 unit pixel portions), and for each block BL the average value Yi of the visible light luminance signal and the average value Iri of the infrared light signal are calculated and compared. When Iri ≤ Yi, the block BL is determined to be a visible block in which visible light is dominant; when Iri > Yi, the block BL is determined to be an infrared block in which infrared light is dominant (see FIG. 11).
For a block determined to be a visible block, color difference signals are generated by equations (2b) and (2c) from the color signals R, G, and B output by the unit pixel portions 31 in that block. For a block determined to be an infrared block, the color difference signals Cb(x0, y0) and Cr(x0, y0) of the target unit pixel portion TG in that block can be generated by a weighted average of the average color difference signals Cbi and Cri of the surrounding visible blocks. This reduces the amount of computation required for image processing.
The present invention is not limited to the embodiments and modifications described in this specification; that it encompasses other embodiments and modifications will be apparent to those skilled in the art from the embodiments and ideas described herein.
The present invention is applicable to in-vehicle cameras, surveillance cameras, and the like, but its applications are not limited to these. The number of pixels per block is arbitrary, and the vertical and horizontal counts need not match.
DESCRIPTION OF SYMBOLS
1 Imaging device
2 Lens
3 Solid-state imaging device
4 Image processing unit
5 Control unit
41 Color interpolation unit
42 Color signal generation unit
43 Color space conversion unit
44 RGB signal generation unit

Claims (11)

  1.  An imaging apparatus comprising:
      a solid-state imaging device in which at least three types of pixels having different spectral sensitivities are arranged, at least one type of which has sensitivity in the infrared region, the solid-state imaging device being capable of detecting a visible light component and an infrared light component; and
      color image generation means for generating, when a subject is imaged by the solid-state imaging device, a color image based on luminance information of the visible light component obtained from the solid-state imaging device and infrared light information based on the infrared light component.
  2.  An imaging apparatus comprising:
      a solid-state imaging device in which unit pixel portions, each including a plurality of pixels capable of detecting a visible light component and an infrared light component from a subject, are arranged two-dimensionally; and
      color image generation means for, when a subject is imaged by the solid-state imaging device, replacing the color difference signal of the visible light component of a target unit pixel portion whose detected visible light luminance signal is at or below a predetermined value with a nominal color difference signal derived from the color difference signals of the visible light component detected by neighboring unit pixel portions close to the target unit pixel portion, and generating a color image using the nominal color difference signal.
  3.  The imaging apparatus according to claim 2, wherein the neighboring unit pixel portions are unit pixel portions arranged around the target unit pixel portion.
  4.  The imaging apparatus according to claim 2 or 3, wherein, when there are a plurality of the neighboring unit pixel portions, the color image generation means derives the nominal color difference signal by weighting each of the color difference signals of the visible light component detected by the plurality of neighboring unit pixel portions according to the degree of association between the target unit pixel portion and the neighboring unit pixel portion, determined by the distance between them, and performing a weighted average.
  5.  The imaging apparatus according to claim 4, wherein the weighting coefficient is made higher as the distance between the target unit pixel portion and the neighboring unit pixel portion is shorter.
  6.  The imaging apparatus according to claim 4 or 5, wherein the color image generation means changes the weighting coefficient according to the difference or ratio between the luminance signal of the visible light component detected by a given neighboring unit pixel portion and the infrared light signal.
  7.  The imaging apparatus according to claim 6, wherein the color image generation means raises the weighting coefficient when the luminance signal of the visible light component is greater than or equal to the infrared light signal, and lowers the weighting coefficient when the luminance signal of the visible light component is smaller than the infrared light signal.
  8.  The imaging apparatus according to claim 6 or 7, wherein the color image generation means sets the weighting coefficient to 1 when the luminance signal of the visible light component is larger, and sets the weighting coefficient to 0 when the infrared light signal is larger.
  9.  The imaging apparatus according to any one of claims 2 to 8, wherein the neighboring unit pixel portions are located within a predetermined range centered on the target unit pixel portion, and the color image generation means expands the predetermined range when the number of unit pixel portions within the predetermined range whose luminance signal of the visible light component is greater than or equal to the infrared light signal falls below a predetermined number.
  10.  The imaging apparatus according to any one of claims 2 to 9, wherein the color image generation means divides the imaging surface of the solid-state imaging device into a plurality of blocks, obtains for each block the average value of the luminance signal and the average value of the color difference signal of the visible light component detected by the unit pixel portions as well as the average value of the infrared light signal, and derives the nominal color difference component of the target unit pixel portion by, instead of using the color difference signals detected by the neighboring unit pixel portions, weighting each of the average color difference signals of the surrounding blocks according to the distance between the target unit pixel portion and the blocks surrounding the block containing the target unit pixel portion, and performing a weighted average.
  11.  The imaging apparatus according to claim 10, wherein the color image generation means performs the weighted average of the average color difference signals of the surrounding blocks, instead of using the color difference signals detected by the neighboring unit pixel portions, only when the average value of the luminance signal of the visible light component of the block containing the target unit pixel portion falls below the average value of the infrared light signal of that block.
PCT/JP2011/063794 2010-06-23 2011-06-16 Image capturing device WO2011162155A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2012521442A JPWO2011162155A1 (en) 2010-06-23 2011-06-16 Imaging device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010142660 2010-06-23
JP2010-142660 2010-06-23

Publications (1)

Publication Number Publication Date
WO2011162155A1 true WO2011162155A1 (en) 2011-12-29


Country Status (2)

Country Link
JP (1) JPWO2011162155A1 (en)
WO (1) WO2011162155A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2013111228A1 (en) * 2012-01-24 2013-08-01 日本電気株式会社 Marker detection device and marker detection method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002142228A (en) * 2000-10-31 2002-05-17 Toyota Central Res & Dev Lab Inc Image pickup device
WO2010053029A1 (en) * 2008-11-04 2010-05-14 コニカミノルタオプト株式会社 Image inputting apparatus

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4678172B2 (en) * 2004-11-22 2011-04-27 株式会社豊田中央研究所 Imaging device
JP5070742B2 (en) * 2006-06-09 2012-11-14 ソニー株式会社 Information acquisition method, information acquisition device, semiconductor device, signal processing device



Also Published As

Publication number Publication date
JPWO2011162155A1 (en) 2013-08-22


Legal Events

121 Ep: the EPO has been informed by WIPO that EP was designated in this application (ref document number 11798040, country EP, kind code A1)
WWE WIPO information: entry into national phase (ref document number 2012521442, country JP)
NENP Non-entry into the national phase (ref country code DE)
122 Ep: PCT application non-entry in European phase (ref document number 11798040, country EP, kind code A1)