WO2015015580A1 - Imaging device, imaging method, and in-vehicle imaging system - Google Patents
- Publication number
- WO2015015580A1 (PCT/JP2013/070685)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- light region
- signal
- infrared light
- visible light
- color
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/10—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
- H04N23/11—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/843—Demosaicing, e.g. interpolating colour pixel values
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/85—Camera processing pipelines; Components thereof for processing colour signals for matrixing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/67—Circuits for processing colour signals for matrixing
Definitions
- the present invention relates to an imaging device, an imaging method, and an in-vehicle imaging system.
- Patent Document 1 (Japanese Patent Application Laid-Open No. 2004-133867) discloses an imaging apparatus comprising: an imaging unit that captures an image of a subject and generates an image; a color temperature information calculation unit that calculates color temperature information of the subject; a recording unit in which color reproduction matrices for a natural light source and at least one kind of artificial light source are recorded in association with the type of light source and with position coordinates in a predetermined color space; a calculation unit that selects, from the recorded color reproduction matrices, a plurality of matrices whose position coordinates are close to the position coordinates corresponding to the color temperature information (two matrices associated with the same type of light source, and two or fewer matrices associated with a different light source) and calculates a corrected color reproduction matrix by interpolation based on the position coordinates of the selected matrices and the position coordinates corresponding to the color temperature information; and a color reproduction processing unit that performs color reproduction processing on the image generated by the imaging unit using the corrected color reproduction matrix.
- However, Patent Document 1 addresses only sensitivity characteristics in the visible light region. When a color image is captured using an imaging unit that includes pixels having sensitivity in both the visible and near-infrared light regions and pixels having sensitivity in the near-infrared light region, there is room for improvement in color reproducibility.
- The present invention provides an imaging device, an imaging method, and an in-vehicle imaging system capable of obtaining an output image with good color reproducibility according to the type of light source, even when a color image is captured using an imaging unit composed of pixels having sensitivity in the visible and near-infrared light regions and pixels having sensitivity in the near-infrared light region.
- One aspect is an imaging apparatus comprising: an image sensor including pixels having sensitivity in the visible light region and the near-infrared light region and pixels having sensitivity in the near-infrared light region; color reproduction processing means for performing color reproduction processing on the output signal of the image sensor using signals from the pixels having sensitivity in the visible and near-infrared light regions and signals from the pixels having sensitivity in the near-infrared light region; visible light amount calculating means for calculating a signal amount in the visible light region using the output signal of the image sensor; and control means for controlling the color reproduction processing means so that color reproduction processing is performed based on the signal amount in the visible light region calculated by the visible light amount calculating means.
- Another aspect is the imaging apparatus further comprising a near-infrared light amount calculating unit that calculates a signal amount in the near-infrared light region using the output signal of the image sensor, wherein the control unit controls the color reproduction processing means so that color reproduction processing is performed according to the difference between the signal amount in the visible light region and the signal amount in the near-infrared light region.
- Another aspect is an imaging method using an imaging device having an image sensor that includes pixels having sensitivity in the visible light region and the near-infrared light region and pixels having sensitivity in the near-infrared light region, the method comprising: a visible light amount calculating step of calculating a signal amount in the visible light region using an output signal of the image sensor; and a color reproduction processing step of performing, based on the calculated signal amount in the visible light region, color reproduction processing using signals from the pixels having sensitivity in the visible and near-infrared light regions and signals from the pixels having sensitivity in the near-infrared light region.
- Another aspect is an in-vehicle imaging system comprising: an imaging apparatus as above, with an image sensor including pixels having sensitivity in the visible and near-infrared light regions and pixels having sensitivity in the near-infrared light region, color reproduction processing means, visible light amount calculating means, and control means; a visible light irradiation light source that irradiates a subject with visible light; a near-infrared light irradiation light source that irradiates the subject with near-infrared light; an image recognition apparatus that recognizes objects in the image output from the imaging apparatus; an image synthesizing apparatus that outputs a synthesized image obtained by combining the image output from the imaging apparatus with the recognition result of the image recognition apparatus; a display apparatus that displays the synthesized image output from the image synthesizing apparatus; and a system control device.
- According to the present invention, even when a color image is captured using an imaging unit that includes pixels having sensitivity in the visible and near-infrared light regions and pixels having sensitivity in the near-infrared light region, an imaging apparatus, an imaging method, and an in-vehicle imaging system that can obtain an output image with good color reproducibility according to the type of light source can be provided.
- FIG. 6 is a diagram illustrating an example of a flowchart of a color matrix coefficient and subtraction coefficient determination method in a control unit according to the first embodiment.
- FIG. 6 is a diagram illustrating an example of an interpolation method for color matrix coefficients and subtraction coefficients in a control unit according to the first embodiment. A further figure shows an example of a method for determining the AWB gain in the control unit.
- FIG. 10 is a diagram illustrating an example of a flowchart of a color matrix coefficient and subtraction coefficient determination method in a control unit according to the second embodiment.
- FIG. 10 is a diagram illustrating an example of a color matrix coefficient and subtraction coefficient interpolation method in a control unit according to the second embodiment. Further figures show an example of a flowchart of a color matrix coefficient determination method in the control unit, an example of a color matrix coefficient interpolation method in the control unit, a configuration example of a modification of the imaging device, a configuration example of an in-vehicle imaging system, and an explanation of a certain scene.
- FIG. 1 is a configuration diagram illustrating an imaging apparatus 100 according to the first embodiment.
- The imaging apparatus 100 includes a lens 101, an imaging unit 102, a (red region + near-infrared light region) (hereinafter, (R+I)) signal demosaicing unit 103, a (green region + near-infrared light region) (hereinafter, (G+I)) signal demosaicing unit 104, a (blue region + near-infrared light region) (hereinafter, (B+I)) signal demosaicing unit 105, a near-infrared light region (hereinafter, I) signal demosaicing unit 106, a color matrix calculation unit 107, an auto white balance (hereinafter, AWB) gain unit 108, an R signal gamma computing unit 109, a G signal gamma computing unit 110, a B signal gamma computing unit 111, a first color difference computing unit 112, a second color difference computing unit 113, a luminance matrix calculation unit 114, a high frequency enhancement unit 115, a luminance signal gamma calculation unit 116, a visible light amount detection unit 117, a near-infrared light amount detection unit 118, an AWB detection unit 119, and a control unit 120.
- the lens 101 is a lens or a lens group that forms an image of light coming from a subject.
- The imaging unit 102 includes (R+I) pixels, (G+I) pixels, and (B+I) pixels, which have sensitivity in both the visible light region and the near-infrared light region, and I pixels, which have sensitivity in the near-infrared light region. These designations are used as appropriate below.
- Each pixel performs photoelectric conversion and A / D conversion on the light imaged by the lens 101, and outputs a signal from each pixel, which is digital data.
- The (R+I) signal demosaicing unit 103 performs interpolation processing on the signal from the (R+I) pixels output from the imaging unit 102, generating (R+I) values corresponding to the positions of the other pixel types ((G+I), (B+I), and I pixels), and outputs the resulting (R+I) signal.
- the (G + I) signal demosaicing unit 104 subjects the signal from the (G + I) pixel output from the imaging unit 102 to interpolation processing, and outputs a (G + I) signal.
- the (B + I) signal demosaicing unit 105 performs interpolation processing on the signal from the (B + I) pixel output from the imaging unit 102 and outputs a (B + I) signal.
- the I signal demosaicing unit 106 performs interpolation processing on the signal from the I pixel output from the imaging unit 102 and outputs the I signal.
- The color matrix calculation unit 107 performs color reproduction processing by calculation on the signals output from the (R+I) signal demosaicing unit 103, the (G+I) signal demosaicing unit 104, the (B+I) signal demosaicing unit 105, and the I signal demosaicing unit 106, using the subtraction coefficients and color matrix coefficients output from the control unit 120, and outputs the R, G, and B signals, which are color signals.
- The AWB gain unit 108 applies an AWB gain corresponding to the color temperature of the light source to each color signal output from the color matrix calculation unit 107, and outputs each color signal after AWB gain processing.
- the R signal gamma computing unit 109 performs gamma computation on the R signal output from the AWB gain unit 108 and outputs the R signal.
- the G signal gamma calculation unit 110 performs gamma calculation on the G signal output from the AWB gain unit 108 and outputs the G signal.
- the B signal gamma computing unit 111 performs gamma computation on the B signal output from the AWB gain unit 108 and outputs the B signal.
- The first color difference calculation unit 112 and the second color difference calculation unit 113 generate the first and second color difference signals from the color signals output from the R signal gamma calculation unit 109, the G signal gamma calculation unit 110, and the B signal gamma calculation unit 111.
- For example, the color differences can be calculated according to ITU-R (International Telecommunication Union - Radiocommunication Sector) BT.709, in which case the first color difference signal is Pb, a color difference signal mainly indicating the difference between blue and luminance, and the second color difference signal is Pr, a color difference signal mainly indicating the difference between red and luminance.
- the first color difference signal and the second color difference signal are output to the outside of the imaging apparatus 100.
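As a sketch of the color difference computation mentioned above: one common way to form Pb and Pr per ITU-R BT.709, assuming gamma-corrected R', G', B' values in [0, 1]. This is an illustration of the standard formulas, not code taken from the patent.

```python
# BT.709 luma and colour-difference signals from gamma-corrected R'G'B'.
# Pb ~ (blue minus luma), Pr ~ (red minus luma), as described in the text.

def bt709_ypbpr(r, g, b):
    """Return (Y', Pb, Pr) from gamma-corrected BT.709 R'G'B' in [0, 1]."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b   # luma (BT.709 weights)
    pb = (b - y) / 1.8556                       # scale so Pb is in [-0.5, 0.5]
    pr = (r - y) / 1.5748                       # scale so Pr is in [-0.5, 0.5]
    return y, pb, pr

y, pb, pr = bt709_ypbpr(1.0, 1.0, 1.0)          # pure white: Pb = Pr = 0
```

For a white (achromatic) input the two color difference signals are zero, which is exactly the property the AWB detection described later relies on.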
- the luminance matrix calculation unit 114 uses signals output from the (R + I) signal demosaicing unit 103, the (G + I) signal demosaicing unit 104, the (B + I) signal demosaicing unit 105, and the I signal demosaicing unit 106, respectively. A luminance signal is generated.
- the high-frequency emphasizing unit 115 performs processing for enhancing high spatial frequency components from the luminance signal output from the luminance matrix calculation unit 114, and outputs a luminance signal that sharpens the contour portion (edge) in the image. To do.
- the luminance signal gamma calculation unit 116 performs gamma correction processing on the luminance signal output from the high frequency emphasizing unit 115 and outputs the luminance signal to the outside of the imaging apparatus 100.
- The luminance signal output to the outside of the imaging apparatus 100 together with the first and second color difference signals constitutes the color image signal output.
- The visible light amount detection unit 117 detects the amount of light in the visible light region around the pixel of interest from the signals output from the (R+I) signal demosaicing unit 103, the (G+I) signal demosaicing unit 104, the (B+I) signal demosaicing unit 105, and the I signal demosaicing unit 106, and outputs it as the signal amount in the visible light region.
- The near-infrared light amount detection unit 118 detects the amount of light in the near-infrared light region around the pixel of interest from the signals output from the same four demosaicing units, and outputs it as the signal amount in the near-infrared light region.
- The AWB detection unit 119 detects a white balance deviation using the first and second color difference signals output from the first color difference calculation unit 112 and the second color difference calculation unit 113, the luminance signal output from the luminance signal gamma calculation unit 116, and the signal indicating the AWB detection range output from the control unit 120, and outputs a white balance detection signal.
- The control unit 120 determines, from the signal amount in the visible light region output from the visible light amount detection unit 117 and the signal amount in the near-infrared light region output from the near-infrared light amount detection unit 118, the subtraction coefficients and color matrix coefficients optimum for the light source near the pixel of interest, and outputs them to the color matrix calculation unit 107.
- The control unit 120 also uses the signal amount in the visible light region output from the visible light amount detection unit 117 and the signal amount in the near-infrared light region output from the near-infrared light amount detection unit 118 to generate a signal indicating the AWB detection range optimum for the light source, and outputs it to the AWB detection unit 119.
- the color matrix calculation unit 107 is configured using, for example, an I subtraction unit 121, an R signal matrix calculation unit 122, a G signal matrix calculation unit 123, and a B signal matrix calculation unit 124 as appropriate.
- the I subtraction unit 121 calculates a value obtained by multiplying the I signal output from the I signal demosaicing unit 106 by a coefficient (subtraction coefficient) from the (R + I) signal output from the (R + I) signal demosaicing unit 103. Subtract to generate an R signal. Further, the I subtraction unit 121 subtracts a value obtained by multiplying the I signal output from the I signal demosaicing unit 106 by the subtraction coefficient from the (G + I) signal output from the (G + I) signal demosaicing unit 104. A G signal is generated.
- the I subtractor 121 subtracts a value obtained by multiplying the I signal output from the I signal demosaicing unit 106 by the subtraction coefficient from the (B + I) signal output from the (B + I) signal demosaicing unit 105, A B signal is generated.
- the R signal matrix calculation unit 122 generates and outputs an R signal with better color reproducibility from the R signal, G signal, and B signal output from the I subtraction unit 121 by matrix calculation.
- the G signal matrix calculation unit 123 generates and outputs a G signal with better color reproducibility from the R signal, G signal, and B signal output from the I subtraction unit 121 by matrix calculation.
- The B signal matrix calculation unit 124 generates and outputs a B signal having better color reproducibility by matrix calculation from the R signal, G signal, and B signal output from the I subtraction unit 121.
- With the above configuration, the color matrix coefficients and the subtraction coefficients can be controlled according to the signal amount in the visible light region and the signal amount in the near-infrared light region, so that an imaging device 100 can be provided that obtains color difference signals with good color reproducibility even when a color image is captured using an imaging unit composed of pixels having sensitivity in the visible and near-infrared light regions and pixels having sensitivity in the near-infrared light region.
- FIG. 2 is a diagram illustrating an example of the arrangement of pixels of the imaging element of the imaging unit 102.
- Four pixel types, an (R+I) pixel 201, a (G+I) pixel 202, an I pixel 203, and a (B+I) pixel 204, form a unit configuration of 2 × 2 pixels, and this unit is repeatedly arranged in both the vertical and horizontal directions.
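The repeating 2 × 2 unit above can be sketched as follows. The exact placement of the four pixel types within the cell is an assumption for illustration (the text names the types but not their positions); each quarter-resolution plane would then be interpolated to full resolution by the demosaicing units.

```python
import numpy as np

def split_planes(raw):
    """Split a raw RGBI mosaic into four quarter-resolution planes.

    Assumed cell layout (illustrative only):
        (R+I) (G+I)
          I   (B+I)
    """
    return {
        "R+I": raw[0::2, 0::2],
        "G+I": raw[0::2, 1::2],
        "I":   raw[1::2, 0::2],
        "B+I": raw[1::2, 1::2],
    }

raw = np.arange(16).reshape(4, 4)   # toy 4x4 sensor readout
planes = split_planes(raw)
```

Each plane is half the resolution of the sensor in each dimension; the demosaicing units 103 to 106 fill in the missing positions by interpolation.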
- FIG. 3 is a diagram showing an example of the wavelength sensitivity characteristic of each pixel included in each pixel of the image sensor shown in FIG.
- The imaging unit 102 includes four types of pixels: (R+I) pixels having sensitivity in the red region (R) 301 of the visible light region and in the near-infrared light region (I) 302; (G+I) pixels having sensitivity in the green region (G) 305 and in the near-infrared light region (I) 306; I pixels having sensitivity in the near-infrared light region (I) 309; and (B+I) pixels having sensitivity in the blue region (B) of the visible light region and in the near-infrared light region (I) 312.
- By giving each pixel sensitivity in the near-infrared light region in addition to the visible light region, as in the imaging device of FIG. 3, the sensor can make use of light from a light source that emits in both the visible and near-infrared wavelengths. From the viewpoint of reproducing the sensitivity characteristics of the human eye, however, the component in the near-infrared light region (I) is an unnecessary wavelength component.
- If the sensitivities to the near-infrared light region (I) of the pixels in FIG. 3 are substantially the same, then subtracting the output signal of an I pixel from the output signal of an (R+I) pixel, for example, yields a signal having sensitivity in the red region (R).
- In practice, however, the sensitivity characteristics in the near-infrared light region (I) vary from pixel to pixel, and each pixel includes unnecessary wavelength components, as described later. A specific method for reducing the loss of color reproducibility due to these factors is described below, focusing on the operations of the color matrix calculation unit 107 and the AWB gain unit 108.
- the color matrix calculation unit 107 outputs R, G, and B signals that are color signals based on the (R + I) signal, (G + I) signal, (B + I) signal, and (I) signal output from the imaging unit 102. .
- The I subtraction unit 121 removes the signal components in the near-infrared light region and outputs color signals R1, G1, and B1 having sensitivity in the visible light region.
- R1 = (R + I) − ki1 × I (Equation 1)
- G1 = (G + I) − ki2 × I (Equation 2)
- B1 = (B + I) − ki3 × I (Equation 3)
- (ki1, ki2, ki3) are subtraction coefficients.
- R2 = krr × R1 + krg × G1 + krb × B1 (Equation 4)
- G2 = kgr × R1 + kgg × G1 + kgb × B1 (Equation 5)
- B2 = kbr × R1 + kbg × G1 + kbb × B1 (Equation 6)
- R2, G2, and B2 are the output R, G, and B signals.
- In the resulting wavelength sensitivity characteristics there are regions where more than one of the red (R), green (G), and blue (B) components has sensitivity at the same wavelength, i.e., the components overlap.
- Color reproducibility is improved by adjusting the size of these overlapping regions through the color matrix calculation of (Equation 4) to (Equation 6) according to the characteristics of the overlap.
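A minimal sketch of the I subtraction followed by the color matrix calculation described above. All coefficient values here are made-up placeholders; in the device they are supplied by the control unit 120 according to the light source.

```python
import numpy as np

ki = np.array([1.0, 1.0, 1.0])                 # (ki1, ki2, ki3), illustrative
M = np.array([[ 1.2, -0.1, -0.1],              # (krr, krg, krb), illustrative
              [-0.1,  1.2, -0.1],              # (kgr, kgg, kgb)
              [-0.1, -0.1,  1.2]])             # (kbr, kbg, kbb)

def color_reproduce(ri, gi, bi, i):
    """(R+I, G+I, B+I, I) signals -> colour-corrected (R2, G2, B2)."""
    rgb1 = np.array([ri, gi, bi]) - ki * i     # I subtraction (R1, G1, B1)
    return M @ rgb1                            # 3x3 colour matrix

r2, g2, b2 = color_reproduce(120.0, 110.0, 100.0, 20.0)
```

The off-diagonal negative entries in the illustrative matrix shrink the overlap between channels, which is the mechanism the text describes for improving color reproducibility.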
- Although the color matrix calculation unit 107 is described here as an I subtraction unit 121 followed by the R signal matrix calculation unit 122, the G signal matrix calculation unit 123, and the B signal matrix calculation unit 124, a configuration that directly realizes (Equation 9) may be used instead. In that case, the number of computation stages is reduced, so the latency of a hardware implementation can be improved while limiting the loss of color reproducibility.
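The two-stage computation (I subtraction, then 3×3 matrix) can be folded algebraically into a single 3×4 matrix applied to ((R+I), (G+I), (B+I), I), which is presumably the single-stage form the text refers to. The sketch below verifies the equivalence with illustrative coefficients.

```python
import numpy as np

ki = np.array([1.0, 1.0, 1.0])                 # illustrative subtraction coefficients
M = np.array([[ 1.2, -0.1, -0.1],
              [-0.1,  1.2, -0.1],
              [-0.1, -0.1,  1.2]])             # illustrative colour matrix

# Fold the I subtraction into the matrix: the fourth column carries -M @ ki,
# so one matrix-vector product replaces subtract-then-multiply.
M4 = np.hstack([M, (-M @ ki)[:, None]])        # 3x4 combined matrix

x = np.array([120.0, 110.0, 100.0, 20.0])      # (R+I, G+I, B+I, I)
two_stage = M @ (x[:3] - ki * x[3])
one_stage = M4 @ x
```

Because the two forms are mathematically identical, the single-matrix version trades no accuracy for the shorter pipeline, matching the latency argument above.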
- the AWB gain unit 108 performs the following calculation according to the color temperature of the light source.
- R3 kr ⁇ R2 (Equation 11)
- G3 kg ⁇ G2 (Equation 12)
- B3 kb ⁇ B2
- (kr, kg, kb) are coefficients called AWB gains.
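A sketch of the per-channel AWB gain application described above. The gain-selection function shown is a simple grey-world-style choice (normalizing the red and blue averages to green) added for illustration; the patent instead derives the gains from the AWB detection result and the light source's color temperature.

```python
def awb_apply(r2, g2, b2, kr, kg, kb):
    """Apply the AWB gains to the colour-matrixed signals."""
    return kr * r2, kg * g2, kb * b2

def awb_gains_grey_world(r_avg, g_avg, b_avg):
    """Illustrative gain choice: make channel averages equal to green's."""
    return g_avg / r_avg, 1.0, g_avg / b_avg

kr, kg, kb = awb_gains_grey_world(80.0, 100.0, 125.0)
r3, g3, b3 = awb_apply(80.0, 100.0, 125.0, kr, kg, kb)
```

After the gains, an averaged neutral patch comes out with equal R, G, and B, i.e., white balanced.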
- In practice, the wavelength sensitivity characteristics of the near-infrared light region (I) components of the pixels (302, 306, 309, and 312 in FIG. 3) vary, so the near-infrared light region (I) component cannot be optimally removed simply by subtracting the signal value of the I pixel.
- each pixel contains unnecessary wavelength components.
- For the (R+I) pixel, the red region (R) 301 and the near-infrared light region (I) 302 in FIG. 3 are effective wavelength components, while 303 and 304 in FIG. 3 are unnecessary wavelength components.
- the (G + I) pixel has unnecessary wavelength components 307 and 308, the (I) pixel has unnecessary wavelength component 310, and the (B + I) pixel has unnecessary wavelength component 313.
- These unnecessary wavelength components (303, 304, 307, 308, 310, and 313 in FIG. 3) would ideally be zero, but they are not. Consequently, the wavelength sensitivity characteristics after subtraction of the I pixel signal, or after the color matrix calculation, acquire positive and negative sensitivities at unintended wavelengths.
- The variation of the near-infrared light region (I) components and the degree of influence of the unnecessary wavelength components depend on the type of light source. For example, with a typical three-wavelength fluorescent lamp as the light source, the radiant energy is concentrated in the red region (R), green region (G), and blue region (B) of the visible light region, and other wavelengths, including the near-infrared light region (I), contain very little or no radiation. In such a case there is almost no influence from variation in the near-infrared light region (I), but the unnecessary wavelength components still have an effect.
- With a light source such as a halogen lamp, by contrast, the radiant energy in the near-infrared light region is higher than in the visible light region, so the influence of the variation in the near-infrared light region (I) becomes large while the influence of the unnecessary wavelength components becomes relatively small. When the light source is a near-infrared projector that emits only in the near-infrared light region (I), colors cannot be reproduced at all.
- The color matrix calculation unit 107 aims to achieve good color reproduction by minimizing these influences and adjusting the overlap of the wavelength-component characteristics. To do so, it must be taken into account that the degree of influence of the unnecessary wavelength components and of the variation changes with the balance of radiant energy between the visible and near-infrared light regions of the light source. With a fixed matrix coefficient, or with color-space-based control of the color matrix as in the method of Patent Document 1, this difference in radiant energy between the visible and near-infrared light regions cannot be taken into account, so good color reproduction cannot be obtained.
- This embodiment therefore introduces means for selecting the subtraction coefficients and color matrix coefficients according to the difference in radiant energy between the visible and near-infrared light regions of the light source. The means and their effects are described below.
- the visible light amount detection unit 117 detects the signal amount Yd of the visible light region around the target pixel by, for example, the following calculation.
- Yd = Σ (kyd1 × ((R+I) − kid1 × I) + kyd2 × ((G+I) − kid2 × I) + kyd3 × ((B+I) − kid3 × I)), where kid1, kid2, kid3, kyd1, kyd2, and kyd3 are coefficients and Σ is the sum over the signals around the pixel of interest.
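The visible-light amount calculation above can be sketched as follows. The coefficient values and the 3 × 3 window are illustrative assumptions; the patent only specifies the weighted, I-subtracted sum over a neighborhood of the pixel of interest.

```python
import numpy as np

def visible_amount(ri, gi, bi, i,
                   kid=(1.0, 1.0, 1.0),        # (kid1..3), illustrative
                   kyd=(0.3, 0.6, 0.1)):       # (kyd1..3), illustrative
    """Yd: weighted sum of the I-subtracted channels over the window."""
    yd = (kyd[0] * (ri - kid[0] * i)
          + kyd[1] * (gi - kid[1] * i)
          + kyd[2] * (bi - kid[2] * i))
    return float(np.sum(yd))

w = np.full((3, 3), 10.0)                      # flat 3x3 neighbourhood
yd = visible_amount(w * 12, w * 11, w * 10, w * 0)   # no NIR content
```

With no near-infrared content (I = 0), Yd is simply the luminance-weighted visible sum over the window.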
- The near-infrared light amount detection unit 118 similarly detects the signal amount Id in the near-infrared light region around the pixel of interest from the demosaiced signals.
- FIG. 4 is an example of a flowchart of a method for determining a color matrix coefficient and a subtraction coefficient in the control unit 120.
- The control unit 120 reads the signal amount Yd in the visible light region from the visible light amount detection unit 117 and the signal amount Id in the near-infrared light region from the near-infrared light amount detection unit 118. Either reading may be performed first, or both may be performed simultaneously.
- In step 403, the control unit 120 derives the light amount subtraction result D as D = Yd − Id.
- Next, the control unit 120 determines a subtraction coefficient and color matrix coefficient set Mat3 based on the light amount subtraction result D (hereinafter Mat*, where * is an arbitrary number, denotes a combination of subtraction coefficients and color matrix coefficients).
- D tends to be relatively high under a light source with high radiant energy in the visible light region, such as a fluorescent lamp, and relatively low under a light source with high radiant energy in the near-infrared light region, such as a halogen lamp.
- The control unit 120 can therefore estimate the type of light source from the tendency of the light amount subtraction result D and generate an appropriate color matrix coefficient and subtraction coefficient set Mat3 for that light source. In step 406, the control unit 120 outputs the set Mat3 to the color matrix calculation unit 107.
- FIG. 5 is a diagram showing a method of deriving the color matrix coefficients (krr, krg, krb, kgr, kgg, kgb, kbr, kbg, kbb) and subtraction coefficients (ki1, ki2, ki3) from the light amount subtraction result D.
- Coefficient sets are determined in advance; in FIG. 5 they are predetermined for D = −255, −128, 0, 128, and 255. Since Yd and Id can each take values from 0 to 255, D ranges from −255 to 255, and values are predetermined at both ends and at three roughly equally spaced points in between.
- the selected first coefficient set is Mat1 (501 to 504, 513 to 516, 525 to 528), and the selected second coefficient set is Mat2 (505 to 508, 517 to 520, 529 to 532). ).
- Then, according to the value of D, a set of coefficients determined by interpolation for each coefficient from these two coefficient sets is defined as Mat3 (509 to 512, 521 to 524, 533 to 536).
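The derivation of Mat3 from the two bracketing coefficient sets can be sketched as follows (illustrative Python; the grid of D values follows FIG. 5, while the coefficient values themselves are placeholder assumptions, since the actual values are not given in the text):

```python
# Grid of D values at which coefficient sets are predetermined (FIG. 5).
D_GRID = [-255, -128, 0, 128, 255]

def interpolate_mat3(d, grid, mats):
    """Linearly interpolate each coefficient between the two grid
    coefficient sets (Mat1, Mat2) that bracket d, yielding Mat3.

    `mats` maps each grid value to a dict of coefficients
    (krr .. kbb and ki1 .. ki3)."""
    d = max(grid[0], min(grid[-1], d))        # clamp to the grid range
    for lo, hi in zip(grid, grid[1:]):
        if lo <= d <= hi:
            t = (d - lo) / (hi - lo)
            mat1, mat2 = mats[lo], mats[hi]
            return {k: (1 - t) * mat1[k] + t * mat2[k] for k in mat1}

# Example with a single coefficient per set for brevity:
mats = {d: {"krr": v} for d, v in zip(D_GRID, [0.0, 0.2, 0.6, 0.9, 1.0])}
print(interpolate_mat3(64, D_GRID, mats))   # krr halfway between 0.6 and 0.9
```

In practice the same interpolation is applied to all twelve coefficients of the set at once, so the transition between light source types is gradual.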
- When D is a negative value with a large absolute value, the coefficient set is made optimal for, for example, a near-infrared projector (the color matrix coefficients are brought close to 0 so that an achromatic image is obtained); when D is around 0, the coefficient set is adjusted in consideration of both unnecessary wavelength components and variations in the near-infrared light region so as to be optimal for, for example, a halogen lamp; when D is high, the optimum subtraction coefficients and color matrix coefficients are set for, for example, a fluorescent lamp (specialized for removing unnecessary wavelength components in the visible light region); and the values between these points are determined so as to take intermediate values as appropriate.
- FIG. 6 is an explanatory diagram showing a method of controlling the AWB detection range in the control unit 120.
- The first color difference signal output from the first color difference calculation unit 112 in FIG. 1 is shown on the horizontal axis 601 in FIG. 6, and the output value of the second color difference signal output from the second color difference calculation unit 113 is shown on the vertical axis 602. On the color difference plane, AWB detection ranges 603 and 604 are defined. The AWB detection range indicates the range of color difference levels of pixels regarded as white in AWB.
- For example, in FIG. 6, the AWB detection ranges 603 and 604 are defined as rectangles with a threshold value provided for each axis, but the present invention is not limited to this, and they may be defined in any shape.
- The AWB detection unit 119 obtains the average value Pba of the first color difference signal Pb and the average value Pra of the second color difference signal Pr over all pixels whose first color difference signal (here, Pb) and second color difference signal (here, Pr) both fall within the AWB detection range 604, and outputs these (Pba, Pra) to the control unit 120.
- the control unit 120 records the adjusted AWB gain inside the control unit 120 and outputs it to the AWB gain unit 108.
- The AWB detection range for a certain set of color matrix coefficients and subtraction coefficients is shown at 603 in FIG. 6. As already shown in the flowchart of FIG. 4, the subtraction coefficients and the color matrix coefficients are controlled; as a result, the first color difference signal and the second color difference signal change when the subtraction coefficients and the color matrix coefficients change. That is, the AWB detection range also changes.
- Therefore, the AWB detection range is also corrected according to changes in the subtraction coefficients and the color matrix coefficients. When the subtraction coefficients and the color matrix coefficients change from those of the original AWB detection range 603, the axes of the first color difference signal and the second color difference signal change based on, for example, the BT.709 color difference calculation formulas. A corrected AWB detection range is determined in advance in accordance with the changes in the first and second color difference signals. In the present embodiment, the corrected AWB detection range 604 is determined based on the value of the light amount subtraction result D of (Equation 15), the values of kr, kg, and kb, and the value of the luminance signal.
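The rectangular AWB detection described above, averaging (Pb, Pr) over the pixels inside the range, can be sketched as follows (illustrative Python; the pixel values and thresholds are assumptions for the example):

```python
def awb_detect(pixels, pb_range, pr_range):
    """Average (Pb, Pr) over pixels whose color difference signals both
    fall inside the rectangular AWB detection range.

    pixels: iterable of (pb, pr); each range: (min, max) per axis."""
    inside = [(pb, pr) for pb, pr in pixels
              if pb_range[0] <= pb <= pb_range[1]
              and pr_range[0] <= pr <= pr_range[1]]
    if not inside:
        return None                    # no pixel regarded as white
    pba = sum(pb for pb, _ in inside) / len(inside)
    pra = sum(pr for _, pr in inside) / len(inside)
    return pba, pra

pixels = [(0.0, 0.1), (0.2, -0.1), (0.9, 0.9)]   # last pixel lies outside
print(awb_detect(pixels, (-0.5, 0.5), (-0.5, 0.5)))   # (0.1, 0.0)
```

The averaged pair (Pba, Pra) is what the control unit 120 uses to adjust the AWB gains (kr, kg, kb); the range itself is then re-derived whenever the subtraction and color matrix coefficients change.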
- The imaging apparatus of the second embodiment has the same configuration as that of the first embodiment, and differs in the processing content in the control unit 120. Description of the parts common to the first embodiment is omitted as appropriate, and the differences are mainly described.
- FIG. 7 is an example of a flowchart of processing contents in the control unit 120 according to the second embodiment, that is, a method for determining a color matrix coefficient and a subtraction coefficient.
- First, in step 701, the control unit 120 reads the signal amount Yd in the visible light region from the visible light amount detection unit 117.
- Next, in steps 702 and 703, the control unit 120 determines a set Mat3 of subtraction coefficients and color matrix coefficients from the signal amount Yd in the visible light region.
- The control unit 120 can estimate the type of the light source based on the tendency of the signal amount Yd in the visible light region, and can generate an appropriate set Mat3 of color matrix coefficients and subtraction coefficients according to the type of the light source. In step 704, the control unit 120 outputs the set Mat3 to the color matrix calculation unit 107.
- FIG. 8 is a diagram showing a method of deriving the color matrix coefficients (krr, krg, krb, kgr, kgg, kgb, kbr, kbg, kbb) and the subtraction coefficients (ki1, ki2, ki3) from the signal amount in the visible light region.
- For discrete values of the signal amount Yd in the visible light region, color matrix coefficients and subtraction coefficients are determined in advance. For example, in FIG. 8, values are predetermined for Yd = 0, 64, 128, 192, and 255.
- the value of Yd can take a value from 0 to 255, and the values are determined in advance at substantially equal intervals for both ends and three points therebetween.
- When the signal amount Yd in the visible light region of the target pixel has been read, two coefficient sets close to the value of Yd are selected. The first selected coefficient set is Mat1 (801 to 804, 813 to 816, 825 to 828), and the second selected coefficient set is Mat2 (805 to 808, 817 to 820, 829 to 832).
- Then, according to the value of Yd, a set of coefficients determined by interpolation for each coefficient from these two coefficient sets is defined as Mat3 (809 to 812, 821 to 824, 833 to 836).
- When Yd is low, the coefficient set is made optimal for, for example, a near-infrared projector (the color matrix coefficients are brought close to 0 so that an achromatic image is obtained); when Yd is high, the optimum subtraction coefficients and color matrix coefficients are set for, for example, a fluorescent lamp (specialized for removing unnecessary wavelength components in the visible light region); and the values between these points are determined so as to take intermediate values as appropriate. In this way, even when the light source changes for each region in the screen or over time, the optimum subtraction coefficients and color matrix coefficients can be selected, and the effect of improving color reproducibility is obtained.
- In the second embodiment, the signal amount in the near-infrared light region is not used. Therefore, a configuration in which the near-infrared light amount detection unit 118 is omitted from FIG. 1 may be adopted, in which case the circuit scale can be reduced.
- The imaging apparatus according to the third embodiment has the same configuration as that of the first embodiment, and differs in the processing content in the control unit 120. Description of the parts common to the first embodiment is omitted as appropriate, and the differences are mainly described.
- FIG. 9 is an example of a flowchart of the processing in the control unit 120 according to the third embodiment, that is, a method for determining the color matrix coefficients and the subtraction coefficients.
- In steps 901 and 902, the control unit 120 reads the signal amount Yd in the visible light region from the visible light amount detection unit 117 and the signal amount Id in the near-infrared light region from the near-infrared light amount detection unit 118.
- Either reading may be performed first, or the two may be performed simultaneously.
- In steps 903 and 904, the control unit 120 determines a set Mat5 of subtraction coefficients and color matrix coefficients from the combination of the signal amount Yd in the visible light region and the signal amount Id in the near-infrared light region. Since the radiant energy characteristic for each wavelength has features depending on the type of light source, the type of light source mainly illuminating the vicinity of the target pixel can be estimated from the combination of Yd and Id. For example, it can be estimated that the light source is a halogen lamp when both Yd and Id are high, a fluorescent lamp when Yd is high and Id is low, a near-infrared projector when Yd is low and Id is high, and so on. An appropriate set Mat5 of color matrix coefficients and subtraction coefficients is generated according to the estimated type of light source. In step 905, the control unit 120 outputs the determined coefficient set Mat5 to the color matrix calculation unit 107.
- FIG. 10 is a diagram illustrating a method for deriving a color matrix coefficient and a subtraction coefficient from the signal amount Yd in the visible light region and the signal amount Id in the near-infrared light region.
- For combinations of discrete values of Yd and Id, color matrix coefficients and subtraction coefficients are determined in advance. For example, in FIG. 10, a set of color matrix coefficients and subtraction coefficients is determined for each combination of Yd and Id taking the values 0, 64, 128, 192, and 255. That is, each coefficient is defined in a table for the combinations of discrete values of (Yd, Id).
- As the predetermined coefficient sets, when (Yd, Id) in FIG. 10 is (0, 0) or (0, 255), coefficients for a near-infrared projector are set (for example, the color matrix coefficients are brought close to 0 so that an achromatic image is obtained); when (Yd, Id) is (255, 255), coefficients for a halogen lamp are set (taking into account both unnecessary wavelength components and variations in the near-infrared light region); when (Yd, Id) is (255, 0), the optimum subtraction coefficients and color matrix coefficients for a fluorescent lamp are set (specialized for removing unnecessary wavelength components in the visible light region); and the other points are determined so as to take intermediate values as appropriate.
- the optimum subtraction coefficient and color matrix coefficient can be selected, and color reproducibility can be improved.
- In the present embodiment, one coefficient set is determined by interpolation for each coefficient from the four coefficient sets, but the number of initially selected coefficient sets is not limited to four and can be changed as appropriate.
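The interpolation from the four surrounding coefficient sets can be sketched as follows (illustrative Python using bilinear interpolation over the (Yd, Id) grid of FIG. 10; the table contents are placeholder assumptions, since the actual coefficient values are not given in the text):

```python
GRID = [0, 64, 128, 192, 255]   # grid values for both Yd and Id (FIG. 10)

def bracket(grid, v):
    """Return the bracketing grid points and the interpolation weight for v."""
    v = max(grid[0], min(grid[-1], v))
    for lo, hi in zip(grid, grid[1:]):
        if lo <= v <= hi:
            return lo, hi, (v - lo) / (hi - lo)

def lookup_mat5(yd, id_, table):
    """Bilinearly interpolate one coefficient set Mat5 from the four
    coefficient sets at the surrounding (Yd, Id) grid points.

    table[(yd_grid, id_grid)] -> dict of coefficients."""
    y0, y1, ty = bracket(GRID, yd)
    i0, i1, ti = bracket(GRID, id_)
    out = {}
    for k in table[(y0, i0)]:
        a = (1 - ty) * table[(y0, i0)][k] + ty * table[(y1, i0)][k]
        b = (1 - ty) * table[(y0, i1)][k] + ty * table[(y1, i1)][k]
        out[k] = (1 - ti) * a + ti * b
    return out

# Single illustrative coefficient: ki1 grows with Id only.
table = {(y, i): {"ki1": i / 255} for y in GRID for i in GRID}
print(lookup_mat5(100, 128, table)["ki1"])
```

The same lookup applies unchanged to the full coefficient sets (krr .. kbb, ki1 .. ki3); only the table values differ per coefficient.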
- According to the third embodiment, the subtraction coefficients and color matrix coefficients suited to the light source can be obtained with higher accuracy, so that color reproducibility can be improved.
- FIG. 11 is a configuration diagram illustrating an imaging apparatus 1100 according to the fourth embodiment.
- The imaging apparatus 1100 includes a lens 101, an imaging unit 102, an (R+I) signal demosaicing unit 103, a (G+I) signal demosaicing unit 104, a (B+I) signal demosaicing unit 105, an I signal demosaicing unit 106, a color matrix calculation unit 1101, a visible light amount detection unit 1102, and the like.
- the color matrix calculation unit 1101 is configured by appropriately using, for example, an I subtraction unit 121, an R signal matrix calculation unit 122, a G signal matrix calculation unit 123, and a B signal matrix calculation unit 124.
- The configuration of the imaging apparatus 1100 in FIG. 11 according to the fourth embodiment is the same as that of the imaging apparatus 100 in FIG. 1 except for the color matrix calculation unit 1101 and the visible light amount detection unit 1102. Description of the common configuration is therefore omitted as appropriate, and the differences are mainly described below.
- The color matrix calculation unit 1101 in FIG. 11 has basically the same configuration as the color matrix calculation unit 107 in FIG. 1, with the addition that the output of the I subtraction unit 121 in FIG. 11, which is an intermediate output, is also supplied to the visible light amount detection unit 1102.
- the visible light amount detection unit 1102 in FIG. 11 is different from the visible light amount detection unit 117 in FIG. 1 in that the color signals R1, G1, and B1 output from the I subtraction unit 121 are input.
- As a result, the color signals R1, G1, and B1, from which the signal components in the near-infrared light region have been removed and which have sensitivity in the visible light region, are calculated as in (Equation 1) to (Equation 3).
- the visible light amount detection unit 1102 in FIG. 11 detects the signal amount Yd in the visible light region around the target pixel by the following calculation.
- (Equation 16) is equal to (Equation 13) described in the first embodiment with the conditions of the following equations added.
- The operation of the control unit 120 may follow, for example, any of the operations of the first, second, and third embodiments, and the modifications described in the respective embodiments can also be applied as appropriate. With each operation, the effects described in the first to third embodiments can be obtained.
- FIG. 12 is a configuration diagram illustrating an in-vehicle imaging system 1200 according to the fifth embodiment.
- The in-vehicle imaging system 1200 is configured by appropriately using a visible light irradiation light source 1201, a near-infrared light irradiation light source 1202, a light source switch 1203, an imaging device 1204, an image recognition device 1205, an image composition device 1206, a display device 1207, and a system control device 1208.
- the visible light irradiation light source 1201 is a light source that emits light including a visible light region.
- Examples include a white light-emitting diode (hereinafter, LED) light source that irradiates light in the visible light region, and a halogen lamp that irradiates light in the visible light region and the near-infrared light region.
- the near infrared light irradiation light source 1202 is a light source that irradiates light in a near infrared light region.
- an LED light source that emits light having a wavelength of 650 nm to 1200 nm.
- the light source switch 1203 is a switch for turning on / off the irradiation of the visible light irradiation light source 1201 and the near-infrared light irradiation light source 1202, and outputs a lighting signal indicating ON / OFF of each light source to the system control device 1208.
- the imaging device 1204 images a subject in the visible light region and the near-infrared light region, and outputs a luminance signal, a first color difference signal, and a second color difference signal.
- the imaging device 1204 has the same configuration as the imaging devices of the first to fourth embodiments. Further, the imaging device 1204 inputs a control signal output from the system control device 1208 to the control unit 120.
- the image recognition device 1205 recognizes the subject using the luminance signal, the first color difference signal, and the second color difference signal output from the imaging device 1204, and outputs a recognition result signal corresponding to the recognition result.
- Based on the recognition result signal output from the image recognition device 1205, the image composition device 1206 outputs a luminance signal, a first color difference signal, and a second color difference signal obtained by superimposing the image recognition result on the luminance signal, the first color difference signal, and the second color difference signal output from the imaging device 1204.
- the display device 1207 is a device that displays the luminance signal, the first color difference signal, and the second color difference signal output from the image composition device 1206, and is, for example, a liquid crystal display.
- the in-vehicle imaging system 1200 is assumed to be a device mounted on a vehicle such as an automobile or a train.
- For example, the visible light irradiation light source 1201 corresponds to a low beam, and the near-infrared light irradiation light source 1202 corresponds to a high beam.
- the light source switch 1203 corresponds to a high beam / low beam switching switch operated by a vehicle driver.
- FIG. 13 is a diagram for explaining a scene of this embodiment. The following describes one effect of the in-vehicle imaging system according to the present embodiment, using, for example, one night scene of FIG.
- the host vehicle 1302 is traveling on the road 1301.
- Light is emitted from the own vehicle from the visible light irradiation light source 1201 in FIG. 12 to the visible light irradiation range 1303 in FIG. 13 and from the near infrared light irradiation light source 1202 to the near infrared light irradiation range 1304 in FIG. Yes.
- a marker 1305 is present in the visible light irradiation range 1303.
- a pedestrian 1306, a traffic light 1307, a vehicle 1309, and a self-luminous sign 1310 exist in the near infrared light irradiation range 1304.
- The traffic light 1307 is in operation, and its lamp 1308 is emitting light. For example, assuming a traffic light commonly used in Japan, one of the red, yellow, and green lamps is lit or blinking.
- Here, it is assumed that the driver visually recognizes images captured by the imaging devices described below. It is required that the locations of the pedestrian 1306, the traffic light 1307, and the vehicle 1309 can be determined from the image, and that both the locations and the colors of the sign 1305, the lamp 1308 of the traffic light that is emitting light, and the self-luminous sign 1310 can be determined from the image.
- FIG. 14 shows an output image when, for example, a visible light imaging device is used as the imaging device in the scene of FIG.
- the visible light region imaging device is an imaging device that captures a color image with light in the visible light region, and is a common color camera in recent years.
- With this imaging apparatus, only a subject that exists within the visible light irradiation range 1303 or that emits light by itself can be captured in color. Since the pedestrian 1306 and the vehicle 1309 are not imaged, it cannot be determined from the image how they are moving. As for the traffic light 1307, only the lamp 1308 that is emitting light is captured; since the overall shape is not captured, it cannot be determined from the image that it is the traffic light 1307.
- FIG. 15 shows an output image when a near-infrared light imaging device is used as the imaging device in the scene of FIG.
- The near-infrared light imaging device is a monochrome camera having sensitivity only to light in the near-infrared light region, or to light in both the near-infrared light region and the visible light region.
- With this imaging apparatus, a subject that is within the visible light irradiation range 1303 or the near-infrared light irradiation range 1304, or that is self-luminous, can be captured in monochrome. However, the sign 1305, the lamp 1308 of the traffic light that is emitting light, and the self-luminous sign 1310 are captured in monochrome, and their colors cannot be determined from the image. As described above, there is room for improvement even when the near-infrared light imaging device is used as shown in FIG. 15.
- FIG. 16 shows an output image when the imaging device described in each embodiment according to the present invention is used as the imaging device in the scene of FIG. That is, FIG. 16 is an output image displayed on the display device 1207 when the in-vehicle imaging system 1200 of FIG. 12 is mounted on the host vehicle 1302 of FIG.
- a subject imaged with light in the visible light region will be colored, and a subject imaged with light in the near-infrared light region will be monochrome.
- FIG. 16 it can be determined from the image where the pedestrian 1306, the traffic light 1307, and the vehicle 1309 are present.
- The locations and colors of the sign 1305, the lamp 1308 of the traffic light that is emitting light, and the self-luminous sign 1310 can be determined from the image. Therefore, by outputting the image output from the imaging device 1204 to the display device 1207 through the image composition device 1206 and displaying it, the driver can determine the location and color of the subject from the image; as a result, safer driving can be assisted.
- FIG. 17 shows an output image generated by the image composition device 1206 according to the result of recognizing the traffic light 1307 in the image recognition device 1205.
- the traffic light 1307 and the light emitting lamp 1308 are highlighted as the highlighted traffic light 1701 and the highlighted light emitting lamp 1702.
- Here, highlighting means improving the visibility of the subject by, for example, thickening the edges of the subject, increasing the contrast, or enlarging the subject by partial scaling. In the image recognition device 1205, since both the color and the shape of the subject can be captured simultaneously by using the imaging device 1204, it becomes possible to recognize that the subject is, for example, the traffic light 1307, and what the color of the lamp 1308 that is emitting light is. The output image shown in FIG. 17 generated in this manner is output to the display device 1207 and displayed, so that the driver can determine the location and color of the subject from the image in a highly visible form, which assists safer driving.
- The sixth embodiment is an in-vehicle imaging system that outputs different output images with the same configuration as the in-vehicle imaging system 1200 described above.
- In the sixth embodiment, the subject is converted into an image called a template and displayed as shown in FIG. 18.
- A template is a part of the output image obtained by correcting the size and angle of a picture prepared in advance based on the output image actually captured by the imaging device 1204, irrespective of whether the captured image itself has good visibility, for example, when the image has low contrast or poor color reproducibility.
- The image recognition device 1205 recognizes the subjects from the output image of the imaging device 1204 as shown in FIG. 16 and outputs the recognition result to the image composition device 1206.
- In the image composition device 1206, some of the subjects in the image output from the imaging device 1204 are replaced with a sign template 1801, a pedestrian template 1802, a traffic light template 1803, a vehicle template 1804, and a self-luminous sign template 1805.
- The output image generated in this way, as shown in FIG. 18, is output to the display device 1207 and displayed, so that the driver can determine the location and color of the subject from the image in a highly visible form, which assists safer driving.
- Although all the recognized subjects are replaced with templates here, the present invention is not limited to this, and only some of the subjects may be replaced with templates. For example, it is possible to replace with templates only those whose visibility is lower than a predetermined threshold value. Alternatively, while the image of FIG. 17 is being output, the templates may be superimposed on each subject at regular time intervals.
- the invention disclosed in the present application is not limited to the above-described embodiments, and includes various modifications.
- the above-described embodiments have been described in detail for easy understanding of the present invention, and are not necessarily limited to those having all the configurations described.
- a part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment.
- Each of the above-described configurations may be implemented partly or entirely by hardware, or may be realized by a processor executing a program.
- control lines and information lines indicate what is considered necessary for the explanation, and not all the control lines and information lines on the product are necessarily shown. Actually, it may be considered that almost all the components are connected to each other.
Abstract
Description
(1) An imaging device comprising: an image sensor including pixels having sensitivity in a visible light region and a near-infrared light region and pixels having sensitivity in the near-infrared light region; color reproduction processing means for performing color reproduction processing using, among output signals of the image sensor, signals from the pixels having sensitivity in the visible light region and the near-infrared light region and signals from the pixels having sensitivity in the near-infrared light region; visible light amount calculation means for calculating a signal amount in the visible light region using the output signals of the image sensor; and control means for controlling the color reproduction processing means so that color reproduction processing based on the signal amount in the visible light region calculated by the visible light amount calculation means is performed.
(2) The imaging device according to (1), further comprising near-infrared light amount calculation means for calculating a signal amount in the near-infrared light region using the output signals of the image sensor, wherein the control means controls the color reproduction processing means so that color reproduction processing is performed according to a difference between the signal amount in the visible light region and the signal amount in the near-infrared light region.
(3) An imaging method using an imaging device having an image sensor including pixels having sensitivity in a visible light region and a near-infrared light region and pixels having sensitivity in the near-infrared light region, the method comprising: a visible light amount calculation step of calculating a signal amount in the visible light region using output signals of the image sensor; and a color reproduction processing step of performing, based on the calculated signal amount in the visible light region, color reproduction processing using signals from the pixels having sensitivity in the visible light region and the near-infrared light region and signals from the pixels having sensitivity in the near-infrared light region.
(4) An in-vehicle imaging system comprising: an imaging device including an image sensor including pixels having sensitivity in a visible light region and a near-infrared light region and pixels having sensitivity in the near-infrared light region, color reproduction processing means for performing color reproduction processing using, among output signals of the image sensor, signals from the pixels having sensitivity in the visible light region and the near-infrared light region and signals from the pixels having sensitivity in the near-infrared light region, visible light amount calculation means for calculating a signal amount in the visible light region using the output signals of the image sensor, and control means for controlling the color reproduction processing means so that color reproduction processing based on the signal amount in the visible light region calculated by the visible light amount calculation means is performed; a visible light irradiation light source that irradiates a subject with visible light; a near-infrared light irradiation light source that irradiates a subject with near-infrared light; an image recognition device that recognizes an object from an image output by the imaging device; an image composition device that outputs a composite image obtained by combining the image output by the imaging device with a recognition result by the image recognition device; a display device that displays the composite image output by the image composition device; and a system control device.
The imaging apparatus 100 is configured by appropriately using a lens 101, an imaging unit 102, a (red region + near-infrared light region) (hereinafter, (R+I)) signal demosaicing unit 103, a (green region + near-infrared light region) (hereinafter, (G+I)) signal demosaicing unit 104, a (blue region + near-infrared light region) (hereinafter, (B+I)) signal demosaicing unit 105, a near-infrared light region (hereinafter, I) signal demosaicing unit 106, a color matrix calculation unit 107, an auto white balance (hereinafter, AWB) gain unit 108, an R signal gamma calculation unit 109, a G signal gamma calculation unit 110, a B signal gamma calculation unit 111, a first color difference calculation unit 112, a second color difference calculation unit 113, a luminance matrix calculation unit 114, a high-frequency emphasis unit 115, a luminance signal gamma calculation unit 116, a visible light amount detection unit 117, a near-infrared light amount detection unit 118, an AWB detection unit 119, and a control unit 120.
The imaging unit 102 is configured by appropriately using (R+I) pixels, (G+I) pixels, and (B+I) pixels, which are pixels having sensitivity in both the visible light region and the near-infrared light region, and I pixels, which are pixels having sensitivity in the near-infrared light region. Each pixel performs photoelectric conversion and A/D conversion on the light imaged by the lens 101 and outputs a signal, which is digital data, from each pixel.
The R signal gamma calculation unit 109 performs gamma calculation on the R signal output from the AWB gain unit 108 and outputs an R signal. Similarly, the G signal gamma calculation unit 110 performs gamma calculation on the G signal output from the AWB gain unit 108 and outputs a G signal, and the B signal gamma calculation unit 111 performs gamma calculation on the B signal output from the AWB gain unit 108 and outputs a B signal.
FIG. 2 is a diagram showing an example of the pixel arrangement of the image sensor of the imaging unit 102. In FIG. 2, pixels of four colors, an (R+I) pixel 201, a (G+I) pixel 202, an I pixel 203, and a (B+I) pixel 204, form a unit structure of 2 × 2 pixels, and the unit structure is repeatedly arranged in both the vertical and horizontal directions.
(Equation 1) R1 = (R+I) - ki1 × I
(Equation 2) G1 = (G+I) - ki2 × I
(Equation 3) B1 = (B+I) - ki3 × I
Here, (ki1, ki2, ki3) are subtraction coefficients.
(Equation 4) R2 = krr × R1 + krg × G1 + krb × B1
(Equation 5) G2 = kgr × R1 + kgg × G1 + kgb × B1
(Equation 6) B2 = kbr × R1 + kbg × G1 + kbb × B1
Here, (krr, krg, krb, kgr, kgg, kgb, kbr, kbg, kbb) are color matrix coefficients.
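The chain of (Equation 1) to (Equation 6) can be sketched as follows (illustrative Python; the input values and coefficients are assumptions for the example, not values from this disclosure):

```python
def color_reproduce(r_i, g_i, b_i, i, ki, m):
    """Apply (Equation 1)-(Equation 6): subtract the near-infrared
    component with subtraction coefficients ki = (ki1, ki2, ki3), then
    apply the 3x3 color matrix m (rows of krr .. kbb)."""
    # (Equation 1)-(Equation 3): I subtraction
    r1 = r_i - ki[0] * i
    g1 = g_i - ki[1] * i
    b1 = b_i - ki[2] * i
    # (Equation 4)-(Equation 6): color matrix calculation
    r2 = m[0][0] * r1 + m[0][1] * g1 + m[0][2] * b1
    g2 = m[1][0] * r1 + m[1][1] * g1 + m[1][2] * b1
    b2 = m[2][0] * r1 + m[2][1] * g1 + m[2][2] * b1
    return r2, g2, b2

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(color_reproduce(120, 110, 100, 50, (1.0, 1.0, 1.0), identity))
# with unit subtraction coefficients and an identity matrix: (70.0, 60.0, 50.0)
```

Combining the two steps into a single matrix acting on ((R+I), (G+I), (B+I), I) yields the equivalent form of (Equation 7) to (Equation 9) below.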
(Equation 7) R2 = krr2 × (R+I) + krg2 × (G+I) + krb2 × (B+I) + kri2 × I
(Equation 8) G2 = kgr2 × (R+I) + kgg2 × (G+I) + kgb2 × (B+I) + kgi2 × I
(Equation 9) B2 = kbr2 × (R+I) + kbg2 × (G+I) + kbb2 × (B+I) + kbi2 × I
Here, (krr2, krg2, krb2, kri2, kgr2, kgg2, kgb2, kgi2, kbr2, kbg2, kbb2, kbi2) are color matrix coefficients.
(Equation 10) R3 = kr × R2
(Equation 11) G3 = kg × G2
(Equation 12) B3 = kb × B2
Here, (kr, kg, kb) are coefficients called AWB gains.
However, in practice, the wavelength sensitivity characteristics of the near-infrared light region (I) components of each pixel (302, 306, 309, 312 in FIG. 3) vary, and the near-infrared light region (I) component cannot be optimally reduced simply by subtracting the signal value of the I pixel.
For example, consider the case where a common three-wavelength fluorescent lamp is used as the light source. Such a light source has one radiant energy peak in each of the red region (R), green region (G), and blue region (B) of the visible light range, and radiates very little or no energy at other wavelengths, including the near-infrared light region (I). In such a case, the influence of the variation in the near-infrared light region (I) is almost negligible, but the influence of unnecessary wavelength components is received. Also, for example, when a halogen lamp is used as the light source, the radiant energy of the light source is higher in the near-infrared light region than in the visible light region. In such a case, the influence of the variation in the near-infrared light region (I) becomes large, while the influence of unnecessary wavelength components becomes relatively small. Also, for example, when the light source is a near-infrared projector that radiates only in the near-infrared light region (I), colors cannot be reproduced.
(Equation 13) Yd = Σ( kyd1 × ((R+I) - kid1 × I) + kyd2 × ((G+I) - kid2 × I) + kyd3 × ((B+I) - kid3 × I) )
(kid1, kid2, kid3, kyd1, kyd2, kyd3 are coefficients, and Σ denotes the sum of the signal amounts around the target pixel)
The near-infrared light amount detection unit 118 detects the signal amount Id in the near-infrared light region around the target pixel by, for example, the following calculation.
(Equation 14) Id = Σ I
(Σ denotes the sum of the signal amounts around the target pixel; the summation region is the same as for (Equation 13))
The calculations of (Equation 13) and (Equation 14) are performed for each pixel, or for each frame or field of the moving image.
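The detection calculations of (Equation 13) and (Equation 14) can be sketched as follows (illustrative Python; the pixel region and coefficient values are assumptions for the example):

```python
def detect_light_amounts(region, kyd, kid):
    """Compute Yd (Equation 13) and Id (Equation 14) over the pixels
    around the target pixel.

    region: iterable of (r_i, g_i, b_i, i) raw pixel values;
    kyd = (kyd1, kyd2, kyd3), kid = (kid1, kid2, kid3)."""
    yd = sum(kyd[0] * (r - kid[0] * i)
             + kyd[1] * (g - kid[1] * i)
             + kyd[2] * (b - kid[2] * i)
             for r, g, b, i in region)
    id_ = sum(i for _, _, _, i in region)   # same summation region
    return yd, id_

region = [(10, 10, 10, 4), (20, 20, 20, 6)]
yd, id_ = detect_light_amounts(region, (0.3, 0.6, 0.1), (1, 1, 1))
print(yd, id_)   # yd is approximately 20.0, id_ is 10
```

The light amount subtraction result of (Equation 15), D = Yd - Id, then follows directly from the two returned values.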
FIG. 4 is an example of a flowchart of the method for determining the color matrix coefficients and the subtraction coefficients in the control unit 120.
First, in steps 401 and 402, the control unit 120 reads the signal amount Yd in the visible light region from the visible light amount detection unit 117 and the signal amount Id in the near-infrared light region from the near-infrared light amount detection unit 118. Either reading may be performed first, or the two may be performed simultaneously.
Next, in step 403, the control unit 120 derives the light amount subtraction result D as follows.
(Equation 15) D = Yd - Id
Next, in steps 404 and 405, the control unit 120 determines a set Mat3 of subtraction coefficients and color matrix coefficients based on the light amount subtraction result D (hereinafter, Mat* (where * is an arbitrary number) represents a combination of subtraction coefficients and color matrix coefficients). As can be seen from (Equation 15), D tends to be relatively high under a light source with high radiant energy in the visible light region, such as a fluorescent lamp; relatively low or negative under a light source with high radiant energy in the near-infrared light region, such as a halogen lamp; and negative with a large absolute value under a light source with strong radiant energy only in the near-infrared region, such as a near-infrared projector. The control unit 120 can estimate the type of light source based on this tendency of the light amount subtraction result D, and can generate an appropriate set Mat3 of color matrix coefficients and subtraction coefficients according to the type of light source.
In step 406, the control unit 120 outputs the set Mat3 of color matrix coefficients and subtraction coefficients to the color matrix calculation unit 107.
FIG. 5 is a diagram showing a method of deriving the color matrix coefficients (krr, krg, krb, kgr, kgg, kgb, kbr, kbg, kbb) and the subtraction coefficients (ki1, ki2, ki3) from the light amount subtraction result D. For discrete values of the light amount subtraction result D, the color matrix coefficients and the subtraction coefficients are determined in advance. For example, in FIG. 5, values are predetermined for D = -255, -128, 0, 128, and 255. In this embodiment, the value of Yd - Id is assumed to take values from -255 to 255, and values are predetermined at substantially equal intervals for both ends and three points between them. When the light amount subtraction result D of the target pixel has been determined, two coefficient sets close to the value of D are selected. The first selected coefficient set is Mat1 (501 to 504, 513 to 516, 525 to 528), and the second selected coefficient set is Mat2 (505 to 508, 517 to 520, 529 to 532). Then, according to the value of D, a set of coefficients determined by interpolation for each coefficient from these two coefficient sets is defined as Mat3 (509 to 512, 521 to 524, 533 to 536).
In (Equation 15), a similar effect may be obtained by computing the ratio of Yd to Id by division instead, but using subtraction has the feature that the circuit scale can be made smaller.
FIG. 6 is an explanatory diagram showing the method of controlling the AWB detection range in the control unit 120. The first color difference signal output from the first color difference calculation unit 112 in FIG. 1 is shown on the horizontal axis 601 in FIG. 6, and the output value of the second color difference signal output from the second color difference calculation unit 113 is shown on the vertical axis 602. AWB detection ranges 603 and 604 are defined on the color difference plane. The AWB detection range indicates the range of color difference levels of pixels regarded as white in AWB. For example, in FIG. 6, the AWB detection ranges 603 and 604 are defined as rectangles with a threshold value provided for each axis, but the ranges are not limited to this and may be defined in any shape.
First, in step 701, the control unit 120 reads the signal amount Yd in the visible light region from the visible light amount detection unit 117.
Next, in steps 702 and 703, the control unit 120 determines a set Mat3 of subtraction coefficients and color matrix coefficients from the signal amount Yd in the visible light region. Yd tends to be high under a light source with high radiant energy in the visible light region, such as a fluorescent lamp or a halogen lamp, when the reflectance of the subject is high; Yd tends to be a relatively low value when little light reaches the subject or under a light source with high radiant energy in the near-infrared light region. The control unit 120 estimates the type of light source based on this tendency of the signal amount Yd in the visible light region, and can generate an appropriate set Mat3 of color matrix coefficients and subtraction coefficients according to the type of light source.
In step 704, the control unit 120 outputs the set Mat3 of color matrix coefficients and subtraction coefficients to the color matrix calculation unit 107.
FIG. 8 is a diagram showing a method of deriving the color matrix coefficients (krr, krg, krb, kgr, kgg, kgb, kbr, kbg, kbb) and the subtraction coefficients (ki1, ki2, ki3) from the signal amount in the visible light region. For discrete values of the signal amount Yd in the visible light region, the color matrix coefficients and the subtraction coefficients are determined in advance. For example, in FIG. 8, values are predetermined for Yd = 0, 64, 128, 192, and 255. In this embodiment, the value of Yd is assumed to take values from 0 to 255, and values are predetermined at substantially equal intervals for both ends and three points between them. When the signal amount Yd in the visible light region of the target pixel has been read, two coefficient sets close to the value of Yd are selected. The first selected coefficient set is Mat1 (801 to 804, 813 to 816, 825 to 828), and the second selected coefficient set is Mat2 (805 to 808, 817 to 820, 829 to 832). Then, according to the value of Yd, a set of coefficients determined by interpolation for each coefficient from these two coefficient sets is defined as Mat3 (809 to 812, 821 to 824, 833 to 836).
First, in steps 901 and 902, the control unit 120 reads the signal amount Yd in the visible light region from the visible light amount detection unit 117 and the signal amount Id in the near-infrared light region from the near-infrared light amount detection unit 118. Either reading may be performed first, or the two may be performed simultaneously.
Next, in steps 903 and 904, the control unit 120 determines a set Mat5 of subtraction coefficients and color matrix coefficients from the combination of the signal amount Yd in the visible light region and the signal amount Id in the near-infrared light region. Since the radiant energy characteristic for each wavelength has features depending on the type of light source, the type of light source mainly illuminating the vicinity of the target pixel can be estimated from the combination of Yd and Id. For example, it can be estimated that the light source is a halogen lamp when both Yd and Id are high, a fluorescent lamp when Yd is high and Id is low, a near-infrared projector when Yd is low and Id is high, and so on. An appropriate set Mat5 of color matrix coefficients and subtraction coefficients is generated according to the estimated type of light source.
In step 905, the control unit 120 outputs the determined coefficient set Mat5 to the color matrix calculation unit 107.
FIG. 10 is a diagram showing a method of deriving the color matrix coefficients and the subtraction coefficients from the signal amount Yd in the visible light region and the signal amount Id in the near-infrared light region. For combinations of discrete values of Yd and Id, the color matrix coefficients and the subtraction coefficients are determined in advance. For example, in FIG. 10, a set of color matrix coefficients and subtraction coefficients is determined for each combination of Yd and Id taking the values 0, 64, 128, 192, and 255. In this embodiment, the values of Yd and Id are each assumed to take values from 0 to 255, and values are predetermined at substantially equal intervals for both ends and three points between them. For example, each coefficient is defined in a table for the combinations of discrete values of (Yd, Id).
(Equation 16) Yd = Σ( kyd1 × R1 + kyd2 × G1 + kyd3 × B1 )
(kyd1, kyd2, kyd3 are coefficients, and Σ denotes the sum of the signal amounts around the target pixel)
This (Equation 16) is equal to (Equation 13) described in the first embodiment with the conditions of the following equations added.
(Equation 17) kid1 = ki1
(Equation 18) kid2 = ki2
(Equation 19) kid3 = ki3
By adding the conditions of (Equation 17) to (Equation 19), even when the signal amount Yd in the visible light region is calculated from the color signals R1, G1, and B1 output from the I subtraction unit 121, the subsequent operation of the control unit 120 can be performed in the same manner as in the case of FIG. 1.
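The equivalence of (Equation 16) with (Equation 13) under the conditions of (Equation 17) to (Equation 19) can be checked numerically as follows (illustrative Python; all coefficient and pixel values are assumptions for the example):

```python
# Numerical check that (Equation 16) computed from R1, G1, B1 equals
# (Equation 13) when kid1 = ki1, kid2 = ki2, kid3 = ki3.
kyd = (0.299, 0.587, 0.114)
ki = (0.9, 1.0, 1.1)          # subtraction coefficients (illustrative)

region = [(100, 90, 80, 30), (50, 60, 70, 20)]   # (R+I, G+I, B+I, I)

# (Equation 13) on the raw signals, with kid = ki
yd13 = sum(kyd[0] * (r - ki[0] * i) + kyd[1] * (g - ki[1] * i)
           + kyd[2] * (b - ki[2] * i) for r, g, b, i in region)

# (Equation 16) on the I-subtracted signals R1, G1, B1
yd16 = 0.0
for r, g, b, i in region:
    r1, g1, b1 = r - ki[0] * i, g - ki[1] * i, b - ki[2] * i  # (Eq. 1)-(Eq. 3)
    yd16 += kyd[0] * r1 + kyd[1] * g1 + kyd[2] * b1

print(abs(yd13 - yd16) < 1e-9)   # True: the two formulations agree
```

This is why the visible light amount detection unit 1102 of FIG. 11 can reuse the intermediate output of the I subtraction unit 121 without changing the behavior of the control unit 120.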
The in-vehicle imaging system 1200 is configured by appropriately using a visible light irradiation light source 1201, a near-infrared light irradiation light source 1202, a light source switch 1203, an imaging device 1204, an image recognition device 1205, an image composition device 1206, a display device 1207, and a system control device 1208.
The near-infrared light irradiation light source 1202 is a light source that irradiates light in the near-infrared light region, for example, an LED light source that irradiates light with a wavelength of 650 nm to 1200 nm.
The light source switch 1203 is a switch that turns ON/OFF the irradiation of the visible light irradiation light source 1201 and the near-infrared light irradiation light source 1202, and outputs a lighting signal indicating ON/OFF of each light source to the system control device 1208.
Based on the recognition result signal output from the image recognition device 1205, the image composition device 1206 outputs a luminance signal, a first color difference signal, and a second color difference signal obtained by superimposing the image recognition result on the luminance signal, the first color difference signal, and the second color difference signal output from the imaging device 1204.
The display device 1207 is a device that displays the luminance signal, the first color difference signal, and the second color difference signal output from the image composition device 1206, and is, for example, a liquid crystal display.
101 Lens
102 Imaging unit
103 (R+I) signal demosaicing unit
104 (G+I) signal demosaicing unit
105 (B+I) signal demosaicing unit
106 I signal demosaicing unit
107 Color matrix calculation unit
108 AWB gain unit
109 R signal gamma calculation unit
110 G signal gamma calculation unit
111 B signal gamma calculation unit
112 First color difference calculation unit
113 Second color difference calculation unit
114 Luminance matrix calculation unit
115 High-frequency emphasis unit
116 Luminance signal gamma calculation unit
117 Visible light amount detection unit
118 Near-infrared light amount detection unit
119 AWB detection unit
120 Control unit
121 I subtraction unit
122 R signal matrix calculation unit
123 G signal matrix calculation unit
124 B signal matrix calculation unit
201 (R+I) pixel
202 (G+I) pixel
203 I pixel
204 (B+I) pixel
301 Red region (R) component
302 Near-infrared light region (I) component
303 Unnecessary wavelength component
304 Unnecessary wavelength component
305 Green region (G) component
306 Near-infrared light region (I) component
307 Unnecessary wavelength component
308 Unnecessary wavelength component
309 Near-infrared light region (I) component
310 Unnecessary wavelength component
311 Blue region (B) component
312 Near-infrared light region (I) component
313 Unnecessary wavelength component
401 Step 1
402 Step 2
403 Step 3
404 Step 4
405 Step 5
406 Step 6
501 krr of Mat1 (color matrix coefficient)
502 krg of Mat1 (color matrix coefficient)
503 krb of Mat1 (color matrix coefficient)
504 ki1 of Mat1 (subtraction coefficient)
505 krr of Mat2 (color matrix coefficient)
506 krg of Mat2 (color matrix coefficient)
507 krb of Mat2 (color matrix coefficient)
508 ki1 of Mat2 (subtraction coefficient)
509 krr of Mat3 (color matrix coefficient)
510 krg of Mat3 (color matrix coefficient)
511 krb of Mat3 (color matrix coefficient)
512 ki1 of Mat3 (subtraction coefficient)
513 kgr of Mat1 (color matrix coefficient)
514 kgg of Mat1 (color matrix coefficient)
515 kgb of Mat1 (color matrix coefficient)
516 ki2 of Mat1 (subtraction coefficient)
517 kgr of Mat2 (color matrix coefficient)
518 kgg of Mat2 (color matrix coefficient)
519 kgb of Mat2 (color matrix coefficient)
520 ki2 of Mat2 (subtraction coefficient)
521 kgr of Mat3 (color matrix coefficient)
522 kgg of Mat3 (color matrix coefficient)
523 kgb of Mat3 (color matrix coefficient)
524 ki2 of Mat3 (subtraction coefficient)
525 kbr of Mat1 (color matrix coefficient)
526 kbg of Mat1 (color matrix coefficient)
527 kbb of Mat1 (color matrix coefficient)
528 ki3 of Mat1 (subtraction coefficient)
529 kbr of Mat2 (color matrix coefficient)
530 kbg of Mat2 (color matrix coefficient)
531 kbb of Mat2 (color matrix coefficient)
532 ki3 of Mat2 (subtraction coefficient)
533 kbr of Mat3 (color matrix coefficient)
534 kbg of Mat3 (color matrix coefficient)
535 kbb of Mat3 (color matrix coefficient)
536 ki3 of Mat3 (subtraction coefficient)
601 Color difference signal 1 Pb
602 Color difference signal 2 Pr
603 Original AWB detection range
604 Corrected AWB detection range
701 Step 1
702 Step 2
703 Step 3
704 Step 4
801 krr of Mat1 (color matrix coefficient)
802 krg of Mat1 (color matrix coefficient)
803 krb of Mat1 (color matrix coefficient)
804 ki1 of Mat1 (subtraction coefficient)
805 krr of Mat2 (color matrix coefficient)
806 krg of Mat2 (color matrix coefficient)
807 krb of Mat2 (color matrix coefficient)
808 ki1 of Mat2 (subtraction coefficient)
809 krr of Mat3 (color matrix coefficient)
810 krg of Mat3 (color matrix coefficient)
811 krb of Mat3 (color matrix coefficient)
812 ki1 of Mat3 (subtraction coefficient)
813 kgr of Mat1 (color matrix coefficient)
814 kgg of Mat1 (color matrix coefficient)
815 kgb of Mat1 (color matrix coefficient)
816 ki2 of Mat1 (subtraction coefficient)
817 kgr of Mat2 (color matrix coefficient)
818 kgg of Mat2 (color matrix coefficient)
819 kgb of Mat2 (color matrix coefficient)
820 ki2 of Mat2 (subtraction coefficient)
821 kgr of Mat3 (color matrix coefficient)
822 kgg of Mat3 (color matrix coefficient)
823 kgb of Mat3 (color matrix coefficient)
824 ki2 of Mat3 (subtraction coefficient)
825 kbr of Mat1 (color matrix coefficient)
826 kbg of Mat1 (color matrix coefficient)
827 kbb of Mat1 (color matrix coefficient)
828 ki3 of Mat1 (subtraction coefficient)
829 kbr of Mat2 (color matrix coefficient)
830 kbg of Mat2 (color matrix coefficient)
831 kbb of Mat2 (color matrix coefficient)
832 ki3 of Mat2 (subtraction coefficient)
833 kbr of Mat3 (color matrix coefficient)
834 kbg of Mat3 (color matrix coefficient)
835 kbb of Mat3 (color matrix coefficient)
836 ki3 of Mat3 (subtraction coefficient)
901 Step 1
902 Step 2
903 Step 3
904 Step 4
905 Step 5
1001 Mat1 (color matrix coefficients and subtraction coefficients)
1002 Mat2 (color matrix coefficients and subtraction coefficients)
1003 Mat3 (color matrix coefficients and subtraction coefficients)
1004 Mat4 (color matrix coefficients and subtraction coefficients)
1005 Mat5 (color matrix coefficients and subtraction coefficients)
1100 Imaging device
1101 Color matrix operation unit
1102 Visible light amount detection unit
1201 Visible light illumination source
1202 Near-infrared light illumination source
1203 Light source switch
1204 Imaging device
1205 Image recognition device
1206 Image composition device
1207 Display device
1208 System control device
1301 Road
1302 Host vehicle
1303 Visible light illumination range
1304 Near-infrared light illumination range
1305 Sign
1306 Pedestrian
1307 Traffic light
1308 Lit lamp
1309 Vehicle
1310 Self-luminous sign
1601 Highlighted traffic light
1602 Highlighted lit lamp
1701 Sign template
1702 Pedestrian template
1703 Traffic light template
1704 Vehicle template
Claims (15)
- An imaging device comprising: an image sensor including pixels having sensitivity in the visible light region and the near-infrared light region and pixels having sensitivity in the near-infrared light region;
color reproduction processing means for performing color reproduction processing using, among the output signals of the image sensor, signals from the pixels having sensitivity in the visible and near-infrared light regions and signals from the pixels having sensitivity in the near-infrared light region;
visible light amount calculation means for calculating a visible-light-region signal amount using the output signals of the image sensor; and
control means for controlling the color reproduction processing means so that color reproduction processing is performed based on the visible-light-region signal amount calculated by the visible light amount calculation means. - The imaging device according to claim 1,
further comprising near-infrared light amount calculation means for calculating a near-infrared-light-region signal amount using the output signals of the image sensor,
wherein the control means controls the color reproduction processing means so that color reproduction processing is performed according to the difference between the visible-light-region signal amount and the near-infrared-light-region signal amount. - The imaging device according to claim 2,
wherein the control means sets color matrix coefficients based on the difference between the visible-light-region signal amount and the near-infrared-light-region signal amount, and controls the color reproduction processing means so that color reproduction processing is performed according to the set color matrix coefficients. - The imaging device according to claim 3,
wherein the color reproduction processing means includes a subtraction unit that performs, on the signals from the pixels having sensitivity in the visible and near-infrared light regions, subtraction processing using the signals from the pixels having sensitivity in the near-infrared light region, and
the control means performs control so that the subtraction processing in the subtraction unit is performed according to a subtraction coefficient set based on the difference between the visible-light-region signal amount and the near-infrared-light-region signal amount. - The imaging device according to claim 3 or 4,
wherein the color matrix coefficients are obtained by interpolating a plurality of color matrix coefficient sets selected according to the difference between the visible-light-region signal amount and the near-infrared-light-region signal amount. - The imaging device according to claim 1,
further comprising near-infrared light amount calculation means for calculating a near-infrared-light-region signal amount using the output signals of the image sensor,
wherein the control means controls the color reproduction processing means so that color reproduction processing is performed according to predetermined coefficients set based on a lookup table that takes the visible-light-region signal amount and the near-infrared-light-region signal amount as inputs. - An imaging method using an imaging device having an image sensor that includes pixels having sensitivity in the visible light region and the near-infrared light region and pixels having sensitivity in the near-infrared light region, the method comprising:
a visible light amount calculation step of calculating a visible-light-region signal amount using the output signals of the image sensor; and
a color reproduction processing step of performing color reproduction processing, based on the calculated visible-light-region signal amount, using the signals from the pixels having sensitivity in the visible and near-infrared light regions and the signals from the pixels having sensitivity in the near-infrared light region. - The imaging method according to claim 7,
further comprising a near-infrared light amount calculation step of calculating a near-infrared-light-region signal amount using the output signals of the image sensor,
wherein the color reproduction processing step performs the color reproduction processing based on the difference between the visible-light-region signal amount and the near-infrared-light-region signal amount. - The imaging method according to claim 8,
wherein the color reproduction processing step performs color reproduction processing according to color matrix coefficients set based on the difference between the visible-light-region signal amount and the near-infrared-light-region signal amount. - The imaging method according to claim 7,
further comprising a near-infrared light amount calculation step of calculating a near-infrared-light-region signal amount using the output signals of the image sensor,
wherein the color reproduction processing step performs color reproduction processing according to predetermined coefficients set based on a lookup table that takes the visible-light-region signal amount and the near-infrared-light-region signal amount as inputs. - An on-board imaging system comprising: an imaging device including an image sensor that includes pixels having sensitivity in the visible light region and the near-infrared light region and pixels having sensitivity in the near-infrared light region,
color reproduction processing means for performing color reproduction processing using, among the output signals of the image sensor, signals from the pixels having sensitivity in the visible and near-infrared light regions and signals from the pixels having sensitivity in the near-infrared light region,
visible light amount calculation means for calculating a visible-light-region signal amount using the output signals of the image sensor, and
control means for controlling the color reproduction processing means so that color reproduction processing is performed based on the visible-light-region signal amount calculated by the visible light amount calculation means;
a visible light illumination source that irradiates a subject with visible light;
a near-infrared light illumination source that irradiates a subject with near-infrared light;
an image recognition device that recognizes objects from images output by the imaging device;
an image composition device that outputs a composite image combining an image output by the imaging device with a recognition result of the image recognition device;
a display device that displays the composite image output by the image composition device; and
a system control device. - The on-board imaging system according to claim 11,
wherein the image recognition device recognizes a traffic light included in the subject, and the traffic light is displayed with emphasis. - The on-board imaging system according to claim 11,
wherein the image recognition device replaces a recognized object with a template image. - The on-board imaging system according to any one of claims 11 to 13,
wherein the imaging device further comprises near-infrared light amount calculation means for calculating a near-infrared-light-region signal amount using the output signals of the image sensor, and
the control means controls the color reproduction processing means so that color reproduction processing is performed according to the difference between the visible-light-region signal amount and the near-infrared-light-region signal amount. - The on-board imaging system according to any one of claims 11 to 13,
wherein the imaging device further comprises near-infrared light amount calculation means for calculating a near-infrared-light-region signal amount using the output signals of the image sensor, and
the control means controls the color reproduction processing means so that color reproduction processing is performed according to predetermined coefficients set based on a lookup table that takes the visible-light-region signal amount and the near-infrared-light-region signal amount as inputs.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/908,276 US10154208B2 (en) | 2013-07-31 | 2013-07-31 | Imaging device, imaging method, and on-board imaging system |
CN201380078602.2A CN105453532B (zh) | 2013-07-31 | 2013-07-31 | Imaging device, imaging method, and on-board imaging system |
JP2015529262A JP6211614B2 (ja) | 2013-07-31 | 2013-07-31 | Imaging device, imaging method, and on-board imaging system |
PCT/JP2013/070685 WO2015015580A1 (ja) | 2013-07-31 | 2013-07-31 | Imaging device, imaging method, and on-board imaging system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2013/070685 WO2015015580A1 (ja) | 2013-07-31 | 2013-07-31 | Imaging device, imaging method, and on-board imaging system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015015580A1 true WO2015015580A1 (ja) | 2015-02-05 |
Family
ID=52431160
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/070685 WO2015015580A1 (ja) | Imaging device, imaging method, and on-board imaging system | 2013-07-31 | 2013-07-31 |
Country Status (4)
Country | Link |
---|---|
US (1) | US10154208B2 (ja) |
JP (1) | JP6211614B2 (ja) |
CN (1) | CN105453532B (ja) |
WO (1) | WO2015015580A1 (ja) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016143851A (ja) * | 2015-02-05 | 2016-08-08 | Sony Corporation | Solid-state imaging element and electronic device |
JP2017112401A (ja) * | 2015-12-14 | 2017-06-22 | Sony Corporation | Imaging element, image processing device and method, and program |
US10218926B2 * | 2016-07-21 | 2019-02-26 | Christie Digital Systems Usa, Inc. | Device, system and method for cross-talk reduction in visual sensor systems |
CN108389870A (zh) * | 2017-02-03 | 2018-08-10 | Panasonic Intellectual Property Management Co., Ltd. | Imaging device |
CN110830675B (zh) * | 2018-08-10 | 2022-05-03 | Ricoh Co., Ltd. | Reading device, image forming device, and reading method |
US20230137831A1 (en) * | 2021-11-03 | 2023-05-04 | Samsung Electronics Co., Ltd. | Electronic device for improving image quality |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012067028A1 (ja) * | 2010-11-16 | 2012-05-24 | Konica Minolta Opto, Inc. | Image input device and image processing device |
JP2012142832A (ja) * | 2011-01-05 | 2012-07-26 | Seiko Epson Corp | Imaging device |
JP2013121132A (ja) * | 2011-12-08 | 2013-06-17 | Samsung Yokohama Research Institute Co Ltd | Imaging device and imaging method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4251317B2 (ja) | 2003-06-23 | 2009-04-08 | Nikon Corporation | Imaging device and image processing program |
DE102005006290A1 (de) * | 2005-02-11 | 2006-08-24 | Bayerische Motoren Werke Ag | Method and device for visualizing the surroundings of a vehicle by fusing an infrared image and a visual image |
EP1976296A4 (en) * | 2006-01-20 | 2011-11-16 | Sumitomo Electric Industries | Infrared imaging system |
JP5432075B2 (ja) * | 2010-07-06 | 2014-03-05 | Panasonic Corporation | Imaging device and color temperature calculation method |
- 2013-07-31: JP application JP2015529262 filed; granted as patent JP6211614B2 (Active)
- 2013-07-31: CN application CN201380078602.2 filed; granted as patent CN105453532B (Active)
- 2013-07-31: WO application PCT/JP2013/070685 filed as WO2015015580A1 (Application Filing)
- 2013-07-31: US application US14/908,276 filed; granted as patent US10154208B2 (Active)
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106412531A (zh) * | 2015-08-12 | 2017-02-15 | Hangzhou Hikvision Digital Technology Co., Ltd. | Pixel array member, image processing device, and camera |
CN106412531B (zh) * | 2015-08-12 | 2019-04-12 | Hangzhou Hikvision Digital Technology Co., Ltd. | Pixel array member, image processing device, and camera |
JP2017204824A (ja) * | 2016-05-13 | 2017-11-16 | Clarion Co., Ltd. | Imaging device |
JP2018050164A (ja) * | 2016-09-21 | 2018-03-29 | Canon Inc. | Imaging device, control method therefor, and program |
JP2019168423A (ja) * | 2018-03-26 | 2019-10-03 | Hamamatsu Photonics K.K. | Image acquisition device and image acquisition method |
WO2019187446A1 (ja) * | 2018-03-30 | 2019-10-03 | Sony Corporation | Image processing device, image processing method, image processing program, and mobile body |
US11410337B2 | 2018-03-30 | 2022-08-09 | Sony Corporation | Image processing device, image processing method and mobile body |
JP2019205018A (ja) * | 2018-05-22 | 2019-11-28 | Clarion Co., Ltd. | Imaging device and imaging method |
JP7121538B2 (ja) | 2018-05-22 | 2022-08-18 | Faurecia Clarion Electronics Co., Ltd. | Imaging device and imaging method |
US20210259545A1 * | 2018-06-14 | 2021-08-26 | Nanolux Co. Ltd. | Ophthalmic photography apparatus and ophthalmic photography system |
WO2021166601A1 (ja) * | 2020-02-17 | 2021-08-26 | Sony Semiconductor Solutions Corporation | Imaging device and imaging method |
Also Published As
Publication number | Publication date |
---|---|
CN105453532B (zh) | 2019-03-01 |
JPWO2015015580A1 (ja) | 2017-03-02 |
US10154208B2 (en) | 2018-12-11 |
JP6211614B2 (ja) | 2017-10-11 |
CN105453532A (zh) | 2016-03-30 |
US20160173790A1 (en) | 2016-06-16 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WWE | Wipo information: entry into national phase | Ref document number: 201380078602.2; Country of ref document: CN |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13890810; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2015529262; Country of ref document: JP; Kind code of ref document: A |
| WWE | Wipo information: entry into national phase | Ref document number: 14908276; Country of ref document: US |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 13890810; Country of ref document: EP; Kind code of ref document: A1 |