WO2013089036A1 - Imaging device - Google Patents
Imaging device
- Publication number
- WO2013089036A1 (PCT/JP2012/081798; priority application JP2012081798W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- image sensor
- filter
- sensitivity
- image data
- Prior art date
Classifications
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L27/00—Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
- H01L27/14—Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
- H01L27/144—Devices controlled by radiation
- H01L27/146—Imager structures
- H01L27/14601—Structural or functional details thereof
- H01L27/1462—Coatings
- H01L27/14623—Optical shielding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/741—Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L27/00—Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
- H01L27/14—Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
- H01L27/144—Devices controlled by radiation
- H01L27/146—Imager structures
- H01L27/14601—Structural or functional details thereof
- H01L27/14603—Special geometry or disposition of pixel-elements, address-lines or gate-electrodes
- H01L27/14607—Geometry of the photosensitive area
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L27/00—Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
- H01L27/14—Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
- H01L27/144—Devices controlled by radiation
- H01L27/146—Imager structures
- H01L27/14643—Photodiode arrays; MOS imagers
- H01L27/14645—Colour imagers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/73—Circuitry for compensating brightness variation in the scene by influencing the exposure time
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/749—Circuitry for compensating brightness variation in the scene by influencing the pick-up tube voltages
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/843—Demosaicing, e.g. interpolating colour pixel values
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/50—Control of the SSIS exposure
- H04N25/57—Control of the dynamic range
- H04N25/58—Control of the dynamic range involving two or more exposures
- H04N25/581—Control of the dynamic range involving two or more exposures acquired simultaneously
- H04N25/585—Control of the dynamic range involving two or more exposures acquired simultaneously with pixels having different sensitivities within the sensor, e.g. fast or slow pixels or pixels having different sizes
Definitions
- The present disclosure relates to an imaging device, and more specifically to an imaging device that obtains an image by high dynamic range (HDR) synthesis.
- A technique for acquiring a plurality of image data by performing imaging a plurality of times and combining the plurality of image data is well known, and is disclosed in, for example, Japanese Patent Laid-Open No. 2000-244797.
- In still-image capture, N image data with different sensitivities are obtained by performing N (N ≥ 2) exposures with different exposure times.
- The HDR composite image is then obtained by synthesizing the N pieces of image data.
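The multi-exposure synthesis described above can be sketched as follows. This is a minimal illustration, not the method of the cited publication: pixel values are assumed normalized to [0, 1], the saturation threshold and function names are ours, and saturated pixels are simply excluded from the radiance estimate.

```python
import numpy as np

def hdr_merge(frames, exposure_times, saturation=0.95):
    """Merge N differently exposed frames (values in [0, 1]) into one
    radiance map; clipped (saturated) pixels are excluded from the average."""
    frames = [np.asarray(f, dtype=np.float64) for f in frames]
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for frame, t in zip(frames, exposure_times):
        valid = frame < saturation                  # ignore clipped pixels
        num += np.where(valid, frame / t, 0.0)      # per-frame radiance estimate
        den += valid.astype(np.float64)
    return num / np.maximum(den, 1.0)               # average over valid frames

# Two exposures of the same scene: the short exposure keeps highlight detail.
short = np.array([[0.25, 0.98]])    # 10 ms exposure (bright pixel nearly clips)
long_ = np.array([[0.50, 1.00]])    # 20 ms exposure (bright pixel clips)
radiance = hdr_merge([short, long_], [0.010, 0.020])
```

Pixels valid in both frames average their two radiance estimates; pixels saturated in every frame fall back to zero here, which a real pipeline would handle separately.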
- Japanese Patent Laid-Open No. 2006-270364 discloses a technique for expanding the dynamic range by arranging image sensors having different sensitivities adjacent to each other and synthesizing a signal from a high-sensitivity image sensor with a signal from a low-sensitivity image sensor.
- In another technique, a sensitivity difference is created within one image data by controlling the shutter speed so that the charge accumulation time of the image sensor differs for each region of the imaging unit.
- In Japanese Patent Laid-Open No. 2006-270364, two image data having a sensitivity difference within one unit frame can be obtained by disposing a small image sensor having a complementary color filter among image sensors arranged in a honeycomb shape. However, in this method, it becomes difficult to form the small image sensor having the complementary color filter as the image sensor as a whole is scaled down, and the arrangement of the color filters becomes complicated. In addition, the color filter material and process costs increase. Controlling the shutter speed for each area, as in the technique disclosed in Japanese Patent Application Laid-Open No. 2006-253876, complicates the control circuit and control algorithm of the imaging apparatus, resulting in high cost. Furthermore, in the techniques disclosed in these patent publications, the cost of image processing such as resolution recovery, jaggy elimination, and color noise compensation increases, and the structure of the imaging apparatus becomes complicated.
- An object of the present disclosure is therefore to provide an imaging apparatus having a simple configuration and structure that can obtain an image by high dynamic range synthesis and that can easily handle moving image shooting.
- An imaging apparatus according to the present disclosure includes: a first imaging element that includes a first filter and a first photoelectric conversion element and receives light in a first wavelength band; a second imaging element that includes a second filter and a second photoelectric conversion element and receives light in a second wavelength band having a peak wavelength longer than the peak wavelength of the first wavelength band; a third imaging element that includes a third filter and a third photoelectric conversion element and receives light in a third wavelength band having a peak wavelength longer than the peak wavelength of the second wavelength band; and a fourth imaging element that includes a fourth filter and a fourth photoelectric conversion element and receives light in the first, second, and third wavelength bands. The apparatus comprises an imaging unit in which imaging element units, each including these four imaging element subunits, are arranged in a two-dimensional matrix, and an image processing unit. The light transmittance of the fourth filter is lower than the light transmittance of the first filter, the light transmittance of the second filter, and the light transmittance of the third filter.
- Low-sensitivity image data is generated by the output from the fourth image sensor in each of the four image sensor subunits;
- The image processing unit further generates a composite image by using, in a low illuminance image area obtained from the low sensitivity image data or the high sensitivity image data, the high sensitivity image data corresponding to that low illuminance image area, and, in a high illuminance image region, the low sensitivity image data corresponding to that high illuminance image region.
- According to another aspect of the present disclosure, an imaging apparatus includes: a first imaging element that includes a first photoelectric conversion element and receives light in the visible light band; and a second imaging element that includes a neutral density filter and a second photoelectric conversion element and receives light in the visible light band. The apparatus comprises an imaging unit in which imaging element units, each including such an array, are arranged, and an image processing unit. The image processing unit generates high-sensitivity image data from the output of the first imaging element and low-sensitivity image data from the output of the second imaging element. The image processing unit further generates a composite image by using, in a low illuminance image area obtained from the low sensitivity image data or the high sensitivity image data, the high sensitivity image data corresponding to that area, and, in a high illuminance image region, the low sensitivity image data corresponding to that region.
- a monochrome image can be obtained by the imaging device according to the fourth aspect of the present disclosure.
- In the imaging device, basically, in one imaging, low-sensitivity image data and high-sensitivity image data are generated based on the output from the imaging unit, and the image processing unit uses the high-sensitivity image data for the low-illuminance image area and the low-sensitivity image data for the high-illuminance image area.
- Thus, despite having a simple configuration and structure similar to those of an imaging device having a conventional Bayer array, the imaging device can easily obtain an image based on high dynamic range synthesis and can easily cope with moving image shooting. Therefore, an inexpensive imaging device based on the high dynamic range composition method can be provided.
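The single-shot composite operation can be sketched as follows. This is a simplified illustration, not the patent's implementation: high-illuminance regions are detected here simply by near-saturation of the high-sensitivity pixel, and the sensitivity ratio `gain` and threshold are assumed values.

```python
import numpy as np

def hdr_composite(high, low, gain, threshold=0.9):
    """Single-shot HDR composite: 'high' is the high-sensitivity image,
    'low' the low-sensitivity (ND-filtered) image, 'gain' the sensitivity
    ratio between them. Where the high-sensitivity pixel nears saturation
    (a high-illuminance region), the scaled low-sensitivity value is used."""
    high = np.asarray(high, dtype=np.float64)
    low = np.asarray(low, dtype=np.float64)
    return np.where(high < threshold, high, low * gain)

high = np.array([[0.40, 1.00]])   # bright pixel saturated in high-sensitivity data
low  = np.array([[0.10, 0.30]])   # ND-filtered pixel still within range
out = hdr_composite(high, low, gain=4.0)
```

The dark pixel is taken from the high-sensitivity data directly, while the saturated pixel is replaced by the low-sensitivity value lifted by the sensitivity ratio.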
- FIG. 1A and 1B are schematic diagrams of an imaging unit in the imaging apparatus according to the first embodiment, and light receiving characteristics of the first imaging element, the second imaging element, the third imaging element, and the fourth imaging element, respectively.
- FIG. 2A and 2B are conceptual diagrams for explaining the operation of the imaging unit in the imaging apparatus according to the first embodiment.
- 3A and 3B are conceptual diagrams for explaining the operation of the imaging unit in the imaging apparatus according to the first embodiment.
- FIG. 4 is a conceptual diagram for explaining the operation of the imaging unit in the imaging apparatus according to the first embodiment.
- FIG. 5 is a conceptual diagram for explaining the operation of the imaging unit in the imaging apparatus according to the first embodiment.
- FIG. 6 is a diagram schematically illustrating an imaging state of the imaging apparatus according to the first embodiment.
- FIG. 7A and 7B are a conceptual diagram of the image pickup apparatus according to the first embodiment and a schematic partial cross-sectional view of the image pickup element, respectively.
- FIG. 8 is a schematic partial plan view of the light shielding layer in the imaging apparatus of the second embodiment, assuming that the imaging device is cut along the arrows B-B in FIG. 7B.
- FIG. 9A and FIG. 9B are diagrams illustrating, respectively, the sensitivity ratios and the gain ratios of the first, second, third, and fourth image sensors in the image pickup apparatus of Example 3.
- FIG. 10A and 10B are diagrams illustrating a final output ratio of high-sensitivity image data and a final output ratio of low-sensitivity image data, respectively, in the imaging apparatus according to the third embodiment.
- FIG. 11A and FIG. 11B are diagrams schematically showing, respectively, the imaging state (exposure time) over time in the imaging apparatus of Example 4, and the outputs of the first, second, third, and fourth imaging elements.
- FIG. 12 is a schematic partial plan view of an exposure mask with a corner serif pattern for forming a filter based on a lithography technique.
- FIG. 16B is a schematic partial plan view of a second filter and the like, for explaining still another method of forming the first, second, third, and fourth filters of Example 5.
- FIG. 17A and FIG. 17B schematically illustrate the conceptual diagram of the imaging unit in the imaging apparatus of Example 6, and the light receiving characteristics of the first imaging element, the third imaging element, the second imaging element, and the fourth imaging element, respectively.
- FIG. 18A and 18B are conceptual diagrams for explaining the operation of the imaging unit in the imaging apparatus according to the sixth embodiment.
- 19A and 19B are conceptual diagrams for explaining the operation of the imaging unit in the imaging apparatus according to the sixth embodiment.
- FIG. 20 is a conceptual diagram for explaining the operation of the imaging unit in the imaging apparatus according to the sixth embodiment.
- FIG. 21 is a conceptual diagram for explaining the operation of the imaging unit in the imaging apparatus according to the sixth embodiment.
- FIG. 22 is a schematic partial plan view of the light shielding layer in the imaging device of the seventh embodiment, assuming that the imaging device is cut along the arrows B-B in FIG. 7B.
- FIG. 23A and FIG. 23B are diagrams illustrating, respectively, the sensitivity ratios of the first, second, third, and fourth image sensors in the image pickup apparatus of Example 7, and the gain ratios according to the difference in opening size between them.
- 24A and 24B are diagrams illustrating final output ratios of high-sensitivity image data in the imaging apparatus according to the seventh embodiment.
- FIG. 25A and 25B are diagrams illustrating final output ratios of low-sensitivity image data in the imaging apparatus according to the seventh embodiment.
- FIG. 26 is a conceptual diagram of an imaging unit in the imaging apparatus according to the eleventh embodiment.
- 27A and 27B are conceptual diagrams for explaining the operation of the imaging unit in the imaging apparatus according to the eleventh embodiment.
- 28A and 28B are conceptual diagrams for explaining the operation of the imaging unit in the imaging apparatus according to the eleventh embodiment.
- FIG. 29 is a conceptual diagram for explaining the operation of the imaging unit in the imaging apparatus according to the eleventh embodiment.
- Example 1 (imaging device according to the first aspect of the present disclosure)
- Example 2 (modification of Example 1)
- Example 3 (another modification of Example 1)
- Example 4 (another modification of Example 1)
- Example 5 (modification of Examples 1 to 4)
- Example 6 (imaging device according to the second and fourth aspects of the present disclosure)
- Example 7 (modification of Example 6)
- Example 8 (another modification of Example 6)
- Example 9 (another modification of Example 6)
- Example 10 (modification of Example 1 and Example 6)
- Example 11 (imaging device according to the third aspect of the present disclosure), and others
- a first opening is formed between the first filter and the first photoelectric conversion element
- a second opening is formed between the second filter and the second photoelectric conversion element
- a third opening is formed between the third filter and the third photoelectric conversion element
- a fourth opening is formed between the fourth filter and the fourth photoelectric conversion element
- the fourth opening may be smaller than the first opening, the second opening, and the third opening.
- the opening can be obtained by forming the opening in a light shielding layer formed between the filter and the photoelectric conversion element.
- Examples of the planar shape of the opening include a circle or a regular polygon (e.g., regular pentagon, regular hexagon, regular heptagon, regular octagon).
- Here, the regular polygon includes a pseudo regular polygon (a regular polygon in which each side is formed by a curve, or in which the vertices are rounded). The same applies to the imaging apparatus according to the second aspect of the present disclosure described below.
- a first opening is formed in the light incident region of the first photoelectric conversion element
- a second opening is formed between the neutral density filter and the second photoelectric conversion element
- a third opening is formed in the light incident region of the third photoelectric conversion element
- a fourth opening is formed between the neutral density filter and the fourth photoelectric conversion element
- the third opening may be smaller than the first opening
- the fourth opening may be smaller than the second opening.
- The image processing unit can be provided with a first gain adjustment unit that adjusts the outputs from the first, second, and third imaging elements, and a second gain adjustment unit that adjusts the output from the fourth imaging element.
- When the adjustment coefficient applied by the first gain adjustment unit to the outputs from the first, second, and third image sensors is Gn1, and the adjustment coefficient applied by the second gain adjustment unit to the output from the fourth image sensor is Gn2, it is preferable that Gn1/Gn2 ≥ 1 is satisfied.
- Specific examples of the value of (Gn1/Gn2) include 2, 4, and 8. Even in such a form, the dynamic range can be expanded.
- The value of Gn1/Gn2 may be fixed to "1" when the light/dark difference of the shooting scene is small; when the light/dark difference is large, it may be set larger so that the high-sensitivity image data of the low-illuminance region in the scene is raised, and it can be changed automatically or manually.
- the adjustment of the outputs from the first image sensor, the second image sensor, and the third image sensor in the first gain adjustment unit may be the same or different depending on the case.
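The gain adjustment described above can be sketched as follows. This is an illustrative helper, not code from the patent; the function name and the example values are assumptions, with only the constraint Gn1/Gn2 ≥ 1 and the example ratios 2, 4, 8 taken from the text.

```python
import numpy as np

def apply_gains(colour_out, nd_out, gn1, gn2):
    """Apply the adjustment coefficients: Gn1 to the outputs of the first,
    second, and third (colour) imaging elements and Gn2 to the output of the
    fourth (ND-filtered) imaging element, requiring Gn1/Gn2 >= 1."""
    if gn1 / gn2 < 1.0:
        raise ValueError("Gn1/Gn2 >= 1 must be satisfied")
    return np.asarray(colour_out) * gn1, np.asarray(nd_out) * gn2

# With Gn1/Gn2 = 4, a colour-pixel reading of 0.1 is lifted to 0.4 while the
# ND pixel is left unchanged, compensating the sensitivity difference.
colour, nd = apply_gains([0.10, 0.20, 0.15], [0.10], gn1=4.0, gn2=1.0)
```

In a scene with little light/dark contrast both coefficients could be set so that Gn1/Gn2 = 1, matching the "fixed to 1" case in the text.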
- Alternatively, the image processing unit can be provided with a first gain adjustment unit that adjusts the outputs from the first and third imaging elements, and a second gain adjustment unit that adjusts the outputs from the second and fourth imaging elements.
- When the adjustment coefficient applied by the first gain adjustment unit to the outputs from the first and third image sensors is Gn1, and the adjustment coefficient applied by the second gain adjustment unit to the outputs from the second and fourth image sensors is Gn2, it is preferable that Gn1/Gn2 ≥ 1 is satisfied.
- Specific examples of the value of (Gn1/Gn2) include 2, 4, and 8. Even in such a form, the dynamic range can be expanded.
- The image processing unit can generate N sets of high-sensitivity image data and low-sensitivity image data with different exposure times. The image processing unit further divides the illuminance range of the image obtained from the low-sensitivity image data or the high-sensitivity image data into 2N levels, from the lowest-illuminance area to the highest-illuminance area. In each of the N low-illuminance image areas, from the lowest-illuminance area to the Nth lowest, the set of high-sensitivity image data corresponding to that area is used; in each of the N high-illuminance image areas, from the (N+1)th lowest to the highest, the set of low-sensitivity image data corresponding to that area is used to generate a composite image.
- In this case, N = 2 can be adopted, with the imaging time for obtaining the first set of high-sensitivity and low-sensitivity image data and the imaging time for obtaining the second set having a twofold relationship. Such a configuration can be extended to N of 3 or more, and in this form too the dynamic range can be expanded.
- As a result, an image having a dynamic range expanded in 2N stages can be obtained from N sets of image data, and moving image shooting can easily be handled.
- Likewise, in the other aspect, the image processing unit can generate N sets of high-sensitivity image data and low-sensitivity image data with different exposure times, and divide the illuminance range of the image obtained from the low-sensitivity or high-sensitivity image data into 2N levels of image areas, from the image area with the lowest illuminance to the image area with the highest illuminance. In each of the N low-illuminance image areas, from the lowest to the Nth lowest, the set of high-sensitivity image data corresponding to that area is used; in each of the N high-illuminance image areas, from the (N+1)th lowest to the highest, the set of low-sensitivity image data corresponding to that area is used to generate a composite image.
- In this case as well, N = 2 can be adopted, with the imaging times for obtaining the first and second sets having a twofold relationship; the configuration can be extended to N of 3 or more, and the dynamic range can be expanded. An image having a dynamic range expanded in 2N stages can thus be obtained from N sets of image data, and moving image shooting can easily be handled.
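The 2N-level division described above can be sketched as follows. This is a hypothetical illustration: the patent specifies no concrete boundaries between illuminance levels, so equal-width bins over a normalized [0, 1] range are assumed, and the function name is ours.

```python
import numpy as np

def segment_2n_levels(reference, n_sets):
    """Divide the reference (low- or high-sensitivity) image into 2N
    illuminance levels. Levels 0..N-1 are the N low-illuminance image areas
    (served by high-sensitivity sets); levels N..2N-1 are the N
    high-illuminance image areas (served by low-sensitivity sets)."""
    levels = 2 * n_sets
    edges = np.linspace(0.0, 1.0, levels + 1)            # equal-width bins (assumed)
    return np.clip(np.digitize(reference, edges[1:-1]), 0, levels - 1)

ref = np.array([0.05, 0.30, 0.60, 0.95])
labels = segment_2n_levels(ref, n_sets=2)    # N = 2 gives 2N = 4 levels
```

With N = 2, labels 0 and 1 would draw from the two high-sensitivity sets and labels 2 and 3 from the two low-sensitivity sets.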
- A form in which no gap exists between the first filter, the second filter, the third filter, and the fourth filter can be adopted.
- Such a configuration can easily be realized by using an exposure mask with a corner serif pattern when forming at least one of the first, second, third, and fourth filters by photolithography.
- the neutral density filter may have a spectral characteristic equal to a spectral characteristic of a filter in which the first filter and the third filter are stacked.
- the neutral density filter may have a spectral characteristic equal to the spectral characteristic of the first imaging element.
- In the imaging device according to the first aspect or the third aspect of the present disclosure, the neutral density filter can have a structure in which a first material layer made of the first material constituting the first filter and a second material layer made of the second material constituting the second filter are laminated.
- Alternatively, in the imaging device according to the first aspect or the third aspect of the present disclosure, the neutral density filter can have a structure made of a material in which the first material constituting the first filter, the second material constituting the second filter, and the third material constituting the third filter are mixed.
- As the material, a material constituting an on-chip lens having a substantially uniform spectral transmittance in the visible light region can be cited, and a transparent material adapted to human spectral visual sensitivity characteristics can also be used.
- The first wavelength band can be exemplified by a wavelength band of 350 nm to 550 nm (mainly a blue wavelength band), and the peak wavelength of the first wavelength band can be exemplified by 430 nm to 480 nm.
- a wavelength band of 450 nm to 650 nm (mainly a green wavelength band) can be exemplified, and as the peak wavelength of the second wavelength band, 500 nm to 550 nm can be exemplified.
- a wavelength band of 550 nm to 750 nm (mainly a red wavelength band) can be exemplified as the third wavelength band, and a peak wavelength of the third wavelength band can be exemplified as 580 nm to 620 nm.
- the filter may be a color filter that transmits a specific wavelength such as cyan, magenta, or yellow.
- a Bayer array can be given as an array of four imaging elements in one imaging element unit. That is, when the image sensor units are arranged in a two-dimensional matrix, and the image sensor units are arranged in a first direction and a second direction orthogonal to the first direction, for example, In one image sensor unit, the third image sensor and the fourth image sensor are arranged adjacent to each other along the first direction, and the first image sensor and the second image sensor are arranged adjacent to each other along the first direction. The first image sensor and the fourth image sensor are arranged adjacent to each other along the second direction, and the second image sensor and the third image sensor are arranged adjacent to each other along the second direction. .
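The Bayer-like unit arrangement just described can be illustrated with a small sketch. The numeric codes and helper name are ours, not the patent's; the 2x2 unit encodes exactly the adjacency relations stated above.

```python
import numpy as np

def build_unit_mosaic(rows, cols):
    """Tile the 2x2 imaging-element unit over a (2*rows, 2*cols) sensor.
    Codes: 1 = first (blue), 2 = second (green), 3 = third (red),
    4 = fourth (low-transmittance) element. Third and fourth are adjacent
    along the first direction, as are first and second; first/fourth and
    second/third are adjacent along the second direction."""
    unit = np.array([[4, 3],
                     [1, 2]])
    return np.tile(unit, (rows, cols))

mosaic = build_unit_mosaic(2, 2)   # a 4x4 pixel patch of the imaging unit
```

Reading either row of the unit gives the first-direction pairs (4,3) and (1,2); reading either column gives the second-direction pairs (4,1) and (3,2), matching the arrangement in the text.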
- The fourth imaging element receives light in the first, second, and third wavelength bands; in other words, it can be said to receive light in the visible light band.
- the light transmittance of the fourth filter is lower than the light transmittance of the first filter, the light transmittance of the second filter, and the light transmittance of the third filter.
- here, the light transmittance of the fourth filter is denoted T4, and the average value of the light transmittance of the first filter, the light transmittance of the second filter, and the light transmittance of the third filter is denoted T123.
- when the difference between the average luminance, in the high-sensitivity image data, of the low-illuminance area of the shooting scene and the average luminance, in the low-sensitivity image data, of the high-illuminance area of the shooting scene is greater than or equal to a predetermined value, a composite image is generated by using, in the low-illuminance image area obtained from the low-sensitivity image data or the high-sensitivity image data, the high-sensitivity image data corresponding to the low-illuminance image area, and by using, in the high-illuminance image area obtained from the low-sensitivity image data or the high-sensitivity image data, the low-sensitivity image data corresponding to the high-illuminance image area; on the other hand, when the difference between these average luminances is less than the predetermined value, an image can be obtained using only the high-sensitivity image data corresponding to the low-illuminance image area.
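The decision logic above can be sketched as follows. This is a minimal illustration, assuming luminance planes stored as NumPy arrays; the function name `choose_output`, the threshold, and the sensitivity ratio are illustrative choices, not values from the disclosure.

```python
import numpy as np

def choose_output(high, low, region_low, region_high, ratio=8.0, thresh=0.2):
    """If the average luminance of the high-sensitivity data in the
    low-illuminance region and that of the (exposure-compensated)
    low-sensitivity data in the high-illuminance region differ by at
    least `thresh`, composite the two; otherwise use the
    high-sensitivity data alone."""
    mean_low_region = high[region_low].mean()
    mean_high_region = (low * ratio)[region_high].mean()
    if abs(mean_high_region - mean_low_region) >= thresh:
        # composite: rescaled low-sensitivity data in the
        # high-illuminance region, high-sensitivity data elsewhere
        return np.where(region_high, low * ratio, high)
    return high
```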
- assuming that a second imaging element is located in the region where the fourth imaging element is located, the output that this assumed second imaging element (referred to as the "assumed second imaging element" for convenience) would produce is interpolated from the outputs of the second imaging elements adjacent to the assumed second imaging element; it is thereby preferable to obtain an output similar to the output from a Bayer-array image sensor (one red imaging element, two green imaging elements, and one blue imaging element) in an ordinary imaging apparatus. Also in the following description, such interpolation processing is referred to as "interpolation processing of the assumed second imaging element" for convenience.
- assuming that a fourth imaging element is located in the region where the first imaging element is located, the output that this assumed fourth imaging element (referred to as the "assumed first (4) imaging element" for convenience) would produce is interpolated from the outputs of the fourth imaging elements adjacent to the assumed first (4) imaging element; and assuming that a fourth imaging element is located in the region where the second imaging element is located, the output that this assumed fourth imaging element (referred to as the "assumed second (4) imaging element" for convenience) would produce is interpolated from the outputs of the fourth imaging elements adjacent to the assumed second (4) imaging element.
- similarly, assuming that a third imaging element is located in the region where the first imaging element is located, the output of this assumed third imaging element (referred to as the "assumed first (3) imaging element" for convenience) is interpolated from the outputs of the third imaging elements adjacent to it; assuming that a third imaging element is located in the region where the second imaging element is located, the output of this assumed third imaging element (the "assumed second (3) imaging element") is interpolated from the outputs of the third imaging elements adjacent to it; and assuming that a third imaging element is located in the region where the fourth imaging element is located, the output of this assumed third imaging element (the "assumed fourth (3) imaging element") is interpolated from the outputs of the third imaging elements adjacent to it.
- likewise, assuming that a second imaging element is located in the region where the first imaging element is located, the output of this assumed second imaging element (referred to as the "assumed first (2) imaging element" for convenience) is interpolated from the outputs of the second imaging elements adjacent to it; and assuming that a second imaging element is located in the region where the fourth imaging element is located, the output of this assumed second imaging element (the "assumed fourth (2) imaging element") is interpolated from the outputs of the second imaging elements adjacent to it.
- further, assuming that a first imaging element is located in the region where the second imaging element is located, the output of this assumed first imaging element (referred to as the "assumed second (1) imaging element" for convenience) is interpolated from the outputs of the first imaging elements adjacent to it; assuming that a first imaging element is located in the region where the third imaging element is located, the output of this assumed first imaging element (the "assumed third (1) imaging element") is interpolated from the outputs of the first imaging elements adjacent to it; and assuming that a first imaging element is located in the region where the fourth imaging element is located, the output of this assumed first imaging element (the "assumed fourth (1) imaging element") is interpolated from the outputs of the first imaging elements adjacent to it.
- in this way, the data amount and data structure of the low-sensitivity image data can be aligned with those of the high-sensitivity image data while keeping the original number of pixels, so that various kinds of signal processing can be simplified.
- the total output of the assembly of the four first imaging elements in the four-imaging-element sub-units is referred to as the "first imaging element assembly output", the total output of the assembly of the four second imaging elements is referred to as the "second imaging element assembly output", and the total output of the assembly of the four third imaging elements is referred to as the "third imaging element assembly output".
- the above-described “interpolation process of the assumed second image sensor” may be performed.
- an output similar to the output from a Bayer-array image sensor (one red imaging element, two green imaging elements, and one blue imaging element) in an ordinary imaging apparatus is obtained.
- this makes it possible to align the data amount and data structure of the low-sensitivity image data, now with 1/4 the original number of pixels, with those of the high-sensitivity image data, so that various kinds of signal processing can be simplified.
- since the high-sensitivity image data is subjected to the 4-pixel addition process while the low-sensitivity image data is not, the sensitivity difference can be increased by a factor of four.
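The 4-pixel addition above can be sketched as follows, assuming a single-color plane stored as a 2-D NumPy array; `four_pixel_add` is a hypothetical helper name, not part of the disclosure.

```python
import numpy as np

def four_pixel_add(plane):
    """Sum each 2x2 block (4-pixel addition): the result has 1/4 the
    pixel count and roughly four times the signal per output pixel."""
    h, w = plane.shape
    p = plane[:h - h % 2, :w - w % 2]  # crop to even dimensions
    return p[0::2, 0::2] + p[0::2, 1::2] + p[1::2, 0::2] + p[1::2, 1::2]
```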
- the low-illuminance image area and the high-illuminance image area are obtained from the low-sensitivity image data or the high-sensitivity image data.
- for example, when the luminance value of the high-sensitivity image data is saturated or close to saturation, the area is determined to be a high-illuminance area, and the luminance value of the low-sensitivity image data is used there.
- the low illuminance image area and the high illuminance image area can be obtained by a known method.
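One such known approach, extracting the two regions by a saturation threshold on the high-sensitivity data, can be sketched as follows; the function name, threshold value, and array representation are illustrative assumptions.

```python
import numpy as np

def split_regions(high_sensitivity_luma, sat_level=0.95):
    """Classify pixels: those at or near saturation in the
    high-sensitivity data form the high-illuminance region, the rest
    the low-illuminance region. Returns two boolean masks."""
    y = np.asarray(high_sensitivity_luma, dtype=np.float64)
    high_region = y >= sat_level
    return ~high_region, high_region
```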
- the first gain adjustment unit performs gain adjustment on the outputs from the first imaging element, the second imaging element, and the third imaging element; it is preferable that the spectral characteristics synthesized from these outputs match the spectral characteristics of the fourth filter as closely as possible.
- examples of the photoelectric conversion element include a CCD element, a CMOS image sensor, a CIS (Contact Image Sensor), and a CMD (Charge Modulation Device) type signal-amplification image sensor.
- examples of the image sensor include a front side illumination type image sensor and a back side illumination type image sensor.
- a digital still camera, a video camera, a camcorder, or a so-called mobile phone with a camera can be configured from the imaging device of the present disclosure.
- the imaging apparatus may be provided with a lens system.
- the lens system may be a single focus lens or a so-called zoom lens, and the configuration and structure of the lens and the lens system may be determined based on specifications required for the lens system.
- Example 1 relates to an imaging apparatus according to the first aspect of the present disclosure.
- FIG. 1A shows a conceptual diagram of an image pickup unit in the image pickup apparatus of Example 1
- FIG. 7A shows a conceptual diagram of the imaging device
- FIG. 7B shows a schematic partial sectional view of the imaging device.
- the light receiving characteristics of the first image sensor, the second image sensor, the third image sensor, and the fourth image sensor are schematically shown in FIG. 1B.
- FIG. 6 schematically illustrates an imaging state of the imaging apparatus according to the first embodiment.
- the imaging apparatus 1 is an imaging apparatus for obtaining a composite image in which a part of an image is a monochrome image and the remaining part is a color image.
- a first imaging element 41 that includes a first filter 43 and a first photoelectric conversion element 42 and receives light in a first wavelength band
- a second imaging element 51 that includes a second filter 53 and a second photoelectric conversion element 52 and receives light in a second wavelength band having a peak wavelength longer than the peak wavelength of the first wavelength band
- a third imaging device 61 that includes a third filter 63 and a third photoelectric conversion element 62 and receives light in a third wavelength band having a peak wavelength longer than the peak wavelength of the second wavelength band
- a fourth imaging element 71 including a fourth filter 73 and a fourth photoelectric conversion element 72 and receiving light in the first wavelength band, the second wavelength band, and the third wavelength band
- an imaging unit 30 in which imaging element units 31, each composed of these four imaging elements, are arranged in a two-dimensional matrix; and an image processing unit 11.
- the first wavelength band is 350 nm to 550 nm (mainly the blue wavelength band), the peak wavelength of the first wavelength band is 450 nm, and the first imaging element 41 is a blue imaging element.
- the second wavelength band is 450 nm to 650 nm (mainly the green wavelength band), the peak wavelength of the second wavelength band is 530 nm, and the second image sensor 51 is a green image sensor.
- the third wavelength band is 550 nm to 750 nm (mainly the red wavelength band), the peak wavelength of the third wavelength band is 600 nm, and the third image sensor 61 is a red image sensor.
- the light transmittance of the fourth filter 73, which functions as an ND filter (Neutral Density Filter), is lower than the light transmittance of the first filter 43, the light transmittance of the second filter 53, and the light transmittance of the third filter 63.
- the sensitivity of the fourth image sensor 71 is lower than the sensitivity of the first image sensor 41, the second image sensor 51, and the third image sensor 61.
- FIG. 1B schematically shows the light receiving characteristics of the first image sensor, the second image sensor, the third image sensor, and the fourth image sensor.
- when the light transmittance of the fourth filter 73 is T4 and the average value of the light transmittance of the first filter 43, the light transmittance of the second filter 53, and the light transmittance of the third filter 63 is T123, then, although not limited to this value, T4 / T123 ≈ 0.05.
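The relation above is simple arithmetic; as a sketch, with illustrative transmittance values (not from the patent):

```python
def nd_ratio(t4, t1, t2, t3):
    """T4 / T123, where T123 is the average transmittance of the three
    color filters."""
    t123 = (t1 + t2 + t3) / 3.0
    return t4 / t123
```

For example, color-filter transmittances of 0.9 and an ND transmittance of 0.045 give the stated ratio of about 0.05.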
- the fourth filter 73 is made of a mixture of the pigment-based first material constituting the first filter 43, the pigment-based second material constituting the second filter 53, and the pigment-based third material constituting the third filter 63.
- alternatively, the fourth filter 73 is made of a mixture of the pigment-based first material constituting the first filter 43 and the pigment-based third material constituting the third filter 63.
- the imaging elements 41, 51, 61, 71 include, for example, photoelectric conversion elements 42, 52, 62, 72 provided on the silicon semiconductor substrate 80.
- on the photoelectric conversion elements, a first planarizing film 82, the filters 43, 53, 63, 73, an on-chip lens 84, and a second planarizing film 85 are laminated.
- Reference numeral 86 is a light shielding layer
- reference numeral 88 is a wiring layer.
- the photoelectric conversion elements 42, 52, 62, 72 are composed of, for example, CMOS image sensors. Furthermore, for example, a video camera or the like is configured from the imaging apparatus of the first embodiment.
- the imaging apparatus 1 includes a lens system 20.
- a photographing lens 21, a diaphragm 22 and an imaging lens 23 are housed in a lens barrel, and function as a zoom lens.
- the taking lens 21 is a lens for collecting incident light from the subject.
- the photographic lens 21 includes a focus lens for focusing, a zoom lens for enlarging a subject, and the like, and is generally realized by a combination of a plurality of lenses to correct chromatic aberration and the like.
- the diaphragm unit 22 has a function of narrowing down in order to adjust the amount of collected light, and is generally configured by combining a plurality of plate-shaped diaphragm blades. At least at the position of the diaphragm 22, light from one point of the subject becomes parallel light.
- the imaging lens 23 images light on the imaging unit 30.
- the imaging unit 30 is disposed inside the camera body 2.
- the camera body unit 2 includes, for example, an image processing unit 11 and an image storage unit 12. Then, image data is formed based on an output (electric signal) from the imaging unit 30.
- the image processing unit 11 including a known circuit converts the electrical signal output from the imaging unit 30 into image data and records the image data in the known image storage unit 12.
- a band pass filter (not shown) that blocks light other than visible light is disposed above the image sensor, on the seal glass surface of the image sensor package, or in the lens barrel.
- as shown in FIG. 1A, in the imaging apparatus 1 of the first embodiment, the array of the four imaging elements 41, 51, 61, 71 (shown by dotted squares in FIG. 1A) in one imaging element unit 31 (shown by a solid square in FIG. 1A) is a Bayer array. That is, the imaging element units 31 are arranged in a two-dimensional matrix along a first direction and a second direction orthogonal to the first direction, and in one imaging element unit 31:
- the third imaging element 61 and the fourth imaging element 71 are arranged adjacent to each other along the first direction, and the first imaging element 41 and the second imaging element 51 are arranged adjacent to each other along the first direction.
- the first imaging element 41 and the fourth imaging element 71 are arranged adjacent to each other along the second direction, and the second imaging element 51 and the third imaging element 61 are arranged adjacent to each other along the second direction.
- the image processing unit 11 generates high-sensitivity image data based on the outputs from the first imaging element 41, the second imaging element 51, and the third imaging element 61, and generates low-sensitivity image data based on the output from the fourth imaging element 71.
- assume that the first imaging element 41, the second imaging element 51, and the third imaging element 61 output the signal values "B", "G1", and "R", respectively.
- the fourth image sensor 71 outputs the signal value “Gy 4 ” shown in FIG. 2B.
- for convenience of description, the light-receiving sensitivities of the first photoelectric conversion element 42 constituting the first imaging element 41, the second photoelectric conversion element 52 constituting the second imaging element 51, and the third photoelectric conversion element 62 constituting the third imaging element 61 are all set to "1", and the light-receiving sensitivity of the fourth photoelectric conversion element 72 constituting the fourth imaging element 71 is set to "1/8"; the same applies to the following description. That is, the ratio of the average accumulated charge amount of the first imaging element 41, the second imaging element 51, and the third imaging element 61 to the accumulated charge amount of the fourth imaging element 71 is 8:1 (18 dB). Owing to this 18 dB sensitivity difference, low-sensitivity image data covering three additional exposure steps is acquired. The same applies to the following description.
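The decibel figures used here follow directly from the linear sensitivity ratios; as a quick check:

```python
import math

def sensitivity_difference_db(ratio):
    """Express a linear sensitivity (amplitude) ratio in decibels."""
    return 20.0 * math.log10(ratio)
```

An 8:1 ratio gives about 18 dB, and a 16:1 ratio (used later for the reduced-aperture variant) gives about 24 dB.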
- Each image sensor that has received the light in the state illustrated in FIG. 2A and FIG. 2B sends an output (electric signal) based on the amount of received light to the image processing unit 11.
- interpolation processing is performed based on the following method, where a prime (') denotes an output obtained by interpolation.
- in this way, the outputs (R, G1, G'2, B) can be obtained (see FIG. 3A), and high-sensitivity image data can be obtained from the entire set of obtained outputs.
- the output that the assumed fourth imaging element 71 (the assumed second (4) imaging element) would produce is interpolated from the outputs of the fourth imaging elements 71 adjacent to the assumed second (4) imaging element; for example, the output value "Gy'2" of the (i+1, j+1)-th assumed second (4) imaging element is obtained from the average value of the outputs of the adjacent fourth imaging elements 71.
- similarly, the output that the assumed fourth imaging element 71 (the assumed third (4) imaging element) would produce is interpolated from the outputs of the fourth imaging elements 71 adjacent to the assumed third (4) imaging element; for example, the output value "Gy'3" of the (i+1, j+1)-th assumed third (4) imaging element is obtained from the average value of the outputs of the (i, j+1)-th fourth imaging element 71 and the (i+1, j+1)-th fourth imaging element 71.
- the above interpolation method (a bi-linear method) is merely an example; it goes without saying that the output of an adjacent second imaging element 51 can also be used as-is (a nearest-neighbor method), and a more advanced pixel interpolation algorithm using a direction-selection algorithm can also be applied. The algorithm may be chosen according to the target frame rate and the image processing cost of the overall system. The same applies to the following description.
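A minimal bi-linear style fill of the "assumed" positions can be sketched as follows, where `mask` marks positions that hold real samples of the relevant color and the remaining positions are filled from available horizontal/vertical neighbours; function and parameter names are illustrative, and real implementations vectorize this.

```python
import numpy as np

def interpolate_assumed(plane, mask):
    """Fill positions where `mask` is False by averaging the available
    4-connected neighbours that hold real samples (mask True)."""
    out = plane.astype(np.float64).copy()
    h, w = plane.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x]:
                continue  # real sample, keep as-is
            vals = [plane[ny, nx]
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= ny < h and 0 <= nx < w and mask[ny, nx]]
            out[y, x] = sum(vals) / len(vals) if vals else 0.0
    return out
```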
- the data amount and data structure of the low-sensitivity image data can be aligned with those of the high-sensitivity image data without changing the original number of pixels; therefore, an HDR composite image can be generated with a very low-cost processing engine.
- moreover, since the image processing has the simplest periodicity and symmetry, image quality degradation such as false color and jaggies hardly occurs when a composite image is formed.
- the image processing unit 11 uses the high-sensitivity image data corresponding to the low-illuminance image region in the low-illuminance image region obtained from the low-sensitivity image data or the high-sensitivity image data, and uses the low-sensitivity image data corresponding to the high-illuminance image region in the high-illuminance image region, to generate a composite image. Specifically, for example, the low-illuminance image region and the high-illuminance image region are extracted from the high-sensitivity image data.
- in FIG. 6, a shooting scene is assumed in which apples and bananas are placed on a table in a dark room, and trees stand in the bright outdoors outside the window.
- when the shooting conditions (exposure and shutter speed) are set so that the indoor shooting state is appropriate, blown-out highlights occur in the outdoor scenery outside the window.
- conversely, when the shooting conditions (exposure and gain) are set for the bright outdoors, the indoor image is crushed to black (see the lower left-hand side of FIG. 6).
- the shutter speed is adjusted so that the maximum luminance value in the high-illuminance region of the low-sensitivity image data is about 80% of the full range of the AD conversion. The frame rate is assumed to be fixed at, for example, 30 fps.
- the output from the fourth imaging element 71 is not saturated, so gray levels are obtained and the luminance distribution can be acquired with good linearity.
- in the high-illuminance image region, at least one of the first imaging element 41, the second imaging element 51, and the third imaging element 61 is saturated, and part or all of the color information is lost.
- in the low-illuminance image region, the first imaging element 41, the second imaging element 51, and the third imaging element 61 are not saturated, and the luminance distribution and color information can be acquired with good linearity.
- low-sensitivity image data and high-sensitivity image data are obtained by a single imaging operation, and a composite image is generated by using the high-sensitivity image data for the dark room (the low-illuminance image area) and the low-sensitivity image data for the bright outdoors (the high-illuminance image area).
- tone mapping processing is performed to convert the luminance range of the composite image into a range suitable for display, in accordance with the dynamic range (for example, RGB 8 bits) of the display device that displays the composite image or the printer that prints it.
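A very simple tone-mapping sketch for compressing the composite luminance range into an 8-bit display range is shown below; real devices use more elaborate operators, and the normalization-plus-gamma approach here is an illustrative assumption.

```python
import numpy as np

def tone_map(hdr, out_bits=8, gamma=2.2):
    """Normalize an HDR luminance array and apply gamma, then quantize
    to the display range (e.g. 0..255 for 8-bit output)."""
    hdr = np.asarray(hdr, dtype=np.float64)
    norm = hdr / max(hdr.max(), 1e-12)       # normalize to [0, 1]
    mapped = norm ** (1.0 / gamma)           # simple gamma compression
    return np.round(mapped * (2 ** out_bits - 1)).astype(np.uint16)
```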
- a bright outdoor (high illuminance image area) image is a single color (black and white) image
- a dark indoor (low illuminance image area) image is a color image.
- in a device that requires high quality, such as a normal digital camera or a camcorder, a full-screen color image would be required rather than such a composite image.
- the imaging apparatus has a simple configuration and structure similar to, for example, an imaging apparatus having a conventional Bayer array; that is, an image produced by high dynamic range synthesis can be obtained using an inexpensive imaging apparatus.
- the column signal processing circuit is arranged, for example, for each column of the imaging elements, and performs signal processing such as noise removal, column by column, on the outputs from one row of imaging elements. That is, the column signal processing circuit performs signal processing such as correlated double sampling (CDS) for removing fixed pattern noise unique to the imaging elements, signal amplification, and analog/digital conversion.
- the horizontal drive circuit is constituted by, for example, a shift register, and sequentially selects a column signal processing circuit by sequentially outputting horizontal scanning pulses, and outputs a signal from each of the column signal processing circuits.
- the image sensor is driven based on the circuit of the first embodiment disclosed in, for example, JP-A-2005-323331.
- the film thickness of the filters 43, 53, and 63 of the imaging elements 41, 51, and 61 is typically less than 1 μm. Therefore, considering the light-collecting performance of the imaging elements, it is desirable that the film thickness of the fourth filter 73, which is formed on the same plane, be approximately the same. Also, from the viewpoint of the filter formation process, it is often difficult to form a filter thicker than a predetermined thickness.
- the light transmittance of the fourth filter 73 having a film thickness of 1 μm in Example 1 is limited to about 5%, so the output of the fourth imaging element 71 is about 1/8 of the outputs of the first imaging element 41, the second imaging element 51, and the third imaging element 61, and this determines the dynamic range of the entire imaging apparatus. In practice, however, it may be desirable to reduce the sensitivity of the fourth imaging element 71 further in order to cope with shooting scenes having a larger contrast between light and dark.
- if the sizes of the first opening 47 of the first imaging element 41, the second opening 57 of the second imaging element 51, the third opening 67 of the third imaging element 61, and the fourth opening 77 of the fourth imaging element 71 are determined appropriately, the sensitivity of the fourth imaging element 71 can be reduced from 1/8 (Example 1) to 1/16 (24 dB) of the sensitivity of the first imaging element 41, the second imaging element 51, and the third imaging element 61. In this way, the amount of light received by the fourth imaging element 71 can be reduced compared with the amount of light received by the first imaging element 41, the second imaging element 51, and the third imaging element 61, and the dynamic range can be expanded without increasing the film thickness of the fourth filter 73.
- by combining gains that can be set independently for the first imaging element 41, the second imaging element 51, the third imaging element 61, and the fourth imaging element 71, and variably controlling the final-stage output of the image, it is possible to expand the dynamic range of the HDR composite image and to control the dynamic range variably, thereby making it possible to cope with scenes with various differences between light and dark.
- the difference in brightness is often required to be 32 dB or more.
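How a fixed sensitivity ratio and independently set gains combine can be illustrated with a simplified model (this is an assumption for illustration, not the disclosed circuit): the effective dynamic-range extension grows with the sensitivity ratio and with the gain applied to the high-sensitivity path relative to the low-sensitivity path.

```python
import math

def total_dynamic_range_db(sensitivity_ratio, gain_high, gain_low):
    """Dynamic-range extension (dB) from a fixed sensitivity ratio
    combined with independently settable path gains, in a simplified
    multiplicative model."""
    return 20.0 * math.log10(sensitivity_ratio * gain_high / gain_low)
```

Under this model, an 8:1 sensitivity ratio with a 4x relative gain on the high-sensitivity path reaches roughly 30 dB, approaching the 32 dB often required.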
- the image processing unit 11 includes a first gain adjustment unit that adjusts the outputs from the first imaging element 41, the second imaging element 51, and the third imaging element 61, and a second gain adjustment unit that adjusts the output from the fourth imaging element 71; gain adjustment is performed on the outputs from the first imaging element 41, the second imaging element 51, and the third imaging element 61 by the first gain adjustment unit.
- FIG. 9A shows the sensitivity ratios of the first imaging element 41, the second imaging element 51, the third imaging element 61, and the fourth imaging element 71; FIG. 9B shows the gain ratios of the first imaging element 41, the second imaging element 51, the third imaging element 61, and the fourth imaging element 71; FIG. 10A shows the final output ratio of the high-sensitivity image data (the ratio of the output to the outside; the same applies hereinafter); and FIG. 10B shows the final output ratio of the low-sensitivity image data.
- in these figures, the number in "()" represents the sensitivity ratio, and in FIG. 9B and FIG. 23B described later, the number in "< >" represents the gain value.
- Example 2 and Example 3 can also be combined.
- Example 4 also relates to a modification of Example 1.
- an imaging apparatus capable of high frame rate (HFR) imaging, specifically, an imaging apparatus capable of imaging at double-speed (60 fps) HFR, is used.
- the image processing unit 11 generates N sets of high-sensitivity image data and low-sensitivity image data with different exposure times; it further divides the illuminance range of the image obtained from the low-sensitivity image data or the high-sensitivity image data into 2N stages from the lowest-illuminance area to the highest-illuminance area, and in each of the N low-illuminance image areas, from the lowest-illuminance area to the N-th low-illuminance area, uses the set of high-sensitivity image data corresponding to that low-illuminance image area.
- here N = 2, and the imaging time for obtaining the first set of high-sensitivity image data and low-sensitivity image data and the imaging time for obtaining the second set of high-sensitivity image data and low-sensitivity image data are in a 2:1 relationship.
- when the outputs of the first imaging element 41, the second imaging element 51, and the third imaging element 61 in the first imaging are "1", the outputs of the first imaging element 41, the second imaging element 51, and the third imaging element 61 in the second imaging are "1/2"; likewise, the output of the fourth imaging element 71 in the first imaging is "1/8", and the output of the fourth imaging element 71 in the second imaging is "1/16". Two sets of high-sensitivity image data and low-sensitivity image data are then generated based on the outputs from these imaging elements.
- the image processing unit 11 divides the illuminance range of the image obtained from the low-sensitivity image data or the high-sensitivity image data into four stages from the lowest-illuminance area to the highest-illuminance area; in each of the two low-illuminance image areas, from the lowest-illuminance area to the second-lowest-illuminance area, the set of high-sensitivity image data corresponding to that low-illuminance image area is used.
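The N = 2 merging described above can be sketched as follows: the second exposure is half as long, so its data are doubled to share a common radiance scale, and each pixel takes the most sensitive non-saturated source, giving four illuminance steps. Function and parameter names, the saturation threshold, and the scalar rescaling are illustrative assumptions.

```python
import numpy as np

def compose_two_exposures(high1, high2, low1, low2, ratio=8.0, sat=0.95):
    """Merge two high/low-sensitivity pairs (2:1 exposure times) by
    picking, per pixel, the most sensitive source that is not saturated."""
    h1 = np.asarray(high1, dtype=float)
    h2 = np.asarray(high2, dtype=float)
    l1 = np.asarray(low1, dtype=float)
    l2 = np.asarray(low2, dtype=float)
    # sources ordered from most to least sensitive:
    # (values on a common radiance scale, per-pixel saturation flag)
    sources = [
        (h1,               h1 >= sat),
        (h2 * 2.0,         h2 >= sat),
        (l1 * ratio,       l1 >= sat),
        (l2 * ratio * 2.0, l2 >= sat),
    ]
    out = sources[-1][0].copy()  # fall back to the least sensitive source
    for values, saturated in reversed(sources[:-1]):
        out = np.where(saturated, out, values)
    return out
```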
- the dynamic range of the HDR synthesized image is 24 dB.
- the high-sensitivity image data in the first imaging is saturated, while the high-sensitivity image data in the second imaging is not saturated.
- the range in which color information can be acquired is doubled, and the detection limit of the gray image is extended to half the illuminance, so that information in a lower-illuminance spatial region can be acquired.
- the configurations and structures of the imaging apparatuses according to the second and third embodiments can be applied to the imaging apparatus according to the fourth embodiment.
- when forming the first filter, the second filter, the third filter, and the fourth filter individually, that is, when forming filters having an isolated dot pattern, the corner pattern of the resist material recedes by an amount on the order of the exposure wavelength, so that a filter that should be formed in a rectangular shape ends up with its four corners rounded. As a result, gaps are formed between the corner portions of adjacent filters, which may cause problems such as color mixing between the imaging elements and deterioration of the reliability of the entire imaging apparatus.
- Example 5 relates to a modification of Example 1 to Example 4.
- in the imaging apparatus according to the fifth embodiment, there is no gap between the first filter 43, the second filter 53, the third filter 63, and the fourth filter 73.
- such a configuration can be easily realized, when forming each of the first filter 43, the second filter 53, the third filter 63, and the fourth filter 73 using a photolithography technique, by using an exposure mask with a corner-serif pattern as shown in plan view in FIG. 12. That is, a rectangular opening is formed in the exposure mask, and a small rectangular opening protrudes from each of the four corners of the opening.
- a negative type color filter resist material is used.
- photoelectric conversion elements 42, 52, 62, 72 are provided on the silicon semiconductor substrate 80, and then a first planarization film 82 and a light shielding layer 86 are formed thereon.
- a resist material containing the pigment-based third material constituting the third filter 63 is applied to the entire surface, and the resist material is exposed and developed using the exposure mask shown in FIG. 12, whereby a third filter 63 having four bulged corners can be obtained.
- next, a resist material containing the pigment-based second material constituting the second filter 53 is applied to the entire surface, and the resist material is exposed and developed using the exposure mask shown in FIG. 12, whereby a second filter 53 having four bulged corners can be obtained.
- further, a resist material containing the pigment-based first material constituting the first filter 43 is applied to the entire surface, and the resist material is exposed and developed using the exposure mask shown in FIG. 12, whereby a first filter 43 having four bulged corners can be obtained.
- finally, a resist material including the pigment-based first material, second material, and third material (or the pigment-based first material and third material) constituting the fourth filter 73 is applied to the entire surface, and the resist material is exposed and developed using the exposure mask shown in FIG. 12, whereby a fourth filter 73 having four bulged corners as shown in FIG. 14B can be obtained. Since the four corners of each of the filters 43, 53, 63, and 73 bulge and overlap, no gap is generated between the filters 43, 53, 63, and 73.
- Note that the method of forming the filters 43, 53, 63, and 73 is not limited to the above method.
- Alternatively, first, a resist material containing the pigment-based second material constituting the second filter 53 is applied to the entire surface, and the resist material is exposed and developed using the exposure mask shown in FIG. 12, whereby a second filter 53 having four bulged corners can be obtained.
- Next, a resist material containing the pigment-based third material constituting the third filter 63 is applied to the entire surface, and the resist material is exposed and developed using an exposure mask having a strip-shaped opening, whereby the third filter 63 can be obtained and, at the same time, a third material layer can be formed in the region where the fourth filter 73 is to be formed.
- Next, a resist material containing the pigment-based first material constituting the first filter 43 is applied to the entire surface, and the resist material is exposed and developed using an exposure mask having a band-shaped opening, whereby the first filter 43 can be obtained and, at the same time, a first material layer can be formed, on the strip pattern shown in FIG. 15B, in the region where the fourth filter 73 is to be formed. In this manner, a fourth filter 73 having a two-layer laminated structure of a first material layer made of the first material constituting the first filter 43 and a third material layer made of the third material constituting the third filter 63 can be obtained.
- That is, the neutral density filter (the fourth filter 73) can be formed based on the first material constituting the first filter and the third material constituting the third filter; the combination behaves like a magenta-based material, yet only existing materials are used.
- Alternatively, first, a resist material containing the pigment-based second material constituting the second filter 53 is applied to the entire surface, and the resist material is exposed and developed using an exposure mask, whereby, as shown in FIG. 16B, a second filter 53 with four bulged corners can be obtained and, at the same time, a layer made of the second material is formed in the region where the fourth filter 73 is to be formed.
- Then, by forming the third filter 63 through the formation of a band-shaped layer of the third material similar to that shown in FIG. 15B, and by forming the first filter 43 through the formation of a band-shaped layer of the first material similar to that shown in FIG. 16A, a fourth filter 73 having a three-layer laminated structure of a first material layer made of the first material constituting the first filter 43, a second material layer made of the second material constituting the second filter 53, and a third material layer made of the third material constituting the third filter 63 can be obtained.
- Example 6 relates to an imaging apparatus according to the second and fourth aspects of the present disclosure.
- a conceptual diagram of the imaging unit in the imaging apparatus of Example 6 is shown in FIG. 17A, and conceptual diagrams for explaining the operation of the imaging unit are shown in FIGS. 18A, 18B, 19A, 19B, 20, and 21.
- FIG. 17B schematically shows the light receiving characteristics of the first image sensor, the second image sensor, the third image sensor, and the fourth image sensor.
- the conceptual diagram of the imaging device and the schematic partial cross-sectional view of the imaging device are the same as those shown in FIGS. 7A and 7B.
- the imaging device 2 is an imaging device for obtaining a monochrome image, and will be described according to the second aspect of the present disclosure.
- The imaging device includes a first imaging element 141 and a third imaging element 161 that each include a first photoelectric conversion element and receive light in the visible light band, and a second imaging element 151 and a fourth imaging element 171 that each include a neutral density filter and a second photoelectric conversion element and receive light in the visible light band.
- The image processing unit 11 generates high-sensitivity image data based on the outputs from the first image sensor 141 and the third image sensor 161, and generates low-sensitivity image data based on the outputs from the second image sensor 151 and the fourth image sensor 171. The image processing unit 11 further generates a composite image by using, in the low-illuminance image region obtained from the low-sensitivity image data or the high-sensitivity image data, the high-sensitivity image data corresponding to the low-illuminance image region, and by using, in the high-illuminance image region obtained from the low-sensitivity image data or the high-sensitivity image data, the low-sensitivity image data corresponding to the high-illuminance image region.
- the imaging device 2 of Example 6 is an imaging device for obtaining a monochrome image, and will be described according to the fourth aspect of the present disclosure.
- Regarded from the fourth aspect, the imaging device includes first imaging elements 141 and 161 that include a first photoelectric conversion element and receive light in the visible light band, and second imaging elements 151 and 171 that include a neutral density filter and a second photoelectric conversion element and receive light in the visible light band.
- The imaging element units are arranged in all the rows of the imaging unit along the first direction, or the imaging element units are arranged at intervals in one row of the imaging unit along the first direction and, further, every M rows along the second direction (where M ≥ 2).
- the part constituting the imaging unit other than the imaging element unit may be occupied by the first imaging element.
- The image processing unit 11 generates high-sensitivity image data based on the outputs from the first image sensors 141 and 161, and generates low-sensitivity image data based on the outputs from the second image sensors 151 and 171. The image processing unit 11 further generates a composite image by using, in the low-illuminance image region obtained from the low-sensitivity image data or the high-sensitivity image data, the high-sensitivity image data corresponding to the low-illuminance image region, and by using, in the high-illuminance image region, the low-sensitivity image data corresponding to the high-illuminance image region.
- Here, the neutral density filter is made of a material in which the pigment-based first material constituting the first filter 43 in Example 1, the pigment-based second material constituting the second filter 53, and the pigment-based third material constituting the third filter 63 are mixed. Alternatively, the neutral density filter (the fourth filter 73) is composed of a three-layer laminated structure of a first material layer made of the pigment-based first material constituting the first filter 43, a second material layer made of the pigment-based second material constituting the second filter 53, and a third material layer made of the pigment-based third material constituting the third filter 63.
- Alternatively, the neutral density filter is made of a material in which the pigment-based second material constituting the second filter 53 in Example 1 and a magenta-based material are mixed.
- Alternatively, the neutral density filter (the fourth filter 73) may be composed of a two-layer structure of a second material layer made of the pigment-based second material constituting the second filter 53 and a material layer made of a magenta-based material. In this way, spectral characteristics equal to those of a filter in which the first filter and the third filter are laminated can be imparted to an appropriately designed neutral density filter.
- As shown in FIG. 17A, in the imaging device 2 of the sixth embodiment, the array of the four imaging elements 141, 151, 161, and 171 (indicated by dotted squares in FIG. 17A) in one imaging element unit 131 (indicated by a solid square in FIG. 17A) is a Bayer array.
- The imaging element units 131 are arranged in a two-dimensional matrix along a first direction and a second direction orthogonal to the first direction. In one imaging element unit 131, the third imaging element 161 and the fourth imaging element 171 are arranged adjacent to each other along the first direction, and the first imaging element 141 and the second imaging element 151 are arranged adjacent to each other along the first direction. Further, the first imaging element 141 and the fourth imaging element 171 are arranged adjacent to each other along the second direction, and the second imaging element 151 and the third imaging element 161 are arranged adjacent to each other along the second direction.
- the sensitivity of the second image sensor 151 and the fourth image sensor 171 is lower than the sensitivity of the first image sensor 141 and the third image sensor 161.
- The image processing unit 11 generates high-sensitivity image data based on the outputs from the first image sensor 141 and the third image sensor 161, and generates low-sensitivity image data based on the outputs from the second image sensor 151 and the fourth image sensor 171.
- It is assumed that the first image sensor 141, the third image sensor 161, the second image sensor 151, and the fourth image sensor 171 output the signal values "GY 1", "GY 3", "gy 2", and "gy 4" shown in FIGS. 18A and 18B, respectively.
- For convenience of explanation, the light-receiving sensitivities of the first photoelectric conversion elements constituting the first image sensor 141 and the third image sensor 161 are both set to "1", and the light-receiving sensitivities of the photoelectric conversion elements constituting the second image sensor 151 and the fourth image sensor 171 are set to "1/8". Accordingly, the ratio between the average accumulated charge amount of the first image sensor 141 and the third image sensor 161 and that of the second image sensor 151 and the fourth image sensor 171 is 8:1 (18 dB). Owing to this 18 dB sensitivity difference, the low-sensitivity image data is acquired three exposure steps lower. The same applies to the following description.
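- The arithmetic behind the 8:1 = 18 dB figure can be checked with a short illustrative Python sketch (the patent itself contains no code; the variable names are ours):

```python
import math

# Sensitivity of the high-sensitivity sensors (141, 161) vs. the
# low-sensitivity sensors (151, 171), as stated in the text.
HIGH_SENSITIVITY = 1.0
LOW_SENSITIVITY = 1.0 / 8.0

ratio = HIGH_SENSITIVITY / LOW_SENSITIVITY   # 8.0
ratio_db = 20 * math.log10(ratio)            # signal-amplitude dB convention
exposure_steps = math.log2(ratio)            # 8 = 2^3, i.e. three stops

print(round(ratio_db, 1), exposure_steps)    # 18.1 3.0
```

- The 20·log10 convention for signal amplitude gives 20·log10(8) ≈ 18 dB, matching the text, and log2(8) = 3 corresponds to the "three exposure levels".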
- Each image sensor that has received light in the state illustrated in FIGS. 18A and 18B sends an output (electric signal) based on the amount of received light to the image processing unit 11.
- interpolation processing is performed based on the following method.
- The output that an assumed first image sensor 141 (a first image sensor assumed to occupy a position where no first image sensor actually exists) would produce is interpolated from the outputs of the first image sensors 141 adjacent to the assumed first image sensor 141. For example, the output that the (i+1, j+1)th assumed first image sensor 141 would produce is obtained as the average value "GY′ 4" of the outputs from the two first image sensors 141, namely the (i+1, j)th and the (i+1, j+1)th.
- Likewise, the output that an assumed third image sensor 161 would produce is interpolated from the outputs of the third image sensors 161 adjacent to the assumed third image sensor 161. For example, the output that the (i+1, j+1)th assumed third image sensor 161 would produce is obtained as the average value "GY′ 2" of the outputs from the two third image sensors 161, namely the (i+1, j+1)th and the (i+1, j+2)th.
- In this way, outputs (GY 1, GY′ 2, GY 3, GY′ 4; see FIG. 19A) similar to the outputs from the Bayer-array image sensors (one red image sensor, two green image sensors, and one blue image sensor) in a normal image pickup apparatus are obtained, and high-sensitivity image data can be obtained from the entire set of outputs.
- Similarly, the output that an assumed fourth image sensor 171 would produce is interpolated from the outputs of the fourth image sensors 171 adjacent to the assumed fourth image sensor 171. For example, the output that the (i+1, j+1)th assumed fourth image sensor 171 would produce is obtained as the average value "gy′ 1" of the outputs from the two fourth image sensors 171, namely the (i+1, j+1)th and the (i+1, j+2)th.
- Likewise, the output that an assumed second image sensor 151 would produce is interpolated from the outputs of the second image sensors 151 adjacent to the assumed second image sensor 151. For example, the output that the (i+1, j+1)th assumed second image sensor 151 would produce is obtained as the average value "gy′ 3" of the outputs from the two second image sensors 151, namely the (i+1, j)th and the (i+1, j+1)th.
- In this way, outputs (gy′ 1, gy 2, gy′ 3, gy 4; see FIG. 19B) similar to the outputs from the Bayer-array image sensors (one red image sensor, two green image sensors, and one blue image sensor) in a normal image pickup apparatus are obtained, and low-sensitivity image data can be obtained from the entire set of outputs.
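- The neighbor-averaging interpolation described above can be sketched in Python. This is an illustrative one-dimensional sketch under our own naming and layout assumptions, not the patent's exact indexing: positions where a sensor of the given type actually exists keep their output, and each "assumed" position receives the average of its nearest real same-type neighbors.

```python
def fill_assumed(row, real_mask):
    """row: sensor outputs along one row; real_mask: True where a real sensor
    of this type sits. Assumed positions get the mean of the nearest real
    same-type neighbors (one-sided at the row edge)."""
    out = [float(v) for v in row]
    real_idx = [i for i, r in enumerate(real_mask) if r]
    for i, r in enumerate(real_mask):
        if r:
            continue
        left = [j for j in real_idx if j < i]
        right = [j for j in real_idx if j > i]
        vals = []
        if left:
            vals.append(out[left[-1]])
        if right:
            vals.append(out[right[0]])
        out[i] = sum(vals) / len(vals)
    return out

# Real first-sensor outputs at even positions; assumed positions in between.
print(fill_assumed([100, 0, 120, 0], [True, False, True, False]))
# [100.0, 110.0, 120.0, 120.0]
```

- The interior assumed position is the average of its two neighbors (110.0), while the edge position falls back to its single neighbor, mirroring the two-sensor averages ("GY′ 4", "gy′ 3", etc.) in the text.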
- As described above, the data amount and data structure of the low-sensitivity image data can be aligned with those of the high-sensitivity image data without changing the original number of pixels. Various signal processing can therefore be simplified, and an HDR composite image can be generated by a very low-cost processing engine. Moreover, since the image processing has the simplest periodicity and symmetry, image-quality degradation such as false color and jaggies hardly occurs when a composite image is formed.
- The image processing unit 11 generates a composite image by using, in the low-illuminance image region obtained from the low-sensitivity image data or the high-sensitivity image data, the high-sensitivity image data corresponding to the low-illuminance image region, and by using, in the high-illuminance image region, the low-sensitivity image data corresponding to the high-illuminance image region. Specifically, for example, a low-illuminance image region and a high-illuminance image region are extracted from the high-sensitivity image data: if the average illuminance value of a region is equal to or greater than a predetermined value, that region is treated as a high-illuminance image region, and if the average illuminance value is less than the predetermined value, that region is treated as a low-illuminance image region. A region surrounded by double diagonal lines is a high-illuminance image region. The composite image is then generated by using the high-sensitivity image data in the low-illuminance image region and the low-sensitivity image data in the high-illuminance image region; this state is shown schematically in the corresponding conceptual diagram.
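- The region-based composition rule above can be sketched as follows. This is an illustrative Python sketch: the threshold value, the rescaling of the low-sensitivity data by the sensitivity ratio, and all names are our assumptions, not values from the patent.

```python
SENSITIVITY_RATIO = 8.0   # high : low = 8 : 1, per the 18 dB example above
THRESHOLD = 240.0         # illustrative "predetermined value"

def compose(high_px, low_px, region_avg):
    """Pick per-region source data: low-sensitivity data (rescaled by the
    sensitivity ratio) in high-illuminance regions, high-sensitivity data
    elsewhere."""
    if region_avg >= THRESHOLD:            # high-illuminance image region
        return low_px * SENSITIVITY_RATIO
    return high_px                         # low-illuminance image region

print(compose(high_px=250.0, low_px=40.0, region_avg=250.0))  # 320.0
print(compose(high_px=80.0, low_px=10.0, region_avg=80.0))    # 80.0
```

- In the bright region the unsaturated low-sensitivity value, scaled back by the 8:1 ratio, replaces the (likely saturated) high-sensitivity value; in the dark region the high-sensitivity value is kept.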
- In the seventh embodiment, high-sensitivity image data [1] based on the output from the first image sensor 141 and high-sensitivity image data [2] based on the output from the third image sensor 161 can be obtained; the final output ratio of high-sensitivity image data [1] to high-sensitivity image data [2] is 1:(1/2) (see FIGS. 24A and 24B).
- Similarly, low-sensitivity image data [1] based on the output from the second image sensor 151 and low-sensitivity image data [2] based on the output from the fourth image sensor 171 can be obtained; the final output ratio of low-sensitivity image data [1] to low-sensitivity image data [2] is (1/8):(1/16) (see FIGS. 25A and 25B).
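- The four output stages quoted above span an overall 16:1 ratio, which can be verified with a short illustrative Python sketch (the stage labels and sensor mapping follow the text; the dB figure is our calculation):

```python
import math

# Four output stages available from one imaging operation (ratios from the text).
stages = {
    "high[1]": 1.0,         # first image sensor 141
    "high[2]": 1.0 / 2.0,   # third image sensor 161
    "low[1]":  1.0 / 8.0,   # second image sensor 151
    "low[2]":  1.0 / 16.0,  # fourth image sensor 171
}

# Overall span between the strongest and weakest stage, in amplitude dB.
span_db = 20 * math.log10(max(stages.values()) / min(stages.values()))
print(round(span_db, 1))  # 24.1
```

- The overall 16:1 span (about 24 dB) versus the 8:1 span of a single high/low pair is one way to read the text's statement that the acquirable range is doubled.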
- Thus, in the seventh embodiment, four stages of information can be acquired in one imaging operation. That is, according to the seventh embodiment, the range over which information can be acquired is doubled, the detection lower limit for a gray image is greatly extended, and information can be acquired in a lower-illuminance spatial region.
- the eighth embodiment is also a modification of the sixth embodiment and relates to the same form as the third embodiment.
- In the eighth embodiment, the image processing unit 11 includes a first gain adjustment unit that adjusts the outputs from the first imaging element 141 and the third imaging element 161, and a second gain adjustment unit that adjusts the outputs from the second imaging element 151 and the fourth imaging element 171.
- Here, Gn 1 is the adjustment coefficient applied by the first gain adjustment unit to the outputs from the first image sensor 141 and the third image sensor 161, and Gn 2 is the adjustment coefficient applied by the second gain adjustment unit to the outputs from the second image sensor 151 and the fourth image sensor 171.
- The range of the gain difference between the first image sensor 141 and the third image sensor 161 on the one hand and the second image sensor 151 and the fourth image sensor 171 on the other is 12 dB at the maximum.
- For example, the sensitivity ratio of the first image sensor 141 and the third image sensor 161 to the second image sensor 151 and the fourth image sensor 171 is 1:(1/8), and the gain ratio of the first image sensor 141 and the third image sensor 161 to the second image sensor 151 and the fourth image sensor 171 is 4:1. Therefore, the final output ratio of the first image sensor 141 and the third image sensor 161 to the second image sensor 151 and the fourth image sensor 171 is 1:(1/32), that is, 30 dB.
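- How the 30 dB final ratio arises from combining the sensitivity ratio with the gain ratio can be sketched in Python (an illustrative calculation; the variable names are ours):

```python
import math

# Sensitivity ratio 1:(1/8) combined with gain ratio Gn1:Gn2 = 4:1.
sens_high, sens_low = 1.0, 1.0 / 8.0
gain_high, gain_low = 4.0, 1.0

# Final per-channel outputs are sensitivity x gain.
final_ratio = (sens_low * gain_low) / (sens_high * gain_high)  # 1/32
ratio_db = 20 * math.log10(1.0 / final_ratio)

print(final_ratio, round(ratio_db, 1))  # 0.03125 30.1
```

- The 18 dB sensitivity difference plus the 12 dB maximum gain difference gives the 30 dB (1:(1/32)) figure quoted in the text.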
- The values of the adjustment coefficients Gn 1 and Gn 2 may be fixed, or may be changed automatically or manually according to the shooting scene and the difference between the average luminance in the high-sensitivity image data and the average luminance in the low-sensitivity image data; this enables imaging with an optimum dynamic range for various bright and dark scenes.
- Further, the gain value for the first image sensor 141 and the gain value for the third image sensor 161 may be different (for example, a 4-times gain value for the first image sensor 141 and a 2-times gain value for the third image sensor 161), and the gain value for the second image sensor 151 and the gain value for the fourth image sensor 171 may be different (for example, a 2-times gain value for the second image sensor 151 and a 1-time gain value for the fourth image sensor 171).
- Example 7 and Example 8 can also be combined.
- the ninth embodiment is also a modification of the sixth embodiment and relates to the same form as the fourth embodiment.
- In the ninth embodiment, the image processing unit 11 generates N sets of high-sensitivity image data and low-sensitivity image data with different exposure times. The image processing unit 11 further divides the illuminance range of the image obtained from the low-sensitivity image data or the high-sensitivity image data into 2N levels of image areas, from the image area with the lowest illuminance to the image area with the highest illuminance. In each of the N low-illuminance image areas, from the lowest-illuminance image area to the Nth low-illuminance image area, the N sets of high-sensitivity image data corresponding to those areas are used, and in each of the N high-illuminance image areas the N sets of low-sensitivity image data corresponding to those areas are used, to generate a composite image.
- In the ninth embodiment, N = 2 is set, and there is a two-fold relationship between the imaging time for obtaining the first set of high-sensitivity image data and low-sensitivity image data and the imaging time for obtaining the second set of high-sensitivity image data and low-sensitivity image data.
- More specifically, the image processing unit 11 divides the illuminance range of the image obtained from the low-sensitivity image data or the high-sensitivity image data into four (2N = 4) levels, from the lowest-illuminance image area to the highest-illuminance image area. In the low-illuminance image areas, the image processing unit 11 uses the high-sensitivity image data, that is, the high-sensitivity image data obtained from "high sensitivity 1" and "high sensitivity 2"; in the high-illuminance image areas, it uses the corresponding low-sensitivity image data. The range over which information can be acquired is thereby doubled, and the detection lower limit for a gray image is halved, so that information can be acquired in a lower-illuminance spatial region. Further, by increasing the information at intermediate illuminance, a high-definition HDR composite image can be formed from the two unit frames.
- The configuration and structure of the imaging device of the seventh embodiment can be applied to the imaging device of the ninth embodiment, the configuration and structure of the imaging device of the eighth embodiment can be applied to the imaging device of the ninth embodiment, and the configurations and structures of the imaging devices of the seventh and eighth embodiments can be applied together to the imaging device of the ninth embodiment.
- Example 10 is a modification of Example 1 and Example 6; that is, in the tenth embodiment, a monochrome image is obtained from the imaging device described in the first embodiment.
- In the tenth embodiment, the first gain adjustment unit provided in the image processing unit 11 adjusts the outputs from the first image sensor 41, the second image sensor 51, and the third image sensor 61 so that the spectral characteristics based on these outputs match, as closely as possible, the spectral characteristics of the fourth filter (neutral density filter) 73 provided in the fourth image sensor 71. For example, when the fourth filter is configured by stacking the first filter and the third filter as described above, the output of the first image sensor 41 may be multiplied by 1.0, the output of the second image sensor 51 by 0.0, and the output of the third image sensor 61 by 1.0.
- Of course, these output magnifications are merely examples, and may be set appropriately so as to approximate the spectral characteristics of the fourth filter.
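- The spectral-matching adjustment above amounts to a weighted sum of the three sensor outputs. The following is an illustrative Python sketch using the 1.0/0.0/1.0 example weights from the text; the function name and the signal values are our assumptions:

```python
# Example weights for a fourth filter made by stacking the first and
# third filters: pass the first- and third-band outputs, drop the second.
weights = {"first": 1.0, "second": 0.0, "third": 1.0}

def matched_mono(out_first, out_second, out_third):
    """Weighted sum of the first/second/third sensor outputs, approximating
    the response a sensor behind the fourth (neutral density) filter
    would have."""
    return (weights["first"] * out_first
            + weights["second"] * out_second
            + weights["third"] * out_third)

print(matched_mono(30.0, 50.0, 20.0))  # 50.0
```

- In practice the weights would be tuned to the actual spectral curves, as the text notes.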
- In the tenth embodiment, the neutral density filter is made of a material in which the first material, the second material, and the third material are mixed; is composed of a three-layer structure of the first material layer, the second material layer, and the third material layer; is made of a material in which the first material and the third material are mixed; or is composed of a two-layer structure of the first material layer and the third material layer.
- FIG. 17B schematically shows the spectral characteristics of the material in which the first material and the third material are mixed.
- Example 11 relates to an imaging apparatus according to the third aspect of the present disclosure.
- a conceptual diagram of the imaging unit in the imaging device of Example 11 is shown in FIG. 26, and conceptual diagrams for explaining the operation of the imaging unit are shown in FIGS. 27A, 27B, 28A, 28B, and 29.
- the image pickup apparatus is an image pickup apparatus for obtaining a composite image in which a part of an image is a monochrome image and the remaining part is a color image.
- a first imaging element 41 that includes a first filter 43 and a first photoelectric conversion element 42 and receives light in a first wavelength band
- a second imaging element 51 that includes a second filter 53 and a second photoelectric conversion element 52 and receives light in a second wavelength band having a peak wavelength longer than the peak wavelength of the first wavelength band
- a third imaging device 61 that includes a third filter 63 and a third photoelectric conversion element 62 and receives light in a third wavelength band having a peak wavelength longer than the peak wavelength of the second wavelength band
- a fourth imaging element 71 including a fourth filter 73 and a fourth photoelectric conversion element 72 and receiving light in the first wavelength band, the second wavelength band, and the third wavelength band
- An imaging unit 230 in which an imaging element unit 231A composed of four imaging element subunits 231B is arranged in a two-dimensional matrix
- The first image sensor 41, the second image sensor 51, the third image sensor 61, the fourth image sensor 71, the photoelectric conversion elements, and the filters 43, 53, 63, and 73 in the eleventh embodiment have the same configurations and structures as in the first embodiment.
- As in the first embodiment, the light transmittance of the fourth filter 73 is lower than the light transmittance of the first filter 43, the light transmittance of the second filter 53, and the light transmittance of the third filter 63.
- the sensitivity of the fourth image sensor 71 is lower than the sensitivity of the first image sensor 41, the second image sensor 51, and the third image sensor 61.
- The image processing unit 11 generates high-sensitivity image data based on the total output of the first imaging elements 41, the total output of the second imaging elements 51, and the total output of the third imaging elements 61 in the four imaging element subunits 231B constituting the imaging element unit 231A, and generates low-sensitivity image data from the outputs of the fourth imaging elements 71 in the four imaging element subunits 231B. That is, each of these total outputs is four times the output of a single imaging element, whereas each fourth imaging element 71 contributes its output individually; since the sensitivity ratio of the first imaging element 41, the second imaging element 51, and the third imaging element 61 to the fourth imaging element 71 is 1:(1/8), the final output ratio of the high-sensitivity image data to the low-sensitivity image data is 1:(1/32).
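- The 1:(1/32) figure here comes from 4-pixel addition rather than gain, which the following illustrative Python sketch makes explicit (variable names are ours):

```python
# High-sensitivity data: 4-pixel sum over a subunit's four same-color,
# unit-sensitivity sensors. Low-sensitivity data: one fourth-sensor output
# at 1/8 sensitivity, with no addition.
n_added = 4
sens_ratio = 1.0 / 8.0

high_total = n_added * 1.0      # 4-pixel addition
low_single = 1.0 * sens_ratio   # single fourth-image-sensor output

final_ratio = low_single / high_total
print(final_ratio)  # 0.03125, i.e. 1/32
```

- The same 1:(1/32) ratio as in the eighth embodiment is reached, but here the factor of 4 comes from pixel addition instead of a gain difference.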
- The image processing unit 11 further generates a composite image by using, in the low-illuminance image region obtained from the low-sensitivity image data or the high-sensitivity image data, the high-sensitivity image data corresponding to the low-illuminance image region, and by using, in the high-illuminance image region, the low-sensitivity image data corresponding to the high-illuminance image region.
- the arrangement of the four image sensors 41, 51, 61, 71 in one image sensor subunit 231B is a Bayer array.
- The imaging element unit 231A is indicated by a solid line, and the imaging element subunits 231B are indicated by solid and dotted lines. That is, for example, in one imaging element subunit 231B, the third imaging element 61 and the fourth imaging element 71 are arranged adjacent to each other along the first direction, the first imaging element 41 and the second imaging element 51 are arranged adjacent to each other along the first direction, the first imaging element 41 and the fourth imaging element 71 are arranged adjacent to each other along the second direction, and the second imaging element 51 and the third imaging element 61 are arranged adjacent to each other along the second direction.
- Assume that the four first image sensors 41 in one image sensor unit 231A output the signal values "B 1", "B 2", "B 3", and "B 4" shown in FIG. 27A, the four second image sensors 51 output the signal values "G 1", "G 2", "G 3", and "G 4" shown in FIG. 27A, and the four third image sensors 61 output the signal values "R 1", "R 2", "R 3", and "R 4" shown in FIG. 27A.
- Each image sensor that has received the light in the state illustrated in FIGS. 27A and 27B sends an output (electric signal) based on the amount of received light to the image processing unit 11.
- interpolation processing is performed based on the following method.
- Specifically, in the four imaging element subunits 231B, the image processing unit 11 obtains the output totals B 1-4, G 1-4, and R 1-4 of the assembly of four first imaging elements 41 (first imaging element assembly), the assembly of four second imaging elements 51 (second imaging element assembly), and the assembly of four third imaging elements 61 (third imaging element assembly) (see FIG. 28A).
- The output that an assumed second imaging element assembly would produce is interpolated from the outputs of the second imaging element assemblies adjacent to the assumed second imaging element assembly, so that outputs similar to the outputs from the Bayer-array image sensors in a normal image pickup apparatus are obtained. In this way, the data amount and data structure of the high-sensitivity image data can be made uniform, and various signal processing can be simplified. Further, since the high-sensitivity image data is subjected to 4-pixel addition processing while the low-sensitivity image data is not, the sensitivity difference can be increased four-fold.
- To Example 11, the configuration and structure of the imaging apparatus described in Example 2 can be applied, the configuration and structure of the imaging apparatus described in Example 3 can be applied, the configuration and structure of the imaging apparatus described in Example 4 can be applied, and any combination thereof can be applied.
- Although the present disclosure has been described above based on preferred embodiments, the present disclosure is not limited to these embodiments.
- the configurations and structures of the image sensor and the imaging device described in the embodiments are examples, and can be changed as appropriate.
- The imaging element may have a configuration in which a photoelectric conversion element is provided on a silicon semiconductor substrate, and a first planarization film, an on-chip lens, a second planarization film, and a filter are stacked thereon. Further, the imaging element may be not only of the back-side illumination type but also of the front-side illumination type.
- <<Imaging device: first aspect>> An imaging device comprising: an imaging unit in which imaging element units are arranged in a two-dimensional matrix, each imaging element unit including a first imaging element that includes a first filter and a first photoelectric conversion element and receives light in a first wavelength band; a second imaging element that includes a second filter and a second photoelectric conversion element and receives light in a second wavelength band having a peak wavelength longer than the peak wavelength of the first wavelength band; a third imaging element that includes a third filter and a third photoelectric conversion element and receives light in a third wavelength band having a peak wavelength longer than the peak wavelength of the second wavelength band; and a fourth imaging element that includes a fourth filter and a fourth photoelectric conversion element and receives light in the first wavelength band, the second wavelength band, and the third wavelength band; and an image processing unit. The light transmittance of the fourth filter is lower than the light transmittance of the first filter, the light transmittance of the second filter, and the light transmittance of the third filter. The image processing unit generates high-sensitivity image data from the outputs of the first imaging element, the second imaging element, and the third imaging element, and generates low-sensitivity image data from the output of the fourth imaging element; the image processing unit further uses high-sensitivity image data corresponding to the low-illuminance image region in the low-illuminance image region obtained from the low-sensitivity image data or the high-sensitivity image data, and
- An imaging device that generates a composite image using low-sensitivity image data corresponding to a high-illuminance image region in a high-illuminance image region obtained from data.
- a first opening is formed between the first filter and the first photoelectric conversion element
- a second opening is formed between the second filter and the second photoelectric conversion element
- a third opening is formed between the third filter and the third photoelectric conversion element
- a fourth opening is formed between the fourth filter and the fourth photoelectric conversion element
- The imaging apparatus according to [1] or [2], wherein the image processing unit includes a first gain adjustment unit that adjusts the outputs from the first image sensor, the second image sensor, and the third image sensor, and a second gain adjustment unit that adjusts the output from the fourth image sensor.
- [4] The imaging device according to [3], wherein Gn 1 / Gn 2 ≥ 1, where Gn 1 is the adjustment coefficient for the outputs from the first image sensor, the second image sensor, and the third image sensor by the first gain adjustment unit, and Gn 2 is the adjustment coefficient for the output from the fourth image sensor by the second gain adjustment unit.
- [5] The image processing unit generates N sets of high-sensitivity image data and low-sensitivity image data with different exposure times; the image processing unit further divides the illuminance range of the image obtained from the low-sensitivity image data or the high-sensitivity image data into 2N levels of areas, from the lowest-illuminance area to the highest-illuminance area; in each of the N stages of low-illuminance image areas,
- N sets of high-sensitivity image data corresponding to each of the N low-illuminance image areas are used.
- N sets of low-sensitivity image data corresponding to each of the N stages of high illuminance image areas are used.
- the imaging apparatus according to any one of [1] to [4], which generates a composite image.
- [6] N = 2, and the imaging time for obtaining the first set of high-sensitivity image data and low-sensitivity image data and the imaging time for obtaining the second set of high-sensitivity image data and low-sensitivity image data are in a two-fold relationship; the imaging device according to [5].
- the fourth filter includes a first material layer made of the first material constituting the first filter, a second material layer made of the second material constituting the second filter, and a third filter constituting the third filter.
- the imaging device according to any one of [1] to [7], wherein the imaging device has a three-layer stacked structure of third material layers made of a material.
- the fourth filter is made of a material in which the first material constituting the first filter, the second material constituting the second filter, and the third material constituting the third filter are mixed. [7] The imaging device according to any one of [7]. [10] The fourth filter has a two-layer structure of a first material layer made of the first material constituting the first filter and a third material layer made of the third material constituting the third filter [1] ] To [7] The imaging device according to any one of [7].
- the imaging device which includes a unit.
- Gn 1 is an adjustment coefficient for the output from the first image sensor and the third image sensor by the first gain adjustment unit, and an adjustment coefficient for the output from the second image sensor and the fourth image sensor by the second gain adjustment unit.
- Gn 2 Gn 1 / Gn 2 ⁇ 1 The imaging device according to [14], wherein [16] The image processing unit generates N sets of high-sensitivity image data and low-sensitivity image data with different exposure times, The image processing unit further divides the illuminance area of the image obtained from the low-sensitivity image data or the high-sensitivity image data into 2N-level image areas from the image area with the lowest illuminance to the image area with the highest illuminance.
- N sets of high-sensitivity image data corresponding to the N low-light image areas are used.
- the imaging device according to any one of [12] to [15], which generates a composite image using low-sensitivity image data.
- Imaging device Fourth aspect >> A first imaging element that includes a first photoelectric conversion element and receives light in the visible light band; and A second image sensor comprising a neutral density filter and a second photoelectric conversion element and receiving light in the visible light band; An image pickup unit in which image pickup device units each including an array are arranged, and Image processing unit, An imaging device comprising: The image processing unit generates high-sensitivity image data by output from the first image sensor, generates low-sensitivity image data by output from the second image sensor, The image processing unit further uses high-sensitivity image data corresponding to the low-illuminance image area in the low-illuminance image area obtained from the low-sensitivity image data or the high-sensitivity image data, and also uses the low-sensitivity image data or the high-sensitivity image data. An imaging device that generates a composite image using low-sensitivity image data corresponding to a high-illuminance image region in a high-illuminance image region obtained from data.
Abstract
Description
The imaging device according to the first aspect of the present disclosure includes:
a first imaging element including a first filter and a first photoelectric conversion element and receiving light in a first wavelength band,
a second imaging element including a second filter and a second photoelectric conversion element and receiving light in a second wavelength band whose peak wavelength is longer than that of the first wavelength band,
a third imaging element including a third filter and a third photoelectric conversion element and receiving light in a third wavelength band whose peak wavelength is longer than that of the second wavelength band, and
a fourth imaging element including a fourth filter and a fourth photoelectric conversion element and receiving light in the first, second, and third wavelength bands;
an imaging section in which imaging element units consisting of these elements are arrayed in a two-dimensional matrix; and
an image processing section.
The light transmittance of the fourth filter is lower than the light transmittances of the first, second, and third filters.
The image processing section generates high-sensitivity image data from the outputs of the first, second, and third imaging elements, and low-sensitivity image data from the output of the fourth imaging element.
The image processing section further generates a composite image by using, in a low-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the high-sensitivity image data corresponding to that region, and, in a high-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the low-sensitivity image data corresponding to that region. With the imaging device according to the first aspect of the present disclosure, a composite image in which part of the image is monochrome and the remaining part is in color can be obtained.
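The per-region composition just described can be sketched in a few lines. This is an illustrative sketch only, not code from the disclosure: the pixel values, the illuminance threshold, and the 8× sensitivity ratio are all assumptions chosen for the example.

```python
def compose(high, low, threshold=200, sensitivity_ratio=8):
    """Merge same-sized high- and low-sensitivity images pixel by pixel.

    Where the high-sensitivity pixel is below `threshold` (a low-illuminance
    image region) its value is used directly; elsewhere (a high-illuminance
    image region) the low-sensitivity pixel is rescaled by `sensitivity_ratio`
    so both sources share one brightness scale.
    """
    out = []
    for h, l in zip(high, low):
        if h < threshold:          # low-illuminance image region
            out.append(h)
        else:                      # high-illuminance image region
            out.append(l * sensitivity_ratio)
    return out

# A dark pixel (50) keeps the high-sensitivity value; a saturated pixel (255)
# is replaced by the rescaled low-sensitivity value (40 * 8 = 320).
print(compose([50, 255], [6, 40]))  # → [50, 320]
```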
The imaging device according to the second aspect of the present disclosure includes:
a first imaging element and a third imaging element, each including a first photoelectric conversion element and receiving light in the visible light band, and
a second imaging element and a fourth imaging element, each including a neutral density filter and a second photoelectric conversion element and receiving light in the visible light band;
an imaging section in which imaging element units consisting of these elements are arrayed in a two-dimensional matrix; and
an image processing section.
The image processing section generates high-sensitivity image data from the outputs of the first and third imaging elements, and low-sensitivity image data from the outputs of the second and fourth imaging elements.
The image processing section further generates a composite image by using, in a low-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the high-sensitivity image data corresponding to that region, and, in a high-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the low-sensitivity image data corresponding to that region. With the imaging device according to the second aspect of the present disclosure, a monochrome image can be obtained.
The imaging device according to the third aspect of the present disclosure includes:
a first imaging element including a first filter and a first photoelectric conversion element and receiving light in a first wavelength band,
a second imaging element including a second filter and a second photoelectric conversion element and receiving light in a second wavelength band whose peak wavelength is longer than that of the first wavelength band,
a third imaging element including a third filter and a third photoelectric conversion element and receiving light in a third wavelength band whose peak wavelength is longer than that of the second wavelength band, and
a fourth imaging element including a fourth filter and a fourth photoelectric conversion element and receiving light in the first, second, and third wavelength bands;
an imaging section in which imaging element units, each composed of four imaging element subunits consisting of these elements, are arrayed in a two-dimensional matrix; and
an image processing section.
The light transmittance of the fourth filter is lower than the light transmittances of the first, second, and third filters.
The image processing section generates high-sensitivity image data on the basis of the summed outputs of the first imaging elements, the summed outputs of the second imaging elements, and the summed outputs of the third imaging elements in the four imaging element subunits constituting an imaging element unit, and generates low-sensitivity image data from the output of the fourth imaging element in each of the four subunits.
The image processing section further generates a composite image by using, in a low-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the high-sensitivity image data corresponding to that region, and, in a high-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the low-sensitivity image data corresponding to that region. With the imaging device according to the third aspect of the present disclosure, a composite image in which part of the image is monochrome and the remaining part is in color can be obtained.
The imaging device according to the fourth aspect of the present disclosure includes:
a first imaging element including a first photoelectric conversion element and receiving light in the visible light band, and
a second imaging element including a neutral density filter and a second photoelectric conversion element and receiving light in the visible light band;
an imaging section in which imaging element units consisting of these elements are arrayed; and
an image processing section.
The image processing section generates high-sensitivity image data from the output of the first imaging element, and low-sensitivity image data from the output of the second imaging element.
The image processing section further generates a composite image by using, in a low-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the high-sensitivity image data corresponding to that region, and, in a high-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the low-sensitivity image data corresponding to that region. With the imaging device according to the fourth aspect of the present disclosure, a monochrome image can be obtained.
1. General description of the imaging devices according to the first to fourth aspects of the present disclosure
2. Example 1 (imaging device according to the first aspect of the present disclosure)
3. Example 2 (modification of Example 1)
4. Example 3 (another modification of Example 1)
5. Example 4 (still another modification of Example 1)
6. Example 5 (modification of Examples 1 to 4)
7. Example 6 (imaging devices according to the second and fourth aspects of the present disclosure)
8. Example 7 (modification of Example 6)
9. Example 8 (another modification of Example 6)
10. Example 9 (still another modification of Example 6)
11. Example 10 (modification of Examples 1 and 6)
12. Example 11 (imaging device according to the third aspect of the present disclosure), and others
In the imaging device according to the first aspect of the present disclosure,
a first opening may be formed between the first filter and the first photoelectric conversion element,
a second opening between the second filter and the second photoelectric conversion element,
a third opening between the third filter and the third photoelectric conversion element, and
a fourth opening between the fourth filter and the fourth photoelectric conversion element,
with the fourth opening smaller than the first, second, and third openings. With this configuration, the amount of light received by the fourth imaging element can be made smaller than the amounts received by the first, second, and third imaging elements, which widens the dynamic range. The openings can be obtained by forming them in a light-shielding layer provided between the filters and the photoelectric conversion elements. The planar shape of an opening may be a circle or a regular polygon (for example, a regular pentagon, hexagon, heptagon, or octagon); "regular polygon" here includes quasi-regular polygons (regular polygons whose sides are curved or whose vertices are rounded). The same can apply to the imaging device according to the second aspect of the present disclosure described next.
A first opening may be formed in the light-incident region of the first photoelectric conversion element,
a second opening between the neutral density filter and the second photoelectric conversion element,
a third opening in the light-incident region of the third photoelectric conversion element, and
a fourth opening between the neutral density filter and the fourth photoelectric conversion element,
with the third opening smaller than the first opening and the fourth opening smaller than the second opening. With this configuration, the amount of light received by the third imaging element can be made smaller than that received by the first imaging element, and the amount received by the fourth imaging element smaller than that received by the second imaging element, so that the dynamic range is widened. The openings can be obtained by forming them in a light-shielding layer provided between the neutral density filter and the photoelectric conversion elements.
Gn1/Gn2 ≥ 1
is preferably satisfied. Specific examples of the value of (Gn1/Gn2) include 2, 4, and 8. This configuration, too, widens the dynamic range. The value of Gn1/Gn2 may be fixed at 1 when the contrast of the scene is small; when the contrast is large, it may be made changeable, automatically or manually, according to the difference between the luminance of the high-sensitivity image data in the low-illuminance part of the scene and the luminance of the low-sensitivity image data in the high-illuminance part. The same can apply to the imaging device according to the second aspect of the present disclosure described next. The adjustment applied by the first gain adjustment section to the outputs of the first, second, and third imaging elements may be the same for all of them or, in some cases, different.
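The two gain adjustment sections can be sketched as follows. This is an illustrative sketch only; the function name and the sample values are assumptions, not part of the disclosure. Gn1 scales the outputs of the high-sensitivity (first to third) imaging elements, Gn2 the output of the low-sensitivity (fourth) imaging element, under the constraint Gn1/Gn2 ≥ 1.

```python
def apply_gains(high_outputs, low_output, gn1, gn2):
    """Scale high-sensitivity outputs by Gn1 and the low-sensitivity
    output by Gn2, enforcing the Gn1/Gn2 >= 1 condition stated above."""
    if gn1 / gn2 < 1:
        raise ValueError("Gn1/Gn2 >= 1 must hold")
    return [v * gn1 for v in high_outputs], low_output * gn2

# Example with Gn1/Gn2 = 4, one of the values cited in the text.
high, low = apply_gains([10, 20, 30], 5, gn1=4, gn2=1)
print(high, low)  # → [40, 80, 120] 5
```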
Gn1/Gn2 ≥ 1
is preferably satisfied. Specific examples of the value of (Gn1/Gn2) include 2, 4, and 8. This configuration, too, widens the dynamic range. The adjustment applied by the first gain adjustment section to the outputs of the first and third imaging elements may be the same or, in some cases, different; likewise, the adjustment applied by the second gain adjustment section to the outputs of the second and fourth imaging elements may be the same or different.
The image processing section may generate N sets of high-sensitivity image data and low-sensitivity image data with different exposure times,
and may further divide the illuminance range of the image obtained from the low-sensitivity or high-sensitivity image data into 2N levels, from the lowest-illuminance region to the highest-illuminance region; in each of the N low-illuminance image regions, from the lowest-illuminance region to the N-th lowest, the composite image is generated using the one of the N sets of high-sensitivity image data corresponding to that region, and in each of the N high-illuminance image regions, from the (N+1)-th lowest-illuminance region to the highest, using the one of the N sets of low-sensitivity image data corresponding to that region. In this case, N = 2, and there may be a factor-of-two relationship between the imaging time for obtaining the first set of high- and low-sensitivity image data and the imaging time for obtaining the second set. This form can also be extended to N of 3 or more. This configuration, too, widens the dynamic range; moreover, an image whose dynamic range is expanded in 2N levels can be obtained from N sets of image data, so that moving-image capture can easily be handled. With the conventional technique, as described above, only an image whose dynamic range is expanded in N levels can be obtained from N sets of image data.
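The 2N-level partition can be sketched as a small lookup: with N exposure sets, the illuminance axis is divided into 2N bands, the N darkest bands each drawing from one high-sensitivity set and the N brightest from one low-sensitivity set. The uniform band edges and normalized illuminance scale below are assumptions for illustration; the disclosure does not fix how the levels are delimited.

```python
def band_source(illuminance, n_sets, max_illuminance=1.0):
    """Return ('high'|'low', set_index): which of the 2N illuminance bands
    a pixel falls in, and which exposure set supplies its data."""
    band = min(int(illuminance / max_illuminance * 2 * n_sets),
               2 * n_sets - 1)
    if band < n_sets:
        return ("high", band)      # one of the N low-illuminance bands
    return ("low", band - n_sets)  # one of the N high-illuminance bands

# N = 2 gives 2N = 4 bands: the two darkest use high-sensitivity sets 0 and 1,
# the two brightest use low-sensitivity sets 0 and 1.
print([band_source(x, 2) for x in (0.1, 0.3, 0.6, 0.9)])
# → [('high', 0), ('high', 1), ('low', 0), ('low', 1)]
```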
The image processing section may generate N sets of high-sensitivity image data and low-sensitivity image data with different exposure times,
and may further divide the illuminance range of the image obtained from the low-sensitivity or high-sensitivity image data into 2N levels, from the lowest-illuminance image region to the highest-illuminance image region; in each of the N low-illuminance image regions, from the lowest-illuminance image region to the N-th lowest, the composite image is generated using the one of the N sets of high-sensitivity image data corresponding to that region, and in each of the N high-illuminance image regions, from the (N+1)-th lowest-illuminance image region to the highest, using the one of the N sets of low-sensitivity image data corresponding to that region. In this case, N = 2, and there may be a factor-of-two relationship between the imaging time for obtaining the first set of high- and low-sensitivity image data and the imaging time for obtaining the second set. This form can also be extended to N of 3 or more. This configuration, too, widens the dynamic range; moreover, an image whose dynamic range is expanded in 2N levels can be obtained from N sets of image data, so that moving-image capture can easily be handled. With the conventional technique, as described above, only an image whose dynamic range is expanded in N levels can be obtained from N sets of image data.
0.01 ≤ T4/T123 ≤ 0.25
can be given as an example.
The imaging device includes:
a first imaging element 41 including a first filter 43 and a first photoelectric conversion element 42 and receiving light in the first wavelength band,
a second imaging element 51 including a second filter 53 and a second photoelectric conversion element 52 and receiving light in the second wavelength band, whose peak wavelength is longer than that of the first wavelength band,
a third imaging element 61 including a third filter 63 and a third photoelectric conversion element 62 and receiving light in the third wavelength band, whose peak wavelength is longer than that of the second wavelength band, and
a fourth imaging element 71 including a fourth filter 73 and a fourth photoelectric conversion element 72 and receiving light in the first, second, and third wavelength bands;
an imaging section 30 in which imaging element units 31 consisting of these elements are arrayed in a two-dimensional matrix; and
an image processing section 11.
The pixel count of the imaging section 30 is, for example, VGA (640 × 480), and moving images are assumed to be captured. The same applies to the imaging devices in the following examples.
T4/T123 ≈ 0.05.
The fourth filter 73 is made either of a material in which the pigment-based first material constituting the first filter 43, the pigment-based second material constituting the second filter 53, and the pigment-based third material constituting the third filter 63 are mixed, or of a material in which the pigment-based first material constituting the first filter 43 and the pigment-based third material constituting the third filter 63 are mixed. The thickness of the fourth filter 73 is about 1 μm, and it can be formed by ordinary lithography with ultraviolet exposure. The fourth imaging element 71 is a gray imaging element, and its output is gray-scale information carrying no color information.
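Why a stacked (or mixed) filter passes so little light can be sketched numerically: for layers in series, the spectral transmittances multiply per wavelength, so combining a short-wavelength-passing and a long-wavelength-passing layer leaves a low broadband transmittance. The three-point "spectra" below are invented for illustration only; they happen to land near the T4/T123 ≈ 0.05 ratio quoted above, but they are not values from the disclosure.

```python
def stacked_transmittance(*layers):
    """Pointwise product of per-wavelength transmittances of stacked layers."""
    out = []
    for values in zip(*layers):
        t = 1.0
        for v in values:
            t *= v
        out.append(round(t, 4))
    return out

blue_layer = [0.8, 0.1, 0.05]   # assumed transmittance at (blue, green, red)
red_layer  = [0.05, 0.1, 0.8]
print(stacked_transmittance(blue_layer, red_layer))  # → [0.04, 0.01, 0.04]
```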
A first opening 47 is formed between the first filter 43 and the first photoelectric conversion element 42,
a second opening 57 between the second filter 53 and the second photoelectric conversion element 52,
a third opening 67 between the third filter 63 and the third photoelectric conversion element 62, and
a fourth opening 77 between the fourth filter 73 and the fourth photoelectric conversion element 72;
the fourth opening 77 is smaller than the first opening 47, the second opening 57, and the third opening 67.
Gn1/Gn2 = 4.
That is, the range of the gain difference between the first, second, and third imaging elements 41, 51, 61 on the one hand and the fourth imaging element 71 on the other is at most 12 dB.
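The 12 dB figure follows from Gn1/Gn2 = 4 on the usual 20·log10 gain scale. The short check below is just this arithmetic, not code from the disclosure.

```python
import math

def ratio_to_db(ratio):
    """Express a gain ratio in decibels on the 20*log10 amplitude scale."""
    return 20 * math.log10(ratio)

print(round(ratio_to_db(4)))  # → 12  (20 * log10(4) ≈ 12.04 dB)
```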
The image processing section 11 generates N sets of high-sensitivity image data and low-sensitivity image data with different exposure times,
and further divides the illuminance range of the image obtained from the low-sensitivity or high-sensitivity image data into 2N levels, from the lowest-illuminance region to the highest; in each of the N low-illuminance image regions, from the lowest-illuminance region to the N-th lowest, it uses the one of the N sets of high-sensitivity image data corresponding to that region, and in each of the N high-illuminance image regions, from the (N+1)-th lowest-illuminance region to the highest, it uses the one of the N sets of low-sensitivity image data corresponding to that region, to generate the composite image. In Example 4, N = 2, and there is a factor-of-two relationship between the imaging time for obtaining the first set of high- and low-sensitivity image data and the imaging time for obtaining the second set.
This is an imaging device including:
a first imaging element 141 and a third imaging element 161, each including a first photoelectric conversion element and receiving light in the visible light band, and
a second imaging element 151 and a fourth imaging element 171, each including a neutral density filter and a second photoelectric conversion element and receiving light in the visible light band;
an imaging section 130 in which imaging element units 131 consisting of these elements are arrayed in a two-dimensional matrix; and
an image processing section 11.
The image processing section 11 further generates a composite image by using, in a low-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the high-sensitivity image data corresponding to that region, and, in a high-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the low-sensitivity image data corresponding to that region.
This is an imaging device including:
first imaging elements 141, 161, each including a first photoelectric conversion element and receiving light in the visible light band, and
second imaging elements 151, 171, each including a neutral density filter and a second photoelectric conversion element and receiving light in the visible light band;
an imaging section in which imaging element units consisting of these elements are arrayed; and
an image processing section 11.
Although not specifically illustrated, in the imaging device of this fourth aspect the imaging element units are, for example, arranged over the whole of one row of the imaging section along the first direction, or arranged at intervals within one row along the first direction, and are further arranged every M rows (where M ≥ 2) along the second direction. The parts of the imaging section other than the imaging element units may be occupied by first imaging elements.
The image processing section 11 further generates a composite image by using, in a low-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the high-sensitivity image data corresponding to that region, and, in a high-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the low-sensitivity image data corresponding to that region.
A first opening 147 is formed in the light-incident region of the first photoelectric conversion element,
a second opening 157 between the neutral density filter and the second photoelectric conversion element,
a third opening 167 in the light-incident region of the third photoelectric conversion element, and
a fourth opening 177 between the neutral density filter and the fourth photoelectric conversion element;
the third opening 167 is smaller than the first opening 147, and the fourth opening 177 is smaller than the second opening 157.
Gn1/Gn2 ≥ 1
is satisfied; specifically,
Gn1/Gn2 = 4.
That is, the range of the gain difference between the first and third imaging elements 141, 161 on the one hand and the second and fourth imaging elements 151, 171 on the other is at most 12 dB. The sensitivity ratio between the first and third imaging elements 141, 161 and the second and fourth imaging elements 151, 171 is 1 : (1/8), and the gain ratio between them is 4 : 1. Accordingly, the final output ratio between the first and third imaging elements 141, 161 and the second and fourth imaging elements 151, 171 is 1 : (1/32), i.e., 30 dB.
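The 30 dB figure is the product of the two ratios just given: a 1 : (1/8) sensitivity ratio combined with a 4 : 1 gain ratio yields 1 : (1/32), and 20·log10(32) ≈ 30.1 dB. The snippet below is only a check of this arithmetic.

```python
import math

sensitivity_ratio = 8   # high- vs. low-sensitivity elements, 1 : (1/8)
gain_ratio = 4          # Gn1 / Gn2
total = sensitivity_ratio * gain_ratio   # final output ratio 1 : (1/32)

print(total, round(20 * math.log10(total), 1))  # → 32 30.1
```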
The image processing section 11 generates N sets of high-sensitivity image data and low-sensitivity image data with different exposure times,
and further divides the illuminance range of the image obtained from the low-sensitivity or high-sensitivity image data into 2N levels, from the lowest-illuminance image region to the highest; in each of the N low-illuminance image regions, from the lowest-illuminance image region to the N-th lowest, it uses the one of the N sets of high-sensitivity image data corresponding to that region, and in each of the N high-illuminance image regions, from the (N+1)-th lowest-illuminance image region to the highest, it uses the one of the N sets of low-sensitivity image data corresponding to that region, to generate the composite image. In Example 9 as well, N = 2, and there is a factor-of-two relationship between the imaging time for obtaining the first set of high- and low-sensitivity image data and the imaging time for obtaining the second set.
The imaging device includes:
a first imaging element 41 including a first filter 43 and a first photoelectric conversion element 42 and receiving light in the first wavelength band,
a second imaging element 51 including a second filter 53 and a second photoelectric conversion element 52 and receiving light in the second wavelength band, whose peak wavelength is longer than that of the first wavelength band,
a third imaging element 61 including a third filter 63 and a third photoelectric conversion element 62 and receiving light in the third wavelength band, whose peak wavelength is longer than that of the second wavelength band, and
a fourth imaging element 71 including a fourth filter 73 and a fourth photoelectric conversion element 72 and receiving light in the first, second, and third wavelength bands;
an imaging section 230 in which imaging element units 231A, each composed of four imaging element subunits 231B consisting of these elements, are arrayed in a two-dimensional matrix; and
an image processing section 11.
The first imaging element 41, second imaging element 51, third imaging element 61, fourth imaging element 71, the photoelectric conversion elements, the filters 43, 53, 63, 73, and so on in Example 11 have the same configuration and structure as in Example 1.
[1]《Imaging device: first aspect》
An imaging device including:
a first imaging element including a first filter and a first photoelectric conversion element and receiving light in a first wavelength band,
a second imaging element including a second filter and a second photoelectric conversion element and receiving light in a second wavelength band whose peak wavelength is longer than that of the first wavelength band,
a third imaging element including a third filter and a third photoelectric conversion element and receiving light in a third wavelength band whose peak wavelength is longer than that of the second wavelength band, and
a fourth imaging element including a fourth filter and a fourth photoelectric conversion element and receiving light in the first, second, and third wavelength bands;
an imaging section in which imaging element units consisting of these elements are arrayed in a two-dimensional matrix; and
an image processing section;
in which the light transmittance of the fourth filter is lower than the light transmittances of the first, second, and third filters,
the image processing section generates high-sensitivity image data from the outputs of the first, second, and third imaging elements and low-sensitivity image data from the output of the fourth imaging element, and
the image processing section further generates a composite image by using, in a low-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the high-sensitivity image data corresponding to that region, and, in a high-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the low-sensitivity image data corresponding to that region.
[2] The imaging device according to [1], in which a first opening is formed between the first filter and the first photoelectric conversion element, a second opening between the second filter and the second photoelectric conversion element, a third opening between the third filter and the third photoelectric conversion element, and a fourth opening between the fourth filter and the fourth photoelectric conversion element, the fourth opening being smaller than the first, second, and third openings.
[3] The imaging device according to [1] or [2], in which the image processing section includes a first gain adjustment section that adjusts the outputs of the first, second, and third imaging elements, and a second gain adjustment section that adjusts the output of the fourth imaging element.
[4] The imaging device according to [3], in which, where Gn1 is the adjustment coefficient applied by the first gain adjustment section to the outputs of the first, second, and third imaging elements and Gn2 is the adjustment coefficient applied by the second gain adjustment section to the output of the fourth imaging element,
Gn1/Gn2 ≥ 1
is satisfied.
[5] The imaging device according to any one of [1] to [4], in which the image processing section generates N sets of high-sensitivity image data and low-sensitivity image data with different exposure times, and further divides the illuminance range of the image obtained from the low-sensitivity or high-sensitivity image data into 2N levels, from the lowest-illuminance region to the highest; in each of the N low-illuminance image regions, from the lowest-illuminance region to the N-th lowest, the composite image is generated using the one of the N sets of high-sensitivity image data corresponding to that region, and in each of the N high-illuminance image regions, from the (N+1)-th lowest-illuminance region to the highest, using the one of the N sets of low-sensitivity image data corresponding to that region.
[6] The imaging device according to [5], in which N = 2 and there is a factor-of-two relationship between the imaging time for obtaining the first set of high- and low-sensitivity image data and the imaging time for obtaining the second set.
[7] The imaging device according to any one of [1] to [6], in which there are no gaps between the first, second, third, and fourth filters.
[8] The imaging device according to any one of [1] to [7], in which the fourth filter has a three-layer stacked structure of a first material layer made of the first material constituting the first filter, a second material layer made of the second material constituting the second filter, and a third material layer made of the third material constituting the third filter.
[9] The imaging device according to any one of [1] to [7], in which the fourth filter is made of a material in which the first material constituting the first filter, the second material constituting the second filter, and the third material constituting the third filter are mixed.
[10] The imaging device according to any one of [1] to [7], in which the fourth filter has a two-layer stacked structure of a first material layer made of the first material constituting the first filter and a third material layer made of the third material constituting the third filter.
[11] The imaging device according to any one of [1] to [7], in which the fourth filter is made of a material in which the first material constituting the first filter and the third material constituting the third filter are mixed.
[12]《Imaging device: second aspect》
An imaging device including:
a first imaging element and a third imaging element, each including a first photoelectric conversion element and receiving light in the visible light band, and
a second imaging element and a fourth imaging element, each including a neutral density filter and a second photoelectric conversion element and receiving light in the visible light band;
an imaging section in which imaging element units consisting of these elements are arrayed in a two-dimensional matrix; and
an image processing section;
in which the image processing section generates high-sensitivity image data from the outputs of the first and third imaging elements and low-sensitivity image data from the outputs of the second and fourth imaging elements, and
further generates a composite image by using, in a low-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the high-sensitivity image data corresponding to that region, and, in a high-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the low-sensitivity image data corresponding to that region.
[13] The imaging device according to [12], in which a first opening is formed in the light-incident region of the first photoelectric conversion element, a second opening between the neutral density filter and the second photoelectric conversion element, a third opening in the light-incident region of the third photoelectric conversion element, and a fourth opening between the neutral density filter and the fourth photoelectric conversion element, the third opening being smaller than the first opening and the fourth opening smaller than the second opening.
[14] The imaging device according to [12] or [13], in which the image processing section includes a first gain adjustment section that adjusts the outputs of the first and third imaging elements, and a second gain adjustment section that adjusts the outputs of the second and fourth imaging elements.
[15] The imaging device according to [14], in which, where Gn1 is the adjustment coefficient applied by the first gain adjustment section to the outputs of the first and third imaging elements and Gn2 is the adjustment coefficient applied by the second gain adjustment section to the outputs of the second and fourth imaging elements,
Gn1/Gn2 ≥ 1
is satisfied.
[16] The imaging device according to any one of [12] to [15], in which the image processing section generates N sets of high-sensitivity image data and low-sensitivity image data with different exposure times, and further divides the illuminance range of the image obtained from the low-sensitivity or high-sensitivity image data into 2N levels, from the lowest-illuminance image region to the highest; in each of the N low-illuminance image regions, from the lowest-illuminance image region to the N-th lowest, the composite image is generated using the one of the N sets of high-sensitivity image data corresponding to that region, and in each of the N high-illuminance image regions, from the (N+1)-th lowest-illuminance image region to the highest, using the one of the N sets of low-sensitivity image data corresponding to that region.
[17] The imaging device according to [16], in which N = 2 and there is a factor-of-two relationship between the imaging time for obtaining the first set of high- and low-sensitivity image data and the imaging time for obtaining the second set.
[18] The imaging device according to any one of [12] to [17], in which the neutral density filter has spectral characteristics equal to those of a filter obtained by stacking the first filter and the third filter.
[19]《Imaging device: third aspect》
An imaging device including:
a first imaging element including a first filter and a first photoelectric conversion element and receiving light in a first wavelength band,
a second imaging element including a second filter and a second photoelectric conversion element and receiving light in a second wavelength band whose peak wavelength is longer than that of the first wavelength band,
a third imaging element including a third filter and a third photoelectric conversion element and receiving light in a third wavelength band whose peak wavelength is longer than that of the second wavelength band, and
a fourth imaging element including a fourth filter and a fourth photoelectric conversion element and receiving light in the first, second, and third wavelength bands;
an imaging section in which imaging element units, each composed of four imaging element subunits consisting of these elements, are arrayed in a two-dimensional matrix; and
an image processing section;
in which the light transmittance of the fourth filter is lower than the light transmittances of the first, second, and third filters,
the image processing section generates high-sensitivity image data on the basis of the summed outputs of the first imaging elements, the summed outputs of the second imaging elements, and the summed outputs of the third imaging elements in the four imaging element subunits constituting an imaging element unit, and low-sensitivity image data from the output of the fourth imaging element in each of the four subunits, and
the image processing section further generates a composite image by using, in a low-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the high-sensitivity image data corresponding to that region, and, in a high-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the low-sensitivity image data corresponding to that region.
[20]《Imaging device: fourth aspect》
An imaging device including:
a first imaging element including a first photoelectric conversion element and receiving light in the visible light band, and
a second imaging element including a neutral density filter and a second photoelectric conversion element and receiving light in the visible light band;
an imaging section in which imaging element units consisting of these elements are arrayed; and
an image processing section;
in which the image processing section generates high-sensitivity image data from the output of the first imaging element and low-sensitivity image data from the output of the second imaging element, and
further generates a composite image by using, in a low-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the high-sensitivity image data corresponding to that region, and, in a high-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the low-sensitivity image data corresponding to that region.
Claims (20)
- An imaging device comprising: a first imaging element including a first filter and a first photoelectric conversion element and receiving light in a first wavelength band, a second imaging element including a second filter and a second photoelectric conversion element and receiving light in a second wavelength band whose peak wavelength is longer than that of the first wavelength band, a third imaging element including a third filter and a third photoelectric conversion element and receiving light in a third wavelength band whose peak wavelength is longer than that of the second wavelength band, and a fourth imaging element including a fourth filter and a fourth photoelectric conversion element and receiving light in the first, second, and third wavelength bands; an imaging section in which imaging element units consisting of these elements are arrayed in a two-dimensional matrix; and an image processing section; wherein the light transmittance of the fourth filter is lower than the light transmittances of the first, second, and third filters, the image processing section generates high-sensitivity image data from the outputs of the first, second, and third imaging elements and low-sensitivity image data from the output of the fourth imaging element, and the image processing section further generates a composite image by using, in a low-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the high-sensitivity image data corresponding to that region, and, in a high-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the low-sensitivity image data corresponding to that region.
- The imaging device according to claim 1, wherein a first opening is formed between the first filter and the first photoelectric conversion element, a second opening between the second filter and the second photoelectric conversion element, a third opening between the third filter and the third photoelectric conversion element, and a fourth opening between the fourth filter and the fourth photoelectric conversion element, the fourth opening being smaller than the first, second, and third openings.
- The imaging device according to claim 1, wherein the image processing section includes a first gain adjustment section that adjusts the outputs of the first, second, and third imaging elements, and a second gain adjustment section that adjusts the output of the fourth imaging element.
- The imaging device according to claim 3, wherein, where Gn1 is the adjustment coefficient applied by the first gain adjustment section to the outputs of the first, second, and third imaging elements and Gn2 is the adjustment coefficient applied by the second gain adjustment section to the output of the fourth imaging element, Gn1/Gn2 ≥ 1 is satisfied.
- The imaging device according to claim 1, wherein the image processing section generates N sets of high-sensitivity image data and low-sensitivity image data with different exposure times, and further divides the illuminance range of the image obtained from the low-sensitivity or high-sensitivity image data into 2N levels, from the lowest-illuminance region to the highest; in each of the N low-illuminance image regions, from the lowest-illuminance region to the N-th lowest, the composite image is generated using the one of the N sets of high-sensitivity image data corresponding to that region, and in each of the N high-illuminance image regions, from the (N+1)-th lowest-illuminance region to the highest, using the one of the N sets of low-sensitivity image data corresponding to that region.
- The imaging device according to claim 5, wherein N = 2 and there is a factor-of-two relationship between the imaging time for obtaining the first set of high- and low-sensitivity image data and the imaging time for obtaining the second set.
- The imaging device according to claim 1, wherein there are no gaps between the first, second, third, and fourth filters.
- The imaging device according to claim 1, wherein the fourth filter has a three-layer stacked structure of a first material layer made of the first material constituting the first filter, a second material layer made of the second material constituting the second filter, and a third material layer made of the third material constituting the third filter.
- The imaging device according to claim 1, wherein the fourth filter is made of a material in which the first material constituting the first filter, the second material constituting the second filter, and the third material constituting the third filter are mixed.
- The imaging device according to claim 1, wherein the fourth filter has a two-layer stacked structure of a first material layer made of the first material constituting the first filter and a third material layer made of the third material constituting the third filter.
- The imaging device according to claim 1, wherein the fourth filter is made of a material in which the first material constituting the first filter and the third material constituting the third filter are mixed.
- An imaging device comprising: a first imaging element and a third imaging element, each including a first photoelectric conversion element and receiving light in the visible light band, and a second imaging element and a fourth imaging element, each including a neutral density filter and a second photoelectric conversion element and receiving light in the visible light band; an imaging section in which imaging element units consisting of these elements are arrayed in a two-dimensional matrix; and an image processing section; wherein the image processing section generates high-sensitivity image data from the outputs of the first and third imaging elements and low-sensitivity image data from the outputs of the second and fourth imaging elements, and further generates a composite image by using, in a low-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the high-sensitivity image data corresponding to that region, and, in a high-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the low-sensitivity image data corresponding to that region.
- The imaging device according to claim 12, wherein a first opening is formed in the light-incident region of the first photoelectric conversion element, a second opening between the neutral density filter and the second photoelectric conversion element, a third opening in the light-incident region of the third photoelectric conversion element, and a fourth opening between the neutral density filter and the fourth photoelectric conversion element, the third opening being smaller than the first opening and the fourth opening smaller than the second opening.
- The imaging device according to claim 12, wherein the image processing section includes a first gain adjustment section that adjusts the outputs of the first and third imaging elements, and a second gain adjustment section that adjusts the outputs of the second and fourth imaging elements.
- The imaging device according to claim 14, wherein, where Gn1 is the adjustment coefficient applied by the first gain adjustment section to the outputs of the first and third imaging elements and Gn2 is the adjustment coefficient applied by the second gain adjustment section to the outputs of the second and fourth imaging elements, Gn1/Gn2 ≥ 1 is satisfied.
- The imaging device according to claim 12, wherein the image processing section generates N sets of high-sensitivity image data and low-sensitivity image data with different exposure times, and further divides the illuminance range of the image obtained from the low-sensitivity or high-sensitivity image data into 2N levels, from the lowest-illuminance image region to the highest; in each of the N low-illuminance image regions, from the lowest-illuminance image region to the N-th lowest, the composite image is generated using the one of the N sets of high-sensitivity image data corresponding to that region, and in each of the N high-illuminance image regions, from the (N+1)-th lowest-illuminance image region to the highest, using the one of the N sets of low-sensitivity image data corresponding to that region.
- The imaging device according to claim 16, wherein N = 2 and there is a factor-of-two relationship between the imaging time for obtaining the first set of high- and low-sensitivity image data and the imaging time for obtaining the second set.
- The imaging device according to claim 12, wherein the neutral density filter has spectral characteristics equal to those of a filter obtained by stacking the first filter and the third filter.
- An imaging device comprising: a first imaging element including a first filter and a first photoelectric conversion element and receiving light in a first wavelength band, a second imaging element including a second filter and a second photoelectric conversion element and receiving light in a second wavelength band whose peak wavelength is longer than that of the first wavelength band, a third imaging element including a third filter and a third photoelectric conversion element and receiving light in a third wavelength band whose peak wavelength is longer than that of the second wavelength band, and a fourth imaging element including a fourth filter and a fourth photoelectric conversion element and receiving light in the first, second, and third wavelength bands; an imaging section in which imaging element units, each composed of four imaging element subunits consisting of these elements, are arrayed in a two-dimensional matrix; and an image processing section; wherein the light transmittance of the fourth filter is lower than the light transmittances of the first, second, and third filters, the image processing section generates high-sensitivity image data on the basis of the summed outputs of the first imaging elements, the summed outputs of the second imaging elements, and the summed outputs of the third imaging elements in the four imaging element subunits constituting an imaging element unit, and low-sensitivity image data from the output of the fourth imaging element in each of the four subunits, and the image processing section further generates a composite image by using, in a low-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the high-sensitivity image data corresponding to that region, and, in a high-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the low-sensitivity image data corresponding to that region.
- An imaging device comprising: a first imaging element including a first photoelectric conversion element and receiving light in the visible light band, and a second imaging element including a neutral density filter and a second photoelectric conversion element and receiving light in the visible light band; an imaging section in which imaging element units consisting of these elements are arrayed; and an image processing section; wherein the image processing section generates high-sensitivity image data from the output of the first imaging element and low-sensitivity image data from the output of the second imaging element, and further generates a composite image by using, in a low-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the high-sensitivity image data corresponding to that region, and, in a high-illuminance image region obtained from the low-sensitivity or high-sensitivity image data, the low-sensitivity image data corresponding to that region.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/356,022 US9681059B2 (en) | 2011-12-16 | 2012-12-07 | Image-capturing device |
EP12857773.1A EP2793470A1 (en) | 2011-12-16 | 2012-12-07 | Image pickup device |
CN201280060655.7A CN103999458A (zh) | 2011-12-16 | 2012-12-07 | 图像捕获设备 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011-276508 | 2011-12-16 | ||
JP2011276508 | 2011-12-16 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013089036A1 true WO2013089036A1 (ja) | 2013-06-20 |
Family
ID=48612487
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/081798 WO2013089036A1 (ja) | 2011-12-16 | 2012-12-07 | 撮像装置 |
Country Status (5)
Country | Link |
---|---|
US (1) | US9681059B2 (ja) |
EP (1) | EP2793470A1 (ja) |
JP (1) | JPWO2013089036A1 (ja) |
CN (1) | CN103999458A (ja) |
WO (1) | WO2013089036A1 (ja) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20160004827A (ko) * | 2014-07-04 | 2016-01-13 | 삼성전자주식회사 | 이미지 센서, 이미지 센싱 방법, 그리고 이미지 센서를 포함하는 이미지 촬영 장치 |
US9467628B2 (en) | 2014-08-26 | 2016-10-11 | Sensors Unlimited, Inc. | High dynamic range image sensor |
WO2017026449A1 (ja) * | 2015-08-13 | 2017-02-16 | 株式会社日立国際電気 | 撮像装置 |
JP2017135220A (ja) * | 2016-01-26 | 2017-08-03 | 日本放送協会 | 撮像素子および撮像装置 |
JP2020039130A (ja) * | 2015-09-30 | 2020-03-12 | 株式会社ニコン | 撮像装置 |
JP2021114611A (ja) * | 2014-10-23 | 2021-08-05 | パナソニックIpマネジメント株式会社 | 撮像装置および画像取得装置 |
Families Citing this family (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101871945B1 (ko) * | 2013-01-17 | 2018-08-02 | 한화에어로스페이스 주식회사 | 영상 처리 장치 및 방법 |
JP2014175553A (ja) * | 2013-03-11 | 2014-09-22 | Canon Inc | 固体撮像装置およびカメラ |
JP6568719B2 (ja) * | 2014-08-29 | 2019-08-28 | 株式会社 日立産業制御ソリューションズ | 撮像方法及び撮像装置 |
WO2016046959A1 (ja) * | 2014-09-26 | 2016-03-31 | 株式会社日立国際電気 | 撮像方法および撮像装置 |
KR20160109694A (ko) * | 2015-03-12 | 2016-09-21 | 삼성전자주식회사 | 이미지 센서 및 상기 이미지 센서를 포함하는 이미지 처리 시스템 |
KR20180054733A (ko) * | 2015-09-17 | 2018-05-24 | 세미컨덕터 콤포넨츠 인더스트리즈 엘엘씨 | 광 분리를 이용하는 고명암비 픽셀 |
US9843746B2 (en) * | 2016-05-03 | 2017-12-12 | Altasens, Inc. | Image sensor combining high dynamic range techniques |
US10757320B2 (en) | 2017-12-28 | 2020-08-25 | Waymo Llc | Multiple operating modes to expand dynamic range |
US20190208136A1 (en) * | 2017-12-29 | 2019-07-04 | Waymo Llc | High-speed image readout and processing |
US10694112B2 (en) * | 2018-01-03 | 2020-06-23 | Getac Technology Corporation | Vehicular image pickup device and image capturing method |
US10516831B1 (en) * | 2018-07-12 | 2019-12-24 | Getac Technology Corporation | Vehicular image pickup device and image capturing method |
US10728462B2 (en) * | 2018-10-08 | 2020-07-28 | Pixart Imaging Inc. | Image sensor, image sensing system, image sensing method and material recognition system |
CN112770020A (zh) * | 2019-11-05 | 2021-05-07 | 北京小米移动软件有限公司 | 图像传感模组、方法、装置、电子设备及介质 |
CN110913101A (zh) * | 2019-11-14 | 2020-03-24 | 维沃移动通信有限公司 | 一种拍摄装置及电子设备 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000244797A (ja) | 1999-02-23 | 2000-09-08 | Sony Corp | 画像信号処理装置および方法 |
JP2002135792A (ja) * | 2000-10-18 | 2002-05-10 | Sony Corp | 固体撮像素子 |
JP2003153291A (ja) * | 2001-11-08 | 2003-05-23 | Canon Inc | 撮像装置及びシステム |
JP2003199117A (ja) * | 2001-12-25 | 2003-07-11 | Sony Corp | 撮像装置及び固体撮像素子の信号処理方法 |
JP2005323331A (ja) | 2004-02-23 | 2005-11-17 | Sony Corp | Ad変換方法およびad変換装置並びに物理量分布検知の半導体装置および電子機器 |
JP2006253876A (ja) | 2005-03-09 | 2006-09-21 | Sony Corp | 物理量分布検知装置および物理量分布検知装置の駆動方法 |
JP2006270364A (ja) | 2005-03-23 | 2006-10-05 | Fuji Photo Film Co Ltd | 固体撮像素子および固体撮像装置、ならびにその駆動方法 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6803955B1 (en) * | 1999-03-03 | 2004-10-12 | Olympus Corporation | Imaging device and imaging apparatus |
JP4341691B2 (ja) * | 2007-04-24 | 2009-10-07 | ソニー株式会社 | 撮像装置、撮像方法、露光制御方法、プログラム |
US8432466B2 (en) * | 2011-09-29 | 2013-04-30 | International Business Machines Corporation | Multiple image high dynamic range imaging from a single sensor array |
-
2012
- 2012-12-07 US US14/356,022 patent/US9681059B2/en active Active
- 2012-12-07 JP JP2013549237A patent/JPWO2013089036A1/ja active Pending
- 2012-12-07 WO PCT/JP2012/081798 patent/WO2013089036A1/ja active Application Filing
- 2012-12-07 CN CN201280060655.7A patent/CN103999458A/zh active Pending
- 2012-12-07 EP EP12857773.1A patent/EP2793470A1/en not_active Withdrawn
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000244797A (ja) | 1999-02-23 | 2000-09-08 | Sony Corp | 画像信号処理装置および方法 |
JP2002135792A (ja) * | 2000-10-18 | 2002-05-10 | Sony Corp | 固体撮像素子 |
JP2003153291A (ja) * | 2001-11-08 | 2003-05-23 | Canon Inc | 撮像装置及びシステム |
JP2003199117A (ja) * | 2001-12-25 | 2003-07-11 | Sony Corp | 撮像装置及び固体撮像素子の信号処理方法 |
JP2005323331A (ja) | 2004-02-23 | 2005-11-17 | Sony Corp | Ad変換方法およびad変換装置並びに物理量分布検知の半導体装置および電子機器 |
JP2006253876A (ja) | 2005-03-09 | 2006-09-21 | Sony Corp | 物理量分布検知装置および物理量分布検知装置の駆動方法 |
JP2006270364A (ja) | 2005-03-23 | 2006-10-05 | Fuji Photo Film Co Ltd | 固体撮像素子および固体撮像装置、ならびにその駆動方法 |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20160004827A (ko) * | 2014-07-04 | 2016-01-13 | 삼성전자주식회사 | 이미지 센서, 이미지 센싱 방법, 그리고 이미지 센서를 포함하는 이미지 촬영 장치 |
KR101689665B1 (ko) * | 2014-07-04 | 2016-12-26 | 삼성전자 주식회사 | 이미지 센서, 이미지 센싱 방법, 그리고 이미지 센서를 포함하는 이미지 촬영 장치 |
US9635277B2 (en) | 2014-07-04 | 2017-04-25 | Samsung Electronics Co., Ltd. | Image sensor, image sensing method, and image photographing apparatus including image sensor |
US9467628B2 (en) | 2014-08-26 | 2016-10-11 | Sensors Unlimited, Inc. | High dynamic range image sensor |
BE1023383B1 (fr) * | 2014-08-26 | 2017-03-01 | Sensors Unlimited, Inc. | Capteur d'image à grande gamme dynamique |
JP2021114611A (ja) * | 2014-10-23 | 2021-08-05 | パナソニックIpマネジメント株式会社 | 撮像装置および画像取得装置 |
JP7178644B2 (ja) | 2014-10-23 | 2022-11-28 | パナソニックIpマネジメント株式会社 | 撮像装置および画像取得装置 |
WO2017026449A1 (ja) * | 2015-08-13 | 2017-02-16 | 株式会社日立国際電気 | 撮像装置 |
JPWO2017026449A1 (ja) * | 2015-08-13 | 2018-06-28 | 株式会社日立国際電気 | 撮像装置 |
JP2020039130A (ja) * | 2015-09-30 | 2020-03-12 | 株式会社ニコン | 撮像装置 |
JP2017135220A (ja) * | 2016-01-26 | 2017-08-03 | 日本放送協会 | 撮像素子および撮像装置 |
Also Published As
Publication number | Publication date |
---|---|
US20140320695A1 (en) | 2014-10-30 |
CN103999458A (zh) | 2014-08-20 |
JPWO2013089036A1 (ja) | 2015-04-27 |
US9681059B2 (en) | 2017-06-13 |
EP2793470A1 (en) | 2014-10-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2013089036A1 (ja) | 撮像装置 | |
CN102365861B (zh) | 在产生数字图像时曝光像素组 | |
EP2087725B1 (en) | Improved light sensitivity in image sensors | |
US9392241B2 (en) | Image processing apparatus and image processing method | |
US8164651B2 (en) | Concentric exposure sequence for image sensor | |
JP5016715B2 (ja) | 画像のダイナミック・レンジを改善するためのマルチ露光パターン | |
JP6528974B2 (ja) | 撮像素子、撮像方法、並びにプログラム | |
KR100827238B1 (ko) | 고화질 영상을 위한 영상 표시 방법 및 장치 | |
CN102687502B (zh) | 减少彩色图像中的噪声 | |
US20090051984A1 (en) | Image sensor having checkerboard pattern | |
JP2007329721A (ja) | 固体撮像装置 | |
GB2548687A (en) | Automotive imaging system including an electronic image sensor having a sparse color filter array | |
WO2011063063A1 (en) | Sparse color pixel array with pixel substitutes | |
US8111298B2 (en) | Imaging circuit and image pickup device | |
JP5526673B2 (ja) | 固体撮像装置及び電子機器 | |
JP4119566B2 (ja) | カラー撮像素子及びカラー撮像装置 | |
JP4119565B2 (ja) | カラー撮像素子及びカラー撮像装置 | |
JP2006115147A (ja) | 撮像装置 | |
JP2000253413A (ja) | 撮像素子及び撮像装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12857773 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14356022 Country of ref document: US |
|
ENP | Entry into the national phase |
Ref document number: 2013549237 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012857773 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |