WO2018012492A1 - Imaging apparatus, imaging element, and image processing apparatus - Google Patents
Imaging apparatus, imaging element, and image processing apparatus
- Publication number: WO2018012492A1
- Application: PCT/JP2017/025255
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel
- pixel output
- image
- light
- incident angle
- Prior art date
Classifications
- G02B5/00 — Optical elements other than lenses
- H01L27/146 — Imager structures
- H01L27/14621 — Colour filter arrangements
- H01L27/14623 — Optical shielding
- H01L27/14627 — Microlenses
- H01L27/1464 — Back illuminated imager structures
- H01L27/14645 — Colour imagers
- H04N23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/12 — Generating image signals from different wavelengths with one sensor only
- H04N23/45 — Generating image signals from two or more image sensors being of different type or operating in different modes
- H04N23/55 — Optical parts specially adapted for electronic image sensors; Mounting thereof
- H04N23/60 — Control of cameras or camera modules
- H04N23/843 — Demosaicing, e.g. interpolating colour pixel values
- H04N23/88 — Colour balance, e.g. white-balance circuits or colour temperature control
- H04N25/443 — Partially reading an SSIS array by reading pixels from selected 2D regions, e.g. for windowing or digital zooming
- H04N25/70 — SSIS architectures; Circuits associated therewith
- H04N25/702 — SSIS architectures characterised by non-identical, non-equidistant or non-planar pixel layout
- H04N25/71 — Charge-coupled device [CCD] sensors; Charge-transfer registers specially adapted for CCD sensors
- H04N25/75 — Circuitry for providing, modifying or processing image signals from the pixel array
- H04N25/76 — Addressed sensors, e.g. MOS or CMOS sensors
- H04N25/77 — Pixel circuitry, e.g. memories, A/D converters, pixel amplifiers, shared circuits or shared components
- H04N25/79 — Arrangements of circuitry divided between different or multiple substrates, chips or circuit boards, e.g. stacked image sensors
Definitions
- The present disclosure relates to an imaging apparatus, an imaging element, and an image processing device, and more particularly to an imaging apparatus, an imaging element, and an image processing device that can improve the degree of freedom in designing a configuration for realizing an imaging function.
- Two configurations of imaging apparatus are generally known: a configuration in which an imaging lens and an imaging element are combined, and a configuration in which a pinhole and an imaging element are combined.
- The configuration combining an imaging lens and an imaging element is adopted in most current imaging apparatuses: light from the subject is efficiently collected by the imaging lens so that an image corresponding to the final image is formed on the imaging surface, where it is captured by the imaging element.
- The combination of a pinhole and an image sensor requires no imaging lens, but the amount of light reaching the imaging surface is small, so processing such as lengthening the exposure time or increasing the gain becomes necessary; the configuration is therefore not suited to general use, and in particular not suitable for high-speed imaging.
- In Non-Patent Document 1 and Patent Document 1, light from the same point light source enters a plurality of adjacent pixels via an optical filter, so an arbitrary characteristic cannot be given in units of individual pixels.
- The present disclosure has been made in view of such a situation, and aims to provide diversity of characteristics for each pixel when no imaging lens is used.
- The imaging apparatus according to the first aspect of the present disclosure includes an imaging element having a plurality of pixel output units that receive incident light passing through neither an imaging lens nor a pinhole, the characteristics of the output pixel values of at least two of the plurality of pixel output units differing from each other with respect to the incident angle of incident light from a subject.
- the characteristic may be incident angle directivity indicating directivity with respect to an incident angle of incident light from the subject.
- The apparatus may further include an image restoration unit that restores a restored image in which the subject can be visually recognized, using a detection image composed of a plurality of detection signals output from the plurality of pixel output units.
- The image restoration unit can restore the restored image by selectively using detection signals from a subset of the plurality of pixel output units.
- The image restoration unit can selectively execute either a restoration process that restores the restored image using detection signals from a subset of the plurality of pixel output units, or a restoration process that restores the restored image using the detection signals of all of the plurality of pixel output units.
- The plurality of pixel output units can include a wide-angle compatible pixel output unit having the incident angle directivity suitable for a wide-angle image and a narrow-angle compatible pixel output unit having an incident angle directivity suitable for a narrower-angle image than that of the wide-angle compatible pixel output unit.
- The image restoration unit can restore the restored image by selectively using the wide-angle compatible pixel output units and the narrow-angle compatible pixel output units.
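As a rough illustration of this selective restoration (not the patent's actual implementation): if, as developed later in this description with equations (1) to (3), each detection signal is a coefficient-weighted sum of the light intensities on the subject surface, then restoring from a subset of pixel output units amounts to solving only the corresponding rows of the linear system. All matrix values and names below are invented for illustration.

```python
import numpy as np

# Hypothetical sketch: detection signals y relate to scene intensities x
# through a per-pixel coefficient matrix A (y = A @ x). Restoring from a
# subset of pixel output units solves only the corresponding rows.
rng = np.random.default_rng(0)
n_sources, n_pixels = 4, 8                        # 4 point light sources, 8 pixel output units
A = rng.uniform(0.1, 1.0, (n_pixels, n_sources))  # incident-angle-directivity coefficients
x_true = np.array([5.0, 3.0, 1.0, 2.0])           # light intensities of the point sources
y = A @ x_true                                    # detection image (noise-free)

subset = [0, 2, 3, 5, 7]                          # use only some pixel output units
x_hat, *_ = np.linalg.lstsq(A[subset], y[subset], rcond=None)
print(np.allclose(x_hat, x_true))                 # → True (the subset still determines x)
```

Restoration using all pixel output units is the same call with the full matrix; the selective process trades rows (and hence robustness or resolution) for less readout and computation.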
- the plurality of pixel output units may be provided with a configuration capable of independently setting characteristics with respect to an incident angle of incident light from the subject.
- In the first aspect of the present disclosure, there are a plurality of pixel output units that receive incident light passing through neither an imaging lens nor a pinhole, and the characteristics of the output pixel values of at least two of the plurality of pixel output units differ from each other with respect to the incident angle of incident light from the subject.
- The imaging element according to the second aspect of the present disclosure includes a plurality of pixel output units that receive incident light passing through neither an imaging lens nor a pinhole, and the output pixel values of at least two of the plurality of pixel output units have characteristics that differ from each other with respect to the incident angle of incident light from the subject.
- At least two of the pixel output units can have different incident angle directivities indicating directivities with respect to an incident angle of incident light from a subject.
- Each of the plurality of pixel output units may be configured by one photodiode, and one detection signal may be output from each of the plurality of pixel output units.
- Each of the at least two pixel output units may be provided with a light-shielding film that blocks subject light, i.e. incident light from the subject, from entering its photodiode, and the ranges over which the light-shielding films block the incidence of the subject light may differ between the at least two pixel output units.
- Each of the plurality of pixel output units includes a plurality of photodiodes, and one detection signal can be output from each of the plurality of pixel output units.
- the at least two pixel output units may be configured such that, among the plurality of photodiodes, photodiodes contributing to the detection signal are different from each other.
- The plurality of pixel output units can include a wide-angle compatible pixel output unit having an incident angle directivity suitable for a wide-angle image, and a narrow-angle compatible pixel output unit having an incident angle directivity suitable for a narrower-angle image than that of the wide-angle compatible pixel output unit.
- a plurality of on-chip lenses respectively corresponding to the plurality of pixel output units can be provided.
- the incident angle directivity can have characteristics according to the curvature of the on-chip lens.
- the incident angle directivity can have characteristics according to the light shielding region.
- the curvature of at least some of the on-chip lenses among the plurality of on-chip lenses can be different from the curvature of other on-chip lenses.
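The joint dependence on on-chip lens curvature and light-shielding range can be pictured with a toy model. The function below is purely illustrative and not given by the specification: a Gaussian-like angular response whose sharpness stands in for lens curvature, zeroed over a hard shielded angular range.

```python
import math

# Toy model (not from the specification): a pixel output unit's weight for
# light arriving at angle theta. Higher "curvature" gives a sharper angular
# response; the light-shielding film zeroes out a blocked angular range.
def directivity(theta_deg: float, curvature: float,
                shield_from_deg: float, shield_to_deg: float) -> float:
    if shield_from_deg <= theta_deg <= shield_to_deg:
        return 0.0                                   # blocked by the shielding film
    return math.exp(-curvature * math.radians(theta_deg) ** 2)

# Two units with different curvature and shielding weight the same incident
# angle differently, i.e. they have different incident angle directivities.
w1 = directivity(10.0, curvature=5.0,  shield_from_deg=-30.0, shield_to_deg=-5.0)
w2 = directivity(10.0, curvature=20.0, shield_from_deg=5.0,  shield_to_deg=30.0)
print(w1 > 0.0 and w2 == 0.0)                        # → True
```

The point of the model is only that curvature and shielding are two independent knobs for shaping the angular response, which is what lets characteristics be set per pixel output unit.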
- In the second aspect of the present disclosure, there are a plurality of pixel output units that receive incident light passing through neither an imaging lens nor a pinhole, and the characteristics of the output pixel values of at least two of the plurality of pixel output units differ from each other with respect to the incident angle of incident light from the subject.
- the plurality of pixel output units may be provided with a configuration capable of independently setting characteristics with respect to an incident angle of incident light from the subject.
- The image processing apparatus according to the third aspect of the present disclosure includes an image restoration unit that restores a restored image in which the subject can be visually recognized, using a detection image composed of a plurality of detection signals respectively output from the plurality of pixel output units of an imaging element that has a plurality of pixel output units receiving incident light passing through neither an imaging lens nor a pinhole, at least two of the plurality of pixel output units having incident angle directivities, indicating the directivity of the output pixel value with respect to the incident angle of incident light from the subject, that differ from each other.
- The image restoration unit can restore the restored image by selectively using detection signals from a subset of the plurality of pixel output units.
- The image restoration unit can selectively execute either a restoration process that restores the restored image using detection signals from a subset of the plurality of pixel output units, or a restoration process that restores the restored image using the detection signals of all of the plurality of pixel output units.
- The plurality of pixel output units can include a wide-angle compatible pixel output unit having the incident angle directivity suitable for a wide-angle image and a narrow-angle compatible pixel output unit having an incident angle directivity suitable for a narrower-angle image than that of the wide-angle compatible pixel output unit.
- The image restoration unit can restore the restored image by selectively using the wide-angle compatible pixel output units and the narrow-angle compatible pixel output units.
- In the third aspect of the present disclosure, a detection image is used to restore a restored image in which the subject can be visually recognized.
- Brief description of the drawings: diagrams explaining that incident light from a region of the subject surface is parallel light; the incident angle at each pixel of a directional image sensor; the light-receiving sensitivity characteristic in the vertical direction; the light-receiving sensitivity characteristic in the horizontal direction; the light-receiving sensitivity characteristic according to the incident angle at each pixel of a directional image sensor; and the light-receiving sensitivity of each pixel of the directional image sensor to light from a region of the subject surface.
- Further diagrams explain: the relationship between the subject distance and the coefficients expressing incident angle directivity; the relationship between narrow-view-angle pixels and wide-view-angle pixels (three figures); a flowchart describing imaging processing by the imaging apparatus of the present disclosure in FIG. 6; the first modification; the second modification (two figures); and an example of changing the angle of view by applying the second modification.
- The configuration of the imaging apparatus here includes an imaging lens 11 and a single-pixel imaging element D.
- When the point light source P is imaged by the imaging element D, as shown in the middle stage of FIG., the light beams L1 to L5 emitted from the point light source P are converged by the imaging lens 11 as indicated by light beams L1′ to L5′, and an image of the point light source P is formed on the image sensor D and captured.
- In this case, an image composed of light having light intensity 5a, the sum of the light intensities of all the light beams L1 to L5 emitted from the point light source P, is formed and incident on the image sensor D.
- A set of such point light sources P constitutes a subject; the subject is therefore imaged by converging the light rays emitted from the plurality of point light sources on the subject surface and capturing the resulting image.
- When the subject on the subject surface 31 is composed of point light sources PA, PB, and PC, as shown in the upper part of FIG. 1, the total light intensities of the rays emitted from the point light sources PA, PB, and PC become 5a, 5b, and 5c, respectively.
- In this case, the light beams from the point light sources PA, PB, and PC are converged by the imaging lens 11 to the positions Pa, Pb, and Pc, respectively, on the imaging surface of the imaging element 32 composed of a plurality of pixels, so that an image of the subject is formed and captured.
- When the detection signal levels corresponding to the individual rays are a, b, and c, the detection signal levels of the pixels at positions Pa, Pb, and Pc on the image sensor 32 are 5a, 5b, and 5c, respectively, as shown in the right part of FIG. 2.
- the vertical axis represents the position on the image sensor 32
- The horizontal axis represents the detection signal level of the image sensor at each position. That is, the positions Pa, Pb, and Pc on the image sensor 32 are in an inverted positional relationship with respect to the point light sources PA, PB, and PC on the subject surface 31, and the detection signal levels at positions Pa, Pb, and Pc, which correspond to the light intensities of the light emitted from the point light sources PA, PB, and PC, form an image in which the subject image is formed, and that image is captured.
- As shown in the lower stage of FIG. 1, the configuration of the imaging apparatus here includes the imaging element D and a light-shielding film 12 provided with a pinhole 12a as a hole; of the light beams L1 to L5, only the light beam L3 that passes through the pinhole 12a reaches the image sensor D and is captured.
- In this case, only the light beam L3 with light intensity a, out of the beams emitted from the point light source P, forms the image of the point light source P incident on the imaging element D. The image is therefore captured as a dark image with 1/5 the light quantity compared with the case where the imaging lens 11 is used.
- the subject on the subject surface 31 is composed of point light sources PA, PB, and PC, and the light rays emitted from the respective point light sources are light intensities a, b, and c, respectively.
- In this case, the positions Pa, Pb, and Pc on the imaging surface of the imaging element 32 each receive only one light beam from the point light sources PA, PB, and PC, respectively, so the subject image is formed and captured at the detection signal levels a, b, and c.
- As shown in the right part of FIG. 3, the detection signal levels at positions Pa, Pb, and Pc are a, b, and c, respectively (for example, b > a > c).
- the vertical axis represents the position on the image sensor 32
- the horizontal axis represents the detection signal level in the image sensor at each position.
- The detection signal levels shown in the right part of FIG. 3 are detection signal levels corresponding to an image in which the subject image is formed, and are therefore also pixel values.
- The essence of imaging the subject is to measure, by photoelectric conversion, the luminance of each point light source on the subject surface 31, that is, the light intensities a, b, and c of the point light sources PA, PB, and PC shown in the right part of FIG. 3.
- The role of the imaging lens 11 is to guide each light beam emitted from the point light sources PA, PB, and PC, that is, diffused light, onto the imaging element 32. An image corresponding to the final image is therefore formed on the image sensor 32, and the image composed of the detection signals becomes a captured image in which the subject image is formed.
- However, since the size of the imaging apparatus is determined by the size of the imaging lens, there is a limit to downsizing.
- Since an imaging apparatus composed of a pinhole and an imaging element requires no imaging lens, its configuration can potentially be made smaller than that of an imaging apparatus composed of an imaging lens and an imaging element.
- However, since the brightness of the captured image is not sufficient, lengthening the exposure time or increasing the gain becomes essential in order to capture an image with a certain level of brightness; as a result, blurring may occur when imaging a fast-moving subject, or natural color expression may not be achieved.
- Now consider imaging the subject on the subject surface 31 using only the imaging element 32, without providing an imaging lens or a pinhole.
- light beams having light intensities a, b, and c are incident on positions Pa, Pb, and Pc on the image sensor 32 from point light sources PA, PB, and PC, respectively. Then, at each of the positions Pa, Pb, and Pc on the image sensor 32, the light beams from the point light sources PA, PB, and PC are incident as they are.
- the detection signal level shown in the right part of FIG. 4 is not a detection signal level corresponding to an image on which the subject image is formed, and thus does not indicate a restored pixel value.
- That is, an image of the subject on the subject surface 31 cannot be formed using only an imaging element 32 that has no special configuration, without providing an imaging lens or a pinhole; therefore, a configuration using only the imaging element 32 cannot capture an image in which the subject image is formed.
- an imaging element 51 in which the detection sensitivity of each pixel has an incident angle directivity is provided.
- providing the detection sensitivity of each pixel with the incident angle directivity means that the light receiving sensitivity characteristic corresponding to the incident angle of the incident light with respect to the pixel is different for each pixel.
- However, this does not mean that the light reception sensitivity characteristics of all the pixels are completely different; some pixels may have the same light reception sensitivity characteristics, and other pixels may have different ones.
- Light beams from the point light sources are incident on each pixel at different incident angles. Since each pixel has a light receiving sensitivity characteristic that differs according to the incident angle of incident light, that is, incident angle directivity, even light beams of the same light intensity are detected with different sensitivities in each pixel, and detection signals of different detection signal levels are detected for each pixel.
- More specifically, the sensitivity characteristic according to the incident angle of the incident light received at each pixel of the imaging element 51, that is, the incident angle directivity according to the incident angle at each pixel, is expressed by a coefficient representing the light receiving sensitivity according to the incident angle; the detection signal level corresponding to the incident light in each pixel is obtained by multiplying the light intensity by the coefficient set corresponding to the light receiving sensitivity according to the incident angle of the incident light.
- the detection signal levels DA, DB, and DC at the positions Pa, Pb, and Pc are expressed by the following equations (1) to (3), respectively.
- DA = α1 × a + β1 × b + γ1 × c ... (1)
- DB = α2 × a + β2 × b + γ2 × c ... (2)
- DC = α3 × a + β3 × b + γ3 × c ... (3)
- Here, α1 is a coefficient for the light intensity a, set according to the incident angle at the position Pa on the imaging element 51 of the light beam from the point light source PA on the subject surface 31 to be restored; in other words, it is a coefficient representing the incident angle directivity at the position Pa with respect to the incident angle of the light beam from the point light source PA.
- β1 is a coefficient for the light intensity b, set according to the incident angle at the position Pa on the imaging element 51 of the light beam from the point light source PB on the subject surface 31 to be restored.
- γ1 is a coefficient for the light intensity c, set according to the incident angle at the position Pa on the imaging element 51 of the light beam from the point light source PC on the subject surface 31 to be restored.
- The term (α1 × a) of the detection signal level DA indicates the detection signal level due to the light beam from the point light source PA at the position Pa, obtained by multiplying the light intensity a of that light beam by the coefficient α1 indicating the incident angle directivity corresponding to its incident angle.
- The term (β1 × b) of the detection signal level DA indicates the detection signal level due to the light beam from the point light source PB at the position Pa, obtained by multiplying the light intensity b of that light beam by the coefficient β1 indicating the incident angle directivity corresponding to its incident angle.
- The term (γ1 × c) of the detection signal level DA indicates the detection signal level due to the light beam from the point light source PC at the position Pa, obtained by multiplying the light intensity c of that light beam by the coefficient γ1 indicating the incident angle directivity corresponding to its incident angle.
- Accordingly, the detection signal level DA is expressed as a composite value obtained by multiplying the components from the point light sources PA, PB, and PC at the position Pa by the coefficients α1, β1, and γ1 indicating the incident angle directivities corresponding to the respective incident angles.
- Hereinafter, the coefficients α1, β1, and γ1 are collectively referred to as a coefficient set.
- Similarly, the coefficient set α2, β2, and γ2 for the detection signal level DB corresponds respectively to the coefficient set α1, β1, and γ1 for the detection signal level DA at the point light source PA.
- Likewise, the coefficient set α3, β3, and γ3 for the detection signal level DC corresponds respectively to the coefficient set α1, β1, and γ1 for the detection signal level DA at the point light source PA.
- However, the detection signal levels of the pixels at the positions Pa, Pb, and Pc are values expressed by product sums of the light intensities a, b, and c of the light beams emitted from the point light sources PA, PB, and PC. Because these detection signal levels are mixtures of the light intensities a, b, and c, they differ from those of an image in which the subject is imaged.
- Then, simultaneous equations are constructed using the coefficient sets α1, β1, γ1; α2, β2, γ2; α3, β3, γ3 and the detection signal levels DA, DB, and DC, and by solving them for the light intensities a, b, and c, the pixel values at the positions Pa, Pb, and Pc are obtained as shown in the lower right part of FIG. 5. As a result, a restored image, which is a set of pixel values, is restored.
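The restoration step described above amounts to solving a small linear system. The following sketch illustrates it in Python for the three-pixel case of equations (1) to (3); the coefficient values and light intensities are invented for the example and are not taken from the disclosure:

```python
def solve_3x3(A, d):
    """Solve A·x = d by Gauss-Jordan elimination with partial pivoting.

    A is the coefficient matrix [[α1, β1, γ1], [α2, β2, γ2], [α3, β3, γ3]]
    and d the detection signal levels [DA, DB, DC]; the returned x is the
    restored light-intensity vector [a, b, c].
    """
    n = len(A)
    # Augment each row with the corresponding detection signal level.
    M = [list(row) + [d[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Pivot on the largest remaining entry for numerical stability.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

# Hypothetical coefficient sets (α, β, γ) for the positions Pa, Pb, Pc.
A = [[0.9, 0.3, 0.2],
     [0.4, 0.8, 0.3],
     [0.2, 0.3, 0.7]]
a, b, c = 1.0, 2.0, 3.0  # true light intensities of PA, PB, PC

# Detection signal levels per equations (1) to (3).
DA, DB, DC = (row[0] * a + row[1] * b + row[2] * c for row in A)

restored = solve_3x3(A, [DA, DB, DC])  # recovers approximately [a, b, c]
```

In practice the system has as many equations as pixel output units and the coefficient set is chosen per subject distance, but the principle is the same product-sum inversion.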
- the detection signal level shown in the upper right part of FIG. 5 is not a pixel value because it is not the detection signal level corresponding to the image on which the subject image is formed. Further, the detection signal level shown in the lower right part of FIG. 5 is a pixel value because it is a signal value for each pixel corresponding to the image on which the image of the subject is formed, that is, the value of each pixel of the restored image.
- With this configuration, an imaging apparatus is realized that includes the imaging element 51 having incident angle directivity in each pixel and that requires neither an imaging lens, an optical filter made up of a diffraction grating or the like, nor a pinhole.
- Since an imaging lens, an optical filter made up of a diffraction grating or the like, or a pinhole is not an essential configuration, the imaging apparatus can be reduced in height, that is, the thickness in the light incident direction of the configuration realizing the imaging function can be reduced.
- The imaging apparatus 101 includes a directional imaging element 121, a signal processing unit 122, a demosaic processing unit 123, a γ correction unit 124, a white balance adjustment unit 125, an image output unit 126, a storage unit 127, a display unit 128, a subject distance determination unit 129, an operation unit 130, and a coefficient set selection unit 131, and does not include an imaging lens.
- The directional imaging element 121 corresponds to the imaging element 51 described with reference to FIG. 5: it is an imaging element that includes pixels having incident angle directivity and outputs a detection signal at a level corresponding to the amount of incident light.
- The directional imaging element 121 may have a basic structure similar to that of a general imaging element such as a CMOS (Complementary Metal Oxide Semiconductor) image sensor.
- the configuration of each pixel constituting the pixel array is different from a general one. That is, in each pixel, a light shielding film is provided in a part of the light receiving region (light receiving surface) of each photodiode and in a different range for each pixel.
- the light receiving sensitivity differs (changes) in accordance with the incident angle of incident light for each pixel, and as a result, an imaging element having an incident angle directivity with respect to the incident angle of incident light is realized in units of pixels.
- the directional imaging element 121 may not be configured as a pixel array, and may be configured as a line sensor, for example.
- Note that the light-shielded ranges are not limited to being different for all of the light shielding films; some pixels may share the same light-shielded range.
- the signal processing unit 122 configures simultaneous equations using the detection signal level of each pixel supplied from the directional imaging device 121 and the coefficient set stored in the coefficient set selection unit 131, and the configured simultaneous equations are expressed. By solving, the pixel value of each pixel constituting the restored image is obtained and output to the demosaic processing unit 123.
- the number of pixels of the directional imaging element 121 and the number of pixels constituting the restored image are not necessarily the same.
- the demosaic processing unit 123 is provided as a configuration for performing color separation processing.
- When a color filter array other than the Bayer array is used, the demosaic processing unit 123 is replaced with a configuration that performs the corresponding color separation process; in the case of a monochrome imaging element or a multi-plate imaging element provided for each color, the demosaic processing unit 123 is omitted.
- The color filters may transmit colors other than the RGB (red, green, blue) used in the Bayer array; for example, they may transmit yellow, white, or the like, may transmit ultraviolet light, infrared light, or the like, or may transmit colors of various wavelengths.
- the detection image formed of the signal output from the directional imaging element 121 in FIG. 6 is an image composed of a detection signal in which the image of the subject is not formed as shown in the upper right side of FIG. Therefore, the image cannot be recognized visually. That is, the detection image composed of the detection signals output from the directional imaging element 121 in FIG. 6 is an image that is a set of pixel signals but cannot be recognized as an image even when viewed by the user.
- Hereinafter, an image composed of detection signals in which an image of the subject is not formed, that is, an image captured by the directional imaging element 121, is referred to as a detection image.
- An image that is composed of pixel values in which an image of the subject is formed and that can be visually recognized as an image by a user is referred to as a restored image.
- Note that the restored image may not be an image in which the subject can be identified as in a normal image; in that case as well, it is referred to as a restored image.
- A restored image, which is an image in which the subject image is formed, before demosaic processing is referred to as a Raw image; a detection image captured by the directional imaging element 121, although it follows the color filter array, is distinguished as not being a Raw image.
- The demosaic processing unit 123 generates a plane image for each of R, G, and B by performing demosaic processing that generates pixel signals of missing colors according to a color filter array such as the Bayer array, and supplies them to the γ correction unit 124.
- The γ correction unit 124 performs γ correction on the demosaiced image and supplies it to the white balance adjustment unit 125.
- The white balance adjustment unit 125 adjusts the white balance of the γ-corrected image and outputs the adjusted image to the image output unit 126.
- The image output unit 126 converts the white-balance-adjusted image into an image signal of a predetermined compression format such as JPEG (Joint Photographic Experts Group), TIFF (Tag Image File Format), or GIF (Graphics Interchange Format). The image output unit 126 then executes at least one of storing the converted image signal in the storage unit 127, composed of an HDD (Hard Disk Drive), an SSD (Solid State Drive), a semiconductor memory, or a combination thereof, displaying it on the display unit 128 composed of an LCD (Liquid Crystal Display) or the like, and outputting it to the subject distance determination unit 129.
- The subject distance determination unit 129 determines the subject distance, which is the distance from the imaging position to the subject, based on an operation signal from the operation unit 130 composed of an operation dial, operation buttons, an external remote controller configured separately from the imaging apparatus 101, and the like, and supplies information on the determined subject distance to the coefficient set selection unit 131. That is, since the restored image is displayed on the display unit 128, the user adjusts the subject distance by operating the operation unit 130 while viewing the through image, which is the restored image displayed on the display unit 128.
- The coefficient set selection unit 131 stores coefficient sets corresponding to the above-described coefficients α1 to α3, β1 to β3, and γ1 to γ3 in association with various subject distances corresponding to the distance from the imaging element 51 to the subject surface 31 (the subject surface corresponding to the restored image) in FIG. 5. The coefficient set selection unit 131 therefore selects a coefficient set based on the subject distance information supplied from the subject distance determination unit 129, and the signal processing unit 122 restores the restored image from the detection image using the selected coefficient set.
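The behavior of the coefficient set selection unit 131 can be pictured as a lookup keyed by subject distance. This is a hypothetical sketch: the distances, the 3×3 matrices, and the nearest-distance rule are invented for illustration and are not values from the disclosure.

```python
# Hypothetical store of the coefficient set selection unit 131: each subject
# distance [m] maps to the coefficient sets (α, β, γ) for that subject surface.
COEFFICIENT_SETS = {
    0.5: [[0.9, 0.3, 0.2], [0.4, 0.8, 0.3], [0.2, 0.3, 0.7]],
    1.0: [[0.8, 0.4, 0.2], [0.3, 0.9, 0.2], [0.3, 0.2, 0.8]],
    3.0: [[0.7, 0.4, 0.3], [0.3, 0.8, 0.3], [0.2, 0.4, 0.6]],
}

def select_coefficient_set(subject_distance):
    """Return the stored coefficient set whose distance is nearest to the
    subject distance supplied by the subject distance determination unit."""
    nearest = min(COEFFICIENT_SETS, key=lambda d: abs(d - subject_distance))
    return COEFFICIENT_SETS[nearest]
```

Switching the selected distance simply means re-running the restoration with a different entry of this table, which is also how refocusing by coefficient-set switching can be pictured.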
- the subject distance determination unit 129 may not be provided when only a restored image of one subject distance is obtained.
- the subject distance determination unit 129 determines the optimum subject distance by a hill-climbing method similar to the contrast AF (Auto-Focus) method based on the restored image supplied from the image output unit 126, thereby enabling the autofocus function. Can be realized.
- The subject distance may be determined not only based on the restored image supplied from the image output unit 126 but also based on the outputs of the demosaic processing unit 123, the γ correction unit 124, or the white balance adjustment unit 125.
- the subject distance determination unit 129 may determine the subject distance based on the output of a distance measuring sensor provided separately.
- the detection image output from the directional imaging element 121 may be stored in the storage unit 127 without being restored.
- At the time of reproduction, the detection image stored in the storage unit 127 is supplied to the signal processing unit 122, and the signal processing unit 122 generates a restored image.
- Alternatively, the detection image may be stored in a recording medium or output to another device by communication or the like, and the detection image may be restored by a device different from the imaging apparatus, such as a PC (personal computer) or a playback device.
- Further, the coefficient set selection unit 131 may select one of a plurality of coefficient sets associated with a plurality of subject distances based on a user selection or the like, and restored images of different subject distances may be obtained by switching the coefficient set used by the signal processing unit 122. Refocusing may be realized in this way.
- The demosaic processing unit 123, the γ correction unit 124, and the white balance adjustment unit 125 are all shown as one configuration example; some of them may be omitted, or an order other than the configuration of FIG. 6 may be used.
- The image output unit 126 may output the Raw image as it is without performing compression or format conversion. Further, when the demosaic processing unit 123, the γ correction unit 124, and the white balance adjustment unit 125 are omitted and the image output unit 126 performs neither compression nor format conversion, the Raw image may be output as it is via HDMI (registered trademark) (High-Definition Multimedia Interface) or the like.
- The imaging apparatus 141 differs from the configuration of the imaging apparatus 101 of FIG. 6 in that an imaging element 151, an optical block 152, and a focus adjustment unit 153 are provided in place of the directional imaging element 121, the signal processing unit 122, and the coefficient set selection unit 131.
- The imaging element 151 is an imaging element made up of pixels having no incident angle directivity. The optical block 152, composed of a plurality of imaging lenses, is adjusted by the focus adjustment unit 153 according to the subject distance supplied from the subject distance determination unit 129, that is, the focal length, and condenses incident light to form an image on the imaging surface of the imaging element 151.
- the image sensor 151 captures the restored image on which the subject image is formed in this way, and outputs it to the demosaic processing unit 123.
- Imaging processing by the imaging device 141 including the optical block of FIG. 7 will be described with reference to the flowchart of FIG.
- In step S11, the subject distance determination unit 129 determines the distance to the subject based on the operation signal supplied from the operation unit 130 or a plurality of images captured immediately before, and supplies information on the determined subject distance to the focus adjustment unit 153.
- the focus adjustment unit 153 adjusts the optical block 152 based on the subject distance, that is, the focal length.
- step S12 the optical block 152 condenses the incident light and forms an image of the subject on the subject surface at a position corresponding to the corresponding subject distance on the imaging surface of the image sensor 151.
- In step S13, the imaging element 151 captures the image in which the subject image is formed by the optical block 152, and supplies the Raw image serving as the restored image to the demosaic processing unit 123.
- In step S14, the demosaic processing unit 123 performs demosaic processing on the Raw image constituting the restored image and supplies the demosaiced image to the γ correction unit 124.
- In step S15, the γ correction unit 124 performs γ correction on the demosaiced restored image and supplies the corrected image to the white balance adjustment unit 125.
- In step S16, the white balance adjustment unit 125 adjusts the white balance of the γ-corrected restored image and supplies the adjusted image to the image output unit 126.
- In step S17, the image output unit 126 converts the white-balance-adjusted image into a predetermined compression format.
- In step S18, the image output unit 126 executes at least one of storing the restored image converted into the predetermined compression format in the storage unit 127, displaying it on the display unit 128, and supplying it to the subject distance determination unit 129.
- the restored image is captured by the above processing. That is, in the imaging process by the imaging device including the optical block, the light incident on the imaging element 151 is collected by the optical block 152, and a restored image on which an image of the subject is formed is captured.
- This difference is caused by a difference in structure between the directional image sensor 121 and the image sensor 151.
- The left part of FIG. 9 shows a front view of a part of the pixel array unit of the imaging element 151 of the imaging apparatus 141 including the optical block of FIG. 7, and the right part of FIG. 9 shows a front view of a part of the pixel array unit of the directional imaging element 121 of the imaging apparatus 101 of the present disclosure.
- FIG. 9 shows an example in which the pixel array unit has a configuration of 6 pixels × 6 pixels in the horizontal × vertical directions, but the configuration of the number of pixels is not limited to this.
- The left part of FIG. 9 shows that, in the imaging element 151 used in the imaging apparatus 141 including the optical block of FIG. 7, pixels 151a having no incident angle directivity are arranged in an array.
- In contrast, in the directional imaging element 121, the light shielding film 121b is provided in a part of the light receiving region of the photodiode of each pixel 121a, in a range that differs for each pixel 121a; by making the light receiving sensitivity with respect to the incident angle of incident light different for each pixel, the structure has incident angle directivity with respect to the incident angle.
- For example, the pixels 121a-1 and 121a-2 are provided with light shielding films 121b-1 and 121b-2 having different light-shielding ranges (at least one of the light-shielded region (position) and the light-shielded area differs). That is, in the pixel 121a-1, the light shielding film 121b-1 is provided so as to shield a part of the left side of the light receiving region of the photodiode by a predetermined width, and in the pixel 121a-2, the light shielding film 121b-2 is provided so as to shield a part of the right side of the light receiving region by a width wider in the horizontal direction than that of the light shielding film 121b-1.
- the light shielding film 121b is provided so that different ranges in the light receiving region are shielded for each pixel, and are randomly arranged in the pixel array.
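A randomized per-pixel shielding arrangement of this kind might be sketched as follows; the pixel-array size, the horizontal-fraction parameterization, and the fixed seed are illustrative assumptions, not values from the disclosure (the 3/4 cap echoes the upper bound discussed in the text):

```python
import random

def random_shielding_pattern(rows, cols, max_fraction=0.75, seed=0):
    """Assign each pixel a randomly chosen horizontally shielded fraction.

    max_fraction caps how much of the light receiving region may be
    shielded, so every pixel still receives a usable amount of light.
    """
    # A fixed seed makes the pattern reproducible: the pattern must be
    # known in order to build the matching coefficient sets.
    rng = random.Random(seed)
    return [[rng.uniform(0.0, max_fraction) for _ in range(cols)]
            for _ in range(rows)]

pattern = random_shielding_pattern(6, 6)  # 6 x 6 pixel array as in FIG. 9
```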
- the range of the light shielding film 121b is preferably set to an area that can secure a desired amount of light because the amount of light that can be received decreases as the ratio of covering the light receiving region of each pixel increases.
- For example, the area may be limited to up to about 3/4 of the entire light receiving range. By doing so, it becomes possible to secure a light quantity equal to or greater than the desired amount.
- Further, if each pixel is given an unshielded range with a width corresponding to the wavelength of the light to be received, a minimum amount of light can be received. That is, for example, in the case of a B pixel (blue pixel), the wavelength is about 500 nm, and a minimum amount of light can be received as long as the pixel is not shielded beyond a width corresponding to this wavelength.
- The upper part of FIG. 10 is a side cross-sectional view of the first configuration example of the directional imaging element 121, and the middle part of FIG. 10 is a top view of the same first configuration example. The side cross-sectional view in the upper part corresponds to the A-B cross section in the middle part. The lower part of FIG. 10 is a circuit configuration example of the directional imaging element 121.
- each of the adjacent pixels 121a-15 and 121a-16 is a so-called back-illuminated type in which a wiring layer Z12 is provided in the lowermost layer in the drawing and a photoelectric conversion layer Z11 is provided thereon.
- FIG. 10 shows a side view and a top view of two of the pixels constituting the pixel array of the directional imaging element 121; needless to say, a larger number of pixels 121a are arranged, but their illustration is omitted.
- each of the pixels 121a-15 and 121a-16 includes photodiodes 121e-15 and 121e-16 in the photoelectric conversion layer Z11.
- Above the photodiodes 121e-15 and 121e-16, on-chip lenses 121c-15 and 121c-16 and color filters 121d-15 and 121d-16 are formed.
- the on-chip lenses 121c-15 and 121c-16 condense incident light onto the photodiodes 121e-15 and 121e-16.
- The color filters 121d-15 and 121d-16 are optical filters that transmit light of specific wavelengths such as red, green, blue, infrared, and white. In the case of white, they may be transparent filters or may be omitted.
- Parts of the light receiving surface S are shielded by the light shielding films 121b-15 and 121b-16 as viewed from above. Since different ranges are shielded by the light shielding films 121b-15 and 121b-16, different incident angle directivities are set for each pixel.
- However, the shielded ranges are not limited to being different for all the pixels 121a of the directional imaging element 121; there may also be pixels 121a in which the same range is shielded.
- In the pixels 121a-15 and 121a-16, the right end of the light shielding film 121p-15 is connected to the upper end of the light shielding film 121b-15, and the left end of the light shielding film 121b-16 is connected to the upper end of the light shielding film 121p-16, each being configured in an L shape when viewed from the side.
- the light shielding films 121b-15 to 121b-17 and the light shielding films 121p-15 to 121p-17 are made of metal, for example, tungsten (W), aluminum (Al), or Al and copper (Cu). And an alloy. Further, the light shielding films 121b-15 to 121b-17 and the light shielding films 121p-15 to 121p-17 are formed at the same time using the same metal as the wiring in the same process as that for forming the wiring in the semiconductor process. It may be. Note that the thicknesses of the light shielding films 121b-15 to 121b-17 and the light shielding films 121p-15 to 121p-17 need not be the same depending on the position.
- In the lower part of FIG. 10, the pixel 121a includes a photodiode 161 (corresponding to the photodiode 121e), a transfer transistor 162, an FD (Floating Diffusion) unit 163, a selection transistor 164, an amplification transistor 165, and a reset transistor 166, and is connected to a current source 168 via a vertical signal line 167.
- the photodiode 161 is configured such that the anode electrode is grounded and the cathode electrode is connected to the gate electrode of the amplification transistor 165 via the transfer transistor 162.
- the transfer transistor 162 is driven according to the transfer signal TG. For example, when the transfer signal TG supplied to the gate electrode of the transfer transistor 162 becomes high level, the transfer transistor 162 is turned on. As a result, charges accumulated in the photodiode 161 are transferred to the FD unit 163 via the transfer transistor 162.
- The amplification transistor 165 serves as the input unit of a source follower, a readout circuit that reads out the signal obtained by photoelectric conversion in the photodiode 161, and outputs a pixel signal at a level corresponding to the charge accumulated in the FD unit 163 to the vertical signal line 167. That is, the amplification transistor 165 has its drain terminal connected to the power supply voltage VDD and its source terminal connected to the vertical signal line 167 via the selection transistor 164, thereby forming a source follower together with the current source 168 connected to one end of the vertical signal line 167.
- The FD (Floating Diffusion) unit 163 is a floating diffusion region having a charge capacitance C1 provided between the transfer transistor 162 and the amplification transistor 165, and temporarily stores the charge transferred from the photodiode 161 via the transfer transistor 162.
- the FD unit 163 is a charge detection unit that converts charges into voltage, and the charge accumulated in the FD unit 163 is converted into voltage in the amplification transistor 165.
- the selection transistor 164 is driven according to the selection signal SEL, and is turned on when the selection signal SEL supplied to the gate electrode becomes high level, and connects the amplification transistor 165 and the vertical signal line 167.
- the reset transistor 166 is driven according to the reset signal RST. For example, the reset transistor 166 is turned on when the reset signal RST supplied to the gate electrode becomes a high level, discharges the charge accumulated in the FD portion 163 to the power supply voltage VDD, and resets the FD portion 163.
- the pixel circuit shown in the lower part of FIG. 10 operates as follows.
- the reset transistor 166 and the transfer transistor 162 are turned on, the electric charge accumulated in the FD unit 163 is discharged to the power supply voltage VDD, and the FD unit 163 is reset.
- the reset transistor 166 and the transfer transistor 162 are turned off, and an exposure period is started. Charges corresponding to the amount of incident light are accumulated by the photodiode 161.
- the reset transistor 166 is turned on and the FD unit 163 is reset, the reset transistor 166 is turned off. By this operation, the FD unit 163 is reset and set to the reference potential.
- the reset potential of the FD unit 163 is output from the amplification transistor 165 as a reference potential.
- the transfer transistor 162 is turned on, and the charge accumulated in the photodiode 161 is transferred to the FD unit 163.
- the potential of the FD portion 163 to which the photodiode charge has been transferred is output from the amplification transistor 165 as a signal potential.
- the reference potential is subtracted from the signal potential and output as a detection signal by CDS (correlated double sampling).
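The readout sequence above ends with correlated double sampling: the reference (reset) potential is sampled first, then the signal potential after charge transfer, and their difference becomes the detection signal, cancelling the pixel's reset-level offset. A minimal sketch, with voltage values invented for illustration:

```python
def cds_detection_signal(reference_potential, signal_potential):
    """Correlated double sampling (CDS): subtract the reset (reference)
    potential from the signal potential, as described in the text, so the
    reset-level offset common to both samples cancels out."""
    return signal_potential - reference_potential

# Illustrative voltages: both samples share the same 0.45 reset offset,
# so only the charge-induced 0.30 difference survives as the detection signal.
detect = cds_detection_signal(reference_potential=0.45, signal_potential=0.75)
```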
- FIG. 11 is a diagram illustrating a side cross section, a top view, and a circuit configuration example of the second configuration example of the directional imaging element 121. That is, the upper part of FIG. 11 shows a side cross-sectional view of the pixel 121a of the directional imaging element 121 as the second configuration example, and the middle part of FIG. 11 shows a top view of the same. The side cross-sectional view in the upper part corresponds to the A-B cross section in the middle part, and the lower part of FIG. 11 is a circuit configuration example of the directional imaging element 121.
- The directional imaging element 121 of FIG. 11 differs in configuration from the directional imaging element 121 of FIG. 10 in that four photodiodes 121f-1 to 121f-4 are formed in one pixel 121a and a light shielding film 121p separates the photodiodes 121f-1 to 121f-4 from one another. That is, in the directional imaging element 121 of FIG. 11, the light shielding film 121p is formed in a "+" shape when viewed from the top.
- Configurations common to those in FIG. 10 are denoted by the same reference numerals, and their description is omitted.
- In the directional imaging element 121 of FIG. 11, the photodiodes are separated into the photodiodes 121f-1 to 121f-4 by the light shielding film 121p, so that electrical and optical crosstalk between the photodiodes 121f-1 to 121f-4 can be prevented. That is, the light shielding film 121p of FIG. 11, like the light shielding film 121p of the directional imaging element 121 of FIG. 10, is for preventing crosstalk and is not for providing incident angle directivity.
- one FD portion 163 is shared by four photodiodes 121f-1 to 121f-4.
- the lower part of FIG. 11 shows a circuit configuration example in which one FD portion 163 is shared by four photodiodes 121f-1 to 121f-4.
- the description of the same configuration as the lower part of FIG. 10 is omitted.
- The lower part of FIG. 11 differs from the circuit configuration in the lower part of FIG. 10 in that, in place of the photodiode 161 (corresponding to the photodiode 121e in the upper part of FIG. 10) and the transfer transistor 162, photodiodes 161-1 to 161-4 (corresponding to the photodiodes 121f-1 to 121f-4 in the upper part of FIG. 11) and transfer transistors 162-1 to 162-4 are provided, with the FD unit 163 shared.
- With this configuration, the charges accumulated in the photodiodes 121f-1 to 121f-4 are transferred to the common FD unit 163, which has a predetermined capacitance and is provided at the connection point between the photodiodes 121f-1 to 121f-4 and the gate electrode of the amplification transistor 165. A signal corresponding to the level of charge held in the FD unit 163 is then read out as a detection signal in units of pixel output.
- The charges accumulated in the photodiodes 121f-1 to 121f-4 can be made to contribute selectively to the detection signal of the pixel output unit in various combinations (the degree of contribution to the detection signal of the pixel output unit can be made different).
- each pixel output unit can have a different incident angle directivity.
- For example, by transferring the charges of the photodiodes 121f-1 and 121f-3 and the charges of the photodiodes 121f-2 and 121f-4 to the FD unit 163 at the same time and adding them there, incident angle directivity in the left-right direction can be obtained from the signals obtained by reading out each sum. Similarly, by transferring the charges of the photodiodes 121f-1 and 121f-2 and the charges of the photodiodes 121f-3 and 121f-4 to the FD unit 163 at the same time and adding them there, incident angle directivity in the up-down direction can be obtained.
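The selective charge addition in the shared FD unit can be sketched as a simple sum over chosen photodiodes. The 2×2 layout assumed here (121f-1 and 121f-2 in the top row, 121f-3 and 121f-4 in the bottom row) and the charge values are assumptions for illustration:

```python
def combine_charges(q, selected):
    """Sum, in the shared FD unit 163, the charges of the selected
    photodiodes; unselected photodiodes do not contribute to this pixel
    output unit's detection signal."""
    return sum(q[i] for i in selected)

# Example accumulated charges of photodiodes 121f-1..121f-4 (arbitrary units).
q = {1: 10, 2: 20, 3: 30, 4: 40}

left  = combine_charges(q, (1, 3))  # left column  -> left-right directivity
right = combine_charges(q, (2, 4))  # right column
top   = combine_charges(q, (1, 2))  # top row      -> up-down directivity
```

Comparing the column sums (or row sums) is what gives the pixel output unit its directional response in that axis.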
- the signal obtained from the charges selectively and independently read out from the four photodiodes 121f-1 to 121f-4 becomes the detection signal of one pixel output unit corresponding to one pixel constituting the detection image.
- in this way, a different incident angle directivity can be provided for each pixel.
- one photodiode 121e is provided for one pixel output unit.
- by making the state of light shielding by the light shielding film 121p of each pixel (pixel output unit) different for each pixel, the incident angle directivity can be made different for each pixel.
- the photodiodes 121f-1 to 121f-4 are provided for one pixel output unit.
- the photodiode 121f that contributes to the detection signal is made different for each pixel output unit, so that a different incident angle directivity can be provided for each pixel output unit.
- the degree of contribution can be realized, for example, by whether or not the detection value of each photodiode 121f is read out to the FD (floating diffusion), or by resetting the charge accumulated in the photodiode 121f using the electronic shutter function before the charge is read out to the FD.
- if the charge is reset immediately before reading, the photodiode 121f can be put in a state where it does not contribute to the detection signal of the pixel output unit; if there is a time between the reset and the reading, a partial contribution is possible.
- as described above, the four signals of the photodiodes 121f-1 to 121f-4 in FIG. 11 are used selectively, and a detection signal equivalent to one pixel output unit constituting the detection image is output.
- each pixel 121a in FIG. 10 is provided with one photodiode 121e, a different range of each pixel 121a is shielded by the light shielding film 121b, and by optical modulation using the light shielding film 121b, a detection signal for one pixel of a detection image having incident angle directivity can be expressed by one pixel 121a.
- the pixel 121a in FIG. 11 is provided with four photodiodes 121f for each pixel, and the light-shielding film 121b is not formed on the light-receiving surface.
- instead, the four photodiodes 121f-1 to 121f-4 are formed, and together they express a detection signal for one pixel of a detection image having incident angle directivity.
- one of the photodiodes 121f-1 to 121f-4 that does not contribute to the output functions in the same manner as a light-shielded region, and the detection signal for one pixel of the detection image having incident angle directivity is expressed. However, since the light shielding film 121b is not used when the detection signal for one pixel is expressed using the photodiodes 121f-1 to 121f-4, the detection signal is not a signal obtained by optical modulation.
- a configuration including at least one photodiode 121e or 121f for outputting a detection signal of one pixel 121a of the detection image output from the directional imaging element 121 is referred to as a one-pixel output unit. That is, in the case of the directional imaging element 121 of FIG. 10, one pixel output unit is composed of one photodiode 121e. Further, in the case of the directional imaging element 121 of FIG. 11, one pixel output unit is constituted by four photodiodes 121f-1 to 121f-4.
- the four photodiodes 121f-1 to 121f-4 are arranged in the same shape and divided into equal areas for each pixel output unit.
- the photodiodes 121f-1 to 121f-4 may have different shapes and different division positions so as to have unequal areas.
- by making the division positions of the four photodiodes 121f-1 to 121f-4 different between pixel output units so that the areas are unequal, even when the photodiode at, for example, the upper left is used similarly in a plurality of pixel output units, the incident angle directivity can differ between those pixel output units.
- that is, by making the division positions different between pixel output units, the incident angle directivity can be set more freely. Furthermore, the approaches described above may be combined.
- the incident angle directivity of each pixel in the directional imaging element 121 is generated according to the principle shown in FIG. 12, for example.
- the upper left part and the upper right part of FIG. 12 are diagrams for explaining the principle of incident angle directivity in the directional imaging element 121 of FIG. 10, and the lower left part and the lower right part of FIG. 12 are diagrams for explaining the principle of incident angle directivity in the directional imaging element 121 of FIG. 11.
- each one-pixel output unit in the upper left part and the upper right part of FIG. 12 is constituted by one photodiode 121e.
- each one-pixel output unit in the lower left part and the lower right part of FIG. 12 is composed of two photodiodes 121f.
- here, an example in which one pixel output unit is configured by two photodiodes 121f is described, but this is for convenience of explanation, and the number of photodiodes 121f constituting one pixel output unit may be other than two.
- the light shielding film 121b-11 is formed so as to shield the right half of the light receiving surface of the photodiode 121e-11 when incident light enters from the upper side toward the lower side in the figure.
- a light shielding film 121b-12 is formed so as to shield the left half of the light receiving surface of the photodiode 121e-12.
- the alternate long and short dash line in the figure indicates the center position in the horizontal direction in the figure of the light receiving surface of the photodiode 121e and the direction perpendicular to the light receiving surface.
- the incident light from the upper right direction in the figure, indicated by the arrow forming the incident angle θ1 with respect to the alternate long and short dash line, is easily received in the left-half range of the photodiode 121e-11 that is not shielded by the light shielding film 121b-11, whereas the incident light from the upper left direction in the figure, indicated by the arrow forming the incident angle θ2 with respect to the alternate long and short dash line, is difficult to receive in that left-half range.
- the pixel is thus provided with an incident angle directivity in which the light receiving sensitivity is high for incident light from the upper right in the figure and low for incident light from the upper left.
- in contrast, the incident light from the upper right direction in the figure, indicated by the arrow forming the incident angle θ11 with respect to the alternate long and short dash line, falls on the left-half range of the photodiode 121e-12 that is shielded by the light shielding film 121b-12 and is difficult to receive, whereas the incident light from the upper left direction in the figure, indicated by the arrow forming the incident angle θ12 with respect to the alternate long and short dash line, is easily received in the right-half range that is not shielded by the light shielding film 121b-12.
- the pixel is thus provided with an incident angle directivity in which the light receiving sensitivity is low for incident light from the upper right in the figure and high for incident light from the upper left.
- photodiodes 121f-1 and 121f-2 are provided on the left and right in the drawing, and by reading out only one of their detection signals, incident angle directivity can be provided without using the light shielding film 121b.
- when the pixel 121a configured with the two photodiodes 121f-1 and 121f-2 is set as one pixel output unit, by reading out only the detection signal of the photodiode 121f-1 provided on the left side in the figure, the same incident angle directivity as the configuration in the upper left part of FIG. 12 can be provided. That is, incident light from the upper right direction in the figure, indicated by the arrow forming the incident angle θ21 with respect to the alternate long and short dash line, enters the photodiode 121f-1 and is received, and its detection signal is read out.
- incident light from the upper left direction in the figure, indicated by the arrow forming the incident angle θ22 with respect to the alternate long and short dash line, enters the photodiode 121f-2, but its signal does not contribute to the detection signal of the pixel output unit.
- similarly, when the pixel 121a configured with the two photodiodes 121f-11 and 121f-12 is set as one pixel output unit, by making only the detection signal of the photodiode 121f-12 provided on the right side in the figure contribute to the pixel output unit, the same incident angle directivity as the configuration in the upper right part of FIG. 12 can be provided. That is, incident light from the upper right direction in the figure, indicated by the arrow forming the incident angle θ31 with respect to the alternate long and short dash line, enters the photodiode 121f-11 but does not contribute to the detection signal of the pixel output unit.
- incident light from the upper left direction in the figure, indicated by the arrow forming the incident angle θ32 with respect to the alternate long and short dash line, enters the photodiode 121f-12 and contributes to the detection signal of the pixel output unit.
- in FIG. 12, an example has been described in which the vertical alternate long and short dash line is at the center position in the horizontal direction of the light receiving surface of the photodiode 121e; this is for convenience of explanation, and the line may be at another position. Different incident angle directivities can be generated by differences in the horizontal position of the light shielding film 121b with respect to the vertical alternate long and short dash line.
- the incident angle directivity of each pixel in the directional imaging element 121 is set as shown in FIG. 13 by using the on-chip lens 121c in addition to the light shielding film 121b described above. That is, in the middle left part of FIG. 13, an on-chip lens 121c-11 that condenses incident light from the upper incident direction in the figure, a color filter 121d-11 that transmits light of a predetermined wavelength, and a photodiode 121e-11 that generates a pixel signal by photoelectric conversion are stacked in this order. In the middle right part of FIG. 13, an on-chip lens 121c-12, a color filter 121d-12, and a photodiode 121e-12 are arranged in this order from the upper incident direction in the figure.
- when it is not necessary to distinguish between them, they are simply referred to as the on-chip lens 121c, the color filter 121d, and the photodiode 121e.
- further, light shielding films 121b-11 and 121b-12 that shield a part of the region receiving the incident light are provided.
- as the incident angle θ, which is the angle formed by the incident light with respect to the alternate long and short dash line at the center position of the photodiode 121e and the on-chip lens 121c, increases (as the incident angle θ increases in the positive direction), the light is condensed in the range where the light shielding film 121b-11 is not provided, so the detection signal level of the photodiode 121e-11 increases.
- conversely, the smaller the incident angle θ (the larger the incident angle θ in the negative direction, that is, the more the incident light tilts to the left in the figure), the more the light is condensed in the range where the light shielding film 121b-11 is provided, so the detection signal level of the photodiode 121e-11 decreases.
- the incident angle θ here is 0 degrees when the direction of the incident light coincides with the alternate long and short dash line; on the incident angle θ21 side on the left of the middle stage of FIG. 13, where incident light enters from the upper right in the figure, the incident angle θ is a positive value, and on the right side of the middle stage of FIG. 13 the incident angle θ is a negative value. Therefore, in FIG. 13, the incident light entering the on-chip lens 121c from the upper right has a larger incident angle than the incident light entering from the upper left. That is, in FIG. 13, the incident angle θ increases as the traveling direction of the incident light tilts to the right (increases in the positive direction) and decreases as it tilts to the left (increases in the negative direction).
- further, as indicated by the dotted-line waveform in the upper part of FIG. 13, the detection signal level of the photodiode 121e-12 changes according to the incident angle θ of the incident light.
- that is, the larger the incident angle θ, which is the angle formed by the incident light with respect to the alternate long and short dash line at the center position of the photodiode 121e and the on-chip lens 121c and perpendicular to each of them (the larger the incident angle θ in the positive direction), the more the light is condensed in the range where the light shielding film 121b-12 is provided, so the detection signal level of the photodiode 121e-12 decreases. Conversely, the smaller the incident angle θ (the larger the incident angle θ in the negative direction), the more light enters the range where the light shielding film 121b-12 is not provided, so the detection signal level of the photodiode 121e-12 increases.
- in the upper part of FIG. 13, the horizontal axis represents the incident angle θ, and the vertical axis represents the detection signal level of the photodiode 121e.
- since the waveforms indicated by the solid line and the dotted line showing the detection signal level corresponding to the incident angle θ in the upper part of FIG. 13 can be changed according to the range of the light shielding film 121b, different incident angle directivities can be provided for each pixel output unit.
- the solid waveform in the upper part of FIG. 13 corresponds to the solid arrows indicating that incident light in the middle left part and the lower left part of FIG. 13 is condensed while the incident angle θ is changed, and the dotted waveform in the upper part of FIG. 13 corresponds to the dotted arrows indicating that incident light in the middle right part and the lower right part of FIG. 13 is condensed while the incident angle θ is changed.
- the incident angle directivity is a characteristic (detection sensitivity characteristic) of the detection signal level of each pixel output unit corresponding to the incident angle ⁇ .
- in the middle example of FIG. 13, this can also be said to be a characteristic of the level of light shielding corresponding to the incident angle θ. That is, the light shielding film 121b shields incident light in a specific direction at a high level, but cannot sufficiently shield incident light from directions other than the specific direction. This change in the level of light that can be shielded produces detection signal levels that differ according to the incident angle θ, as shown in the upper part of FIG. 13.
- therefore, each pixel output unit having a different incident angle directivity means, in other words, that each pixel output unit has a different light shielding direction.
- two photodiodes 121f-1 and 121f-2 are provided for one on-chip lens 121c-11 (one pixel output unit is composed of the two photodiodes); by using only the detection signal of the photodiode 121f-1 provided on the left side in the figure, the same detection signal level as when the right side of the photodiode 121e-11 in the middle left part of FIG. 13 is shielded can be obtained.
- that is, as the incident angle θ, which is the angle formed by the incident light with respect to the alternate long and short dash line at the center position of the on-chip lens 121c and perpendicular to it, increases (as the incident angle θ increases in the positive direction), the light is condensed in the range of the photodiode 121f-1 from which the detection signal is read, so the detection signal level increases.
- conversely, the smaller the incident angle θ (the larger the incident angle θ in the negative direction), the more the light is condensed in the range of the photodiode 121f-2 from which the detection value is not read, so the detection signal level decreases.
- similarly, by using only the detection signal of the photodiode 121f-12, a detection signal of the pixel output unit with the same detection signal level as when the left side of the photodiode 121e-12 in the middle right part of FIG. 13 is shielded can be obtained.
- that is, as the incident angle θ, which is the angle formed by the incident light with respect to the alternate long and short dash line at the center position of the on-chip lens 121c and perpendicular to it, increases (as the incident angle θ increases in the positive direction), the light is condensed in the range of the photodiode 121f-11 whose detection signal does not contribute to the detection signal of the pixel output unit, so the detection signal level of the detection signal of the pixel output unit decreases.
- conversely, the smaller the incident angle θ (the larger the incident angle θ in the negative direction), the more the light is condensed in the range of the photodiode 121f-12 whose detection signal contributes to the detection signal of the pixel output unit, so the detection signal level of the detection signal of the pixel output unit increases.
- in the middle stage of FIG. 13, each of the photodiodes 121e-11 and 121e-12 constitutes one pixel output unit.
- in the lower stage of FIG. 13, the two photodiodes 121f-1 and 121f-2 together constitute one pixel output unit, and the two photodiodes 121f-11 and 121f-12 together constitute one pixel output unit. Therefore, for example, in the lower stage of FIG. 13, a single photodiode 121f does not by itself constitute one pixel output unit.
- when one pixel output unit is configured from a plurality of photodiodes 121f, the output pixel value of the pixel output unit can be regarded as being modulated according to the incident angle. Therefore, the characteristics of the output pixel value (incident angle directivity) can be made different for each pixel output unit, and the incident angle directivity is set in units of one pixel output. Further, when one pixel output unit is configured from a plurality of photodiodes 121f, one on-chip lens 121c for one pixel output unit is an essential configuration for generating the incident angle directivity in the one pixel output unit.
- when each of the photodiodes 121e-11 and 121e-12 individually constitutes one pixel output unit, the output pixel value is modulated by modulating the incident light to the photodiode 121e-11 or 121e-12 according to the incident angle. Therefore, the characteristics of the output pixel value (incident angle directivity) can be made different, and the incident angle directivity in units of one pixel output is set. Further, in this case, the incident angle directivity is set independently at the time of manufacture by the light shielding film 121b provided for each pixel output unit.
- when one pixel output unit is configured from a plurality of photodiodes 121f, the number of photodiodes 121f for setting the incident angle directivity for each pixel output unit (the number of divisions of the photodiodes 121f constituting one pixel output unit) and their division positions are set independently at the time of manufacture for each pixel output unit; among these, which photodiodes 121f are used to set the incident angle directivity can be switched at the time of imaging.
- for example, the setting range of the light shielding film 121b is the range from the left end to the position A in the horizontal direction of the pixel 121a and the range from the upper end to the position B in the vertical direction.
- the horizontal weight Wx is set to 1 when the incident angle θx < θa − α, to (−(θx − θa)/2α + 1/2) when θa − α ≤ θx ≤ θa + α, and to 0 when the incident angle θx > θa + α.
- note that the weight Wx takes the values 0, 0.5, and 1 when ideal conditions are satisfied.
- by using such weights, the incident angle directivity of each pixel 121a, that is, a coefficient corresponding to its light receiving sensitivity characteristic, can be obtained.
- the gradient (1/(2α)) indicating the change of the weight in the range where the horizontal weight Wx and the vertical weight Wy are around 0.5 can be set by using on-chip lenses 121c having different focal lengths.
- the incident angle directivity of the pixel 121a can be set to different values by making the range in which the photodiode 121e is shielded by the light shielding film 121b and the curvature of the on-chip lens 121c different.
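The piecewise-linear weight described above can be sketched as follows. This is an illustrative model only: `theta_a` is assumed to be the incident angle corresponding to the light-shielding boundary, and `alpha` sets the transition width, so the slope around the 0.5 point is 1/(2α).

```python
def weight(theta, theta_a, alpha):
    """Piecewise-linear weight: 1 below theta_a - alpha, 0 above
    theta_a + alpha, and a linear ramp with slope -1/(2*alpha) between."""
    if theta < theta_a - alpha:
        return 1.0
    if theta > theta_a + alpha:
        return 0.0
    return -(theta - theta_a) / (2.0 * alpha) + 0.5

# The weight passes through 1, 0.5 and 0 at the near edge, center and
# far edge of the transition range, matching the formula in the text.
print(weight(-10, 0, 5))  # 1.0
print(weight(0, 0, 5))    # 0.5
print(weight(10, 0, 5))   # 0.0
```

A smaller `alpha` (for example via an on-chip lens with a different focal length) gives a steeper transition, i.e. a sharper incident angle directivity.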
- the curvature of the on-chip lens may be the same for all pixel output units in the directional imaging element 121, or may be different for some pixel output units.
- the directional imaging element 121 has a configuration that does not require the optical block 152 including an imaging lens, but an on-chip lens 121c may be provided.
- when one pixel output unit is configured from a plurality of photodiodes 121f, however, the on-chip lens 121c is an essential configuration.
- the on-chip lens 121c and the imaging lens have different physical actions.
- hereinafter, the optical block 152 is also referred to as the imaging lens 152.
- the pixel position P111 at which an image is formed on the imaging element 151 is specified by the principal ray L101 passing through the point light source P101 and the center 152a of the imaging lens 152.
- the imaging lens 152 is designed so that light incident from the point light source P101 at incident angles different from that of the principal ray L101 due to diffusion is condensed at the pixel position P111 on the imaging element 151.
- the pixel position P112 adjacent to the pixel position P111 is specified by a principal ray L102 passing through a point light source P102 different from the point light source P101 and the center position 152a of the imaging lens 152.
- that is, the imaging lens 152 forms images of the different point light sources P101 and P102, which have different principal rays, at the different pixel positions P111 and P112 on the imaging element 151.
- further, incident light from infinity is parallel light; for example, when parallel rays L121 and L122 parallel to the principal ray L101 are assumed for the point light source P101, the imaging lens 152 likewise forms images of point light sources having different principal-ray incident angles at different pixel positions on the imaging element 151.
- the imaging lens 152 has a condensing function for causing diffused light having different chief ray incident angles to enter a plurality of adjacent pixel output units.
- in contrast, the light passing through the on-chip lens 121c is incident only on the light receiving surface of the photodiode 121e or 121f constituting the corresponding one pixel output unit.
- the on-chip lens 121c is provided for each pixel output unit, and condenses subject light incident on itself on only the corresponding pixel output unit. That is, the on-chip lens 121c does not have a condensing function for causing the diffused light emitted from the virtual point light source to enter a plurality of adjacent pixel output units.
- for example, the directional imaging element 121 is composed of a total of 9 pixels, 3 pixels × 3 pixels, a different light shielding film 121b is provided for each pixel, and the pixels are referred to as pixels P11 to P33.
- the light incident on the directional imaging element 121 is assumed to be point light sources G1 to G3 located at infinity, as shown in FIG.
- since the point light sources are at infinity, the incident light from each of the point light sources G1 to G3 is parallel light, and light emitted from the same point light source can be considered to be incident at the same incident angle at any pixel of the directional imaging element 121. Therefore, for example, the incident angle of the light from the point light source G2 is 0 deg at any pixel of the directional imaging element 121.
- for the three pixels P11, P21, and P31, the three pixels P12, P22, and P32, and the three pixels P13, P23, and P33, the vertical height of the light shielding film 121b on the paper surface is unified.
- in each of the three pixels P11, P21, and P31, the area of height A1 from the upper end of each pixel is shielded by the light shielding film 121b, and the remaining area of height A2 is not shielded.
- the area of height A11 from the upper end of each pixel is not shielded, and the remaining area of height A12 is shielded by the light shielding film 121b.
- the area of height A21 from the upper end of each pixel is not shielded, and the remaining area of height A22 is shielded by the light shielding film 121b.
- thus, the light receiving sensitivity characteristics in the vertical direction are unified for the three pixels P11, P21, and P31, for the three pixels P12, P22, and P32, and for the three pixels P13, P23, and P33, respectively.
- when the incident angle θy is smaller than −6 deg, the weight Wy is 0; when the incident angle θy is −6 deg to −4 deg, the weight Wy is ((θy + 6)/2); and when the incident angle θy is larger than −4 deg, the weight Wy is 1.
- when the incident angle θy is smaller than −1 deg, the weight Wy is 1; when the incident angle θy is −1 deg to +1 deg, the weight Wy is (−(θy − 1)/2); and when the incident angle θy is larger than +1 deg, the weight Wy is 0.
- when the incident angle θy is smaller than +3 deg, the weight Wy is 1; when the incident angle θy is +3 deg to +5 deg, the weight Wy is (−(θy − 5)/2); and when the incident angle θy is larger than +5 deg, the weight Wy is 0.
- similarly, the horizontal width of the light shielding film 121b is unified for the three pixels P11 to P13, the three pixels P21 to P23, and the three pixels P31 to P33.
- the area of width B1 from the left end of each pixel is shielded by the light shielding film 121b, and the remaining area of width B2 is not shielded.
- the area of width B11 from the left end of each pixel is not shielded, and the remaining area of width B12 is shielded by the light shielding film 121b.
- the area of width B21 from the left end of each pixel is not shielded, and the remaining area of width B22 is shielded by the light shielding film 121b.
- thus, the light receiving sensitivity characteristics in the horizontal direction are unified for the three pixels P11 to P13, the three pixels P21 to P23, and the three pixels P31 to P33, respectively.
- when the incident angle θx is smaller than 4 deg, the weight Wx is 1; when the incident angle θx is 4 deg to 6 deg, the weight Wx is (−(θx − 6)/2); and when the incident angle θx is larger than 6 deg, the weight Wx is 0.
- when the incident angle θx is smaller than −1 deg, the weight Wx is 0; when the incident angle θx is −1 deg to +1 deg, the weight Wx is ((θx + 1)/2); and when the incident angle θx is larger than +1 deg, the weight Wx is 1.
- when the incident angle θx is smaller than −5 deg, the weight Wx is 0; when the incident angle θx is −5 deg to −3 deg, the weight Wx is ((θx + 5)/2); and when the incident angle θx is larger than −3 deg, the weight Wx is 1.
- where Wx(θxj) is the horizontal weight with respect to the incident angle θx of the j-th pixel in the horizontal direction of the directional imaging element 121, Wy(θyi) is the vertical weight with respect to the incident angle θy of the i-th pixel in the vertical direction, and Oij is the light intensity of the point light source at the representative point of each region on the subject surface 31.
- as described above, the weights Wx and Wy take values as shown in FIG. 23 for each pixel Pij.
- the weight Wx is 1 when the horizontal incident angle θx is +5 deg, 0.5 when the horizontal incident angle θx is 0 deg, and 0 when the horizontal incident angle θx is −5 deg.
- the weight Wx is 1 when the horizontal incident angle θx is +5 deg, 1 when the horizontal incident angle θx is 0 deg, and 0 when the horizontal incident angle θx is −5 deg.
- the horizontal weight Wx is set to 1 and the vertical weight Wy is set to 1, so that Wx × Wy is 1.
- in FIG. 23, the selected weights Wx and Wy are framed, and for each pixel Pij the product of the selected horizontal weight Wx and the selected vertical weight Wy is displayed.
- the horizontal weight Wx is 0 and the vertical weight Wy is 1, so Wx × Wy is 0.
- the horizontal weight Wx is 0 and the vertical weight Wy is 0, so Wx × Wy is 0.
- the horizontal weight Wx is 0 and the vertical weight Wy is 0, so Wx × Wy is 0.
- the horizontal weight Wx is 0 and the vertical weight Wy is 1, so Wx × Wy is 0.
- the horizontal weight Wx is 0 and the vertical weight Wy is 0, so Wx × Wy is 0.
- similarly, Wx × Wy is 0.
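The weight products for this 3 × 3 example can be tabulated directly from the piecewise definitions given above. The sketch below assumes, for illustration, that each pixel row shares one Wy characteristic and each pixel column one Wx characteristic, as the text describes; the evaluation angle (0 deg) is chosen arbitrarily.

```python
def ramp(theta, lo, hi, rising):
    """Piecewise-linear weight: flat outside [lo, hi], linear ramp inside.
    rising=True ramps 0 -> 1, rising=False ramps 1 -> 0."""
    if theta <= lo:
        return 0.0 if rising else 1.0
    if theta >= hi:
        return 1.0 if rising else 0.0
    t = (theta - lo) / (hi - lo)
    return t if rising else 1.0 - t

# Vertical weights Wy for the three pixel rows (from the text's definitions).
Wy = [lambda t: ramp(t, -6, -4, True),   # 0 below -6 deg, 1 above -4 deg
      lambda t: ramp(t, -1, 1, False),   # 1 below -1 deg, 0 above +1 deg
      lambda t: ramp(t, 3, 5, False)]    # 1 below +3 deg, 0 above +5 deg
# Horizontal weights Wx for the three pixel columns (from the text).
Wx = [lambda t: ramp(t, 4, 6, False),    # 1 below 4 deg, 0 above 6 deg
      lambda t: ramp(t, -1, 1, True),    # 0 below -1 deg, 1 above +1 deg
      lambda t: ramp(t, -5, -3, True)]   # 0 below -5 deg, 1 above -3 deg

# Coefficients Wx * Wy for a point light source at theta_x = theta_y = 0 deg:
coef = [[Wx[j](0.0) * Wy[i](0.0) for j in range(3)] for i in range(3)]
print(coef)  # [[1.0, 0.5, 1.0], [0.5, 0.25, 0.5], [1.0, 0.5, 1.0]]
```

Each entry of `coef` plays the role of one coefficient in the simultaneous equations relating the point light source intensities to the detection signal levels.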
- the detection signal level of each pixel Pij of the directional imaging element 121 is calculated as the product sum of the light intensity of the point light source at the representative point of each region Oij on the subject surface 31 and the corresponding weight.
- the value represented by each pixel Pij is the signal level of a detection signal constituting the detection image captured by the directional imaging element 121, and the detection image cannot be recognized as an image even when viewed by the user.
- the signal processing unit 122 restores the restored image corresponding to the subject surface 31 by, for example, obtaining the luminance (light intensity) Oij of each region on the subject surface 31 by solving the nine simultaneous equations described above.
- a value obtained by multiplying the horizontal weight Wx and the vertical weight Wy obtained for each pixel Pij described above is a coefficient set. More specifically, the coefficient sets correspond to the coefficients α1, β1, γ1, α2, β2, γ2, α3, β3, and γ3 of the above equations (1) to (3) for the subject surface 31. Further, the horizontal weight Wx and the vertical weight Wy differ according to the subject surface 31, and by switching the coefficient set according to the distance and angle of view of the restored image that specify the subject surface, the restored image of the desired subject surface can be restored. However, the incident angle directivity must be set so as to ensure the independence of the simultaneous equations.
- ensuring the independence of the simultaneous equations means ensuring linear independence: for example, when the coefficient set (αs, βs, γs) and the coefficient set (αt, βt, γt) are considered, the vector (αs, βs, γs) and the vector (αt, βt, γt) must not be scalar multiples of each other.
- the detection signal levels DA, DB, and DC at the corresponding positions Pa, Pb, and Pc on the directional imaging element 121 can be expressed by the same expressions as the above equations (1) to (3).
- DA = α1 × a + β1 × b + γ1 × c ...
- DB = α2 × a + β2 × b + γ2 × c ...
- DC = α3 × a + β3 × b + γ3 × c ...
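The three relations above are simultaneous linear equations in the unknown light intensities a, b, c. With illustrative, made-up coefficient values (the real coefficients come from the incident angle directivities), they can be solved directly; a small dependency-free Gaussian elimination is enough at this scale.

```python
# Illustrative sketch: recover light intensities a, b, c of the point
# light sources from detection levels DA, DB, DC. Coefficients are made up.

def solve3(M, d):
    """Solve the 3x3 system M @ x = d by Gauss-Jordan elimination."""
    A = [row[:] + [v] for row, v in zip(M, d)]  # augmented matrix
    n = 3
    for col in range(n):
        # Partial pivoting for numerical safety.
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)]

# Hypothetical coefficient sets (alpha_i, beta_i, gamma_i); the rows must
# be linearly independent for the system to be solvable.
coeffs = [[1.0, 0.5, 0.25],
          [0.25, 1.0, 0.5],
          [0.5, 0.25, 1.0]]
a, b, c = 3.0, 1.0, 2.0                       # true light intensities
DA, DB, DC = (sum(w * v for w, v in zip(row, (a, b, c))) for row in coeffs)
print(solve3(coeffs, [DA, DB, DC]))           # ~ [3.0, 1.0, 2.0]
```

With nine pixels and nine regions, the same procedure scales to the 9 × 9 system described earlier in the text.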
- in the case of the subject surface whose subject distance from the directional imaging element 121 is a distance d2 larger than the distance d1 by d, the detection signal levels DA, DB, and DC are the same, as shown in the upper center portion and the lower center portion of the figure.
- however, in this case, light beams of light intensities a′, b′, and c′ from the point light sources PA′, PB′, and PC′ on the subject surface 31′ are received by each pixel of the directional imaging element 121. Since the incident angles of the light beams of the light intensities a′, b′, and c′ on the directional imaging element 121 are different (change), different coefficient sets are required, and the detection signal levels DA, DB, and DC at the positions Pa, Pb, and Pc are expressed, for example, as in the following equations (14) to (16).
- DA = α11 × a′ + β11 × b′ + γ11 × c′ ... (14)
- DB = α12 × a′ + β12 × b′ + γ12 × c′ ... (15)
- DC = α13 × a′ + β13 × b′ + γ13 × c′ ... (16)
- here, the coefficient set group consisting of the coefficient sets α11, β11, γ11, the coefficient sets α12, β12, γ12, and the coefficient sets α13, β13, γ13 is the coefficient set group of the subject surface 31′ corresponding to the coefficient sets α1, β1, γ1, the coefficient sets α2, β2, γ2, and the coefficient sets α3, β3, γ3 of the subject surface 31.
- in the same manner as the light intensities a, b, and c of the point light sources PA, PB, and PC are obtained in the case of the subject surface 31, the light intensities a′, b′, and c′ of the light beams from the point light sources PA′, PB′, and PC′ can be obtained by solving equations (14) to (16), and as a result, a restored image of the subject on the subject surface 31′ can be obtained.
- therefore, by storing in advance a coefficient set group for each distance from the directional imaging element 121 to the subject surface, and constructing and solving the simultaneous equations while switching the coefficient set group, it is possible to capture the detection image once and then generate restored images at arbitrary distances by switching the coefficient set group according to the distance to the subject surface.
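The distance-dependent restoration just described can be sketched as a lookup of precomputed coefficient set groups keyed by subject distance, followed by solving the equations for the selected group. Everything below is hypothetical for illustration: the distances, the coefficient values, and the reduction to a 2-point-source case so Cramer's rule suffices.

```python
# Hypothetical sketch: one coefficient set group is stored per subject
# distance; switching the group re-poses the same captured detection
# signals as a different set of simultaneous equations.

coefficient_groups = {        # assumed distance -> [[alpha, beta], ...]
    1.0: [[0.9, 0.2], [0.3, 0.8]],
    2.0: [[0.7, 0.4], [0.5, 0.6]],
}

def restore(detections, distance):
    """Solve the 2x2 system for the coefficient set group of the
    selected subject distance, using Cramer's rule."""
    (a1, b1), (a2, b2) = coefficient_groups[distance]
    d1, d2 = detections
    det = a1 * b2 - a2 * b1   # nonzero iff the rows are linearly independent
    return ((d1 * b2 - d2 * b1) / det, (a1 * d2 - a2 * d1) / det)

# One captured detection image can be restored at either distance simply
# by switching the coefficient set group.
detections = (1.3, 1.9)
print(restore(detections, 1.0))   # ~ (1.0, 2.0)
print(restore(detections, 2.0))   # different intensities for the far surface
```

The linear-independence requirement on the coefficient sets, noted earlier for the simultaneous equations, appears here as the nonzero determinant.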
- It is also possible to apply machine learning such as deep learning to the detection signals of the image sensor and perform image recognition using the detection signals themselves, without first obtaining the restored image and then performing image recognition based on it.
- Further, instead of using all the pixels, a restored image may be generated using a detection image made up of the detection signals of only those pixels whose incident angle directivity is suitable for imaging the subject surface corresponding to the specified subject distance and angle of view. In this way, the restored image is obtained using the detection signals of pixels suited to imaging the subject surface corresponding to the specified subject distance and angle of view, so a restored image can be obtained with high accuracy for that subject distance and angle of view.
- Consider a pixel 121a that is shielded by the light shielding film 121b by a width d1 from each end of its four sides, and, as shown in the lower part of FIG. 34, a pixel 121a′ that is shielded by the light shielding film 121b by a width d2 (> d1) from each end of its four sides.
- The pixel 121a is used, for example, to restore the image I1 in FIG. 34 corresponding to the angle of view SQ1 that includes the entire person H101 as the subject, as shown in the upper part of FIG. The pixel 121a′ is used to restore the image I2 in FIG. 34 corresponding to the angle of view SQ2 in which the area around the face of the person H101 is zoomed in.
- This is because the pixel 121a in FIG. 34 has an incident angle range A1 of incident light with respect to the directional imaging element 121, as shown in the left part of FIG., and can therefore receive incident light corresponding to the subject width W1 on the subject surface 31 in the horizontal direction. In contrast, the pixel 121a′ in FIG. 34 has a wider light-shielded range than the pixel 121a in FIG. 34, so, as shown in the left part of FIG., its incident angle range is A2 (< A1), and it therefore receives incident light corresponding to the subject width W2 (< W1) on the subject surface 31 in the horizontal direction.
- That is, the pixel 121a in FIG. 34, with its narrow light-shielded range, is a wide-angle pixel suitable for imaging a wide range on the subject surface 31, whereas the pixel 121a′ in FIG. 34, with its wide light-shielded range, is a narrow-angle pixel suitable for imaging a narrow range on the subject surface 31. Note that the terms wide-angle pixel and narrow-angle pixel here merely compare the two pixels 121a and 121a′ of FIG. 34, and are not limiting when comparing pixels having other angles of view.
- FIG. 36 shows the relationship between the position on the subject surface 31 with respect to the center position C1 of the directional imaging element 121 and the incident angle of incident light from each position.
- In FIG. 36, the relationship between the position on the subject surface 31 and the incident angle of the incident light from each position on the subject surface is shown with respect to the horizontal direction.
- the pixels 121a and 121a 'in FIG. 34 are shown on the right side of FIG.
- For example, the pixel 121a in FIG. 34 is placed in the range ZA surrounded by the dotted line, and the pixel 121a′ in FIG. 34 is placed in the range ZB surrounded by the alternate long and short dash line.
- When the directional imaging element 121 is configured by collecting a predetermined number of each of these pixels, and an image having the angle of view SQ1 corresponding to the subject width W1 is to be restored, the image of the subject width W1 on the subject surface 31 can be restored by using the detection signals of the pixels 121a in FIG. 34 that capture the angle of view SQ1.
- In the figure, a configuration is shown in which a predetermined number of pixels 121a′ are provided on the left side and a predetermined number of pixels 121a are provided on the right side.
- the pixel 121a and the pixel 121a ′ are desirably arranged in a mixed manner at random.
- However, since the angle of view SQ2 is narrower than the angle of view SQ1, when images of the angle of view SQ2 and the angle of view SQ1 are restored with the same number of pixels, a restored image with higher image quality is obtained by restoring the image of the narrower angle of view SQ2 than by restoring the image of the angle of view SQ1. That is, when restoration is considered using the same number of pixels, a restored image with higher image quality is obtained by restoring an image with a narrower angle of view.
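The quality gain follows from simple arithmetic: with the same pixel count, the angular sampling pitch is the angle of view divided by the number of pixels, so the narrower angle of view SQ2 is sampled more finely. A sketch with hypothetical figures (the pixel count and angles below are illustrative, not from the patent):

```python
# Hypothetical figures: the same number of pixels restores two
# different angles of view.
pixels_per_side = 100        # pixels used for restoration (assumed)
fov_sq1_deg = 60.0           # wider angle of view SQ1 (assumed value)
fov_sq2_deg = 20.0           # narrower angle of view SQ2 (assumed value)

res_sq1 = fov_sq1_deg / pixels_per_side   # degrees of scene per pixel
res_sq2 = fov_sq2_deg / pixels_per_side

print(res_sq1)   # 0.6
print(res_sq2)   # 0.2  -> 3x finer sampling of the subject surface
```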
- In step S31, the directional imaging element 121 obtains a detection signal corresponding to the amount of incident light received by each pixel output unit having a different incident angle directivity, and supplies the detection signals, as a detection image, to the signal processing unit 122.
- The subject distance determination unit 129 determines the subject distance based on the operation signal from the operation unit 130 or the autofocus function, and the coefficient set selection unit 131 reads out the coefficient set group stored in association with that subject distance and supplies it to the signal processing unit 122. The coefficient set group read out here is, for example, a coefficient set group composed of a plurality of coefficients corresponding to the coefficients α1 to α3, β1 to β3, and γ1 to γ3 in the above formulas (1) to (3), or α11 to α13, β11 to β13, and γ11 to γ13 in the above formulas (14) to (16).
- In step S33, the signal processing unit 122 calculates the pixel values of the restored image using the detection signal of each pixel output unit in the detection image and the coefficient set group selected by the coefficient set selection unit 131. More specifically, the signal processing unit 122 constructs, for example, the simultaneous equations described with reference to the above formulas (1) to (3) or formulas (14) to (16), solves the simultaneous equations to obtain the restored image, and outputs it to the demosaic processing unit 123.
- Thereafter, the restored image is demosaiced by the demosaic processing unit 123, γ-corrected by the γ correction unit 124, white-balance adjusted by the white balance adjustment unit 125, and converted into a predetermined compression format by the image output unit 126. The restored image converted into the predetermined compression format is then, as necessary, stored in the storage unit 127, displayed on the display unit 128, and/or output to the subject distance determination unit 129.
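Steps S31 to S33 can be sketched as follows. All names here are hypothetical, and the coefficient set groups are random stand-ins (real ones would come from calibration of the directional imaging element); the point is the structure: capture once, select a coefficient set group by subject distance, solve:

```python
import numpy as np

# Hypothetical: coefficient set groups stored per subject distance,
# one (N*N) x (N*N) matrix per distance (N*N = 4 here for brevity).
rng = np.random.default_rng(0)
coefficient_set_groups = {
    1.0: rng.uniform(0.1, 1.0, (4, 4)) + 2.0 * np.eye(4),  # 1.0 m
    2.0: rng.uniform(0.1, 1.0, (4, 4)) + 2.0 * np.eye(4),  # 2.0 m
}

def restore(detection_image, subject_distance):
    # S32 analogue: select the coefficient set group for the distance.
    A = coefficient_set_groups[subject_distance]
    # S33 analogue: solve the simultaneous equations for the restored image.
    return np.linalg.solve(A, detection_image)

# S31 analogue: the directional imaging element outputs one detection
# signal per pixel output unit (synthesized here from a known scene).
x_scene = np.array([0.2, 0.8, 0.5, 1.0])
detection_image = coefficient_set_groups[2.0] @ x_scene

restored = restore(detection_image, subject_distance=2.0)
print(np.allclose(restored, x_scene))   # True
```

Switching `subject_distance` reuses the same captured `detection_image`, which is why refocusing after capture needs no new exposure.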
- With the above processing, since each pixel of the directional imaging element 121 has incident angle directivity as an essential component, an optical element such as an imaging lens or a diffraction grating, or a pinhole, is not required. This increases the degree of freedom in designing the apparatus; and since an optical element configured separately from the imaging element and assumed to be mounted together with it is not required, the apparatus can be made smaller in the incident direction of incident light and the manufacturing cost can be reduced. A lens corresponding to an imaging lens for forming an optical image, such as a focus lens, also becomes unnecessary; however, a zoom lens that changes the magnification may still be provided.
- Further, since the restored image can be restored simply by switching the coefficient set group, it becomes unnecessary to repeatedly perform imaging while changing the focal length, that is, the subject distance and the angle of view. After the detection image has been captured once, a restored image may be generated using the coefficient set group for any desired subject distance and angle of view; by doing so, it is possible to generate a restored image of a subject surface at an arbitrary subject distance and an arbitrary angle of view during reproduction.
- Furthermore, since a restored image can be generated using a detection image captured by the directional imaging element 121 having incident angle directivity in pixel units, the number of pixels can be increased, and an image with high resolution and high angular resolution can be captured.
- Since the imaging device 101 of the present disclosure includes the directional imaging element 121 and does not require, for example, an optical filter made of a diffraction grating, there is no risk that the optical filter will be distorted by heat in a high-temperature use environment; an imaging device with high environmental resistance can therefore be realized. Further, since the imaging apparatus 101 of the present disclosure does not require an optical element such as a diffraction grating or an imaging lens, the degree of freedom in designing a configuration having an imaging function can be improved.
- In the following, the light shielding film 121b that shields the entire pixel 121a in the vertical direction and shields the pixel 121a with a predetermined width in the horizontal direction is referred to as a horizontal band type light shielding film 121b, and the light shielding film 121b that shields the entire pixel 121a in the horizontal direction and shields the pixel 121a at a predetermined height in the vertical direction is referred to as a vertical band type light shielding film 121b.
- Vertical band type and horizontal band type light shielding films 121b may be combined so that, for example, an L-shaped light shielding film 121b is provided for each pixel of a Bayer array. In FIG. 38, the black range represents the light shielding film 121b; the same applies to the subsequent drawings unless otherwise specified.
- Each pixel thereby has an incident angle directivity as shown in the right part of FIG. 38. That is, the right part of FIG. 38 shows the light receiving sensitivity of each pixel; the horizontal axis represents the incident angle θx of the incident light in the horizontal direction (x direction), and the vertical axis represents the incident angle θy in the vertical direction (y direction).
- the light receiving sensitivity within the range C4 is higher than outside the range C4
- the light receiving sensitivity within the range C3 is higher than outside the range C3
- the light receiving sensitivity within the range C2 is higher than outside the range C2.
- the light receiving sensitivity within the range C1 is higher than outside the range C1.
- That is, the detection signal level of incident light satisfying the condition that the incident angle θx in the horizontal direction (x direction) and the incident angle θy in the vertical direction (y direction) fall within the range C1 is the highest, and the detection signal level decreases in the order of the conditions for the range C2, the range C3, the range C4, and the range outside the range C4. Note that the light receiving sensitivity shown in the right part of FIG. 38 is determined by the range shielded by the light shielding film 121b in each pixel 121a, regardless of the Bayer array.
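Since the sensitivity is determined by the shielded range of each pixel, one simple way to picture the nested ranges C1 to C4 (an illustrative assumption, not the patent's stated model) is as the product of a horizontal weight in θx and a vertical weight in θy, each high inside the pixel's unshielded angular range and low outside:

```python
import numpy as np

def directivity(theta_x, theta_y, range_x, range_y):
    """Toy incident-angle directivity: weight 1.0 inside the pixel's
    unshielded angular ranges, 0.2 outside (both values assumed)."""
    wx = np.where(np.abs(theta_x) <= range_x, 1.0, 0.2)
    wy = np.where(np.abs(theta_y) <= range_y, 1.0, 0.2)
    return wx * wy

# A pixel responds strongly only when BOTH theta_x and theta_y fall in
# its unshielded range (cf. the highest level inside range C1).
print(directivity(5.0, 5.0, range_x=10.0, range_y=10.0))   # 1.0
print(directivity(25.0, 5.0, range_x=10.0, range_y=10.0))  # 0.2
```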
- Note that, as in this example, light shielding films 121b that combine a vertical band type and a horizontal band type arranged point-symmetrically are hereinafter collectively referred to as L-shaped light shielding films 121b.
- In the above, the directional imaging element 121 of the first modification described with reference to FIG. 38 was described as setting the incident angle directivity for each pixel output unit by using the light shielding film 121b on the single photodiode 121e. However, like the directional imaging element 121 described with reference to FIG. 11, the directional imaging element 121 of the first modification may be configured to set the incident angle directivity by changing the number of divisions and the division positions of the plurality of photodiodes 121f constituting the pixel output unit, so that the range that does not contribute to the output functions in the same manner as a light-shielded region.
- the light shielding film 121b may be provided so as to have incident angle directivity.
- That is, the horizontal width of the light shielding film 121b is changed to the widths dx1, dx2, ..., dxn across the horizontal pixel arrangement, with the relationship dx1 < dx2 < ... < dxn. Similarly, the vertical height of the light shielding film 121b is changed to the heights dy1, dy2, ..., dym across the vertical pixel arrangement, with the relationship dy1 < dy2 < ... < dym. The interval between successive changes in the horizontal width and the vertical height of the light shielding film 121b depends on the subject resolution (angular resolution) to be restored.
- In other words, each pixel 121a in the directional imaging element 121′ of FIG. 39 can be said to be given an incident angle directivity in which the light-shielded range changes according to the pixel position in the directional imaging element 121′ in the horizontal and vertical directions. The light-shielded range of each pixel 121a in FIG. 39 is determined according to the rules described using the pixel 121a shown in the left part of FIG. 40.
- The right part of FIG. 40 shows the same configuration of the directional imaging element 121′ as FIG. 39, and the left part of FIG. 40 shows the configuration of the pixel 121a of the directional imaging element 121′ in the right part of FIG. 40 (same as FIG. 39). As shown in the left part of FIG. 40, the pixel is shielded by the light shielding film 121b by the width dx1 inward from the left and right edges of the pixel 121a, and by the height dy1 inward from the upper and lower edges. In FIG. 40, the light shielding film 121b is the range shown in black. The range shielded by the light shielding film 121b in this way is hereinafter referred to as the main light-shielded part Z101 of the pixel 121a (the black part in the left part of FIG. 40), and the remaining square-shaped range is referred to as the range Z102.
- In the pixel arrangement of the directional imaging element 121′ in FIG. 39, the pixel 121a-1 at the left end and the upper end has, as shown in the right part of FIG. 40 (same as FIG. 39), a rectangular opening Z111 arranged so that its left side is at the width dx1 from the left side of the pixel 121a and its upper side is at the distance dy1 from the upper side of the pixel 121a; the range other than the rectangular opening Z111 is shielded by the light shielding film 121b. Similarly, the pixel 121a-2 to the right of the pixel 121a-1 has a rectangular opening Z111 whose left side is at the width dx2 from the left side of the pixel 121a and whose upper side is at the distance dy1 from the upper side of the pixel 121a, and the light shielding film 121b shields the range other than the rectangular opening Z111.
- The dotted-line rectangular portion at the upper right of the range Z102 in FIG. 40 shows the state when the rectangular opening Z111 is arranged so that its left side is at the width dxn from the left side of the pixel 121a and its upper side is at the distance dy1 from the upper side of the pixel 121a. The intervals between the widths dx1, dx2, ..., dxn are values obtained by subtracting the width of the rectangular opening Z111 from the horizontal width of the range Z102 and dividing the result by the number of pixels n in the horizontal direction; that is, the interval of change in the horizontal direction is determined by this division by n. The horizontal position of the rectangular opening Z111 within the pixel 121a is the same among the pixels 121a at the same horizontal position in the directional imaging element 121′ (the pixels 121a in the same column).
- Similarly, the pixel 121a-3 adjacent immediately below the pixel 121a-1 has a rectangular opening Z111 whose left side is at the width dx1 from the left side of the pixel 121a and whose upper side is at the height dy2 from the upper side of the pixel 121a, and the light shielding film 121b shields the range other than the rectangular opening Z111. Similarly, as the arrangement proceeds downward in the figure, the upper side of the rectangular opening Z111 moves from the upper side of the pixel 121a to the heights dy1, dy2, ..., dym.
- The dotted-line rectangular portion at the lower left of the range Z102 in FIG. 40 shows the state when the rectangular opening Z111 is arranged so that its left side is at the width dx1 from the left side of the pixel 121a and its upper side is at the distance dym from the upper side of the pixel 121a. The intervals between the heights dy1, dy2, ..., dym are values obtained by subtracting the height of the rectangular opening Z111 from the vertical height of the range Z102 and dividing the result by the number of pixels m in the vertical direction; that is, the interval of change in the vertical direction is determined by this division by m. The vertical position of the rectangular opening Z111 within the pixel 121a is the same among the pixels 121a at the same vertical position in the directional imaging element 121′ (the pixels 121a in the same row).
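The placement rule above can be sketched numerically: the horizontal offsets dx1 ... dxn step by (width of range Z102 − width of opening Z111) / n, and the vertical offsets dy1 ... dym by the analogous vertical quotient. All dimensions below are hypothetical, and taking dx1 as one full step is an assumption for illustration:

```python
# Hypothetical dimensions (arbitrary units) for the placement rule.
z102_w, z102_h = 8.0, 8.0    # width / height of the range Z102
z111_w, z111_h = 2.0, 2.0    # width / height of the rectangular opening Z111
n, m = 6, 6                  # number of pixels horizontally / vertically

step_x = (z102_w - z111_w) / n   # horizontal change interval
step_y = (z102_h - z111_h) / m   # vertical change interval

# dx_i is shared by every pixel in column i, dy_j by every pixel in
# row j: the opening's left side sits at dx_i, its upper side at dy_j.
dx = [step_x * i for i in range(1, n + 1)]   # dx1 ... dxn
dy = [step_y * j for j in range(1, m + 1)]   # dy1 ... dym
print(dx)   # [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
```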
- Furthermore, the angle of view can be changed by changing the main light-shielded part Z101 and the rectangular opening Z111 of each pixel 121a constituting the directional imaging element 121′ shown in FIG. 40 (FIG. 39). The right part of FIG. 41 shows the configuration of the directional imaging element 121′ when the angle of view is made wider than that of the directional imaging element 121′ of FIG. 40 (FIG. 39), and the left part of FIG. 41 shows the configuration of the pixel 121a of the directional imaging element 121′ in the right part of FIG. 41.
- That is, as shown in the left part of FIG. 41, a main light-shielded part Z151 (the black part in the left part of FIG. 41) is set in the pixel 121a, and a rectangular opening Z161 having a larger opening area than the rectangular opening Z111 is set in the remaining range Z152. More specifically, the pixel is shielded by the light shielding film 121b by the width dx1′ (< dx1) inward from the left and right edges of the pixel 121a and by the height dy1′ (< dy1) inward from the upper and lower edges, thereby forming the rectangular opening Z161.
- In this case, the pixel 121a-1 at the left end and the upper end has a rectangular opening Z161 arranged so that its left side is at the width dx1′ from the left side of the pixel 121a and its upper side is at the height dy1′ from the upper side of the pixel 121a, and the range other than the rectangular opening Z161 is shielded by the light shielding film 121b. Similarly, the pixel 121a-2 to the right of the pixel 121a-1 has a rectangular opening Z161 whose left side is at the width dx2′ from the left side of the pixel 121a and whose upper side is at the height dy1′ from the upper side of the pixel 121a, and the range other than the rectangular opening Z161 is shielded by the light shielding film 121b.
- Similarly, as the arrangement proceeds to the right in the figure, the left side of the rectangular opening Z161 moves to the widths dx1′, dx2′, ..., dxn′ from the left side of the pixel 121a. The intervals between the widths dx1′, dx2′, ..., dxn′ are values obtained by subtracting the horizontal width of the rectangular opening Z161 from the horizontal width of the range Z152 and dividing the result by the number of pixels n in the horizontal direction; that is, the interval of change in the horizontal direction is determined by this division by n. Accordingly, the intervals of change of the widths dx1′, dx2′, ..., dxn′ are larger than the intervals of change of the widths dx1, dx2, ..., dxn. The horizontal position of the rectangular opening Z161 within the pixel 121a in the directional imaging element 121′ of FIG. 41 is the same among the pixels 121a at the same horizontal position in the directional imaging element 121′ (the pixels 121a in the same column).
- Similarly, the pixel 121a-3 adjacent immediately below the pixel 121a-1 has a rectangular opening Z161 whose left side is at the width dx1′ from the left side of the pixel 121a and whose upper side is at the height dy2′ from the upper side of the pixel 121a, and the range other than the rectangular opening Z161 is shielded by the light shielding film 121b. Similarly, as the arrangement proceeds downward in the figure, the upper side of the rectangular opening Z161 moves to the heights dy1′, dy2′, ..., dym′ from the upper side of the pixel 121a. The intervals of change of the heights dy1′, dy2′, ..., dym′ are values obtained by subtracting the height of the rectangular opening Z161 from the vertical height of the range Z152 and dividing the result by the number of pixels m in the vertical direction; that is, the interval of change in the vertical direction is determined by this division by m. Accordingly, the intervals of change of the heights dy1′, dy2′, ..., dym′ are larger than the intervals of change of the heights dy1, dy2, ..., dym. The vertical position of the rectangular opening Z161 within the pixel 121a in the directional imaging element 121′ of FIG. 41 is the same among the pixels 121a at the same vertical position in the directional imaging element 121′ (the pixels 121a in the same row).
- In this way, by changing the combination of the light-shielded range of the main light-shielded part and the opening range of the rectangular opening, a directional imaging element 121′ composed of pixels 121a having various angles of view (that is, having various incident angle directivities) can be realized. Furthermore, the directional imaging element 121 may be realized by combining not only pixels 121a having the same angle of view but also pixels 121a having various angles of view.
- For example, as shown in FIG. 42, each unit U consisting of four pixels is composed of a wide-angle pixel 121a-W, a medium-angle pixel 121a-M, a narrow-angle pixel 121a-N, and an extremely narrow-angle pixel 121a-AN. In this case, by restoring the image of the angle of view to be restored using the detection image obtained from the pixels suitable for imaging that angle of view, appropriate restored images corresponding to the four types of angle of view can be obtained.
- In the above, the directional imaging element 121′ of the second modification described with reference to FIGS. 39 to 42 was described as setting the incident angle directivity for each pixel output unit by using the light shielding film 121b on the single photodiode 121e. However, like the directional imaging element 121 described with reference to FIG. 11, the directional imaging element 121′ of the second modification may be configured to set the incident angle directivity by changing the number of divisions and the division positions of the plurality of photodiodes 121f constituting the pixel output unit, so that the range that does not contribute to the output functions in the same manner as a light-shielded region.
- As shown in FIG. 43, an L-shaped light shielding film 121b combining a vertical band type and a horizontal band type may be configured so that horizontal band type light shielding films 121b of the same width are combined in a predetermined column direction and vertical band type light shielding films 121b of the same height are combined in a predetermined row direction. In this way, the light-shielded range changes randomly in pixel units while maintaining regularity in the column direction and the row direction, so the randomness in the incident angle directivity of each pixel can be reduced, and the processing load on the signal processing unit 122 can be reduced.
- That is, as shown in FIG. 43, horizontal band type light shielding films 121b of the same width X0 are used for the pixels in the same column indicated by the range Z130, vertical band type light shielding films 121b of the same height Y0 are used for the pixels in the same row indicated by the range Z150, and for the pixel 121a specified by each row and column, an L-shaped light shielding film 121b combining these is set. Similarly, horizontal band type light shielding films 121b of the same width X1 are used for the pixels in the column adjacent to that column, vertical band type light shielding films 121b of the same height Y1 are used for the pixels in the same row indicated by the range Z151 adjacent to the range Z150, and for the pixel 121a specified by each row and column, an L-shaped light shielding film 121b combining these is set. Further, horizontal band type light shielding films of the same width X2 are used for the pixels in the next adjacent column, vertical band type light shielding films of the same height Y2 are used for the pixels in the same row indicated by the range Z152 adjacent to the range Z151, and for the pixel 121a specified by each row and column, an L-shaped light shielding film 121b combining these is set.
- With such a configuration, the range of the light shielding film can be changed in pixel units while maintaining regularity in the horizontal width and position and in the vertical height and position of the light shielding film 121b, so the randomness of the incident angle directivity can be suppressed. As a result, the number of coefficient set patterns can be reduced, and the processing load of the arithmetic processing in the signal processing unit 122 can be reduced.
- More specifically, as shown in FIG. 44, when the restored image has N × N pixels, each pixel of the restored image is an element of a vector X of (N × N) rows and 1 column, each pixel of the detected image is an element of a vector Y of (N × N) rows and 1 column, and the two are related through an (N × N) × (N × N) matrix A of coefficients. In FIG. 44, each element of the first column indicated by the range Z201 of the matrix A corresponds to the element of the first row of the vector X, and each element of the (N × N)-th column indicated by the range Z202 of the matrix A corresponds to the element of the (N × N)-th row of the vector X. A restored image is obtained by finding each element of the vector X by solving the simultaneous equations based on the determinant shown in FIG. 44.
- Note that if each pixel received light only from the point corresponding to it, the matrix A would be a diagonal matrix in which all the elements on the diagonal descending to the right are 1.
- The determinant of FIG. 44 is transformed as shown in FIG. 45 by multiplying both sides from the left by the inverse matrix A⁻¹ of the matrix A; each element of the vector X, which is the restored image, is obtained by multiplying the vector Y of the detected image by the inverse matrix A⁻¹.
- However, in practice, the true matrix A may not be obtainable or measurable accurately, the equations may be unsolvable when the basis vectors of the matrix A are nearly linearly dependent, and each element of the detected image contains noise; for any of these reasons, or a combination of them, the simultaneous equations may not be solvable.
- Therefore, the following formula, based on the concept of regularized least squares and robust against various errors, is used:
- x̂ = argmin_x { ||A x − y||² + γ ||x||² }
- Here, x corresponds to the vector X, A to the matrix A, and y to the vector Y; γ is a parameter, and ||·|| denotes the norm. The first term is the norm minimizing the difference between the two sides of FIG. 44, and the second term is the regularization term.
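A minimal sketch of this regularized formulation (NumPy assumed; the sizes, noise level, and γ are arbitrary): the closed-form minimizer of ||Ax − y||² + γ||x||² is x̂ = (AᵀA + γI)⁻¹Aᵀy, which stays stable even when the columns of A are nearly linearly dependent and y is noisy, exactly the failure cases listed above:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
A = rng.normal(size=(n, n))
A[:, -1] = A[:, -2] + 1e-6 * rng.normal(size=n)  # nearly dependent columns

x_true = rng.normal(size=n)
y = A @ x_true + 1e-3 * rng.normal(size=n)       # noisy detection signals

# Plain inversion amplifies the noise along the weak direction.
x_plain = np.linalg.solve(A, y)

# Regularized least squares: x_hat = (A^T A + gamma I)^(-1) A^T y
gamma = 1e-2
x_hat = np.linalg.solve(A.T @ A + gamma * np.eye(n), A.T @ y)

print(np.linalg.norm(x_plain - x_true))   # large: noise blown up
print(np.linalg.norm(x_hat - x_true))     # small by comparison
```

The parameter γ trades bias for stability: larger γ suppresses the noise amplification at the cost of some accuracy in the well-conditioned directions.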
- Further, as shown in FIG. 46, the matrix A is decomposed into an N-row, N-column matrix AL and an N-row, N-column matrix ARᵀ, and the result of multiplying them from the front and the back of an N-row, N-column matrix X representing the restored image is an N-row, N-column matrix Y representing the detected image. Since the matrices AL and ARᵀ each have (N × N) elements, whereas the matrix A has (N × N) × (N × N) elements, the number of elements per matrix is reduced to 1/(N × N), so the amount of calculation and the required memory capacity can be reduced.
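The saving can be checked directly: for an N × N image, the full matrix A has (N·N)² elements while AL and ARᵀ have N² each, and applying them as Y = AL·X·ARᵀ is equivalent to applying their Kronecker product to the flattened image (a standard linear-algebra identity; the concrete matrices below are random stand-ins, not calibrated coefficient sets):

```python
import numpy as np

N = 8
rng = np.random.default_rng(2)
AL = rng.uniform(size=(N, N))    # column-direction coefficient sets
ART = rng.uniform(size=(N, N))   # row-direction coefficient sets
X = rng.uniform(size=(N, N))     # restored image as an N x N matrix

# Separable form: multiply from the front and the back.
Y = AL @ X @ ART

# Equivalent full-matrix form on the row-major flattened image:
# flatten(AL @ X @ ART) == kron(AL, ART^T) @ flatten(X)
A_full = np.kron(AL, ART.T)                  # (N*N) x (N*N)
Y_flat = A_full @ X.reshape(-1)
print(np.allclose(Y.reshape(-1), Y_flat))    # True

# Element-count reduction from storing AL and ART instead of A_full:
print(A_full.size // (AL.size + ART.size))   # 4096 // 128 = 32
```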
- Here, Aᵀ is the transposed matrix of the matrix A, γ is a parameter, and I is the identity matrix. The determinant shown in FIG. 46 is realized by taking the matrix in parentheses as the matrix AL and the inverse of the transposed matrix of the matrix A as the matrix ARᵀ.
- In the calculation shown in FIG. 46, as shown in FIG. 47, an element group Z222 is obtained by multiplying the element of interest Xp in the matrix X by the element group Z221 of the corresponding column of the matrix AL. Further, by multiplying the element group Z222 by the elements of the row of the matrix ARᵀ corresponding to the element of interest Xp, a two-dimensional response Z224 corresponding to the element of interest Xp is obtained. The matrix Y is then obtained by integrating the two-dimensional responses Z224 corresponding to all the elements of the matrix X.
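The element-by-element picture of FIG. 47 is an outer-product accumulation: each element Xp contributes its column of AL times its row of ARᵀ, scaled by Xp, and summing these two-dimensional responses over all elements reproduces Y = AL·X·ARᵀ. A sketch (NumPy assumed, random stand-in matrices):

```python
import numpy as np

N = 4
rng = np.random.default_rng(3)
AL = rng.uniform(size=(N, N))
ART = rng.uniform(size=(N, N))
X = rng.uniform(size=(N, N))

# Accumulate the two-dimensional response of each element of interest Xp.
Y = np.zeros((N, N))
for i in range(N):          # row index of Xp within X
    for j in range(N):      # column index of Xp within X
        # Column i of AL (element group Z221) times row j of ART,
        # weighted by Xp, gives the 2D response Z224 for this element.
        Y += X[i, j] * np.outer(AL[:, i], ART[j, :])

print(np.allclose(Y, AL @ X @ ART))   # True
```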
- The element group Z221 of each column of the matrix AL has a coefficient set corresponding to the incident angle directivity of the horizontal band type pixels 121a set to the same width for each column of the directional imaging element 121 shown in FIG. 43, and the element group of each row of the matrix ARᵀ has a coefficient set corresponding to the incident angle directivity of the vertical band type pixels 121a set to the same height for each row of the directional imaging element 121 shown in FIG. 43.
- As a result, since the matrix used when restoring the restored image from the detected image can be made smaller, the amount of calculation is reduced, which improves the processing speed and reduces the power consumption of the calculation. Moreover, since the matrix can be made smaller, the capacity of the memory used for the calculation can be reduced, and the device cost can be reduced.
- Note that although FIG. 43 shows an example in which the light-shielded range (light-receiving range) is changed in pixel units while being given predetermined regularity in the horizontal and vertical directions, in the present disclosure a light-shielded range (light-receiving range) set in this way, even though not set completely at random, is also regarded as being set at random to some extent. In other words, in the present disclosure, not only a light-shielded range (light-receiving range) set completely at random in pixel units, but also one that is random to some extent (for example, an arrangement in which some of the pixels have ranges with regularity but the other ranges are random), or one that appears to lack regularity to some extent (an arrangement in which it cannot be confirmed that all the pixels are arranged according to the rules described with reference to FIG. 43), is regarded as random.
- In the above, the directional imaging element 121″ of the third modification described with reference to FIGS. 43 to 47 was described as setting the incident angle directivity for each pixel output unit by using the light shielding film 121b on the single photodiode 121e. However, like the directional imaging element 121 described with reference to FIG. 11, the directional imaging element 121″ of the third modification may be configured to set the incident angle directivity by changing the number of divisions and the division positions of the plurality of photodiodes 121f constituting the pixel output unit, so that the range that does not contribute to the output functions in the same manner as a light-shielded region.
- The light shielding film 121b may be set in a triangular shape with different ranges so as to give different incident angle directivities, or the light shielding film 121b may be set in a circular shape with different ranges so as to give different incident angle directivities. Other patterns may also be used; for example, a linear light shielding film in an oblique direction may be used. The light shielding film 121b shown in the third pattern of the fourth row in FIG. 48 is referred to as a triangular type light shielding film 121b, and the light shielding films 121b shown in the lowermost three patterns of FIG. 48 are referred to as linear type light shielding films 121b.
- In the above, the directional imaging element 121 of the fourth modification described with reference to FIG. 48 was described as setting the incident angle directivity for each pixel output unit by using the light shielding film 121b on the single photodiode 121e. However, like the directional imaging element 121 described with reference to FIG. 11, the directional imaging element 121 of the fourth modification may be configured to set the incident angle directivity by changing the number of divisions and the division positions of the plurality of photodiodes 121f constituting the pixel output unit, so that the range that does not contribute to the output functions in the same manner as a light-shielded region.
- The band type light shielding films 121b may also be arranged in a matrix pattern. For example, horizontal band type light shielding films 121b having a different width for each pixel output unit may be arranged in a matrix within the plural pixel output units constituting a unit composed of the four pixel output units of a Bayer array. Alternatively, within the plural pixel output units constituting a unit composed of the four pixel output units of a Bayer array, horizontal band type and vertical band type light shielding films 121b may be arranged in a matrix at positions that differ point-symmetrically with respect to the center position of the four pixel output units for each pixel output unit.
- the pixel output unit of the same color arrangement is set by, for example, a 4-pixel output unit of 2 pixel output unit ⁇ 2 pixel output unit, and the unit of the same color arrangement of 4 pixel output unit is composed of 2 units ⁇ 2 units, for a total of 4 units.
- the light shielding film 121b may be arranged in a matrix.
- horizontal band type light shielding films 121b having different widths for each one pixel output unit are arranged in a matrix in a four pixel output unit constituting a unit composed of four pixel output units of the same color. You may do it.
- the 4-pixel output unit in the 4-pixel output unit constituting the unit composed of the 4-pixel output unit of the same color, the 4-pixel output unit is centered on the center position of the 4-pixel output unit for each pixel output unit.
- the horizontal band type and vertical band type light shielding films 121b may be arranged in a matrix at different positions so that the light shielded range is changed point-symmetrically.
- the pixel output unit of the same color is set as, for example, a 9 pixel output unit of 3 pixel output unit ⁇ 3 pixel output unit, and the unit of the same color of 9 pixel output unit is composed of 2 units ⁇ 2 units.
- the horizontal band type of the same width in the 9 pixel output units constituting the unit consisting of 9 pixel output units of the same color scheme may be arranged in a matrix.
- horizontal band type light shielding films 121b having different widths for each one pixel output unit are arranged in a matrix in a unit of nine pixel output constituting a unit of nine pixels of the same color. May be.
- the horizontal band type, vertical band type, and triangular type light shielding films 121b may be arranged in a matrix at different positions so as to be changed symmetrically in units of two pixels.
- In the above, patterns using the horizontal band type, vertical band type, and triangular type light shielding films 121b have been described, but other types, for example, a circular type light shielding film 121b, may also be used.
- Also, although an example using a Bayer array has been described, other color arrangement patterns may be used. While examples of a 1-pixel output unit, a 4-pixel output unit, and a 9-pixel output unit of the same color constituting a unit have been described, units of same-color pixel output units may also be set with other numbers of pixel output units.
- In any case, it is desirable that the randomness of the pattern of the ranges shielded by the light shielding films 121b in the pixels constituting a unit be high, that is, that the pixels constituting the unit have incident angle directivities different from one another.
- In the above, the pattern of the light shielding film 121b has been set with a 4-pixel output unit of 2 × 2 pixel output units constituting a Bayer array as one unit, and further as a pattern between such units; however, the light shielding film 121b may be set in other units.
- For example, in consideration of pixel output units having different exposure times, such as pixel output units of a first exposure time and pixel output units of a second exposure time longer than the first exposure time, the light shielding film 121b may be set for each processing unit composed of a plurality of pixel output units.
- The directional imaging element 121 of the fifth modification described with reference to FIG. 49 has been described assuming that the incident angle directivity is set for each pixel output unit by means of a light shielding film 121b over a single photodiode 121e, as described earlier. However, like the directional imaging element 121 described with reference to FIG. 11, the directional imaging element 121 of the fifth modification may instead set the incident angle directivity by changing the number of divisions and the division positions of the plurality of photodiodes 121f constituting a pixel output unit, so that a range that does not contribute to the output functions in the same manner as a light-shielded region.
- The arrangement pattern of the light shielding films 121b in the pixel output units of the directional imaging element 121 may be, for example, as shown by pattern Pt11 in FIG. 49: with a 4-pixel output unit of 2 × 2 pixel output units in a Bayer array as one unit, a horizontal band type light shielding film 121b is arranged at the left end of the pixel output unit of the upper-left G pixel within each 4-pixel output unit, and the width of the light shielding film 121b is set wider for each unit from left to right in the drawing.
- Alternatively, the width of the light shielding film 121b of the pixel output unit of the upper-left G pixel within the 4-pixel output unit in each unit may be set so as to change randomly between units.
- Further, the light-shielded range of the light shielding film 121b of the pixel output unit of the upper-left G pixel within the 4-pixel output unit in each unit may be set so as to change point-symmetrically with respect to the center of the pixel output unit.
- In the above, the pattern of the ranges shielded by the light shielding films 121b has been set for a plurality of pixel output units within a unit such as a Bayer array and between such units; however, the pattern of the ranges shielded by the light shielding films 121b may instead be set within a unit composed of a plurality of pixel output units classified into different categories, or between such units.
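As a rough illustration only (the widths, unit grid, and function names below are invented, not from the patent), the unit-based width patterns just described, monotonically widening across units as in pattern Pt11, or set randomly between units, can be sketched as a small generator:

```python
# Sketch: assign horizontal-band shield widths per 2x2 Bayer unit, either
# increasing from left to right across units (as in pattern Pt11) or randomly.
import random

def shield_widths(units_x, units_y, mode="increasing", seed=0):
    """Return a dict {(unit_col, unit_row): width} for the upper-left G pixel
    of each 2x2 unit; widths are in arbitrary pixel-fraction units."""
    rng = random.Random(seed)
    widths = {}
    for uy in range(units_y):
        for ux in range(units_x):
            if mode == "increasing":    # wider for each unit, left to right
                widths[(ux, uy)] = 0.1 + 0.1 * ux
            else:                       # set randomly between units
                widths[(ux, uy)] = rng.uniform(0.1, 0.9)
    return widths

inc = shield_widths(4, 2, "increasing")
rnd = shield_widths(4, 2, "random")
print(inc[(0, 0)], inc[(3, 0)])  # width grows toward the right
```

Either policy gives each unit a distinct directivity pattern; the random mode maximizes diversity, while the monotone mode keeps the pattern partly regular.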
- The directional imaging element 121 of the sixth modification described with reference to FIG. 50 has been described assuming that the incident angle directivity is set for each pixel output unit by means of a light shielding film 121b over a single photodiode 121e, as described earlier. However, like the directional imaging element 121 described with reference to FIG. 11, the directional imaging element 121 of the sixth modification may instead set the incident angle directivity by changing the number of divisions and the division positions of the plurality of photodiodes 121f constituting a pixel output unit, so that a range that does not contribute to the output functions in the same manner as a light-shielded region.
- As described with reference to FIG. 11, a pixel output unit may be configured by an arrangement of 2 × 2 photodiodes 121f, and the directivity of the output pixel value of the pixel output unit with respect to the incident angle may be varied in various ways by switching the presence/absence and degree of contribution of each photodiode 121f to the pixel output unit. Naturally, however, the number of photodiodes 121f constituting a pixel output unit may be other than four.
- For example, a pixel 121a-b serving as a pixel output unit may be configured by 3 × 3 photodiodes 121f, that is, the nine photodiodes 121f-111 to 121f-119.
- The number of photodiodes 121f constituting a pixel output unit may also be a number other than 4 or 9.
- Moreover, a plurality of photodiodes 121f may be switched and used so as to constitute pixels 121a of various pixel output units.
- For example, when a pixel output unit is configured with the nine photodiodes 121f, making the photodiodes 121f-111, 121f-114, and 121f-117 to 121f-119 not contribute to the detection signal, and using only the signals of the photodiodes 121f-112, 121f-113, 121f-115, and 121f-116 that do contribute, makes it possible to set the incident angle directivity.
- This configuration is an application of the configuration shown in the lower part of FIG. 12.
- That is, without providing the light shielding film 121b, a detection signal equivalent to the one obtained when the light shielding film 121b is provided can be obtained; and by switching the pattern of photodiodes 121f that do not contribute to the detection signal, processing substantially equivalent to forming light shielding films 121b with different light-shielded positions and ranges, that is, with different incident angle directivities, can be realized.
- Also, by providing a plurality of photodiodes 121f for one on-chip lens 121c and treating the unit composed of those photodiodes 121f as a one-pixel output unit, a detection image similar to that of a pixel 121a formed using the light shielding film 121b can be captured without a light shielding film. In this case as well, one on-chip lens is essential for one pixel output unit.
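As an illustrative sketch (the intensity values are invented; the 3 × 3 layout and the contributing set follow the example above), the equivalence between shielding with a film and simply not summing the same sub-photodiodes can be written as:

```python
# Sketch: a pixel output unit made of 3x3 sub-photodiodes 121f.
# Shielding some sub-photodiodes with a film, and simply not summing the
# same sub-photodiodes, yield the same detection signal.

def detection_signal(subpixels, contributing):
    """Sum only the sub-photodiodes that contribute to the output."""
    return sum(subpixels[i] for i in contributing)

# Arbitrary light intensities falling on photodiodes 121f-111 .. 121f-119.
light = {111: 1.0, 112: 2.0, 113: 3.0,
         114: 4.0, 115: 5.0, 116: 6.0,
         117: 7.0, 118: 8.0, 119: 9.0}

# Example from the text: 121f-111, 121f-114, 121f-117..119 do not contribute.
contributing = {112, 113, 115, 116}

# Equivalent "light shielding film": zero out the shielded sub-photodiodes,
# then sum everything.
shielded = {k: (v if k in contributing else 0.0) for k, v in light.items()}

signal_by_selection = detection_signal(light, contributing)
signal_by_shielding = sum(shielded.values())
print(signal_by_selection, signal_by_shielding)  # both 16.0
```

Switching the `contributing` set between readouts corresponds to switching the light-shielding pattern, i.e. the incident angle directivity, electronically.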
- Second embodiment: In the above, an example in which the directional imaging element 121 and the signal processing unit 122 are configured as separate bodies has been described.
- However, the signal processing unit 122 may be provided on the same substrate as the substrate on which the directional imaging element 121 is provided. Alternatively, the substrate on which the directional imaging element 121 is provided and the substrate on which the signal processing unit 122 and the like are formed may be stacked and connected by through electrodes such as TSVs (Through Silicon Vias) so as to be configured as one integral unit.
- FIG. 53 shows a configuration example of an imaging apparatus 101 using an integrated directional imaging element 121 in which the subject distance determination unit 129, the coefficient set selection unit 131, and the signal processing unit 122 are provided on the same substrate as the substrate constituting the pixel array of the directional imaging element 121, or on a substrate stacked on the back side of that substrate.
- The present disclosure can also take the following configurations.
- <1> An imaging apparatus including an imaging element that has a plurality of pixel output units which receive incident light without passing through either an imaging lens or a pinhole, in which, among the plurality of pixel output units, the output pixel values of at least two of the pixel output units have characteristics with respect to the incident angle of incident light from a subject that differ from each other.
- <2> The imaging apparatus according to <1>, in which the characteristic is an incident angle directivity indicating directivity with respect to the incident angle of incident light from the subject.
- <3> The imaging apparatus according to <2>, in which one detection signal is output from each of the plurality of pixel output units.
- <4> The imaging apparatus according to <3>, further including an image restoration unit that restores a restored image in which the subject can be visually recognized, using a detection image composed of a plurality of detection signals output from the plurality of pixel output units.
- <5> The imaging apparatus according to <4>, in which the image restoration unit restores the restored image by selectively using the detection signals of some of the plurality of pixel output units.
- <6> The imaging apparatus according to <4>, in which the image restoration unit selectively executes a restoration process of restoring the restored image using the detection signals of some of the plurality of pixel output units and a restoration process of restoring the restored image using the detection signals of all of the plurality of pixel output units.
- <7> The imaging apparatus according to <4>, in which the plurality of pixel output units include wide-angle corresponding pixel output units having the incident angle directivity suitable for a wide-angle image and narrow-angle corresponding pixel output units narrower than the wide-angle corresponding pixel output units, and the image restoration unit restores the restored image by selectively using the wide-angle corresponding pixel output units and the narrow-angle corresponding pixel output units.
- <8> The imaging apparatus according to <2>, which does not include a condensing function for causing diffused light with different chief ray incident angles from the subject to enter a plurality of adjacent pixel output units.
- <9> The imaging apparatus according to <1>, in which each of the plurality of pixel output units has a configuration capable of independently setting the characteristic with respect to the incident angle of incident light from the subject.
- <10> An imaging element having a plurality of pixel output units which receive incident light without passing through either an imaging lens or a pinhole, in which, among the plurality of pixel output units, the output pixel values of at least two of the pixel output units have characteristics with respect to the incident angle of incident light from a subject that differ from each other.
- <11> The imaging element according to <10>, in which, among the plurality of pixel output units, at least two of the pixel output units differ from each other in incident angle directivity indicating directivity with respect to the incident angle of incident light from the subject.
- <12> The imaging element according to <11>, in which each of the plurality of pixel output units is composed of one photodiode, and one detection signal is output from each of the plurality of pixel output units.
- <13> The imaging element according to <12>, in which each of the at least two pixel output units includes a light shielding film that blocks incidence on the photodiode of subject light that is incident light from the subject, and the ranges in which incidence of the subject light on the two pixel output units is blocked by the light shielding films differ between the at least two pixel output units.
- <14> The imaging element according to <11>, in which each of the plurality of pixel output units is composed of a plurality of photodiodes, and one detection signal is output from each of the plurality of pixel output units.
- <15> The imaging element according to <14>, in which the at least two pixel output units differ from each other in the photodiodes contributing to the detection signal among the plurality of photodiodes.
- <16> The imaging element according to <11>, in which the plurality of pixel output units include wide-angle corresponding pixel output units having an incident angle directivity suitable for a wide-angle image and narrow-angle corresponding pixel output units having an incident angle directivity suitable for an image with a narrower angle than the wide-angle corresponding pixel output units.
- <17> The imaging element according to <11>, including a plurality of on-chip lenses respectively corresponding to the plurality of pixel output units.
- <18> The imaging element according to <17>, in which the incident angle directivity has a characteristic according to the curvature of the on-chip lens.
- <19> The imaging element according to <18>, in which the incident angle directivity has a characteristic according to a light-shielded region.
- <20> The imaging element according to <18>, in which the curvature of at least some of the plurality of on-chip lenses differs from the curvature of the other on-chip lenses.
- <21> The imaging element according to <10>, in which each of the plurality of pixel output units has a configuration capable of independently setting the characteristic with respect to the incident angle of incident light from the subject.
- <22> An image processing apparatus including an image restoration unit that restores a restored image in which a subject can be visually recognized, using a detection image composed of a plurality of detection signals each output from the plurality of pixel output units of an imaging element that has a plurality of pixel output units which receive incident light without passing through either an imaging lens or a pinhole and in which, among the plurality of pixel output units, the output pixel values of at least two of the pixel output units differ from each other in incident angle directivity indicating directivity with respect to the incident angle of incident light from the subject.
- <23> The image processing apparatus according to <22>, in which the image restoration unit restores the restored image by selectively using the detection signals of some of the plurality of pixel output units.
- <24> The image processing apparatus according to <22>, in which the image restoration unit selectively executes a restoration process of restoring the restored image using the detection signals of some of the plurality of pixel output units and a restoration process of restoring the restored image using the detection signals of all of the plurality of pixel output units.
- <25> The image processing apparatus according to <22>, in which the plurality of pixel output units include wide-angle corresponding pixel output units having the incident angle directivity suitable for a wide-angle image and narrow-angle corresponding pixel output units narrower than the wide-angle corresponding pixel output units, and the image restoration unit restores the restored image by selectively using the wide-angle corresponding pixel output units and the narrow-angle corresponding pixel output units.
- <26> An imaging method for an imaging apparatus, including a step of imaging by an imaging element that has a plurality of pixel output units which receive incident light without passing through either an imaging lens or a pinhole, in which, among the plurality of pixel output units, the output pixel values of at least two of the pixel output units have characteristics with respect to the incident angle of incident light from a subject that differ from each other.
- <27> An imaging method for an imaging element, including a step of imaging by an imaging element that has a plurality of pixel output units which receive incident light without passing through either an imaging lens or a pinhole, in which, among the plurality of pixel output units, the output pixel values of at least two of the pixel output units have characteristics with respect to the incident angle of incident light from a subject that differ from each other.
- <28> An image processing method including a step of restoring a restored image in which a subject can be visually recognized, using a detection image composed of a plurality of detection signals each output from the plurality of pixel output units of an imaging element that has a plurality of pixel output units which receive incident light without passing through either an imaging lens or a pinhole and in which, among the plurality of pixel output units, the output pixel values of at least two of the pixel output units differ from each other in incident angle directivity indicating directivity with respect to the incident angle of incident light from the subject.
- Reference signs: 101 imaging apparatus, 121 directional imaging element, 121a pixel, 121b light shielding film, 121c on-chip lens, 121d color filter, 121e, 121f photodiode, 122 signal processing unit, 123 demosaic processing unit, 124 gamma correction unit, 125 white balance adjustment unit, 126 image output unit, 127 storage unit, 128 display unit, 129 subject distance determination unit, 130 operation unit, 131 coefficient set selection unit, 151 image sensor, 152 optical block, 153 focus adjustment unit
Abstract
Description
1. Overview of the imaging apparatus of the present disclosure
2. First embodiment
3. Second embodiment
In describing the imaging apparatus of the present disclosure, an overview thereof will first be given.
Every subject can be considered a collection of point light sources, each emitting light in all directions. Therefore, the principle of imaging can be explained by considering how the light emitted from a point light source is imaged. Here, as shown in the upper part of FIG. 1, it is assumed that light rays L1 to L5 are emitted from a point light source P, and that each of the light rays L1 to L5 has a light intensity a.
DA = α1 × a + β1 × b + γ1 × c ... (1)
DB = α2 × a + β2 × b + γ2 × c ... (2)
DC = α3 × a + β3 × b + γ3 × c ... (3)
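Equations (1) to (3) form a 3 × 3 linear system, so once the coefficient set (αi, βi, γi) is known, the light intensities a, b, c can be recovered from the detection signals DA, DB, DC. A minimal sketch in Python; the coefficient values and intensities below are invented for illustration and are not taken from the patent:

```python
# Sketch: recover light intensities (a, b, c) from detection signals
# (DA, DB, DC) by solving the 3x3 linear system of equations (1)-(3).

def solve3(coeffs, d):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [di] for row, di in zip(coeffs, d)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

# Invented coefficient set (alpha_i, beta_i, gamma_i) for three pixels.
coeffs = [[1.0, 0.5, 0.2],   # alpha1, beta1, gamma1
          [0.3, 1.0, 0.4],   # alpha2, beta2, gamma2
          [0.2, 0.6, 1.0]]   # alpha3, beta3, gamma3

a, b, c = 2.0, 3.0, 1.0                       # true point-source intensities
DA, DB, DC = (sum(k * v for k, v in zip(row, (a, b, c))) for row in coeffs)
restored = solve3(coeffs, [DA, DB, DC])       # should recover (a, b, c)
print(restored)
```

In practice the system has as many unknowns as subject regions and as many equations as pixels, but the principle is the same as in this 3 × 3 case.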
Next, a configuration example of the first embodiment to which the imaging apparatus of the present disclosure is applied will be described with reference to the block diagram of FIG. 6.
Here, before describing the configuration example of the imaging apparatus of the present application, for comparison, a configuration example of an imaging apparatus including an optical block composed of a plurality of imaging lenses will be described with reference to the block diagram of FIG. 7. In the configuration of the imaging apparatus 141 including the optical block in FIG. 7, components identical to those of the imaging apparatus 101 in FIG. 6 are given the same reference numerals and names, and their description will be omitted as appropriate.
Next, imaging processing by the imaging apparatus 141 including the optical block in FIG. 7 will be described with reference to the flowchart of FIG. 8.
(Difference between an imaging apparatus using a directional imaging element and an imaging apparatus including an optical block)
In contrast, in the imaging apparatus 101 of FIG. 6, a detection image is captured by the directional imaging element 121, which receives incident light without passing through either the imaging lens 152 (FIG. 7) or the pinhole 12 (FIG. 3), and the signal processing unit 122 obtains a restored image by using the solution of simultaneous equations derived from the detection image and a coefficient set group.
Next, with reference to FIG. 10, a side cross section, a top view, and a circuit configuration example of the first configuration example of the directional imaging element 121 will be described. That is, the upper part of FIG. 10 is a side cross-sectional view of the first configuration example of the directional imaging element 121, and the middle part of FIG. 10 is a top view of the same. The side cross-sectional view in the upper part of FIG. 10 corresponds to the AB cross section in the middle part of FIG. 10. The lower part of FIG. 10 is a circuit configuration example of the directional imaging element 121.
(Side cross section, top view, and circuit configuration example of the second configuration example of the directional imaging element)
FIG. 11 is a diagram showing a side cross section, a top view, and a circuit configuration example of the second configuration example of the directional imaging element 121. That is, the upper part of FIG. 11 shows a side cross-sectional view of a pixel 121a of the directional imaging element 121 of the second configuration example, and the middle part of FIG. 11 shows a top view of the directional imaging element 121. The side cross-sectional view in the upper part of FIG. 11 corresponds to the AB cross section in the middle part of FIG. 11. The lower part of FIG. 11 is a circuit configuration example of the directional imaging element 121.
The incident angle directivity of each pixel in the directional imaging element 121 arises, for example, by the principle shown in FIG. 12. The upper left and upper right parts of FIG. 12 illustrate the principle by which incident angle directivity arises in the directional imaging element 121 of FIG. 10, and the lower left and lower right parts of FIG. 12 illustrate the principle by which it arises in the directional imaging element 121 of FIG. 11.
In the above, the principle by which incident angle directivity arises from the light shielding film 121b and the principle by which it arises from the plurality of photodiodes 121f have been described; here, the incident angle directivity in a configuration that includes the on-chip lens 121c will be described.
For example, as shown in the upper part of FIG. 14, assume that the setting range of the light shielding film 121b is, in the horizontal direction of the pixel 121a, the range from the left end to position A, and, in the vertical direction, the range from the upper end to position B.
In the imaging apparatus 101 of the present disclosure, the directional imaging element 121 has a configuration that does not require the optical block 152 composed of imaging lenses, but an on-chip lens 121c may be provided. In the case of the configuration in FIG. 11, the on-chip lens 121c is an essential component. The on-chip lens 121c and an imaging lens differ in their physical action.
A specific calculation example of pixel values, executed in the signal processing unit 122 using simultaneous equations composed of coefficient sets and detection signals, will now be described.
In addition, the incident angles (θx, θy) to the directional imaging element 121 from each region Oij on the subject surface 31 are assumed to be defined as in FIG. 20, regardless of the position of the pixel Pij of the directional imaging element 121.
As shown in FIG. 21, the three pixels P11, P21, and P31, the three pixels P12, P22, and P32, and the three pixels P13, P23, and P33 each have a uniform vertical height of the light shielding film 121b on the drawing.
Also, as shown in FIG. 22, the three pixels P11 to P13, the three pixels P21 to P23, and the three pixels P31 to P33 each have a uniform horizontal width of the light shielding film 121b.
... (4)
Incident light from the point light source serving as the representative point of region O11 enters all pixels Pij at a horizontal incident angle θx = -5 deg and a vertical incident angle θy = +5 deg.
Incident light from the point light source serving as the representative point of region O21 enters all pixels Pij at a horizontal incident angle θx = 0 deg and a vertical incident angle θy = +5 deg.
Incident light from the point light source serving as the representative point of region O31 enters all pixels Pij at a horizontal incident angle θx = +5 deg and a vertical incident angle θy = +5 deg.
Incident light from the point light source serving as the representative point of region O12 enters all pixels Pij at a horizontal incident angle θx = -5 deg and a vertical incident angle θy = 0 deg.
Incident light from the point light source serving as the representative point of region O22 enters all pixels Pij at a horizontal incident angle θx = 0 deg and a vertical incident angle θy = 0 deg.
Incident light from the point light source serving as the representative point of region O32 enters all pixels Pij at a horizontal incident angle θx = +5 deg and a vertical incident angle θy = 0 deg.
Incident light from the point light source serving as the representative point of region O13 enters all pixels Pij at a horizontal incident angle θx = -5 deg and a vertical incident angle θy = -5 deg.
Incident light from the point light source serving as the representative point of region O23 enters all pixels Pij at a horizontal incident angle θx = 0 deg and a vertical incident angle θy = -5 deg.
Incident light from the point light source serving as the representative point of region O33 enters all pixels Pij at a horizontal incident angle θx = +5 deg and a vertical incident angle θy = -5 deg.
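The grid of representative incident angles above, combined with each pixel's incident angle directivity, yields the coefficient matrix used in the simultaneous equations. A sketch under stated assumptions: the `directivity` weight function below is a made-up placeholder (a real weight set would be derived from the light-shielding patterns of FIGS. 21 and 22); only the angle grid comes from the text:

```python
# Sketch: build a 9x9 coefficient matrix, one row per pixel P11..P33 and one
# column per subject region O11..O33, from the incident angles listed above.

# Representative incident angles (theta_x, theta_y) in degrees per region.
angles = {"O11": (-5, 5), "O21": (0, 5), "O31": (5, 5),
          "O12": (-5, 0), "O22": (0, 0), "O32": (5, 0),
          "O13": (-5, -5), "O23": (0, -5), "O33": (5, -5)}

def directivity(pixel_index, theta_x, theta_y):
    """Placeholder incident-angle directivity: each pixel responds most
    strongly near its own preferred angle. Invented for illustration."""
    gx = 1.0 / (1.0 + abs(theta_x - (pixel_index % 3 - 1) * 5) / 10.0)
    gy = 1.0 / (1.0 + abs(theta_y - (pixel_index // 3 - 1) * 5) / 10.0)
    return gx * gy

regions = sorted(angles)  # O11, O12, ..., O33
matrix = [[directivity(p, *angles[r]) for r in regions] for p in range(9)]
print(len(matrix), len(matrix[0]))  # 9 9
```

Each matrix entry plays the role of a coefficient such as α1 or β1 in equations (1) to (3): the weight with which one subject region contributes to one pixel's detection signal.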
... (5) to (13)
Next, the relationship of the distance between the subject surface and the directional imaging element 121 will be described with reference to FIG. 33.
DA = α1 × a + β1 × b + γ1 × c ... (1)
DB = α2 × a + β2 × b + γ2 × c ... (2)
DC = α3 × a + β3 × b + γ3 × c ... (3)
DA = α11 × a' + β11 × b' + γ11 × c' ... (14)
DB = α12 × a' + β12 × b' + γ12 × c' ... (15)
DC = α13 × a' + β13 × b' + γ13 × c' ... (16)
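Equations (14) to (16) differ from (1) to (3) only in the coefficient set, so changing the assumed subject distance (refocusing) amounts to re-solving the same detection signals with the coefficient set stored for that distance. A hedged sketch; all numeric values below are invented:

```python
# Sketch: the same detection signals can be "refocused" to different subject
# distances by selecting the coefficient set stored for that distance and
# re-solving the simultaneous equations.

def solve3(coeffs, d):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [di] for row, di in zip(coeffs, d)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]

# Invented coefficient sets, keyed by subject distance in meters.
coefficient_sets = {
    1.0: [[1.0, 0.5, 0.2], [0.3, 1.0, 0.4], [0.2, 0.6, 1.0]],  # eqs (1)-(3)
    2.0: [[1.0, 0.4, 0.1], [0.2, 1.0, 0.3], [0.1, 0.5, 1.0]],  # eqs (14)-(16)
}

detected = [3.7, 4.0, 3.8]  # detection signals DA, DB, DC (invented)

# The same detected image, restored with two different coefficient sets.
restored_near = solve3(coefficient_sets[1.0], detected)
restored_far = solve3(coefficient_sets[2.0], detected)
print(restored_near != restored_far)  # different distances, different images
```

This is the role of the coefficient set selection unit 131: the detection image is captured once, and the distance chosen at restoration time selects which coefficient set is plugged into the solver.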
Next, imaging processing by the imaging apparatus 101 of FIG. 6 will be described with reference to the flowchart of FIG. 37. The processes of steps S34 to S38 in the flowchart of FIG. 37 are the same as the processes of steps S14 to S18 described with reference to the flowchart of FIG. 8, and their description will be omitted as appropriate.
In the above, as shown in FIG. 9, the configuration of the light shielding film 121b in each pixel 121a of the directional imaging element 121 has been described using an example in which the pixel is entirely shielded in the vertical direction while the shielding width and position are varied in the horizontal direction. Naturally, however, each pixel may instead be given incident angle directivity by entirely shielding the pixel in the horizontal direction and varying the width (height) and position in the vertical direction.
In the above, examples have been described in which horizontal band type, vertical band type, and L-shaped type light shielding films are arranged in the pixels so that the shielded ranges vary randomly. However, for example, as in the directional imaging element 121' of FIG. 39, when rectangular openings are provided, a light shielding film 121b may be configured in each pixel to shield the range other than the vicinity of the position at which the light ray is received.
Furthermore, the angle of view can be changed by changing the main light-shielded portion Z101 and the rectangular opening Z111 of each pixel 121a constituting the directional imaging element 121' shown in FIG. 40 (FIG. 39).
Incidentally, when the ranges shielded by the light shielding films 121b of the pixels 121a in the directional imaging element 121 are given randomness, the greater the disorder in the differences of the shielded ranges, the greater the processing load on the signal processing unit 122. Therefore, the processing load may be reduced by making part of the variation in the shielded ranges of the light shielding films 121b of the pixels 121a regular, thereby reducing the disorder.
The variations in the shape of the light shielding film 121b constituting each pixel output unit of the directional imaging element 121 described above are: the horizontal band type, which provides different incident angle directivities through different horizontal widths and positions, as shown in the three patterns in the top row of FIG. 48; the vertical band type, which provides different incident angle directivities through different vertical heights and positions, as shown in the three patterns in the second row from the top of FIG. 48; and the L-shaped type combining the horizontal band type and the vertical band type, as shown in the three patterns in the third row from the top of FIG. 48. However, the type of the light shielding film 121b is not limited to these.
In the above, variations of the light shielding film 121b set for each single pixel output unit of the directional imaging element 121 have been described; however, a variation (pattern) of the light shielding film 121b may also be set over the plurality of pixel output units constituting a unit composed of a predetermined number of pixel output units. As one example, a color imaging element rather than a monochrome imaging element can be considered.
In the above, an example has been described in which the arrangement pattern of the light shielding films 121b is set over a plurality of pixel output units constituting a unit of at least one same-color pixel output unit of a Bayer array; however, the arrangement pattern of the light shielding films 121b may also be set between units.
As described with reference to FIG. 11, as an example of the plurality of photodiodes 121f, an example has been described in which a pixel output unit is configured by an arrangement of 2 × 2 photodiodes 121f, and the directivity of the output pixel value of the pixel output unit with respect to the incident angle is varied in various ways by switching the presence/absence and degree of contribution of each photodiode 121f to the pixel output unit. Naturally, however, the number of photodiodes 121f constituting a pixel output unit may be other than four.
In the above, examples have been described in which the incident angle directivity of the output pixel value of a pixel output unit is switched in various ways by a plurality of photodiodes 121f. When one pixel output unit is configured by a predetermined number of photodiodes 121f, one on-chip lens 121c is essential for one pixel output unit.
In the above, an example has been described in which the directional imaging element 121 and the signal processing unit 122 and the like are configured as separate bodies; however, the signal processing unit 122 may be configured on the same substrate as the substrate on which the directional imaging element 121 is provided, or the substrate on which the directional imaging element 121 is provided and the substrate on which the signal processing unit 122 and the like are configured may be stacked and connected by through electrodes such as TSVs (Through Silicon Vias) so as to be configured as one integral unit.
<1> An imaging apparatus including an imaging element that has a plurality of pixel output units which receive incident light without passing through either an imaging lens or a pinhole, in which, among the plurality of pixel output units, the output pixel values of at least two of the pixel output units have characteristics with respect to the incident angle of incident light from a subject that differ from each other.
<2> The imaging apparatus according to <1>, in which the characteristic is an incident angle directivity indicating directivity with respect to the incident angle of incident light from the subject.
<3> The imaging apparatus according to <2>, in which one detection signal is output from each of the plurality of pixel output units.
<4> The imaging apparatus according to <3>, further including an image restoration unit that restores a restored image in which the subject can be visually recognized, using a detection image composed of a plurality of detection signals output from the plurality of pixel output units.
<5> The imaging apparatus according to <4>, in which the image restoration unit restores the restored image by selectively using the detection signals of some of the plurality of pixel output units.
<6> The imaging apparatus according to <4>, in which the image restoration unit selectively executes a restoration process of restoring the restored image using the detection signals of some of the plurality of pixel output units and a restoration process of restoring the restored image using the detection signals of all of the plurality of pixel output units.
<7> The imaging apparatus according to <4>, in which the plurality of pixel output units include wide-angle corresponding pixel output units having the incident angle directivity suitable for a wide-angle image and narrow-angle corresponding pixel output units narrower than the wide-angle corresponding pixel output units, and the image restoration unit restores the restored image by selectively using the wide-angle corresponding pixel output units and the narrow-angle corresponding pixel output units.
<8> The imaging apparatus according to <2>, which does not include a condensing function for causing diffused light with different chief ray incident angles from the subject to enter a plurality of adjacent pixel output units.
<9> The imaging apparatus according to <1>, in which each of the plurality of pixel output units has a configuration capable of independently setting the characteristic with respect to the incident angle of incident light from the subject.
<10> An imaging element having a plurality of pixel output units which receive incident light without passing through either an imaging lens or a pinhole, in which, among the plurality of pixel output units, the output pixel values of at least two of the pixel output units have characteristics with respect to the incident angle of incident light from a subject that differ from each other.
<11> The imaging element according to <10>, in which, among the plurality of pixel output units, at least two of the pixel output units differ from each other in incident angle directivity indicating directivity with respect to the incident angle of incident light from the subject.
<12> The imaging element according to <11>, in which each of the plurality of pixel output units is composed of one photodiode, and one detection signal is output from each of the plurality of pixel output units.
<13> The imaging element according to <12>, in which each of the at least two pixel output units includes a light shielding film that blocks incidence on the photodiode of subject light that is incident light from the subject, and the ranges in which incidence of the subject light on the two pixel output units is blocked by the light shielding films differ between the at least two pixel output units.
<14> The imaging element according to <11>, in which each of the plurality of pixel output units is composed of a plurality of photodiodes, and one detection signal is output from each of the plurality of pixel output units.
<15> The imaging element according to <14>, in which the at least two pixel output units differ from each other in the photodiodes contributing to the detection signal among the plurality of photodiodes.
<16> The imaging element according to <11>, in which the plurality of pixel output units include wide-angle corresponding pixel output units having an incident angle directivity suitable for a wide-angle image and narrow-angle corresponding pixel output units having an incident angle directivity suitable for an image with a narrower angle than the wide-angle corresponding pixel output units.
<17> The imaging element according to <11>, including a plurality of on-chip lenses respectively corresponding to the plurality of pixel output units.
<18> The imaging element according to <17>, in which the incident angle directivity has a characteristic according to the curvature of the on-chip lens.
<19> The imaging element according to <18>, in which the incident angle directivity has a characteristic according to a light-shielded region.
<20> The imaging element according to <18>, in which the curvature of at least some of the plurality of on-chip lenses differs from the curvature of the other on-chip lenses.
<21> The imaging element according to <10>, in which each of the plurality of pixel output units has a configuration capable of independently setting the characteristic with respect to the incident angle of incident light from the subject.
<22> An image processing apparatus including an image restoration unit that restores a restored image in which a subject can be visually recognized, using a detection image composed of a plurality of detection signals each output from the plurality of pixel output units of an imaging element that has a plurality of pixel output units which receive incident light without passing through either an imaging lens or a pinhole and in which, among the plurality of pixel output units, the output pixel values of at least two of the pixel output units differ from each other in incident angle directivity indicating directivity with respect to the incident angle of incident light from the subject.
<23> The image processing apparatus according to <22>, in which the image restoration unit restores the restored image by selectively using the detection signals of some of the plurality of pixel output units.
<24> The image processing apparatus according to <22>, in which the image restoration unit selectively executes a restoration process of restoring the restored image using the detection signals of some of the plurality of pixel output units and a restoration process of restoring the restored image using the detection signals of all of the plurality of pixel output units.
<25> The image processing apparatus according to <22>, in which the plurality of pixel output units include wide-angle corresponding pixel output units having the incident angle directivity suitable for a wide-angle image and narrow-angle corresponding pixel output units narrower than the wide-angle corresponding pixel output units, and the image restoration unit restores the restored image by selectively using the wide-angle corresponding pixel output units and the narrow-angle corresponding pixel output units.
<26> An imaging method for an imaging apparatus, including a step of imaging by an imaging element that has a plurality of pixel output units which receive incident light without passing through either an imaging lens or a pinhole, in which, among the plurality of pixel output units, the output pixel values of at least two of the pixel output units have characteristics with respect to the incident angle of incident light from a subject that differ from each other.
<27> An imaging method for an imaging element, including a step of imaging by an imaging element that has a plurality of pixel output units which receive incident light without passing through either an imaging lens or a pinhole, in which, among the plurality of pixel output units, the output pixel values of at least two of the pixel output units have characteristics with respect to the incident angle of incident light from a subject that differ from each other.
<28> An image processing method including a step of restoring a restored image in which a subject can be visually recognized, using a detection image composed of a plurality of detection signals each output from the plurality of pixel output units of an imaging element that has a plurality of pixel output units which receive incident light without passing through either an imaging lens or a pinhole and in which, among the plurality of pixel output units, the output pixel values of at least two of the pixel output units differ from each other in incident angle directivity indicating directivity with respect to the incident angle of incident light from the subject.
Claims (25)
- An imaging apparatus including an imaging element that has a plurality of pixel output units which receive incident light without passing through either an imaging lens or a pinhole, in which, among the plurality of pixel output units, at least two of the pixel output units have characteristics with respect to the incident angle of incident light from a subject that differ from each other.
- The imaging apparatus according to claim 1, in which the characteristic is an incident angle directivity indicating directivity with respect to the incident angle of incident light from the subject.
- The imaging apparatus according to claim 2, in which one detection signal is output from each of the plurality of pixel output units.
- The imaging apparatus according to claim 3, further including an image restoration unit that restores a restored image using a detection image composed of a plurality of detection signals output from the plurality of pixel output units.
- The imaging apparatus according to claim 4, in which the image restoration unit restores the restored image by selectively using the detection signals of some of the plurality of pixel output units.
- The imaging apparatus according to claim 4, in which the image restoration unit selectively executes a restoration process of restoring the restored image using the detection signals of some of the plurality of pixel output units and a restoration process of restoring the restored image using the detection signals of all of the plurality of pixel output units.
- The imaging apparatus according to claim 4, in which the plurality of pixel output units include wide-angle corresponding pixel output units having the incident angle directivity suitable for a wide-angle image and narrow-angle corresponding pixel output units narrower than the wide-angle corresponding pixel output units, and the image restoration unit restores the restored image by selectively using the wide-angle corresponding pixel output units and the narrow-angle corresponding pixel output units.
- The imaging apparatus according to claim 2, which does not include a condensing function for causing diffused light with different chief ray incident angles from the subject to enter a plurality of adjacent pixel output units.
- The imaging apparatus according to claim 1, in which each of the plurality of pixel output units has a configuration capable of independently setting the characteristic with respect to the incident angle of incident light from the subject.
- An imaging element having a plurality of pixel output units which receive incident light without passing through either an imaging lens or a pinhole, in which, among the plurality of pixel output units, the output pixel values of at least two of the pixel output units have characteristics with respect to the incident angle of incident light from a subject that differ from each other.
- The imaging element according to claim 10, in which, among the plurality of pixel output units, at least two of the pixel output units differ from each other in incident angle directivity indicating directivity with respect to the incident angle of incident light from the subject.
- The imaging element according to claim 11, in which each of the plurality of pixel output units is composed of one photodiode, and one detection signal is output from each of the plurality of pixel output units.
- The imaging element according to claim 12, in which each of the at least two pixel output units includes a light shielding film that blocks incidence on the photodiode of subject light that is incident light from the subject, and the ranges in which incidence of the subject light on the two pixel output units is blocked by the light shielding films differ between the at least two pixel output units.
- The imaging element according to claim 11, in which each of the plurality of pixel output units is composed of a plurality of photodiodes, and one detection signal is output from each of the plurality of pixel output units.
- The imaging element according to claim 14, in which the at least two pixel output units differ from each other in the photodiodes contributing to the detection signal among the plurality of photodiodes.
- The imaging element according to claim 11, in which the plurality of pixel output units include wide-angle corresponding pixel output units having an incident angle directivity suitable for a wide-angle image and narrow-angle corresponding pixel output units having an incident angle directivity suitable for an image with a narrower angle than the wide-angle corresponding pixel output units.
- The imaging element according to claim 11, including a plurality of on-chip lenses respectively corresponding to the plurality of pixel output units.
- The imaging element according to claim 17, in which the incident angle directivity has a characteristic according to the curvature of the on-chip lens.
- The imaging element according to claim 18, in which the incident angle directivity has a characteristic according to a light-shielded region.
- The imaging element according to claim 18, in which the curvature of at least some of the plurality of on-chip lenses differs from the curvature of the other on-chip lenses.
- The imaging element according to claim 10, in which each of the plurality of pixel output units has a configuration capable of independently setting the characteristic with respect to the incident angle of incident light from the subject.
- An image processing apparatus including an image restoration unit that restores a restored image in which a subject can be visually recognized, using a detection image composed of a plurality of detection signals each output from the plurality of pixel output units of an imaging element that has a plurality of pixel output units which receive incident light without passing through either an imaging lens or a pinhole and in which, among the plurality of pixel output units, the output pixel values of at least two of the pixel output units differ from each other in incident angle directivity indicating directivity with respect to the incident angle of incident light from the subject.
- The image processing apparatus according to claim 22, in which the image restoration unit restores the restored image by selectively using the detection signals of some of the plurality of pixel output units.
- The image processing apparatus according to claim 22, in which the image restoration unit selectively executes a restoration process of restoring the restored image using the detection signals of some of the plurality of pixel output units and a restoration process of restoring the restored image using the detection signals of all of the plurality of pixel output units.
- The image processing apparatus according to claim 22, in which the plurality of pixel output units include wide-angle corresponding pixel output units having the incident angle directivity suitable for a wide-angle image and narrow-angle corresponding pixel output units narrower than the wide-angle corresponding pixel output units, and the image restoration unit restores the restored image by selectively using the wide-angle corresponding pixel output units and the narrow-angle corresponding pixel output units.
Priority Applications (7)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21216849.6A EP4016992A3 (en) | 2016-07-13 | 2017-07-11 | Imaging apparatus, imaging device, and image processing apparatus |
JP2018527611A JP6958554B2 (ja) | 2016-07-13 | 2017-07-11 | 撮像装置、撮像素子、および画像処理装置 |
EP17827619.2A EP3487165B1 (en) | 2016-07-13 | 2017-07-11 | Imaging device and imaging element |
CN202110759418.0A CN113489904B (zh) | 2016-07-13 | 2017-07-11 | 成像装置、成像器件和图像处理装置 |
KR1020197000665A KR102353634B1 (ko) | 2016-07-13 | 2017-07-11 | 촬상 장치, 촬상 소자, 및 화상 처리 장치 |
US16/315,470 US20190215473A1 (en) | 2016-07-13 | 2017-07-11 | Imaging apparatus, imaging device, and image processing apparatus |
CN201780041772.1A CN109479101B (zh) | 2016-07-13 | 2017-07-11 | 成像装置、成像器件和图像处理装置 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016138363 | 2016-07-13 | ||
JP2016-138363 | 2016-07-13 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018012492A1 true WO2018012492A1 (ja) | 2018-01-18 |
Family
ID=60953061
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2017/025255 WO2018012492A1 (ja) | 2016-07-13 | 2017-07-11 | 撮像装置、撮像素子、および画像処理装置 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20190215473A1 (ja) |
EP (2) | EP4016992A3 (ja) |
JP (2) | JP6958554B2 (ja) |
KR (1) | KR102353634B1 (ja) |
CN (3) | CN109479101B (ja) |
WO (1) | WO2018012492A1 (ja) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019189099A1 (ja) | 2018-03-26 | 2019-10-03 | ソニー株式会社 | 撮像素子、撮像装置、並びに、情報処理方法 |
WO2019188396A1 (ja) * | 2018-03-30 | 2019-10-03 | ソニー株式会社 | 撮像装置および方法、画像処理装置および方法、並びに、撮像素子 |
WO2020065442A1 (ja) * | 2018-09-28 | 2020-04-02 | 株式会社半導体エネルギー研究所 | 画像処理方法、プログラム、及び撮像装置 |
WO2020218074A1 (ja) * | 2019-04-26 | 2020-10-29 | ソニー株式会社 | 撮像システム及び撮像素子 |
EP3700186A4 (en) * | 2017-10-19 | 2020-12-02 | Sony Corporation | IMAGING DEVICE AND METHOD, AND IMAGE PROCESSING METHOD AND DEVICE |
WO2020246250A1 (ja) | 2019-06-04 | 2020-12-10 | ソニー株式会社 | 撮像素子、信号処理装置、信号処理方法、プログラム、及び、撮像装置 |
CN112106343A (zh) * | 2018-03-29 | 2020-12-18 | 索尼公司 | 信息处理装置、信息处理方法、程序和信息处理系统 |
WO2021020156A1 (ja) * | 2019-07-31 | 2021-02-04 | ソニー株式会社 | 撮像素子、撮像装置、信号処理装置、及び、信号処理方法 |
WO2021215201A1 (ja) * | 2020-04-22 | 2021-10-28 | ソニーセミコンダクタソリューションズ株式会社 | 電子機器 |
US20220377275A1 (en) * | 2019-10-30 | 2022-11-24 | Sony Group Corporation | Imaging device, display device, and imaging system |
US11659289B2 (en) | 2017-10-19 | 2023-05-23 | Sony Corporation | Imaging apparatus and method, and image processing apparatus and method |
US11770630B2 (en) | 2021-02-04 | 2023-09-26 | Canon Kabushiki Kaisha | Photoelectric conversion apparatus, photoelectric conversion system, and mobile body |
DE112021004867T5 (de) | 2020-09-15 | 2023-09-28 | Sony Semiconductor Solutions Corporation | Festkörperbildgebungsvorrichtung und elektronische einrichtung |
US12026972B2 (en) | 2020-04-22 | 2024-07-02 | Sony Semiconductor Solutions Corporation | Electronic device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013201466A (ja) * | 2010-06-30 | 2013-10-03 | Fujifilm Corp | 立体画像撮像装置 |
WO2014049941A1 (ja) * | 2012-09-28 | 2014-04-03 | パナソニック株式会社 | 固体撮像装置及び撮像装置 |
JP2015015295A (ja) * | 2013-07-03 | 2015-01-22 | ソニー株式会社 | 固体撮像装置およびその製造方法、並びに電子機器 |
JP2016092413A (ja) * | 2014-10-29 | 2016-05-23 | 株式会社半導体エネルギー研究所 | 撮像装置および電子機器 |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5256796B2 (ja) * | 2007-05-14 | 2013-08-07 | Seiko Epson Corporation | Line head and image forming apparatus using the same
WO2010044943A2 (en) * | 2008-07-25 | 2010-04-22 | Cornell University | Light field image sensor, method and applications
JP5262823B2 (ja) * | 2009-02-23 | 2013-08-14 | Sony Corporation | Solid-state imaging device and electronic apparatus
JP2011176715A (ja) * | 2010-02-25 | 2011-09-08 | Nikon Corp | Back-illuminated imaging element and imaging device
EP2601785A4 (en) * | 2010-08-03 | 2017-12-13 | Cornell University | Angle sensitive pixel (ASP)-based image processing system, method, and applications
US9532033B2 (en) * | 2010-11-29 | 2016-12-27 | Nikon Corporation | Image sensor and imaging device
JP5979134B2 (ja) * | 2011-04-14 | 2016-08-24 | Nikon Corporation | Image processing device and image processing program
JP6335423B2 (ja) * | 2012-08-31 | 2018-05-30 | Canon Inc. | Information processing apparatus and information processing method
JP2014154662A (ja) * | 2013-02-07 | 2014-08-25 | Sony Corp | Solid-state imaging element, electronic apparatus, and manufacturing method
JP2014165785A (ja) * | 2013-02-27 | 2014-09-08 | Canon Inc | Imaging apparatus, control method therefor, camera system, program, and storage medium
EP2965134A1 (en) * | 2013-03-05 | 2016-01-13 | Rambus Inc. | Phase gratings with odd symmetry for high-resolution lensless optical sensing
TWI620445B (zh) * | 2013-03-25 | 2018-04-01 | Sony Corp | Imaging element and electronic apparatus
US9746593B2 (en) * | 2013-08-28 | 2017-08-29 | Rambus Inc. | Patchwork Fresnel zone plates for lensless imaging
JP6368999B2 (ja) * | 2013-09-26 | 2018-08-08 | Nikon Corporation | Imaging element and imaging device
JP2015076475A (ja) * | 2013-10-08 | 2015-04-20 | Sony Corporation | Solid-state imaging device, method of manufacturing the same, and electronic apparatus
JP6219176B2 (ja) * | 2014-01-17 | 2017-10-25 | Fujifilm Corporation | Imaging lens and imaging device
WO2015127043A1 (en) * | 2014-02-24 | 2015-08-27 | Rambus Inc. | Optical flow sensing and pattern recognition with antisymmetric phase gratings
KR20160008364A (ko) * | 2014-07-14 | 2016-01-22 | Samsung Electronics Co., Ltd. | Image sensor and image capturing apparatus including the image sensor
WO2016123529A1 (en) | 2015-01-29 | 2016-08-04 | William Marsh Rice University | Lensless imaging system using an image sensor with one or more attenuating layers
-
2017
- 2017-07-11 JP JP2018527611A patent/JP6958554B2/ja active Active
- 2017-07-11 EP EP21216849.6A patent/EP4016992A3/en active Pending
- 2017-07-11 EP EP17827619.2A patent/EP3487165B1/en active Active
- 2017-07-11 US US16/315,470 patent/US20190215473A1/en not_active Abandoned
- 2017-07-11 CN CN201780041772.1A patent/CN109479101B/zh active Active
- 2017-07-11 KR KR1020197000665A patent/KR102353634B1/ko active IP Right Grant
- 2017-07-11 CN CN202110759418.0A patent/CN113489904B/zh active Active
- 2017-07-11 CN CN202110759288.0A patent/CN113489926A/zh active Pending
- 2017-07-11 WO PCT/JP2017/025255 patent/WO2018012492A1/ja unknown
-
2021
- 2021-10-07 JP JP2021165269A patent/JP7290159B2/ja active Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013201466A (ja) * | 2010-06-30 | 2013-10-03 | Fujifilm Corp | Stereoscopic imaging device
WO2014049941A1 (ja) * | 2012-09-28 | 2014-04-03 | Panasonic Corporation | Solid-state imaging device and imaging device
JP2015015295A (ja) * | 2013-07-03 | 2015-01-22 | Sony Corporation | Solid-state imaging device, method of manufacturing the same, and electronic apparatus
JP2016092413A (ja) * | 2014-10-29 | 2016-05-23 | Semiconductor Energy Laboratory Co., Ltd. | Imaging device and electronic device
Non-Patent Citations (2)
Title |
---|
ASIF, M. SALMAN ET AL.: "FlatCam: Replacing Lenses with Masks and Computation", IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOP (ICCVW), 7 December 2015 (2015-12-07) - 13 December 2015 (2015-12-13), pages 663 - 666, XP032865019, DOI: 10.1109/ICCVW.2015.89 * |
See also references of EP3487165A4 * |
Cited By (34)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11659289B2 (en) | 2017-10-19 | 2023-05-23 | Sony Corporation | Imaging apparatus and method, and image processing apparatus and method
US11336799B2 (en) | 2017-10-19 | 2022-05-17 | Sony Corporation | Imaging device including pixels having individually settable incident angle directivity
EP3700186A4 (en) * | 2017-10-19 | 2020-12-02 | Sony Corporation | IMAGING DEVICE AND METHOD, AND IMAGE PROCESSING METHOD AND DEVICE
EP3780580A4 (en) * | 2018-03-26 | 2021-04-07 | Sony Corporation | IMAGE CAPTURING ELEMENT, IMAGE CAPTURING DEVICE AND INFORMATION PROCESSING METHOD
JP7375746B2 (ja) | 2018-03-26 | 2023-11-08 | Sony Group Corporation | Imaging element, imaging device, and information processing method
WO2019189099A1 (ja) | 2018-03-26 | 2019-10-03 | Sony Corporation | Imaging element, imaging device, and information processing method
CN111886856A (zh) * | 2018-03-26 | 2020-11-03 | Sony Corporation | Imaging element, imaging device, and information processing method
US11252361B2 (en) | 2018-03-26 | 2022-02-15 | Sony Group Corporation | Imaging element, imaging device, and information processing method with image restoration pixels and unidirectional pixel
JPWO2019189099A1 (ja) * | 2018-03-26 | 2021-04-22 | Sony Corporation | Imaging element, imaging device, and information processing method
JP7238887B2 (ja) | 2018-03-29 | 2023-03-14 | Sony Group Corporation | Information processing device, program, and information processing system
EP3780576A4 (en) * | 2018-03-29 | 2021-04-21 | Sony Corporation | INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, PROGRAM AND INFORMATION PROCESSING SYSTEM
CN112106343A (zh) * | 2018-03-29 | 2020-12-18 | Sony Corporation | Information processing apparatus, information processing method, program, and information processing system
JPWO2019188536A1 (ja) * | 2018-03-29 | 2021-04-01 | Sony Corporation | Information processing device, information processing method, program, and information processing system
US11470247B2 (en) | 2018-03-29 | 2022-10-11 | Sony Corporation | Information processing device, information processing method, program, and information processing system
CN111989916A (zh) * | 2018-03-30 | 2020-11-24 | Sony Corporation | Imaging device and method, image processing device and method, and imaging element
EP3780594A4 (en) * | 2018-03-30 | 2021-04-07 | Sony Corporation | IMAGING DEVICE AND METHOD, IMAGE PROCESSING DEVICE AND METHOD, AND IMAGING ELEMENT
WO2019188396A1 (ja) * | 2018-03-30 | 2019-10-03 | Sony Corporation | Imaging device and method, image processing device and method, and imaging element
JPWO2019188396A1 (ja) * | 2018-03-30 | 2021-04-15 | Sony Corporation | Imaging device and method, image processing device and method, and imaging element
US11159741B2 (en) | 2018-03-30 | 2021-10-26 | Sony Group Corporation | Imaging device and method, image processing device and method, and imaging element
JP7147841B2 (ja) | 2018-03-30 | 2022-10-05 | Sony Group Corporation | Imaging device and method, image processing device and method, and imaging element
JPWO2020065442A1 (ja) * | 2018-09-28 | 2021-10-07 | Semiconductor Energy Laboratory Co., Ltd. | Image processing method, program, and imaging device
US11631708B2 (en) | 2018-09-28 | 2023-04-18 | Semiconductor Energy Laboratory Co., Ltd. | Image processing method, program, and imaging device
JP7395490B2 (ja) | 2018-09-28 | 2023-12-11 | Semiconductor Energy Laboratory Co., Ltd. | Image processing method, program, and imaging device
WO2020065442A1 (ja) * | 2018-09-28 | 2020-04-02 | Semiconductor Energy Laboratory Co., Ltd. | Image processing method, program, and imaging device
WO2020218074A1 (ja) * | 2019-04-26 | 2020-10-29 | Sony Corporation | Imaging system and imaging element
WO2020246250A1 (ja) | 2019-06-04 | 2020-12-10 | Sony Corporation | Imaging element, signal processing device, signal processing method, program, and imaging apparatus
US11889199B2 (en) | 2019-06-04 | 2024-01-30 | Sony Group Corporation | Imaging device, signal processing device, signal processing method, program, and imaging apparatus
JP7484904B2 (ja) | 2019-06-04 | 2024-05-16 | Sony Group Corporation | Imaging element, signal processing device, signal processing method, program, and imaging apparatus
WO2021020156A1 (ja) * | 2019-07-31 | 2021-02-04 | Sony Corporation | Imaging element, imaging device, signal processing device, and signal processing method
US20220377275A1 (en) * | 2019-10-30 | 2022-11-24 | Sony Group Corporation | Imaging device, display device, and imaging system
WO2021215201A1 (ja) * | 2020-04-22 | 2021-10-28 | Sony Semiconductor Solutions Corporation | Electronic device
US12026972B2 (en) | 2020-04-22 | 2024-07-02 | Sony Semiconductor Solutions Corporation | Electronic device
DE112021004867T5 (de) | 2020-09-15 | 2023-09-28 | Sony Semiconductor Solutions Corporation | Solid-state imaging device and electronic apparatus
US11770630B2 (en) | 2021-02-04 | 2023-09-26 | Canon Kabushiki Kaisha | Photoelectric conversion apparatus, photoelectric conversion system, and mobile body
Also Published As
Publication number | Publication date |
---|---|
CN113489926A (zh) | 2021-10-08 |
EP3487165A4 (en) | 2019-06-19 |
CN109479101A (zh) | 2019-03-15 |
JP2022000994A (ja) | 2022-01-04 |
JP7290159B2 (ja) | 2023-06-13 |
JP6958554B2 (ja) | 2021-11-02 |
KR20190022619A (ko) | 2019-03-06 |
CN113489904A (zh) | 2021-10-08 |
EP3487165B1 (en) | 2022-03-16 |
CN109479101B (zh) | 2021-08-03 |
US20190215473A1 (en) | 2019-07-11 |
EP4016992A3 (en) | 2022-08-24 |
EP3487165A1 (en) | 2019-05-22 |
KR102353634B1 (ko) | 2022-01-21 |
JPWO2018012492A1 (ja) | 2019-04-25 |
CN113489904B (zh) | 2023-07-21 |
EP4016992A2 (en) | 2022-06-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP7290159B2 (ja) | Imaging device and imaging method, imaging element, image processing device and image processing method, and program | |
US11659289B2 (en) | Imaging apparatus and method, and image processing apparatus and method | |
EP3700197B1 (en) | Imaging device and method, and image processing device and method | |
CN111201782B (zh) | Imaging device, image processing apparatus, image processing method, and recording medium | |
JPWO2019078333A1 (ja) | Imaging device, exposure control method, program, and imaging element | |
EP3700187B1 (en) | Signal processing device and imaging device | |
JP7375746B2 (ja) | Imaging element, imaging device, and information processing method | |
WO2019188396A1 (ja) | Imaging device and method, image processing device and method, and imaging element
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2018527611; Country of ref document: JP; Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17827619; Country of ref document: EP; Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20197000665; Country of ref document: KR; Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2017827619; Country of ref document: EP; Effective date: 20190213 |