WO2023021871A1 - 撮像装置、センサ及び撮像制御装置 (Imaging device, sensor, and imaging control device) - Google Patents
- Publication number
- WO2023021871A1 (PCT/JP2022/026817)
- Authority
- WO
- WIPO (PCT)
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/40—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
- H04N25/46—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by combining or binning pixels
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B15/00—Special procedures for taking photographs; Apparatus therefor
-
- H—ELECTRICITY
- H01—ELECTRIC ELEMENTS
- H01L—SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
- H01L27/00—Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
- H01L27/14—Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
- H01L27/144—Devices controlled by radiation
- H01L27/146—Imager structures
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/843—Demosaicing, e.g. interpolating colour pixel values
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/40—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
- H04N25/42—Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by switching between different modes of operation using different resolutions or aspect ratios, e.g. switching between interlaced and non-interlaced mode
Definitions
- the present disclosure relates to imaging devices, sensors, and imaging control devices.
- Devices that generate an image of a subject using an imaging element in which photoelectric conversion elements for photoelectrically converting incident light are arranged in a two-dimensional matrix are in use (see, for example, Patent Document 1).
- the conventional technology described above has the problem that the resolution of the image of the subject cannot be adjusted, which reduces convenience.
- this disclosure proposes an imaging device that adjusts the resolution of the image of the subject.
- An imaging device includes an imaging element, a resolution selection section, and an image signal addition section.
- the imaging element includes pixel blocks arranged in a two-dimensional matrix, each comprising a plurality of pixels that photoelectrically convert incident light from a subject to generate image signals, and an on-chip lens that is shared by those pixels and converges the incident light onto them.
- the resolution selection unit selects one of a first resolution corresponding to the size of a pixel, a second resolution corresponding to the size of a pixel block, and a third resolution corresponding to the size of a pixel block unit composed of a plurality of adjacent pixel blocks.
- the image signal adder adds the generated image signals according to the selected resolution to generate a second image signal.
- FIG. 1 is a diagram illustrating a configuration example of an imaging device according to a first embodiment of the present disclosure.
- FIG. 2 is a diagram showing a configuration example of an imaging element according to an embodiment of the present disclosure.
- FIG. 3 is a diagram showing a configuration example of a pixel according to the first embodiment of the present disclosure
- FIG. 4 is a cross-sectional view showing a configuration example of a pixel according to the embodiment of the present disclosure
- FIG. 5 is a diagram illustrating a configuration example of a signal processing unit according to the first embodiment of the present disclosure.
- FIGS. 6A to 6C are diagrams illustrating examples of remosaic processing according to an embodiment of the present disclosure.
- Diagrams illustrating further configuration examples and remosaic processing examples according to embodiments of the present disclosure.
- A diagram showing an example of an image generated by an imaging device according to an embodiment of the present disclosure.
- A diagram showing another example of an image generated by an imaging device according to an embodiment of the present disclosure.
- A diagram showing an example of imaging processing according to the first embodiment of the present disclosure.
- A diagram showing a configuration example of an imaging device according to a second embodiment of the present disclosure.
- A diagram illustrating an example of imaging processing according to the second embodiment of the present disclosure.
- A diagram showing a configuration example of an imaging device according to a third embodiment of the present disclosure.
- FIG. 11 is a diagram illustrating a configuration example of a signal processing unit according to a third embodiment of the present disclosure.
- Diagrams showing configuration examples of phase difference pixels according to the third embodiment of the present disclosure, and a diagram showing a configuration example of a conventional imaging element.
- A diagram illustrating a configuration example of an imaging device according to a fifth embodiment of the present disclosure.
- A diagram illustrating an example of a processing procedure of imaging processing according to the fifth embodiment of the present disclosure.
- A diagram illustrating a configuration example of an imaging device according to a sixth embodiment of the present disclosure.
- A diagram illustrating an example of a processing procedure of imaging processing according to the sixth embodiment of the present disclosure.
- A diagram showing an example of an image generated by an imaging device according to a seventh embodiment of the present disclosure.
- A diagram showing a configuration example of a pixel according to a first modified example of the embodiment of the present disclosure.
- A diagram illustrating a configuration example of a pixel according to a second modified example of the embodiment of the present disclosure.
- FIG. 2 is a diagram showing an example of a circuit configuration of a pixel block according to an embodiment of the present disclosure
- FIG. 3 is a diagram illustrating a configuration example of a pixel block according to an embodiment of the present disclosure
- FIG. 1 is a diagram illustrating a configuration example of an imaging device according to the first embodiment of the present disclosure.
- FIG. 1 is a block diagram showing a configuration example of the imaging device 1.
- the imaging device 1 is a device that generates an image signal forming an image of a subject.
- the imaging device 1 includes an imaging element 10, an image signal addition section 20, a signal processing section 30, an imaging control section 40, a target area detection section 50, a storage section 60, and an imaging lens 70.
- the image pickup device 10 takes an image of a subject and generates an image signal.
- the imaging device 10 includes a pixel array section (pixel array section 11 described later) in which pixels (pixels 100 described later) that perform photoelectric conversion of incident light from a subject are arranged in a two-dimensional matrix.
- the image pickup device 10 in FIG. 1 generates an image signal based on a control signal from the image pickup control section 40 .
- the generated image signal is output to the image signal adder 20 . Details of the configuration of the imaging device 10 will be described later.
- the imaging control unit 40 controls imaging by the imaging device 10 .
- the imaging control unit 40 controls the imaging element 10 by outputting control signals.
- the image capturing control unit 40 controls the image sensor 10 to capture a still image or a moving image.
- the imaging control unit 40 also performs processing for selecting the resolution of the image. As will be described later, the imaging control unit 40 can select a first resolution, a second resolution, and a third resolution.
- the first resolution is a resolution equal to the density of the plurality of pixels 100 arranged on the image sensor 10 .
- the second resolution is 1 ⁇ 4 of the first resolution.
- the third resolution is 1 ⁇ 4 of the second resolution.
- These selected resolutions are output to the image signal adder 20 .
- the imaging control unit 40 can select a resolution according to a target area, which is an area of an image including an object to be imaged among subjects. As this target area, a target area input from a target area detection unit 50, which will be described later, can be used. On the other hand, the imaging control section 40 outputs an image signal to the target area detection section 50 .
- imaging modes in which the first resolution, second resolution, and third resolution are selected are referred to as first imaging mode, second imaging mode, and third imaging mode, respectively.
- the details of the resolution selection by the imaging control unit 40 will be described later.
- the imaging control unit 40 is an example of the resolution selection unit described in the claims.
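The relationship between the three imaging modes and their output resolutions described above can be summarized in a short sketch. The mode constants, the function name, and the assumption of 2x2 pixel blocks and 2x2 block units are illustrative only, not taken from the claims:

```python
# Illustrative sketch of resolution selection: one output sample per pixel,
# per pixel block (4 pixels), or per pixel block unit (16 pixels).
FIRST, SECOND, THIRD = 1, 2, 3  # hypothetical imaging-mode identifiers

def selected_sample_count(mode, pixel_count):
    """Return the number of output samples for a frame of `pixel_count` pixels."""
    if mode == FIRST:     # one image signal per pixel (maximum resolution)
        return pixel_count
    if mode == SECOND:    # one signal per 2x2 pixel block -> 1/4 of the first
        return pixel_count // 4
    if mode == THIRD:     # one signal per 2x2 group of blocks -> 1/16 of the first
        return pixel_count // 16
    raise ValueError("unknown imaging mode")

print(selected_sample_count(SECOND, 1024))  # 256
```

A 1024-pixel region thus yields 1024, 256, or 64 samples depending on the selected mode.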
- the image signal addition unit 20 adds image signals according to the resolution selected by the imaging control unit 40 to generate a new image signal (second image signal).
- the image signal adder 20 adds image signals generated by four adjacent pixels 100 to generate an image signal corresponding to the second resolution. Further, the image signal addition unit 20 adds the image signals generated by the 16 adjacent pixels 100 to generate an image signal corresponding to the third resolution. In order to prevent overflow, the image signal adder 20 can adjust the number of digits during addition.
- the image signal adder 20 outputs the generated image signal to the signal processor 30 . The details of addition of image signals in the image signal addition section 20 will be described later.
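The addition performed by the image signal addition unit 20 amounts to summing each 2x2 (second resolution) or 4x4 (third resolution) neighbourhood of pixel values; the optional right shift stands in for the "digit adjustment" mentioned above for preventing overflow. A minimal sketch, with all names hypothetical:

```python
# Hedged sketch of image-signal addition (binning). A frame is modeled as a
# 2-D list of raw pixel values; binning sums each n x n neighbourhood.
def bin_signals(frame, n, shift=0):
    rows, cols = len(frame), len(frame[0])
    out = []
    for r in range(0, rows, n):
        out_row = []
        for c in range(0, cols, n):
            s = sum(frame[r + i][c + j] for i in range(n) for j in range(n))
            out_row.append(s >> shift)  # scale down to stay within the output word
        out.append(out_row)
    return out

frame = [[10, 10, 20, 20],
         [10, 10, 20, 20],
         [30, 30, 40, 40],
         [30, 30, 40, 40]]
# Second mode: 2x2 addition (one value per pixel block)
print(bin_signals(frame, 2))           # [[40, 80], [120, 160]]
# Third mode: 4x4 addition (one value per pixel block unit)
print(bin_signals(frame, 4, shift=2))  # [[100]]
```

Note how the four pixels of each block are reduced to one summed signal, and the sixteen pixels of a block unit to one.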
- the signal processing section 30 processes the image signal output from the image signal adding section 20 .
- the details of the configuration of the signal processing unit 30 will be described later.
- the target area detection section 50 detects the target area from the image based on the image signal input from the imaging control section 40 . As described above, this target area is the area that contains the object to be imaged. For example, a specific person corresponds to this object.
- the target area detection unit 50 searches the input image to determine whether or not the target object is included. Then, when an object is included in the image, the target area detection unit 50 detects the area including the target object as the target area and outputs it to the imaging control unit 40 .
- the storage unit 60 holds the data of the target object. For example, the object input by the user of the imaging device 1 can be applied to this object.
- the storage unit 60 stores data for identifying the target object and information of the target object such as the minimum size when the target object is tracked.
- the storage unit 60 also outputs object data, which is information about the object, to the target area detection unit 50 based on the control signal from the target area detection unit 50 .
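The detection flow described above (search the image for the registered object, and report the containing area when it is found) can be sketched as follows. The object is simplified here to a literal pixel pattern, and all names are hypothetical; a real detector would use feature matching or a learned model:

```python
# Hedged sketch of target-area detection: scan the image for a registered
# pattern and return the bounding area of the first match, or None.
def detect_target_area(image, pattern):
    """Return (row, col, height, width) of the first match, or None."""
    ph, pw = len(pattern), len(pattern[0])
    rows, cols = len(image), len(image[0])
    for r in range(rows - ph + 1):
        for c in range(cols - pw + 1):
            if all(image[r + i][c + j] == pattern[i][j]
                   for i in range(ph) for j in range(pw)):
                return (r, c, ph, pw)  # target area containing the object
    return None  # object not present in this image

image = [[0, 0, 0, 0],
         [0, 5, 6, 0],
         [0, 7, 8, 0],
         [0, 0, 0, 0]]
print(detect_target_area(image, [[5, 6], [7, 8]]))  # (1, 1, 2, 2)
```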
- detection of the target area by the target area detection unit 50 is not limited to this example.
- For example, AI (artificial intelligence) can also be used: a model trained in advance recognizes a person in the generated image, and the area including the recognized person is detected as the target area.
- the photographing lens 70 is a lens that forms an image of the subject on the surface of the image sensor 10 on which the pixels 100 are arranged.
- the imaging control unit 40, the target area detection unit 50, and the storage unit 60 constitute an imaging control device.
- the imaging device 10 and the image signal adding section 20 constitute a sensor.
- FIG. 2 is a diagram illustrating a configuration example of an imaging element according to an embodiment of the present disclosure
- This figure is a block diagram showing a configuration example of the imaging device 10.
- the imaging device 10 is a semiconductor device that generates image data of a subject.
- the imaging device 10 includes a pixel array section 11 , a vertical driving section 12 , a column signal processing section 13 and a control section 14 .
- the pixel array section 11 is configured by arranging a plurality of pixels 100 .
- a pixel array section 11 in the figure represents an example in which a plurality of pixels 100 are arranged in a two-dimensional matrix.
- the pixel 100 includes a photoelectric conversion unit that photoelectrically converts incident light, and generates an image signal of a subject based on the irradiated incident light.
- a photodiode for example, can be used for this photoelectric conversion unit.
- Signal lines 15 and 16 are wired to each pixel 100 .
- the pixels 100 are controlled by control signals transmitted through the signal lines 15 to generate image signals, and output the generated image signals through the signal lines 16 .
- the signal line 15 is arranged for each row in a two-dimensional matrix and is commonly wired to the plurality of pixels 100 arranged in one row.
- the signal line 16 is arranged for each column in a two-dimensional matrix and is commonly wired to the plurality of pixels 100 arranged in one column.
- the vertical driving section 12 generates control signals for the pixels 100 described above.
- a vertical drive unit 12 in the figure generates a control signal for each row of the two-dimensional matrix of the pixel array unit 11 and sequentially outputs the control signal via a signal line 15 .
- the column signal processing unit 13 processes image signals generated by the pixels 100 .
- a column signal processing unit 13 shown in the figure simultaneously processes image signals from a plurality of pixels 100 arranged in one row of the pixel array unit 11 and transmitted via a signal line 16 .
- this processing can include, for example, analog-to-digital conversion, which converts the analog image signals generated by the pixels 100 into digital image signals, and correlated double sampling (CDS), which removes offset errors from the image signals.
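The correlated double sampling mentioned above can be illustrated with a toy example: each pixel is read twice, once at its reset level and once at its signal level, and subtracting the two cancels the per-pixel offset. A hedged sketch with illustrative names and values:

```python
# Toy sketch of correlated double sampling (CDS): the reset-level sample
# carries only the per-column offset, the signal-level sample carries
# offset + photo signal, and their difference removes the offset error.
def cds(reset_levels, signal_levels):
    """Subtract the reset (offset) sample from the signal sample, per column."""
    return [sig - rst for rst, sig in zip(reset_levels, signal_levels)]

resets  = [102, 98, 105]     # reset-level samples (offset only)
signals = [302, 298, 405]    # signal-level samples (offset + photo signal)
print(cds(resets, signals))  # [200, 200, 300] -- offsets cancelled
```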
- the processed image signal is output to a circuit or the like outside the imaging device 10 .
- the control unit 14 controls the vertical driving unit 12 and the column signal processing unit 13.
- a control unit 14 in the figure generates control signals for controlling the vertical driving unit 12 and the column signal processing unit 13 based on a clock input from an external circuit or the like and data instructing an operation mode.
- the control section 14 outputs control signals through the signal lines 17 and 18 to control the vertical driving section 12 and the column signal processing section 13 .
- FIG. 3 is a diagram illustrating a configuration example of a pixel according to the first embodiment of the present disclosure;
- This figure is a plan view showing a configuration example of the pixel 100 .
- the pixels 100 are arranged in the pixel array section 11 .
- This figure shows an example in which pixels 100 are arranged in a two-dimensional matrix.
- "R", "G", and "B" attached to the pixels 100 in FIG. 'R', 'G' and 'B' in the figure represent the arrangement of the color filters 150 corresponding to red light, green light and blue light, respectively.
- the white rectangles in the figure represent the pixels 100 in which the color filters 150 corresponding to green light are arranged.
- the rectangles hatched with oblique lines in the figure represent the pixels 100 in which the color filters 150 corresponding to red light or blue light are arranged.
- an on-chip lens 170 is arranged in the pixel 100 .
- the on-chip lens 170 converges incident light onto the photoelectric conversion units (photoelectric conversion units 101 ) of the pixels 100 , and is arranged commonly to the plurality of pixels 100 .
- the on-chip lens 170 in the figure represents an example in which the on-chip lens 170 is commonly arranged for four pixels 100 arranged in two rows and two columns.
- the on-chip lens 170 shown in the figure is arranged in common for the four pixels 100 in which the color filters 150 of the same color are arranged.
- the on-chip lens 170 and the plurality of pixels 100 sharing the on-chip lens 170 constitute a pixel block 200 .
- a plurality of adjacent pixel blocks 200 constitute a pixel block unit 300 .
- a pixel block unit 300 in the figure represents an example composed of four pixel blocks 200 arranged in two rows and two columns.
- color filters 150 of the same color are arranged in the pixel blocks 200 included in the pixel block unit 300 .
- This figure shows an example in which pixel block units 300 corresponding to red light, green light, and blue light are arranged in a Bayer arrangement. In this manner, the image sensor 10 has a plurality of pixel block units 300 arranged in a two-dimensional matrix.
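The filter layout described above, in which each 4x4-pixel block unit carries a single colour and the units themselves follow a Bayer pattern, can be sketched as a small generator. The helper name and the orientation of the Bayer quad are assumptions for illustration:

```python
# Sketch of the colour-filter-array layout: every pixel block unit covers
# 4x4 pixels of one colour, and the units are Bayer-arranged (R G / G B).
def unit_bayer(units_h, units_w):
    bayer = [["R", "G"], ["G", "B"]]  # colour per pixel block unit
    return [[bayer[(r // 4) % 2][(c // 4) % 2] for c in range(units_w * 4)]
            for r in range(units_h * 4)]

for row in unit_bayer(2, 2):
    print("".join(row))
# rows 0-3: RRRRGGGG
# rows 4-7: GGGGBBBB
```

Within each unit, every 2x2 pixel block shares one on-chip lens, so all four pixels under a lens see the same colour filter, as the text describes.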
- the on-chip lens 170 in the figure represents an example of a circular configuration in plan view. Note that the on-chip lens 170 can also be configured to have a rectangular shape in plan view.
- the first resolution mentioned above corresponds to the resolution according to the density of the plurality of pixels 100 in the figure.
- This first resolution is the maximum resolution of the imaging device 10 .
- the first resolution can be applied, for example, to capturing still images. So-called RAW data can be generated from the image signal to which the first resolution is applied.
- when the imaging control unit 40 selects the first resolution, the image signal addition unit 20 outputs the image signals output from the imaging element 10 to the signal processing unit 30 and the like without performing addition.
- the second resolution mentioned above corresponds to the resolution according to the density of the pixel block 200 in the figure. As shown in the figure, this second resolution is a quarter of the first resolution.
- the second resolution can be applied, for example, to imaging a moving image equivalent to 8K.
- when the imaging control unit 40 described above selects the second resolution, the image signal addition unit 20 adds the four image signals of the pixel block 200 and outputs the image signal after addition to the signal processing unit 30 and the like.
- the above-mentioned third resolution corresponds to the resolution according to the density of the pixel block unit 300 in the figure. As shown in the figure, the third resolution is 1/4 of the second resolution and 1/16 of the first resolution. The third resolution can be applied, for example, to shooting a moving image equivalent to 4K.
- when the third resolution is selected, the image signal addition unit 20 adds the 16 image signals included in the pixel block unit 300 and outputs the image signal after addition to the signal processing unit 30 and the like.
- An operation mode in which an image signal is generated for each pixel 100 in the sensor including the image sensor 10 and the image signal adder 20 is referred to as a first mode.
- An operation mode in which an image signal is generated for each pixel block 200 is called a second mode.
- An operation mode in which an image signal is generated for each pixel block unit 300 is called a third mode.
- In this way, the resolution can be changed simply by adding image signals, which simplifies the resolution change processing.
- FIG. 4 is a cross-sectional view showing a configuration example of a pixel according to an embodiment of the present disclosure.
- This figure is a cross-sectional view showing a configuration example of the pixel 100 .
- Pixels 100 in the figure are pixels 100 included in the pixel block 200 in which the on-chip lens 170 is commonly arranged.
- the pixel 100 includes a semiconductor substrate 120 , an insulating film 130 , a wiring region 140 , an isolation portion 135 , a protective film 136 and a color filter 150 .
- the semiconductor substrate 120 is a semiconductor substrate on which the diffusion layer of the semiconductor element of the pixel 100 is arranged.
- the semiconductor substrate 120 can be made of silicon (Si), for example.
- a semiconductor element or the like is arranged in a well region formed in the semiconductor substrate 120 .
- the semiconductor substrate 120 in the figure shows the photoelectric conversion unit 101 as an example.
- This photoelectric conversion unit 101 is configured by an n-type semiconductor region 121 .
- the photoelectric conversion unit 101 corresponds to a photodiode composed of a pn junction at the interface between the n-type semiconductor region 121 and the surrounding p-type well region.
- the insulating film 130 insulates the surface side of the semiconductor substrate 120 .
- a silicon oxide (SiO2) film, for example, can be applied to the insulating film 130.
- the wiring region 140 is a region that is arranged on the surface side of the semiconductor substrate 120 and in which the wiring of the elements is formed.
- This wiring region 140 includes a wiring 141 , a via plug 142 and an insulating layer 143 .
- the wiring 141 is a conductor that transmits a signal to elements or the like on the semiconductor substrate 120 .
- This wiring 141 can be made of, for example, a metal such as copper (Cu) or tungsten (W).
- the via plugs 142 connect the wirings 141 arranged in different layers.
- This via plug 142 can be made of, for example, a columnar metal.
- the insulating layer 143 insulates the wiring 141 and the like.
- This insulating layer 143 can be made of, for example, SiO 2 .
- the isolation part 135 is arranged at the boundary of the pixels 100 on the semiconductor substrate 120 to electrically and optically isolate the pixels 100 .
- the isolation part 135 can be composed of an insulator embedded in the semiconductor substrate 120 .
- the isolation part 135 can be formed, for example, by placing an insulator such as SiO2 in a groove penetrating the semiconductor substrate 120 formed at the boundary of the pixels 100.
- the protective film 136 is a film that protects the back side of the semiconductor substrate 120 .
- This protective film 136 can be composed of an insulator such as SiO 2 .
- the protective film 136 in the figure can be formed at the same time as the separating portion 135 .
- the color filter 150 is an optical filter that transmits incident light of a predetermined wavelength out of incident light.
- For example, color filters that transmit red light, green light, or blue light can be used.
- one color filter 150 corresponding to any one of red light, green light, and blue light is arranged in the pixel 100 .
- the pixels 100 generate image signals of incident light of wavelengths corresponding to the color filters 150 .
- the same kind of color filters 150 are arranged in the plurality of pixels 100 arranged in the pixel block 200 .
- the color filter 150 shown in the figure is arranged on the back side of the semiconductor substrate 120 .
- a light shielding film 159 is arranged in the area of the color filter 150 on the boundary of the pixel block 200 .
- the light shielding film 159 shields incident light.
- the light shielding film 159 makes it possible to block incident light entering obliquely from adjacent pixels 100. Since the light-shielding film 159 blocks incident light that has passed through the different color filters 150 of pixels 100 in adjacent pixel blocks 200, it can prevent color mixture and the resulting deterioration of image quality.
- the on-chip lens 170 is a lens that is commonly arranged for the plurality of pixels 100 forming the pixel block 200 as described above.
- the on-chip lens 170 in the figure is configured to have a hemispherical cross section and converges incident light onto the photoelectric conversion unit 101.
- the on-chip lens 170 can be made of an organic material such as acrylic resin or an inorganic material such as silicon nitride (SiN).
- FIG. 5 is a block diagram illustrating a configuration example of the signal processing unit 30 according to the first embodiment of the present disclosure. The signal processing unit 30 shown in the figure includes a sensitivity correction unit 31, a pixel defect correction unit 32, and a re-mosaic processing unit 33.
- the sensitivity correction unit 31 corrects the sensitivity difference of the pixels 100 .
- the sensitivity difference of the pixel 100 is generated according to the incident angle of incident light.
- the sensitivity corrector 31 corrects such a sensitivity difference.
- the corrected image signal is output to the pixel defect correction section 32 .
- the pixel defect correction unit 32 corrects image signals from defective pixels 100 .
- the corrected image signal is output to the re-mosaic processing unit 33 .
- the re-mosaic processing unit 33 performs re-mosaic processing on image signals.
- This re-mosaic processing converts the image signals into image signals corresponding to an arrangement of the plurality of color filters in an order different from the arrangement of the pixels 100 in the pixel array section 11.
- the image signal after re-mosaic processing is output to an external device. Remosaic processing will be described using FIGS. 6A to 6C.
- FIGS. 6A-6C are diagrams illustrating an example of remosaic processing according to embodiments of the present disclosure.
- the figure shows the wavelength of incident light to which the image signal for each pixel 100 corresponds.
- a rectangle hatched with dots in the figure represents an area of an image signal corresponding to green light.
- a rectangle hatched with horizontal lines represents the region of the image signal corresponding to red light.
- a rectangle hatched with vertical lines represents the area of the image signal corresponding to blue light.
- FIG. 6A shows an example of remosaic processing at the first resolution.
- This figure shows an example in which re-mosaic processing is performed so that four pixels 100 form a Bayer array for each pixel block 200 .
- the pixel block 200 on the upper left of the figure will be described as an example. It is assumed that the pixels 100 of this pixel block 200 are arranged with color filters 150 corresponding to green light, like the pixel block 200 on the upper left in FIG. In this case, the re-mosaic process generates an image signal corresponding to red light and an image signal corresponding to blue light to replace the image signals corresponding to green light in the upper right and lower left of pixel block 200, respectively.
- FIG. 6B shows an example of remosaic processing at the second resolution. This figure shows an example in which remosaic processing is performed so that four pixel blocks 200 form a Bayer array for each pixel block unit 300 . Pixels 100 of image signals corresponding to wavelengths different from those of the color filters 150 in the array of FIG. 3 are replaced by generating image signals corresponding to the wavelengths.
- FIG. 6C shows an example of remosaic processing at the third resolution.
- This figure shows an example of performing remosaic processing so as to form a Bayer array for each of four pixel block units 300 .
- Pixels 100 of image signals corresponding to wavelengths different from those of the color filters 150 in the array of FIG. 3 are replaced by generating image signals corresponding to the wavelengths. Note that a known method can be applied to generate an image signal associated with remosaic processing.
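The re-mosaic step described above can be sketched in code. The following Python fragment is an illustrative sketch only and is not part of the patent disclosure: the quad-Bayer source layout, the Bayer target, and the nearest-neighbor fill that stands in for the "known method" of signal generation are all assumptions.

```python
import numpy as np

# Assumed quad-Bayer layout: each 2x2 pixel block 200 shares one color,
# and four blocks form a G/R/B/G pattern (as in the pixel block unit 300).
QUAD = np.array([["G", "G", "R", "R"],
                 ["G", "G", "R", "R"],
                 ["B", "B", "G", "G"],
                 ["B", "B", "G", "G"]])

# Target Bayer pattern at the first resolution: every 2x2 pixel group is G R / B G.
BAYER = np.array([["G", "R"], ["B", "G"]])

def remosaic(raw, src_cfa=QUAD):
    """Rearrange a quad-Bayer raw frame into a Bayer-ordered frame.

    A pixel whose source color already matches the Bayer target keeps
    its value; otherwise the value is copied from the nearest pixel of
    the wanted color in the same 4x4 tile (a crude stand-in for real
    interpolation).
    """
    h, w = raw.shape
    out = np.empty_like(raw)
    for y in range(h):
        for x in range(w):
            want = BAYER[y % 2, x % 2]
            if src_cfa[y % 4, x % 4] == want:
                out[y, x] = raw[y, x]
            else:
                # nearest position of the wanted color within the CFA tile
                ys, xs = np.where(src_cfa == want)
                d = (ys - y % 4) ** 2 + (xs - x % 4) ** 2
                k = int(np.argmin(d))
                out[y, x] = raw[(y // 4) * 4 + ys[k], (x // 4) * 4 + xs[k]]
    return out
```

With this sketch, pixels that already sit at a Bayer-correct position pass through unchanged, which matches FIG. 6A: only the mismatched positions of each pixel block 200 need newly generated signals.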
- [Image example] FIGS. 7A and 7B are diagrams showing examples of images generated by the imaging device according to the embodiment of the present disclosure. These figures show examples of changing the resolution and configuration of an image for each scene captured by the imaging device 1.
- FIG. 7A shows an example of capturing a moving image following an object specified from among a plurality of subjects and changing the resolution.
- An image 400 on the left side of the figure represents an image of the subject at the start of imaging.
- the target area detection unit 50 searches for the target object and detects the target area.
- a dashed rectangle represents the detected region of interest 401 .
- the target area detection unit 50 continuously detects the target area 401 according to the movement of the target object, and outputs the detected target area 401 to the imaging control unit 40 .
- the imaging control unit 40 can change the resolution according to the size of the target area 401, as in the image 400 on the right side of the figure.
- the imaging control unit 40 can use the first resolution as an initial value and select the second resolution when the size of the target region 401 is equal to or smaller than a predetermined threshold (first size). Furthermore, the imaging control unit 40 can select the third resolution when the size of the target area 401 is equal to or less than a predetermined threshold (second size).
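The size-based selection just described can be summarized as a small decision function. This is an illustrative sketch, not the patent's implementation; the resolution identifiers and the threshold values (`first_size`, `second_size`) are placeholders.

```python
FIRST, SECOND, THIRD = 1, 2, 3  # symbolic resolution identifiers

def select_resolution(area_size, first_size=10_000, second_size=2_500):
    """Select a resolution from the detected target-area size (in pixels).

    The first resolution is the initial value; the second is chosen when
    the area shrinks to the first size or below, and the third when it
    shrinks to the (smaller) second size or below.
    """
    if area_size <= second_size:
        return THIRD
    if area_size <= first_size:
        return SECOND
    return FIRST
```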
- FIG. 7B shows an example in which a moving image is captured and an image of a target area set to a different resolution is superimposed on the generated image.
- An image 400 in the figure is generated, for example, at the third resolution.
- The imaging control unit 40 performs control to generate an image on which an image 402, enlarged by applying the first resolution to the target area 401, is superimposed.
- the imaging control unit 40 generates an image based on the image signal of the first resolution output from the imaging device 10 and an image based on the image signal of the third resolution generated by the image signal adding unit 20.
- the target area detection unit 50 detects the target area 401 in the image of the first resolution. By combining these images, an image containing regions of different resolutions can be generated.
- In this way, the first resolution can be applied only to the target area 401, as shown in FIG. 7B. As a result, it is possible to image the object with high image quality while reducing the data amount of the entire image.
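The composite of FIG. 7B can be sketched as pasting a first-resolution crop onto an upscaled third-resolution frame. This is an assumption-laden illustration (nearest-neighbor upscaling and the `compose` helper name are ours), not the disclosed implementation.

```python
import numpy as np

def compose(full_lowres, roi_highres, roi_origin, scale):
    """Overlay a first-resolution crop of the target area onto an
    upscaled low-resolution frame, producing one image that contains
    regions of different resolutions.

    full_lowres : 2-D array at the coarse resolution
    roi_highres : 2-D array covering the target area at fine resolution
    roi_origin  : (row, col) of the target area in fine-resolution units
    scale       : linear ratio between the two resolutions (e.g. 4)
    """
    # nearest-neighbor upscale of the coarse frame to the fine geometry
    canvas = np.kron(full_lowres,
                     np.ones((scale, scale), dtype=full_lowres.dtype))
    r, c = roi_origin
    h, w = roi_highres.shape
    canvas[r:r + h, c:c + w] = roi_highres  # high-detail window
    return canvas
```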
- FIG. 8 is a diagram illustrating an example of imaging processing according to the first embodiment of the present disclosure. This figure is a flow chart showing an example of imaging processing in the imaging apparatus 1 .
- the imaging control unit 40 selects the first resolution (step S100).
- the target area detection unit 50 selects a subject based on the subject data in the storage unit 60 (step S101).
- the target area detection unit 50 detects the target area based on the selected subject (step S102).
- the imaging control unit 40 controls the imaging element 10 to start imaging (step S103).
- the target area detection unit 50 tracks the subject (step S104). This can be done by having the target area detection unit 50 continuously detect the target area according to the movement of the subject.
- the imaging control unit 40 determines whether the size of the target area is equal to or smaller than a predetermined first size (step S105). If the size of the target area is equal to or smaller than the first size (step S105, Yes), the imaging control unit 40 selects the second resolution (step S106), and proceeds to the process of step S107. On the other hand, if the size of the target area is larger than the first size (step S105, No), the imaging control unit 40 proceeds to the process of step S107.
- In step S107, the imaging control unit 40 determines whether or not the size of the target area is equal to or smaller than a predetermined second size. If the size of the target area is equal to or smaller than the second size (step S107, Yes), the imaging control unit 40 selects the third resolution (step S108), and proceeds to the process of step S109. On the other hand, if the size of the target area is larger than the second size (step S107, No), the imaging control unit 40 proceeds to the process of step S109.
- In step S109, the imaging control unit 40 determines whether or not to stop imaging. This can be determined, for example, based on the user operating the imaging apparatus 1 to stop imaging. If imaging is not stopped (step S109, No), the process proceeds to step S104. If imaging is to be stopped (step S109, Yes), the imaging control unit 40 controls the imaging device 10 to stop imaging (step S110), and ends the imaging process.
- the imaging process in the imaging apparatus 1 can be performed by the above procedure.
- The method of selecting the resolution by the imaging control unit 40 is not limited to this example; other methods may also be applied.
- For example, the imaging control unit 40 can select the resolution according to the image quality of the generated image. Specifically, the imaging control unit 40 can select a lower resolution when it detects degradation of image quality such as blown-out highlights.
- the imaging control unit 40 can select the resolution according to the luminance. Specifically, the imaging control unit 40 can select a low resolution when imaging in a low illuminance environment. Thereby, the image signal is added by the image signal adder 20, and the signal level of the image signal can be increased.
- the imaging control unit 40 can also select the resolution according to the generation rate (frame rate) when capturing a moving image.
- the imaging control unit 40 can select a low resolution when the frame rate is high.
- The first resolution, the second resolution, and the third resolution can be combined with different values of a first frame rate, a second frame rate, and a third frame rate, respectively.
- This first frame rate may be approximately one quarter of the value of the second frame rate.
- The third frame rate can be set to approximately four times the value of the second frame rate.
- For example, frame rates of 7.5 fps, 30 fps, and 120 fps can be applied as the first frame rate, the second frame rate, and the third frame rate, respectively. This makes it possible to limit the increase in the data amount of moving images.
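The example pairing of resolutions and frame rates can be written as a small table. Note that with the example values, each mode moves roughly the same number of pixel values per second (full pixels at 7.5 fps, 1/4 at 30 fps, 1/16 at 120 fps). The mapping below is an illustration; the function name and the selection rule are assumptions.

```python
# resolution index -> frames per second (example values from the text)
FRAME_RATE = {1: 7.5, 2: 30.0, 3: 120.0}

# relative pixel count per frame: the second resolution adds 4 signals,
# the third adds 16, so their frames carry 1/4 and 1/16 of the pixels
PIXEL_FRACTION = {1: 1.0, 2: 1 / 4, 3: 1 / 16}

def resolution_for_rate(fps):
    """Return the finest resolution whose frame rate can cover fps;
    a high requested frame rate therefore forces a low resolution."""
    for res in (1, 2, 3):
        if fps <= FRAME_RATE[res]:
            return res
    return 3
```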
- the imaging device 1 includes the imaging device 10 in which the plurality of pixel block units 300 are arranged, and generates an image signal having a resolution selected by the imaging control unit 40. can be done. As a result, imaging can be performed at a resolution suitable for the subject, and convenience can be improved.
- the imaging apparatus 1 of the first embodiment described above captures an image of the target area detected by the target area detection unit 50 .
- the imaging device 1 of the second embodiment of the present disclosure differs from the above-described first embodiment in that another sensor is used to recognize an object.
- FIG. 9 is a diagram illustrating a configuration example of the imaging device according to the second embodiment of the present disclosure. This figure, like FIG. 1, is a block diagram showing a configuration example of the imaging device 1. The imaging device 1 in FIG. 9 differs from the imaging device 1 in FIG. 1 in that it further includes a distance measuring unit 80.
- the distance measuring unit 80 measures the distance to the object.
- A ToF (Time of Flight) sensor, for example, can be used as this distance measuring unit 80.
- This ToF sensor irradiates a subject with light from a light source (not shown), detects the reflected light reflected from the subject, and measures the flight time of the light, thereby measuring the distance to the subject.
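The ToF principle reduces to one line of arithmetic: the measured flight time covers the out-and-back path, so the distance is half the path length. A minimal sketch (the function name is ours):

```python
C = 299_792_458.0  # speed of light in m/s

def tof_distance(round_trip_time_s):
    """Subject distance from a ToF round-trip flight time: the light
    travels to the subject and back, so halve the path length."""
    return C * round_trip_time_s / 2.0
```

For example, a round trip of about 6.67 ns corresponds to a subject roughly 1 m away.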
- the distance measuring unit 80 is an example of the sensor described in the claims.
- the imaging control unit 40 in the same figure reads subject data from the storage unit 60 and outputs it to the distance measuring unit 80 .
- the distance measuring unit 80 recognizes the subject based on this subject data and measures the distance. After that, the distance measurement section 80 outputs the measured distance to the imaging control section 40 .
- the imaging control unit 40 can select the resolution according to the distance to the subject. For example, the imaging control unit 40 can use the first resolution as an initial value and select the second resolution when the distance to the subject is equal to or less than a predetermined threshold (first distance). Furthermore, the imaging control unit 40 can select the third resolution when the distance to the subject is equal to or less than a predetermined threshold (second distance).
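As with the size-based selection of the first embodiment, this distance-based rule can be sketched as a threshold function; the threshold values are placeholders, not values from the disclosure.

```python
def select_resolution_by_distance(distance_m,
                                  first_distance=10.0,
                                  second_distance=3.0):
    """Start from the first resolution and drop to the second or third
    resolution as the subject comes within the first or second
    distance threshold (second_distance < first_distance)."""
    if distance_m <= second_distance:
        return 3  # third resolution
    if distance_m <= first_distance:
        return 2  # second resolution
    return 1      # first resolution
```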
- FIG. 10 is a diagram illustrating an example of imaging processing according to the second embodiment of the present disclosure. This figure is a flow chart showing an example of imaging processing in the imaging apparatus 1 .
- the imaging control unit 40 selects the first resolution (step S150).
- the target area detection unit 50 selects a subject based on the subject data in the storage unit 60 (step S151).
- the selected object data is output to the distance measuring section 80 .
- the imaging control unit 40 controls the imaging device 10 to start imaging (step S153).
- the distance measuring unit 80 measures the distance to the subject (step S154).
- the imaging control unit 40 determines whether or not the distance to the subject is equal to or less than a predetermined first distance (step S155). If the distance to the subject is equal to or less than the first distance (step S155, Yes), the imaging control unit 40 selects the second resolution (step S156), and proceeds to the process of step S157. On the other hand, if the distance to the subject is greater than the first distance (step S155, No), the process proceeds to step S157.
- In step S157, the imaging control unit 40 determines whether or not the distance to the subject is equal to or less than a predetermined second distance. If the distance to the subject is equal to or less than the second distance (step S157, Yes), the imaging control unit 40 selects the third resolution (step S158), and proceeds to the process of step S159. On the other hand, if the distance to the subject is greater than the second distance (step S157, No), the process proceeds to step S159.
- In step S159, the imaging control unit 40 determines whether or not to stop imaging. This can be determined, for example, based on the user operating the imaging apparatus 1 to stop imaging. If imaging is not stopped (step S159, No), the process proceeds to step S154. If imaging is to be stopped (step S159, Yes), the imaging control unit 40 controls the imaging element 10 to stop imaging (step S160), and ends the imaging process.
- the imaging process in the imaging apparatus 1 can be performed by the above procedure.
- Since the rest of the configuration of the imaging device 1 is the same as that of the imaging device 1 according to the first embodiment of the present disclosure, its description is omitted.
- the imaging device 1 detects the distance to the subject by the distance measurement unit 80. Thereby, the resolution can be selected according to the distance of the subject.
- the imaging apparatus 1 of the first embodiment described above generates an image signal of the subject.
- the imaging device 1 of the third embodiment of the present disclosure differs from the above-described first embodiment in that the focal position of the subject is further detected.
- FIG. 11 is a diagram illustrating a configuration example of an imaging device according to the third embodiment of the present disclosure.
- This figure, like FIG. 1, is a block diagram showing a configuration example of the imaging device 1.
- The imaging device 1 shown in FIG. 11 differs from the imaging device 1 shown in FIG. 1 in that it further includes a focus position detection section 90 and in that the signal processing section 30 outputs a phase difference signal.
- a signal processing unit 30 in the figure outputs a phase difference signal in addition to the image signal.
- This phase difference signal is a signal for detecting the focal position of the photographing lens 70 .
- Autofocus can be performed by adjusting the position of the photographing lens 70 based on the focal position detected from this phase difference signal.
- the pixels 100 of the imaging device 10 can be used as phase difference pixels that generate phase difference signals.
- the four pixels 100 forming the pixel block 200 receive light from a subject that is collected by a common on-chip lens 170 . Therefore, the four pixels 100 arranged in the pixel block 200 can pupil-divide the subject.
- Light transmitted through the left side of the photographing lens 70 enters the right pixel 100 of the four pixels 100, and light transmitted through the right side of the photographing lens 70 enters the left pixel 100. By detecting the shift between the image signals of these pupil-divided pixels 100, the focal position can be detected.
- the four pixels 100 arranged in the pixel block 200 can pupil-divide the object in the horizontal direction and the vertical direction.
- the imaging device 10 can detect the focal position from two directions, the horizontal direction and the vertical direction. Further, the imaging device 10 can use all the pixels 100 as phase difference pixels. Therefore, the imaging device 1 including the imaging element 10 can improve the detection accuracy of the focal position.
- the signal processing section 30 in the figure generates this phase difference signal and outputs it to the focus position detection section 90 .
- the focus position detection section 90 detects the focus position based on the phase difference signal from the signal processing section 30 . Further, the focal position detection section 90 adjusts the position of the photographing lens 70 based on the detected focal position.
- FIG. 12 is a diagram illustrating a configuration example of the signal processing unit according to the third embodiment of the present disclosure. This figure, like FIG. 5, is a block diagram showing a configuration example of the signal processing unit 30. The signal processing unit 30 in FIG. 12 differs from the signal processing unit 30 in FIG. 5 in that it further includes a luminance signal generation unit 34.
- the luminance signal generator 34 generates a luminance signal from the color image signal output from the sensitivity corrector 31 .
- the generated luminance signal is output to the focus position detection section 90 as a phase difference signal.
- FIG. 13 is a diagram illustrating a configuration example of phase difference pixels according to the third embodiment of the present disclosure. This figure shows an arrangement example of phase difference pixels when the first resolution is applied.
- "TL”, “TR”, “BL” and “BR” represent upper left, upper right, lower left and lower right phase difference pixels, respectively.
- Pupil division can be performed in the left and right direction of the figure by phase difference pixels of “TL” and “TR” and “BL” and “BR”.
- pupil division can be performed in the up-down direction in the figure by phase difference pixels of “TL” and “BL” and “TR” and “BR”.
- FIGS. 14A and 14B are diagrams showing configuration examples of phase difference pixels according to the third embodiment of the present disclosure.
- This figure is a diagram showing an arrangement example of phase difference pixels when the second resolution is applied. Further, this figure is a diagram showing an example in which the pixels 100 of the pixel block 200 corresponding to green light are used as phase difference pixels.
- At the second resolution, the image signals of two adjacent pixels 100 of the pixel block 200 are added, and the added image signal can be used as a phase difference signal (second phase difference signal).
- the dashed rectangles in the figure represent pairs of pixels 100 to which image signals are added. Note that the addition of image signals can be performed by the image signal addition unit 20 .
- “T”, “B”, “L” and “R” in the figure represent upper, lower, left and right phase difference pixels, respectively.
- The pairs of pixels 100 labeled “T” in FIG. 14A and “B” in FIG. 14B can perform pupil division in the vertical direction of the figure.
- The pairs of pixels 100 labeled “L” in FIG. 14A and “R” in FIG. 14B can perform pupil division in the horizontal direction of the figure.
- FIG. 15 is a diagram showing a configuration example of a phase difference pixel according to the third embodiment of the present disclosure.
- This figure is a diagram showing an arrangement example of phase difference pixels when the third resolution is applied.
- This figure is a diagram showing an example in which the pixels 100 of the pixel block 200 corresponding to green light are used as phase difference pixels.
- image signals of eight pixels 100 of two adjacent pixel blocks 200 are added, and the image signal after addition can be used as a phase difference signal.
- the dashed rectangles in the figure represent a plurality of pixels 100 to which the image signals are added. Note that the addition of image signals can be performed by the image signal addition unit 20 .
- "T" and "L” in the figure represent phase difference pixels as in FIGS. 14A and 14B. Note that the description of the “B” and “R” phase difference pixels is omitted.
- the image sensor 10 can generate phase difference signals in all the pixels 100 arranged in the pixel array section 11 .
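The left/right pupil signals and the shift estimate between them can be sketched as follows. This is illustrative only: summing the left and right columns of each 2x2 block to form two pupil images, and a simple sum-of-absolute-differences search for the shift, are our assumptions rather than the disclosed focus-detection method.

```python
import numpy as np

def lr_signals(raw):
    """Split a frame of 2x2 pixel blocks under shared on-chip lenses
    into left- and right-pupil images (TL+BL vs TR+BR per block)."""
    blocks = raw.reshape(raw.shape[0] // 2, 2, raw.shape[1] // 2, 2)
    left = blocks[:, :, :, 0].sum(axis=1)   # TL + BL
    right = blocks[:, :, :, 1].sum(axis=1)  # TR + BR
    return left, right

def phase_shift(left, right, max_shift=3):
    """Estimate the horizontal shift between the two pupil images by
    minimizing the mean absolute difference over integer shifts."""
    best, best_err = 0, float("inf")
    row_l, row_r = left.mean(axis=0), right.mean(axis=0)
    for s in range(-max_shift, max_shift + 1):
        a = row_l[max(0, s):len(row_l) + min(0, s)]
        b = row_r[max(0, -s):len(row_r) + min(0, -s)]
        err = np.abs(a - b).sum() / len(a)
        if err < best_err:
            best, best_err = s, err
    return best
```

An in-focus subject gives a shift near zero; a defocused subject gives a signed shift whose magnitude relates to how far the photographing lens 70 must be moved.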
- FIGS. 16A and 16B are diagrams showing configuration examples of conventional imaging devices.
- These figures show arrangement examples of pixels in the pixel array sections of imaging devices serving as comparative examples of the technique of the present disclosure.
- FIG. 16A shows an example of a pixel array section configured by arranging rectangular pixels 500. An on-chip lens 570 is arranged in common for every two pixels 500. These two pixels 500 constitute a phase difference pixel.
- phase difference signals can be generated in all the pixels.
- the pupil can be divided only in one direction (horizontal direction in the figure).
- FIG. 16B shows an example of a pixel array section configured by arranging pixels 501. In the pixel array section of this figure, two pixels 502 over which an on-chip lens 571 is arranged in common constitute a phase difference pixel.
- the phase difference pixels are dispersedly arranged in a part of the area.
- the pupil can be divided only in one direction (horizontal direction in the figure).
- Since the imaging elements of the comparative examples support pupil division in only one direction, they cannot detect the phase difference of a subject image having contrast in a direction different from the pupil-division direction, and the phase difference detection accuracy therefore decreases.
- Furthermore, since the number of phase difference pixels is limited in the imaging device having the pixel array section of FIG. 16B, it becomes difficult to detect the phase difference in high-resolution imaging.
- In contrast, the imaging element 10 of the present disclosure can generate phase difference signals in all the pixels 100 arranged in the pixel array unit 11, and can generate phase difference signals by dividing the pupil in the two directions of left-right and up-down. Thereby, the phase difference detection accuracy can be improved.
- Since the rest of the configuration of the imaging device 1 is the same as that of the imaging device 1 according to the first embodiment of the present disclosure, its description is omitted.
- The imaging device 1 generates a phase difference signal based on the image signals added according to the resolution of the target area, and detects the focal position.
- autofocus can be performed on the subject in the target area imaged at the selected resolution. Convenience can be further improved.
- the imaging apparatus 1 of the first embodiment described above selects a resolution according to the target area detected by the target area detection unit 50 .
- the imaging device 1 of the fourth embodiment of the present disclosure differs from the above-described first embodiment in that the resolution selected by the user of the imaging device 1 is applied.
- FIG. 17 is a diagram illustrating a configuration example of the imaging device according to the fourth embodiment of the present disclosure. This figure, like FIG. 1, is a block diagram showing a configuration example of the imaging device 1. The imaging device 1 in FIG. 17 differs from the imaging device 1 in FIG. 1 in that the target area detection unit 50 and the storage unit 60 are omitted.
- the user of the imaging device 1 inputs the resolution to the imaging control unit 40 in the figure.
- the imaging control unit 40 shown in the figure outputs the input resolution to the image signal adding unit 20 as the selected resolution.
- Since the rest of the configuration of the imaging device 1 is the same as that of the imaging device 1 according to the first embodiment of the present disclosure, its description is omitted.
- the imaging device 1 can perform imaging at the resolution selected by the user of the imaging device 1.
- FIG. 18 is a diagram illustrating a configuration example of the imaging device according to the fifth embodiment of the present disclosure. This figure, like FIG. 1, is a block diagram showing a configuration example of the imaging device 1. The imaging device 1 in FIG. 18 differs from the imaging device 1 in FIG. 1 in that the target area detection section 50 is omitted and an illuminance sensor 89 is provided. Note that, in FIG. 18, the imaging control unit 40 and the storage unit 60 constitute an imaging control device.
- the illuminance sensor 89 detects illuminance.
- the illuminance sensor 89 outputs the detected illuminance data to the imaging control section 40 .
- the imaging control unit 40 in the figure selects the resolution according to the illuminance input from the illuminance sensor 89 .
- the imaging control unit 40 can select the resolution based on, for example, a threshold value of illuminance.
- the imaging control unit 40 sets the first illuminance and the second illuminance as thresholds.
- the second illuminance is a threshold lower than the first illuminance.
- Then, when the illuminance detected by the illuminance sensor 89 is greater than or equal to the first illuminance, less than the first illuminance and greater than or equal to the second illuminance, or less than the second illuminance, the imaging control unit 40 can select the first resolution, the second resolution, or the third resolution, respectively.
- the imaging control section 40 outputs the illuminance to the storage section 60 . Then, the storage unit 60 outputs the resolution according to the illuminance to the imaging control unit 40 . The imaging control unit 40 can select the output resolution.
- the storage unit 60 in the figure holds the information on the correspondence relationship between the above-mentioned illuminance threshold and resolution.
- the storage unit 60 can hold, for example, information on the correspondence relationship between the illuminance threshold and the resolution in the form of a table (LUT: Look-up Table) for referring to the relationship between the illuminance threshold and the resolution.
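Such a LUT can be sketched as a short list scanned from the highest threshold down; the illuminance values here are placeholders, not values from the disclosure.

```python
# (illuminance threshold in lux, resolution) pairs, highest first
ILLUMINANCE_LUT = [
    (1000.0, 1),  # >= first illuminance  -> first resolution
    (100.0, 2),   # >= second illuminance -> second resolution
    (0.0, 3),     # otherwise             -> third resolution
]

def lookup_resolution(lux):
    """Return the resolution for a measured illuminance, mirroring the
    threshold branching performed by the imaging control unit 40."""
    for threshold, resolution in ILLUMINANCE_LUT:
        if lux >= threshold:
            return resolution
    return ILLUMINANCE_LUT[-1][1]
```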
- At the second resolution, the image signal addition unit 20 adds four image signals, so the level of the image signal can be made higher than at the first resolution. Further, at the third resolution, the image signal adder 20 adds 16 image signals, so the level of the image signal can be increased further.
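The signal addition itself is a block sum. A minimal NumPy sketch (the function name is ours): factor 2 adds the 4 signals of a pixel block 200, factor 4 adds the 16 signals of a pixel block unit 300.

```python
import numpy as np

def add_signals(raw, factor):
    """Sum image signals in factor x factor blocks, as the image signal
    addition unit 20 does; the output has 1/factor the resolution per
    axis but a proportionally higher signal level."""
    h, w = raw.shape
    return raw.reshape(h // factor, factor,
                       w // factor, factor).sum(axis=(1, 3))
```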
- FIG. 19 is a diagram illustrating an example of a processing procedure of imaging processing according to the fifth embodiment of the present disclosure.
- This figure is a flow chart showing an example of the processing procedure of the imaging process in the imaging apparatus 1 .
- the illuminance sensor 89 measures illuminance (step S180).
- the imaging control unit 40 determines whether the illuminance of the measurement result is greater than or equal to the first illuminance (step S182). As a result, if the illuminance is equal to or higher than the first illuminance (step S182, Yes), the imaging control unit 40 selects the first resolution (step S183), and proceeds to the process of step S187.
- On the other hand, if the illuminance is not equal to or higher than the first illuminance (step S182, No), the imaging control unit 40 determines whether the illuminance is equal to or higher than the second illuminance (step S184). As a result, if the illuminance is equal to or higher than the second illuminance (step S184, Yes), the imaging control unit 40 selects the second resolution (step S185), and proceeds to the process of step S187. On the other hand, if the illuminance is not equal to or higher than the second illuminance (step S184, No), the imaging control unit 40 selects the third resolution (step S186), and proceeds to the process of step S187.
- In step S187, the imaging control unit 40 outputs a control signal to the imaging element 10 to perform imaging.
- the imaging process can be performed by the above process.
- Since the rest of the configuration of the imaging device 1 is the same as that of the imaging device 1 according to the first embodiment of the present disclosure, its description is omitted.
- the imaging device 1 of the fifth embodiment of the present disclosure selects the resolution according to the illuminance and performs imaging. As a result, it is possible to perform imaging with improved sensitivity even in a low-illuminance environment.
- FIG. 20 is a diagram illustrating a configuration example of an imaging device according to the sixth embodiment of the present disclosure. This figure is a block diagram showing a configuration example of the imaging device 1, like FIG.
- The imaging device 1 in FIG. 20 differs from the imaging device 1 in FIG. 18 in that it includes an exposure control section 88 instead of the illuminance sensor 89 and the storage section 60.
- the exposure control unit 88 controls exposure.
- The exposure control unit 88 controls exposure by detecting the exposure value (hereinafter referred to as the EV value) corresponding to the set gain and shutter speed and outputting it to the imaging control unit 40.
- The imaging control unit 40 in the same figure selects the resolution according to the EV value input from the exposure control unit 88. At this time, the imaging control unit 40 can select the resolution based on, for example, thresholds of the EV value. Specifically, the imaging control unit 40 sets a first EV value and a second EV value as thresholds. Here, the second EV value is a threshold lower than the first EV value. Then, when the EV value detected by the exposure control unit 88 is greater than or equal to the first EV value, less than the first EV value and greater than or equal to the second EV value, or less than the second EV value, the first resolution, the second resolution, or the third resolution can be selected, respectively.
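The patent does not give a formula for the EV value; the sketch below uses the conventional exposure-value definition referenced to ISO 100, and the thresholds are placeholders.

```python
import math

def ev_value(f_number, shutter_s, iso=100):
    """Conventional EV at ISO 100 for a given aperture, shutter speed
    and gain; higher gain implies a darker scene and thus a lower EV."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

def select_resolution_by_ev(ev, first_ev=10.0, second_ev=5.0):
    """Bright scenes (high EV) use the first resolution; darker scenes
    fall back to the second and then the third resolution."""
    if ev >= first_ev:
        return 1
    if ev >= second_ev:
        return 2
    return 3
```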
- FIG. 21 is a diagram illustrating an example of a processing procedure of imaging processing according to the sixth embodiment of the present disclosure. This figure is a flow chart showing an example of the processing procedure of the imaging process in the imaging apparatus 1 .
- the imaging control unit 40 determines the gain and shutter speed based on the level of the image signal (step S190).
- the exposure controller 88 detects the EV value based on the determined gain and shutter speed (step S191).
- the imaging control unit 40 determines whether the detected EV value is greater than or equal to the first EV value (step S192). As a result, if the detected EV value is greater than or equal to the first EV value (step S192, Yes), the imaging control unit 40 selects the first resolution (step S193), and proceeds to the process of step S197. .
- On the other hand, if the detected EV value is less than the first EV value (step S192, No), the imaging control unit 40 determines whether the EV value is equal to or greater than the second EV value (step S194). As a result, if the EV value is greater than or equal to the second EV value (step S194, Yes), the imaging control unit 40 selects the second resolution (step S195), and proceeds to the process of step S197. On the other hand, if the EV value is not equal to or greater than the second EV value (step S194, No), the imaging control unit 40 selects the third resolution (step S196), and proceeds to the process of step S197.
- In step S197, the imaging control unit 40 outputs a control signal to the imaging element 10 to perform imaging.
- the imaging process can be performed by the above process.
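The threshold logic of steps S190 to S197 can be sketched in a few lines. The concrete threshold values below are assumptions for illustration; the document does not specify numeric EV thresholds.

```python
# Sketch of the threshold-based resolution selection of FIG. 21
# (steps S192-S196). FIRST_EV and SECOND_EV are assumed values.

FIRST_EV = 10.0   # assumed first EV threshold
SECOND_EV = 5.0   # assumed second EV threshold (lower than the first)

def select_resolution(ev: float) -> str:
    """Return the resolution selected for a detected EV value."""
    if ev >= FIRST_EV:          # step S192, Yes -> step S193
        return "first"
    if ev >= SECOND_EV:         # step S194, Yes -> step S195
        return "second"
    return "third"              # step S194, No -> step S196

# A bright scene selects the first (highest) resolution, a dark scene the third.
print(select_resolution(12.0))  # first
print(select_resolution(7.5))   # second
print(select_resolution(2.0))   # third
```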
- the configuration of the imaging device 1 other than this is the same as the configuration of the imaging device 1 according to the fifth embodiment of the present disclosure, so the description is omitted.
- the imaging device 1 of the sixth embodiment of the present disclosure selects resolution according to the EV value and performs imaging. As a result, it is possible to perform imaging according to the exposure.
- In the imaging device 1 of the first embodiment described above, the imaging control unit 40 selects the resolution corresponding to the target area detected by the target area detection unit 50.
- the imaging apparatus 1 of the seventh embodiment of the present disclosure differs from the above-described first embodiment in that multiple target areas are detected.
- the imaging device 1 of the seventh embodiment of the present disclosure can adopt the same configuration as the imaging device 1 of FIG. Also, the target area detection unit 50 of the seventh embodiment of the present disclosure can detect a plurality of target areas. Also, the imaging control unit 40 of the seventh embodiment of the present disclosure can select the resolution for each detected target area.
- FIG. 22 is a diagram illustrating an example of an image generated by an imaging device according to the seventh embodiment of the present disclosure. Similar to FIG. 7B, this figure shows an example in which images of target regions set to different resolutions are superimposed on a generated image.
- An image 410 in the figure is generated, for example, at the second resolution.
- This image 410 represents an example of photographing a person from inside a building with the scenery outside as a background. Since the image is backlit, the image of the person is dark.
- target regions 411 and 413 are detected by the target region detection unit 50 and output to the imaging control unit 40 .
- The imaging control unit 40 can generate an image 412, which is enlarged by applying the first resolution to the target area 411, and an image 414, whose luminance is increased by applying the third resolution to the target area 413.
- Since the configuration of the imaging device 1 other than this is the same as that of the imaging device 1 according to the first embodiment of the present disclosure, description thereof will be omitted.
- the imaging device 1 of the seventh embodiment of the present disclosure selects the resolution for each of multiple target areas. Thereby, convenience can be improved.
- FIG. 23 is a diagram illustrating a configuration example of pixels according to the first modification of the embodiment of the present disclosure.
- This figure, like FIG. 3, is a plan view showing a configuration example of the pixels 100 arranged in the pixel array section 11.
- The pixels 100 in this figure are different from the pixels 100 in FIG. 3 in that each includes its own on-chip lens 170, except for the pixels 100 forming the phase difference pixels.
- Pixels 100 having a common on-chip lens 170 and forming phase difference pixels are arranged in one of the pixel block units 300 corresponding to green light arranged in a Bayer array.
- the pixels 100 forming the phase difference pixels are arranged in one pixel block 200 of the four pixel blocks 200 forming the pixel block unit 300 .
- FIG. 24 is a diagram showing a configuration example of pixels according to the second modification of the embodiment of the present disclosure.
- This figure, like FIG. 3, is a plan view showing a configuration example of the pixels 100 arranged in the pixel array section 11.
- The pixel block 200 in this figure differs from the pixel block 200 in FIG. 3 in that it is composed of pixels 100 forming phase difference pixels.
- The pixel array sections 11 of FIGS. 23 and 24 can also generate image signals corresponding to the first resolution, the second resolution, and the third resolution in the same manner as the pixel array section 11 described earlier.
- FIG. 25 is a diagram showing an example of the circuit configuration of a pixel block according to an embodiment of the present disclosure.
- This figure is a diagram showing an example of a circuit configuration in which an image signal generator 110 is arranged for every two pixel blocks 200 .
- a pixel block 200a in the figure includes photoelectric conversion units 101a, 101b, 101c, 101d, 101e, 101f, 101g and 101h, and charge transfer units 102a, 102b, 102c, 102d, 102e, 102f, 102g and 102h.
- The pixel block 200a further includes a charge holding portion 103, a reset portion 104, an auxiliary charge holding portion 108, a coupling portion 107, and an image signal generation portion 110.
- the image signal generation unit 110 includes an amplification transistor 111 and a selection transistor 112 .
- the photoelectric conversion unit 101a and the charge transfer unit 102a and the photoelectric conversion unit 101b and the charge transfer unit 102b configure the pixel 100a and the pixel 100b (not shown), respectively.
- the photoelectric conversion unit 101c and the charge transfer unit 102c, and the photoelectric conversion unit 101d and the charge transfer unit 102d configure a pixel 100c and a pixel 100d (not shown), respectively.
- These pixels 100a-100d constitute a pixel block.
- the photoelectric conversion unit 101e and the charge transfer unit 102e, and the photoelectric conversion unit 101f and the charge transfer unit 102f constitute a pixel 100e and a pixel 100f (not shown), respectively.
- the photoelectric conversion unit 101g and the charge transfer unit 102g, and the photoelectric conversion unit 101h and the charge transfer unit 102h configure a pixel 100g and a pixel 100h (not shown), respectively.
- These pixels 100e to 100h constitute a pixel block.
- the charge transfer units 102a to 102h, the reset unit 104, the amplification transistor 111 and the selection transistor 112, and the coupling unit 107 can be configured by n-channel MOS transistors.
- the signal lines 15 and 16 described in FIG. 2 are wired to the pixel block 200a.
- The signal lines 15 in the figure include a signal line TG1, a signal line TG2, a signal line TG3, a signal line TG4, a signal line TG5, a signal line TG6, a signal line TG7, a signal line TG8, a signal line FDG, a signal line RST, and a signal line SEL.
- the signal line 16 also includes a signal line VSL.
- a signal line 19 is further arranged in the pixel block 200a.
- the signal line 19 is a signal line that connects the pixel block 200a and other pixel blocks 200 (pixel blocks 200b, 200c and 200d, which will be described later).
- a power line Vdd is wired to the pixel block 200a.
- the power line Vdd is a wiring for supplying power to the pixel block 200a.
- the anode of the photoelectric conversion unit 101a is grounded, and the cathode is connected to the source of the charge transfer unit 102a.
- the photoelectric conversion unit 101b has an anode grounded and a cathode connected to the source of the charge transfer unit 102b.
- the photoelectric conversion unit 101c has an anode grounded and a cathode connected to the source of the charge transfer unit 102c.
- the photoelectric conversion unit 101d has an anode grounded and a cathode connected to the source of the charge transfer unit 102d.
- the photoelectric conversion unit 101e has an anode grounded and a cathode connected to the source of the charge transfer unit 102e.
- the photoelectric conversion unit 101f has an anode grounded and a cathode connected to the source of the charge transfer unit 102f.
- the photoelectric conversion section 101g has an anode grounded and a cathode connected to the source of the charge transfer section 102g.
- the photoelectric conversion unit 101h has an anode grounded and a cathode connected to the source of the charge transfer unit 102h.
- The drains of the charge transfer portions 102a, 102b, 102c, 102d, 102e, 102f, 102g, and 102h are commonly connected to one end of the charge holding portion 103. The gate of the amplification transistor 111, the source of the reset portion 104, and the drain of the coupling portion 107 are also connected to this end of the charge holding portion 103. The other end of the charge holding portion 103 is grounded. The drain of the reset portion 104 and the drain of the amplification transistor 111 are connected to the power supply line Vdd. The source of the amplification transistor 111 is connected to the drain of the selection transistor 112, and the source of the selection transistor 112 is connected to the signal line VSL.
- the gate of the charge transfer section 102a is connected to the signal line TG1.
- a gate of the charge transfer unit 102b is connected to the signal line TG2.
- a gate of the charge transfer unit 102c is connected to the signal line TG3.
- a gate of the charge transfer unit 102d is connected to the signal line TG4.
- a gate of the charge transfer unit 102e is connected to the signal line TG5.
- a gate of the charge transfer unit 102f is connected to the signal line TG6.
- a gate of the charge transfer section 102g is connected to the signal line TG7.
- a gate of the charge transfer section 102h is connected to the signal line TG8.
- a gate of the reset unit 104 is connected to the signal line RST.
- a gate of the coupling unit 107 is connected to the signal line FDG.
- One end of the auxiliary charge holding portion 108 is grounded, and the other end is connected to the source of the coupling portion 107 and the signal line 19 .
- the photoelectric conversion unit 101a and the like perform photoelectric conversion of incident light.
- the photoelectric conversion units 101a and the like can be configured by photodiodes.
- the charge holding unit 103 holds charges generated by photoelectric conversion of the photoelectric conversion unit 101a and the like.
- the charge holding portion 103 can be configured by a semiconductor region formed on a semiconductor substrate.
- the charge transfer unit 102 a and the like transfer the charge of the photoelectric conversion unit 101 a and the like to the charge holding unit 103 .
- a control signal for the charge transfer unit 102a and the like is transmitted through the signal line TG1 and the like.
- the reset unit 104 resets the charge holding unit 103 . This reset can be performed by discharging the charge in the charge holding portion 103 by electrically connecting the charge holding portion 103 and the power supply line Vdd. A control signal for the reset unit 104 is transmitted through a signal line RST.
- the amplification transistor 111 amplifies the voltage of the charge holding section 103 .
- a gate of the amplification transistor 111 is connected to the charge holding unit 103 . Therefore, at the source of the amplifying transistor 111, an image signal having a voltage corresponding to the charge held in the charge holding portion 103 is generated. By turning on the select transistor 112, the image signal can be output to the signal line VSL.
- a control signal for the select transistor 112 is transmitted by a signal line SEL.
- the auxiliary charge holding section 108 is a capacitor coupled to the charge holding section 103 .
- Thereby, the charge holding capacity of the pixel block 200a can be adjusted. Specifically, coupling the auxiliary charge holding portion 108 to the charge holding portion 103 increases the charge holding capacity of the pixel block 200a. This can reduce the sensitivity of the pixel block 200a.
- On the other hand, when the auxiliary charge holding portion 108 is not coupled, the sensitivity of the pixel block 200a becomes relatively high, but charge saturation is likely to occur.
- the operation mode in which the auxiliary charge storage unit 108 is not coupled to the charge storage unit 103 and the operation mode in which the auxiliary charge storage unit 108 is coupled to the charge storage unit 103 are referred to as a high sensitivity mode and a low sensitivity mode, respectively.
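The trade-off between the two modes can be illustrated with simple conversion-gain arithmetic. All capacitance and voltage values below are assumptions for illustration, not figures from the document; the qualitative relation (coupling the auxiliary capacitance lowers the sensitivity but raises the charge that can be held before saturation) is what the example shows.

```python
# Illustrative arithmetic for the high/low sensitivity modes: coupling the
# assumed auxiliary capacitance C_AUX to the assumed floating-diffusion
# capacitance C_FD lowers the conversion gain but raises the full-well charge.

Q_E = 1.602e-19          # elementary charge [C]
C_FD = 1.0e-15           # assumed charge holding capacitance [F]
C_AUX = 3.0e-15          # assumed auxiliary capacitance [F]
V_SWING = 1.0            # assumed usable voltage swing [V]

def conversion_gain_uv_per_e(c_total: float) -> float:
    """Voltage step per electron on the charge holding node [uV/e-]."""
    return Q_E / c_total * 1e6

def full_well_e(c_total: float) -> float:
    """Electrons held before the voltage swing is exhausted."""
    return c_total * V_SWING / Q_E

# High sensitivity mode: auxiliary capacitor not coupled.
hi_gain = conversion_gain_uv_per_e(C_FD)
# Low sensitivity mode: coupling portion 107 conducts, capacitances add.
lo_gain = conversion_gain_uv_per_e(C_FD + C_AUX)

print(f"high sensitivity mode: {hi_gain:.1f} uV/e-, "
      f"saturates at {full_well_e(C_FD):.0f} e-")
print(f"low  sensitivity mode: {lo_gain:.1f} uV/e-, "
      f"saturates at {full_well_e(C_FD + C_AUX):.0f} e-")
```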
- a coupling portion 107 couples the auxiliary charge holding portion 108 to the charge holding portion 103 .
- the coupling portion 107 is composed of a MOS transistor, and can couple the auxiliary charge holding portion 108 to the charge holding portion 103 by conducting between the charge holding portion 103 and the auxiliary charge holding portion 108 .
- The pixel block 200a in the figure and the pixel blocks 200b, 200c, and 200d, which will be described later, are connected to each other by the signal lines 19 at their auxiliary charge holding portions 108.
- the charge transfer section 102, the reset section 104, the selection transistor 112 and the coupling section 107 can be configured with n-channel MOS transistors.
- the drain-source can be made conductive by applying a voltage exceeding the threshold of the gate-source voltage Vgs to the gate.
- a voltage exceeding the threshold of the gate-source voltage Vgs is hereinafter referred to as an on-voltage.
- a voltage that makes a MOS transistor non-conductive is called an off voltage.
- a control signal including this on-voltage and off-voltage is transmitted by the signal line TG1 or the like.
- the photoelectric conversion unit 101 can also be reset by making the charge transfer unit 102 conductive.
- The auxiliary charge holding unit 108 can be reset by making the coupling unit 107 conductive. This resetting of the auxiliary charge holding unit 108 can be performed during the charge holding period of its own pixel block, which is the period during which charge is held in the charge holding unit 103 of that pixel block 200, or during the charge non-holding period, when charge is not held in the charge holding unit 103 of any pixel block 200.
- FIG. 26 is a diagram illustrating a configuration example of pixel blocks according to an embodiment of the present disclosure. This figure is a diagram showing a configuration example of the pixel blocks 200a, 200b, 200c, and 200d.
- In each of the pixel blocks 200a and the like in the figure, two sets of four pixels 100 arranged in two rows and two columns, each set sharing a common on-chip lens 170, are arranged. The pixel blocks 200b, 200c, and 200d in the figure have the same circuit configuration as the pixel block 200a. The two sets of pixel blocks 200a, 200b, 200c, and 200d are commonly connected to the signal line 16 (signal line VSL) described with reference to FIG. 2.
- the configuration of the second embodiment of the present disclosure can be applied to other embodiments.
- the distance measuring unit 80 of FIG. 9 can be applied to the third embodiment of the present disclosure.
- the imaging device 1 has an imaging element 10 , a resolution selection section (imaging control section 40 ), and an image signal addition section 20 .
- In the imaging element 10, pixel blocks 200, each including a plurality of pixels 100 that photoelectrically convert incident light from a subject to generate image signals and an on-chip lens 170 arranged in common for the plurality of pixels 100 to condense the incident light onto them, are arranged in a two-dimensional matrix.
- The resolution selection unit (imaging control unit 40) selects among a first resolution corresponding to the size of the pixel 100, a second resolution corresponding to the size of the pixel block 200, and a third resolution corresponding to the size of the pixel block unit 300 formed by a plurality of adjacent pixel blocks 200.
- the image signal adder 20 generates a second image signal by adding the generated image signals according to the selected resolution. This has the effect of generating image signals with different resolutions.
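The addition performed by the image signal adder 20 amounts to summing raw per-pixel signals over square regions whose size follows the selected resolution (1x1, 2x2, or 4x4, matching the pixel, pixel block, and pixel block unit). The sketch below is a minimal model of that operation, using plain lists in place of a real sensor readout.

```python
# Minimal sketch of the image signal addition unit: raw per-pixel signals
# are summed over 1x1, 2x2, or 4x4 regions depending on whether the first,
# second, or third resolution is selected.

def add_signals(raw, resolution):
    """Sum `raw` (2-D list) over square regions per the selected resolution."""
    block = {"first": 1, "second": 2, "third": 4}[resolution]
    h, w = len(raw), len(raw[0])
    out = []
    for y in range(0, h, block):
        row = []
        for x in range(0, w, block):
            row.append(sum(raw[y + dy][x + dx]
                           for dy in range(block) for dx in range(block)))
        out.append(row)
    return out

raw = [[1] * 4 for _ in range(4)]      # 4x4 pixels, unit signal each
print(add_signals(raw, "first"))       # 4x4 output, each value 1
print(add_signals(raw, "second"))      # 2x2 output, each value 4
print(add_signals(raw, "third"))       # 1x1 output, value 16
```

Summing (rather than averaging) preserves the total collected charge, which is why the lower resolutions also behave as higher-sensitivity readouts.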
- the pixel 100 may be provided with one of a plurality of color filters that transmit incident light of different wavelengths. Thereby, a color image signal can be generated.
- the pixel block unit 300 may have color filters that transmit incident light of the same wavelength in each of the pixels 100 of the plurality of pixel blocks 200 thereof. Accordingly, the wavelengths of incident light corresponding to the pixels 100 of the pixel block 200 can be aligned.
- the plurality of color filters may be arranged in a predetermined arrangement order in the plurality of pixels 100 .
- it may further include a signal processing unit that performs a remosaic process, which is a process of converting an image signal into an image signal corresponding to an arrangement of a plurality of color filters in a different order than the arrangement. This makes it possible to generate an image signal corresponding to a desired arrangement of color filters.
- the signal processing unit may perform re-mosaic processing according to the selected resolution. This makes it possible to select remosaic processing according to resolution.
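As a hedged sketch of what remosaic processing does: signals captured under a layout with 2x2 same-color groups (a quad-Bayer-type arrangement, assumed here for illustration) are rearranged into a conventional Bayer order. The nearest-neighbor strategy below is a deliberate simplification; practical remosaic processing interpolates more carefully.

```python
# Hedged remosaic sketch: rearrange samples captured under a quad-Bayer
# layout (QUAD) toward a conventional Bayer order (BAYER) by taking, for
# each output position, the nearest captured sample of the wanted color.

QUAD = [["G", "G", "R", "R"],
        ["G", "G", "R", "R"],
        ["B", "B", "G", "G"],
        ["B", "B", "G", "G"]]          # color of each pixel as captured

BAYER = [["G", "R", "G", "R"],
         ["B", "G", "B", "G"],
         ["G", "R", "G", "R"],
         ["B", "G", "B", "G"]]         # desired output arrangement

def remosaic(img):
    """Rearrange quad-Bayer samples toward Bayer order (nearest same color)."""
    h, w = len(img), len(img[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            want = BAYER[y % 4][x % 4]
            # nearest captured sample (Manhattan distance) of the wanted color
            best = min((abs(dy) + abs(dx), img[y + dy][x + dx])
                       for dy in range(-2, 3) for dx in range(-2, 3)
                       if 0 <= y + dy < h and 0 <= x + dx < w
                       and QUAD[(y + dy) % 4][(x + dx) % 4] == want)
            out[y][x] = best[1]
    return out

# Feeding the color labels themselves through shows the reordering:
print(remosaic(QUAD) == BAYER)   # True
```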
- A target area detection unit 50 for detecting a target area, which is an image area including an object to be imaged among the subjects in an image formed by the generated image signals, may be further provided, and the resolution selection unit may select the resolution according to the detected target area. Thereby, it is possible to adjust the resolution for the desired object to be imaged.
- the target area detection unit 50 may detect the target area based on the user's instruction.
- the target area detection unit 50 may detect the target area from an image generated in advance.
- a sensor that measures the distance to the subject may be further provided, and the resolution selection unit (imaging control unit 40) may select the resolution according to the measured distance. This makes it possible to automate enlargement and reduction processing according to the distance to the subject.
- the resolution selection unit (imaging control unit 40) may select the resolution according to the frame rate of the images when generating a moving image, which is a plurality of images generated in time series. Thereby, the resolution can be selected according to the data amount of the moving image.
- The pixel 100 may further generate a phase difference signal for detecting the image plane phase difference by pupil-dividing the subject, and the image signal addition unit 20 may generate a second phase difference signal by adding the generated phase difference signals according to the selected resolution. Thereby, a phase difference signal can be generated according to the resolution.
- The image signal addition unit 20 may add the phase difference signals generated by two adjacent pixels 100 in the pixel block 200 when the second resolution is selected, and may add the phase difference signals generated by a plurality of pixels 100 arranged in two adjacent pixel blocks 200 when the third resolution is selected.
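The grouping just described can be sketched along one pixel row: two samples per group at the second resolution, four (spanning two adjacent blocks) at the third. The flat list of sample values is illustrative.

```python
# Sketch of phase difference signal addition per selected resolution.

def add_phase_diff(signals, resolution):
    """signals: per-pixel phase difference samples along one row."""
    if resolution == "first":
        return list(signals)                   # no addition
    if resolution == "second":
        group = 2                              # two adjacent pixels in a block
    else:                                      # "third"
        group = 4                              # pixels of two adjacent blocks
    return [sum(signals[i:i + group]) for i in range(0, len(signals), group)]

row = [1, 2, 3, 4, 5, 6, 7, 8]
print(add_phase_diff(row, "second"))   # [3, 7, 11, 15]
print(add_phase_diff(row, "third"))    # [10, 26]
```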
- the pixel block 200 may include four pixels 100 arranged in two rows and two columns.
- the imaging device 1 has multiple pixels 100 and multiple on-chip lenses 170 .
- A plurality of pixels 100 that perform photoelectric conversion of incident light to generate image signals are arranged in a two-dimensional matrix.
- a plurality of on-chip lenses 170 are arranged for each pixel block 200 consisting of four pixels 100 arranged in two rows and two columns.
- One of a plurality of color filters 150 that transmit incident light of different wavelengths is arranged in each pixel block unit 300 consisting of four pixel blocks 200 arranged in two rows and two columns. The first resolution in the first imaging mode, in which an image signal is generated for each pixel 100, is four times the second resolution in the second imaging mode, in which an image signal is generated for each pixel block 200, and the second resolution is four times the third resolution in the third imaging mode, in which an image signal is generated for each pixel block unit 300.
- The first frame rate in the first imaging mode may be approximately 1/4 of the second frame rate in the second imaging mode, and the third frame rate in the third imaging mode may be approximately 1/4 of the second frame rate. Thereby, a resolution according to the frame rate can be applied.
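The 4x relations stated above can be checked with worked numbers. The sensor dimensions and base frame rate below are assumptions chosen for round arithmetic, not figures from the document.

```python
# Worked numbers for the stated 4x relations between the three imaging modes.

pixels_w, pixels_h = 8000, 6000                   # assumed sensor dimensions
first_resolution = pixels_w * pixels_h            # per-pixel readout
second_resolution = first_resolution // 4         # per 2x2 pixel block
third_resolution = second_resolution // 4         # per 2x2 block unit

assert first_resolution == 4 * second_resolution
assert second_resolution == 4 * third_resolution

# Per the text, both the first and the third imaging modes run at
# approximately 1/4 of the second frame rate.
second_fps = 120.0                                # assumed base frame rate
first_fps = second_fps / 4
third_fps = second_fps / 4
print(first_resolution, second_resolution, third_resolution)
print(first_fps, second_fps, third_fps)
```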
- the sensor has an image pickup device 10 in which a plurality of pixels 100 for generating image signals by performing photoelectric conversion of incident light from a subject are arranged in a two-dimensional matrix.
- One of a plurality of color filters 150 that transmit the incident light of different wavelengths is arranged in each pixel block unit 300 in which pixel blocks 200 each including pixels 100 are arranged in two rows and two columns.
- The sensor operates in any one of a first imaging mode for generating an image signal for each pixel 100, a second imaging mode for generating an image signal for each pixel block 200, and a third imaging mode for generating an image signal for each pixel block unit 300. This has the effect of generating image signals with different resolutions.
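The three imaging modes and a controller emitting a mode-switch signal can be sketched as follows. The class names and the brightness-based decision rule are illustrative assumptions, not an API from the document; the document only states that the control signal is derived from a generated second image signal.

```python
# Hedged sketch of the three imaging modes and an imaging control device
# that chooses one from the mean level of the second image signal.

from enum import Enum

class ImagingMode(Enum):
    FIRST = 1    # image signal per pixel 100
    SECOND = 2   # image signal per pixel block 200 (2x2 pixels)
    THIRD = 3    # image signal per pixel block unit 300 (2x2 blocks)

class ImagingControlDevice:
    def control_signal(self, mean_level: float) -> ImagingMode:
        """Choose a mode from the mean normalized level of the image signal.
        The 0.5 / 0.1 thresholds are assumed values for illustration."""
        if mean_level >= 0.5:      # bright scene: full resolution
            return ImagingMode.FIRST
        if mean_level >= 0.1:      # dimmer scene: 2x2 addition
            return ImagingMode.SECOND
        return ImagingMode.THIRD   # dark scene: 4x4 addition

ctrl = ImagingControlDevice()
print(ctrl.control_signal(0.8))   # ImagingMode.FIRST
print(ctrl.control_signal(0.02))  # ImagingMode.THIRD
```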
- The imaging control device outputs, to the sensor, a control signal for switching among a first mode in which an image signal is generated for each pixel 100 that photoelectrically converts incident light from a subject, a second mode in which an image signal is generated for each pixel block 200 consisting of four of the pixels arranged in two rows and two columns, and a third mode in which an image signal is generated for each pixel block unit 300 consisting of four of the pixel blocks 200 arranged in two rows and two columns.
- (1) An imaging device comprising: an imaging element in which pixel blocks, each comprising a plurality of pixels that photoelectrically convert incident light from a subject to generate image signals and an on-chip lens arranged in common with the plurality of pixels to condense the incident light onto the plurality of pixels, are arranged in a two-dimensional matrix; a resolution selection unit that selects a first resolution corresponding to the pixel size, a second resolution corresponding to the size of the pixel block, and a third resolution corresponding to the size of a pixel block unit formed by a plurality of adjacent pixel blocks; and an image signal addition unit that generates a second image signal by adding the generated image signals in accordance with the selected resolution.
- (2) The imaging device according to (1), wherein the pixels each include one of a plurality of color filters that transmit the incident light of different wavelengths.
- (3) The imaging device according to (2), wherein the pixel block unit includes the color filters that transmit incident light of the same wavelength to the pixels of the plurality of pixel blocks of the pixel block unit.
- (4) The imaging device according to (2), wherein the plurality of color filters are arranged in a predetermined arrangement order in the plurality of pixels.
- (5) The imaging device according to (4), further comprising a signal processing unit that performs remosaic processing, which is processing for converting the image signal into an image signal corresponding to an arrangement of the plurality of color filters in a different order from the arrangement.
- (6) The imaging device according to (5), wherein the signal processing unit performs the remosaic processing according to the selected resolution.
- (7) The imaging device according to any one of (1) to (6), further comprising a target area detection unit for detecting a target area, which is an image area including an object to be imaged among the subjects, in the image formed by the generated image signals, wherein the resolution selection unit selects the resolution according to the detected target area.
- (8) The imaging device according to (7), wherein the target area detection unit detects the target area based on a user's instruction.
- (9) The imaging device according to (7), wherein the target area detection unit detects the target area from the image generated in advance.
- (10) The imaging device according to any one of (1) to (9), further comprising a sensor for measuring the distance to the subject, wherein the resolution selection unit selects the resolution according to the measured distance.
- (12) The imaging device according to any one of (1) to (11), wherein the pixels further generate a phase difference signal for pupil-dividing the subject and detecting an image plane phase difference, and the image signal addition unit adds the generated phase difference signals according to the selected resolution to generate a second phase difference signal.
- (13) The imaging device according to (12), wherein the image signal addition unit adds the phase difference signals generated by two adjacent pixels in the pixel block when the second resolution is selected, and adds the phase difference signals generated by a plurality of pixels arranged in two adjacent pixel blocks when the third resolution is selected.
- (16) The imaging apparatus according to (15), wherein the first frame rate in the first imaging mode is approximately 1/4 of the second frame rate in the second imaging mode, and the third frame rate in the third imaging mode is approximately 1/4 of the second frame rate.
- (17) A sensor that operates in any one of a first imaging mode for generating an image signal for each pixel, a second imaging mode for generating an image signal for each pixel block, and a third imaging mode for generating an image signal for each pixel block unit.
- (18) An imaging control device that outputs, to the sensor, a control signal for switching among a first mode in which an image signal is generated for each pixel that photoelectrically converts incident light from a subject, a second mode in which an image signal is generated for each pixel block composed of the four pixels arranged in two rows and two columns, and a third mode in which an image signal is generated for each pixel block unit composed of the four pixel blocks arranged in two rows and two columns, based on a second image signal generated in any one of the first to third modes.
- imaging device 10 imaging element 20 image signal addition unit 30 signal processing unit 33 re-mosaic processing unit 34 luminance signal generation unit 40 imaging control unit 50 target area detection unit 60 storage unit 70 photographing lens 80 distance measurement unit 90 focus position detection unit 100 pixels 150 color filter 170 on-chip lens 200 pixel block 300 pixel block unit
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- Power Engineering (AREA)
- General Physics & Mathematics (AREA)
- Condensed Matter Physics & Semiconductors (AREA)
- Electromagnetism (AREA)
- Computer Hardware Design (AREA)
- Microelectronics & Electronic Packaging (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Transforming Light Signals Into Electric Signals (AREA)
- Studio Devices (AREA)
Abstract
Description
1. First Embodiment
2. Second Embodiment
3. Third Embodiment
4. Fourth Embodiment
5. Fifth Embodiment
6. Sixth Embodiment
7. Seventh Embodiment
8. Modifications
9. Configuration Example of Pixel Block
[Configuration of the imaging device]
FIG. 1 is a diagram illustrating a configuration example of an imaging device according to the first embodiment of the present disclosure. This figure is a block diagram showing a configuration example of the imaging device 1. The imaging device 1 is a device that generates image signals constituting an image of a subject. The imaging device 1 includes an imaging element 10, an image signal addition unit 20, a signal processing unit 30, an imaging control unit 40, a target area detection unit 50, a storage unit 60, and a photographing lens 70.
FIG. 2 is a diagram illustrating a configuration example of an imaging element according to an embodiment of the present disclosure. This figure is a block diagram showing a configuration example of the imaging element 10. The imaging element 10 is a semiconductor element that generates image data of a subject. The imaging element 10 includes a pixel array section 11, a vertical drive section 12, a column signal processing section 13, and a control section 14.
FIG. 3 is a diagram illustrating a configuration example of pixels according to the first embodiment of the present disclosure. This figure is a plan view showing a configuration example of the pixels 100. As described above, the pixels 100 are arranged in the pixel array section 11, and this figure shows an example in which the pixels 100 are arranged in a two-dimensional matrix. Note that "R", "G", and "B" attached to the pixels 100 in the figure indicate the types of color filters 150, which will be described later, corresponding to red light, green light, and blue light, respectively. The white rectangles in the figure represent pixels 100 in which a color filter 150 corresponding to green light is arranged, the rectangles hatched with downward-sloping diagonal lines represent pixels 100 in which a color filter 150 corresponding to red light is arranged, and the rectangles hatched with upward-sloping diagonal lines represent pixels 100 in which a color filter 150 corresponding to blue light is arranged.
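A color filter layout of this kind can be sketched by a small generator. In this document the pixels of one pixel block unit share a single filter color, with the units themselves in a Bayer-type order; the GRBG cell order below is an assumption for illustration.

```python
# Sketch of the color filter layout: each pixel block unit (4x4 pixels:
# 2x2 blocks of 2x2 pixels) carries a single filter color, and the units
# follow a Bayer-type order (assumed GRBG cell).

def unit_color(row: int, col: int) -> str:
    """Color filter of pixel (row, col); all 16 pixels of a unit share it."""
    ur, uc = row // 4, col // 4          # index of the pixel block unit
    if ur % 2 == 0:
        return "G" if uc % 2 == 0 else "R"
    return "B" if uc % 2 == 0 else "G"

# All pixels of one unit share a color; adjacent units differ Bayer-wise.
print({unit_color(r, c) for r in range(4) for c in range(4)})   # {'G'}
print(unit_color(0, 4), unit_color(4, 0), unit_color(4, 4))     # R B G
```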
FIG. 4 is a cross-sectional view illustrating a configuration example of a pixel according to an embodiment of the present disclosure. This figure is a cross-sectional view showing a configuration example of the pixel 100. The pixel 100 in the figure is a pixel 100 included in a pixel block 200 in which an on-chip lens 170 is arranged in common. The pixel 100 includes a semiconductor substrate 120, an insulating film 130, a wiring region 140, a separation portion 135, a protective film 136, and a color filter 150.
FIG. 5 is a diagram illustrating a configuration example of a signal processing unit according to the first embodiment of the present disclosure. This figure is a block diagram showing a configuration example of the signal processing unit 30. The signal processing unit 30 in the figure includes a sensitivity correction unit 31, a pixel defect correction unit 32, and a remosaic processing unit 33.
FIGS. 6A to 6C are diagrams illustrating an example of remosaic processing according to an embodiment of the present disclosure. These figures show the wavelength of incident light to which the image signal of each pixel 100 corresponds. In the figures, rectangles with dot hatching represent regions of image signals corresponding to green light, rectangles with horizontal-line hatching represent regions of image signals corresponding to red light, and rectangles with vertical-line hatching represent regions of image signals corresponding to blue light.
FIGS. 7A and 7B are diagrams illustrating examples of images generated by the imaging device according to an embodiment of the present disclosure. These figures show an example in which the resolution and composition of the image are changed for each imaging scene in the imaging device 1.
FIG. 8 is a diagram illustrating an example of imaging processing according to the first embodiment of the present disclosure. This figure is a flow chart showing an example of the imaging processing in the imaging device 1. First, the imaging control unit 40 selects the first resolution (step S100). Next, the target area detection unit 50 selects a subject based on the subject data in the storage unit 60 (step S101). Next, the target area detection unit 50 detects a target area based on the selected subject (step S102). Next, the imaging control unit 40 controls the imaging element 10 to start imaging (step S103). Next, the target area detection unit 50 tracks the subject (step S104). This can be done by the target area detection unit 50 continuously detecting the target area in accordance with the movement of the subject.
The imaging device 1 of the first embodiment described above performs imaging of the target area detected by the target area detection unit 50. In contrast, the imaging device 1 of the second embodiment of the present disclosure differs from the first embodiment in that it recognizes the object using another sensor.
FIG. 9 is a diagram illustrating a configuration example of an imaging device according to the second embodiment of the present disclosure. This figure, like FIG. 1, is a block diagram showing a configuration example of the imaging device 1. The imaging device 1 in the figure differs from the imaging device 1 in FIG. 1 in that it includes a distance measurement unit 80 instead of the target area detection unit 50.
FIG. 10 is a diagram illustrating an example of imaging processing according to the second embodiment of the present disclosure. This figure is a flow chart showing an example of the imaging processing in the imaging device 1. First, the imaging control unit 40 selects the first resolution (step S150). Next, the target area detection unit 50 selects a subject based on the subject data in the storage unit 60 (step S151). The data of the selected subject is output to the distance measurement unit 80. Next, the imaging control unit 40 controls the imaging element 10 to start imaging (step S153). Next, the distance measurement unit 80 measures the distance to the subject (step S154).
The imaging device 1 of the first embodiment described above generates the image signal of the subject. In contrast, the imaging device 1 of the third embodiment of the present disclosure differs from the first embodiment in that it further detects the focus position of the subject.
FIG. 11 is a diagram illustrating a configuration example of an imaging device according to the third embodiment of the present disclosure. This figure, like FIG. 1, is a block diagram showing a configuration example of the imaging device 1. The imaging device 1 in the figure differs from the imaging device 1 in FIG. 1 in that it further includes a focus position detection unit 90 and that the signal processing unit 30 outputs a phase difference signal.
FIG. 12 is a diagram illustrating a configuration example of a signal processing unit according to the third embodiment of the present disclosure. This figure, like FIG. 5, is a block diagram showing a configuration example of the signal processing unit 30. The signal processing unit 30 in the figure differs from the signal processing unit 30 in FIG. 5 in that it further includes a luminance signal generation unit 34.
FIG. 13 is a diagram illustrating a configuration example of phase difference pixels according to the third embodiment of the present disclosure. This figure shows an arrangement example of the phase difference pixels when the first resolution is applied. In the figure, "TL", "TR", "BL", and "BR" denote the upper-left, upper-right, lower-left, and lower-right phase difference pixels, respectively. Pupil division in the left-right direction of the figure can be performed by the "TL" and "TR" pair and by the "BL" and "BR" pair of phase difference pixels. Pupil division in the up-down direction of the figure can be performed by the "TL" and "BL" pair and by the "TR" and "BR" pair of phase difference pixels.
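The two directions of pupil division described above can be sketched by combining the four samples under one on-chip lens: the left-column and right-column sums give a horizontal pair, the top-row and bottom-row sums a vertical pair. This is a simplified illustration with made-up sample values, not the document's signal path.

```python
# Sketch of two-direction pupil division from the four phase difference
# samples (TL, TR, BL, BR) under one on-chip lens.

def pupil_split(tl, tr, bl, br):
    """Return (left, right, top, bottom) pupil signals for one lens."""
    left, right = tl + bl, tr + br    # left-right pupil division
    top, bottom = tl + tr, bl + br    # up-down pupil division
    return left, right, top, bottom

print(pupil_split(10, 14, 11, 15))   # (21, 29, 24, 26)
```

Comparing the left/right (or top/bottom) signals across many lens positions yields the image plane phase difference used for focus detection.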
The imaging device 1 of the first embodiment described above selects the resolution according to the target area detected by the target area detection unit 50. In contrast, the imaging device 1 of the fourth embodiment of the present disclosure differs from the first embodiment in that it applies a resolution selected by the user of the imaging device 1.
FIG. 17 is a diagram illustrating a configuration example of an imaging device according to the fourth embodiment of the present disclosure. This figure, like FIG. 1, is a block diagram showing a configuration example of the imaging device 1. The imaging device 1 in the figure differs from the imaging device 1 in FIG. 1 in that the target area detection unit 50 and the storage unit 60 are omitted.
An example in which the resolution is selected according to the illuminance and imaging is performed will be described.
FIG. 18 is a diagram illustrating a configuration example of an imaging device according to the fifth embodiment of the present disclosure. This figure, like FIG. 1, is a block diagram showing a configuration example of the imaging device 1. The imaging device 1 in the figure differs from the imaging device 1 in FIG. 1 in that the target area detection unit 50 is omitted and an illuminance sensor 89 is provided. In the figure, the imaging control unit 40 and the storage unit 60 constitute an imaging control device.
FIG. 19 is a diagram illustrating an example of a processing procedure of imaging processing according to the fifth embodiment of the present disclosure. This figure is a flow chart showing an example of the processing procedure of the imaging processing in the imaging device 1. First, the illuminance sensor 89 measures the illuminance (step S180). Next, the imaging control unit 40 determines whether the measured illuminance is greater than or equal to a first illuminance (step S182). As a result, if the illuminance is greater than or equal to the first illuminance (step S182, Yes), the imaging control unit 40 selects the first resolution (step S183) and proceeds to step S187.
An example in which the resolution is selected according to the exposure and imaging is performed will be described.
FIG. 20 is a diagram illustrating a configuration example of an imaging device according to the sixth embodiment of the present disclosure. This figure, like FIG. 18, is a block diagram showing a configuration example of the imaging device 1. The imaging device 1 in the figure differs from the imaging device 1 in FIG. 18 in that it includes an exposure control unit 88 instead of the illuminance sensor 89 and the storage unit 60.
A modification of the embodiment of the present disclosure will be described.
本開示の実施形態の画素ブロック200の回路例について説明する。
図は、本開示の実施形態に係る画素ブロックの回路構成の一例を示す図である。同図は、2つの画素ブロック200毎に画像信号生成部110が配置される回路構成の一例を表す図である。同図の画素ブロック200aは、光電変換部101a、101b、101c、101d、101e、101f、101g及び101h並びに電荷転送部102a、102b、102c、102d、102e、102f、102g及び102hを備える。また、画素ブロック200aは、電荷保持部103、リセット部104、画像信号生成部110、補助電荷保持部108、結合部107及び画像信号生成部110を更に備える。なお、画像信号生成部110は、増幅トランジスタ111及び選択トランジスタ112を備える。
図26は、本開示の実施形態に係る画素ブロックの構成例を示す図である。同図は、画素ブロック200a、200b、200c及び200dの構成例を表す図である。同図の画素ブロック200a等は、オンチップレンズ170が共通に配置される2行2列に配列された4つの画素100が2組配置される。同図の画素ブロック200b、200c及び200dは、画素ブロック200aと共通の回路に構成される。また、2組の画素ブロック200a、200b、200c及び200dが図2において説明した信号線16(信号線VSL)に共通に接続される。
The imaging device 1 includes the imaging element 10, a resolution selection unit (imaging control unit 40), and the image signal addition unit 20. In the imaging element 10, pixel blocks 200 are arranged in a two-dimensional matrix, each comprising a plurality of pixels 100 that photoelectrically convert incident light from a subject to generate image signals, and an on-chip lens 170 arranged in common for the plurality of pixels 100 to focus the incident light onto them. The resolution selection unit (imaging control unit 40) selects a first resolution corresponding to the size of the pixel 100, a second resolution corresponding to the size of the pixel block 200, or a third resolution corresponding to the size of a pixel block unit 300 composed of a plurality of adjacent pixel blocks 200. The image signal addition unit 20 generates a second image signal by adding the generated image signals according to the selected resolution. This provides the effect of generating image signals of different resolutions.
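The three resolutions and the signal addition can be illustrated with a minimal pure-Python sketch; the function name and the uniform toy readout are assumptions for illustration, not part of the disclosure.

```python
def bin_add(signal, factor):
    # Sum factor x factor neighborhoods of a 2-D list of pixel signals,
    # emulating the image signal addition unit 20: factor=2 yields the
    # second (per-pixel-block) resolution, factor=4 the third
    # (per-pixel-block-unit) resolution.
    h, w = len(signal), len(signal[0])
    return [
        [sum(signal[r + dr][c + dc]
             for dr in range(factor) for dc in range(factor))
         for c in range(0, w, factor)]
        for r in range(0, h, factor)
    ]

raw = [[1] * 8 for _ in range(8)]   # first resolution: one signal per pixel
per_block = bin_add(raw, 2)         # second resolution: 1/4 the pixel count
per_unit = bin_add(raw, 4)          # third resolution: 1/16 the pixel count
assert len(per_block) == 4 and per_block[0][0] == 4
assert len(per_unit) == 2 and per_unit[0][0] == 16   # 16 signals added
```

Each halving step divides the pixel count by four, matching the 4x resolution ratios between the first, second, and third imaging modes described in (15) below.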
(1)
An imaging device comprising:
an imaging element in which pixel blocks are arranged in a two-dimensional matrix, each pixel block comprising a plurality of pixels that photoelectrically convert incident light from a subject to generate image signals, and an on-chip lens arranged in common for the plurality of pixels to focus the incident light onto the plurality of pixels;
a resolution selection unit that selects a first resolution corresponding to the size of the pixels, a second resolution corresponding to the size of the pixel blocks, or a third resolution corresponding to the size of a pixel block unit constituted by a plurality of adjacent pixel blocks; and
an image signal addition unit that generates a second image signal by adding the generated image signals according to the selected resolution.
(2)
The imaging device according to (1) above, wherein each of the pixels is provided with one of a plurality of color filters that transmit the incident light of different wavelengths.
(3)
The imaging device according to (2) above, wherein, in the pixel block unit, the pixels of each of its plurality of pixel blocks are provided with the color filters that transmit incident light of the same wavelength.
(4)
The imaging device according to (2) above, wherein the plurality of color filters are arranged across the plurality of pixels in an array with a predetermined order.
(5)
The imaging device according to (4) above, further comprising a signal processing unit that performs re-mosaic processing, that is, processing that converts the image signals into image signals corresponding to an array of the plurality of color filters in an order different from said array.
(6)
The imaging device according to (5) above, wherein the signal processing unit performs the re-mosaic processing according to the selected resolution.
(7)
The imaging device according to any one of (1) to (6) above, further comprising a target region detection unit that detects a target region, which is a region of the image constituted by the generated image signals that contains an object to be imaged among the subject, wherein the resolution selection unit selects the resolution according to the detected target region.
(8)
The imaging device according to (7) above, wherein the target region detection unit detects the target region based on an instruction from a user.
(9)
The imaging device according to (7) above, wherein the target region detection unit detects the target region from the image generated in advance.
(10)
The imaging device according to any one of (1) to (9) above, further comprising a sensor that measures a distance to the subject, wherein the resolution selection unit selects the resolution according to the measured distance.
(11)
The imaging device according to any one of (1) to (10) above, wherein the resolution selection unit selects the resolution according to a frame rate of images when generating a moving image, which is a plurality of images generated in time series.
(12)
The imaging device according to any one of (1) to (11) above, wherein the pixels further generate phase-difference signals for detecting an image-plane phase difference by pupil division of the subject, and the image signal addition unit generates a second phase-difference signal by adding the generated phase-difference signals according to the selected resolution.
(13)
The imaging device according to (12) above, wherein the image signal addition unit adds the phase-difference signals generated by two adjacent pixels in the pixel block when the second resolution is selected, and adds the phase-difference signals generated by a plurality of pixels arranged in two adjacent pixel blocks when the third resolution is selected.
(14)
The imaging device according to any one of (1) to (13) above, wherein the pixel block comprises four of the pixels arranged in two rows and two columns.
(15)
An imaging device comprising:
a plurality of pixels arranged in a two-dimensional matrix that photoelectrically convert incident light to generate image signals; and
a plurality of on-chip lenses each arranged for a pixel block consisting of four of the pixels arranged in two rows and two columns,
wherein one of a plurality of color filters that transmit the incident light of different wavelengths is arranged for each pixel block unit consisting of four of the pixel blocks arranged in two rows and two columns,
a first resolution in a first imaging mode that generates an image signal for each pixel is four times a second resolution in a second imaging mode that generates an image signal for each pixel block, and
the second resolution is four times a third resolution in a third imaging mode that generates an image signal for each pixel block unit.
(16)
The imaging device according to (15) above, wherein a first frame rate in the first imaging mode is approximately 1/4 of a second frame rate in the second imaging mode, and a third frame rate in the third imaging mode is approximately 1/4 of the second frame rate.
(17)
A sensor comprising an imaging element in which a plurality of pixels that photoelectrically convert incident light from a subject to generate image signals are arranged in a two-dimensional matrix, wherein one of a plurality of color filters that transmit the incident light of different wavelengths is arranged for each pixel block unit constituted by pixel blocks arranged in two rows and two columns, each pixel block consisting of four of the pixels arranged in two rows and two columns, the sensor operating in any one of a first imaging mode that generates an image signal for each pixel, a second imaging mode that generates an image signal for each pixel block, and a third imaging mode that generates an image signal for each pixel block unit.
(18)
An imaging control device that, based on a second image signal generated in any one of a first mode, a second mode, and a third mode by a sensor having the first mode that generates an image signal for each pixel performing photoelectric conversion of incident light from a subject, the second mode that generates an image signal for each pixel block consisting of four of the pixels arranged in two rows and two columns, and the third mode that generates an image signal for each pixel block unit consisting of four of the pixel blocks arranged in two rows and two columns, outputs to the sensor a control signal for switching among the first mode, the second mode, and the third mode.
10 Imaging element
20 Image signal addition unit
30 Signal processing unit
33 Re-mosaic processing unit
34 Luminance signal generation unit
40 Imaging control unit
50 Target region detection unit
60 Storage unit
70 Imaging lens
80 Distance measurement unit
90 Focal position detection unit
100 Pixel
150 Color filter
170 On-chip lens
200 Pixel block
300 Pixel block unit
Claims (18)
- An imaging device comprising: an imaging element in which pixel blocks are arranged in a two-dimensional matrix, each pixel block comprising a plurality of pixels that photoelectrically convert incident light from a subject to generate image signals, and an on-chip lens arranged in common for the plurality of pixels to focus the incident light onto the plurality of pixels; a resolution selection unit that selects a first resolution corresponding to the size of the pixels, a second resolution corresponding to the size of the pixel blocks, or a third resolution corresponding to the size of a pixel block unit constituted by a plurality of adjacent pixel blocks; and an image signal addition unit that generates a second image signal by adding the generated image signals according to the selected resolution.
- The imaging device according to claim 1, wherein each of the pixels is provided with one of a plurality of color filters that transmit the incident light of different wavelengths.
- The imaging device according to claim 2, wherein, in the pixel block unit, the pixels of each of its plurality of pixel blocks are provided with the color filters that transmit incident light of the same wavelength.
- The imaging device according to claim 2, wherein the plurality of color filters are arranged across the plurality of pixels in an array with a predetermined order.
- The imaging device according to claim 4, further comprising a signal processing unit that performs re-mosaic processing, that is, processing that converts the image signals into image signals corresponding to an array of the plurality of color filters in an order different from said array.
- The imaging device according to claim 5, wherein the signal processing unit performs the re-mosaic processing according to the selected resolution.
- The imaging device according to claim 1, further comprising a target region detection unit that detects a target region, which is a region of the image constituted by the generated image signals that contains an object to be imaged among the subject, wherein the resolution selection unit selects the resolution according to the detected target region.
- The imaging device according to claim 7, wherein the target region detection unit detects the target region based on an instruction from a user.
- The imaging device according to claim 7, wherein the target region detection unit detects the target region from the image generated in advance.
- The imaging device according to claim 1, further comprising a sensor that measures a distance to the subject, wherein the resolution selection unit selects the resolution according to the measured distance.
- The imaging device according to claim 1, wherein the resolution selection unit selects the resolution according to a frame rate of images when generating a moving image, which is a plurality of images generated in time series.
- The imaging device according to claim 1, wherein the pixels further generate phase-difference signals for detecting an image-plane phase difference by pupil division of the subject, and the image signal addition unit generates a second phase-difference signal by adding the generated phase-difference signals according to the selected resolution.
- The imaging device according to claim 12, wherein the image signal addition unit adds the phase-difference signals generated by two adjacent pixels in the pixel block when the second resolution is selected, and adds the phase-difference signals generated by a plurality of pixels arranged in two adjacent pixel blocks when the third resolution is selected.
- The imaging device according to claim 1, wherein the pixel block comprises four of the pixels arranged in two rows and two columns.
- An imaging device comprising: a plurality of pixels arranged in a two-dimensional matrix that photoelectrically convert incident light to generate image signals; and a plurality of on-chip lenses each arranged for a pixel block consisting of four of the pixels arranged in two rows and two columns, wherein one of a plurality of color filters that transmit the incident light of different wavelengths is arranged for each pixel block unit consisting of four of the pixel blocks arranged in two rows and two columns, a first resolution in a first imaging mode that generates an image signal for each pixel is four times a second resolution in a second imaging mode that generates an image signal for each pixel block, and the second resolution is four times a third resolution in a third imaging mode that generates an image signal for each pixel block unit.
- The imaging device according to claim 15, wherein a first frame rate in the first imaging mode is approximately 1/4 of a second frame rate in the second imaging mode, and a third frame rate in the third imaging mode is approximately 1/4 of the second frame rate.
- A sensor comprising an imaging element in which a plurality of pixels that photoelectrically convert incident light from a subject to generate image signals are arranged in a two-dimensional matrix, wherein one of a plurality of color filters that transmit the incident light of different wavelengths is arranged for each pixel block unit constituted by pixel blocks arranged in two rows and two columns, each pixel block consisting of four of the pixels arranged in two rows and two columns, the sensor operating in any one of a first imaging mode that generates an image signal for each pixel, a second imaging mode that generates an image signal for each pixel block, and a third imaging mode that generates an image signal for each pixel block unit.
- An imaging control device that, based on a second image signal generated in any one of a first mode, a second mode, and a third mode by a sensor having the first mode that generates an image signal for each pixel performing photoelectric conversion of incident light from a subject, the second mode that generates an image signal for each pixel block consisting of four of the pixels arranged in two rows and two columns, and the third mode that generates an image signal for each pixel block unit consisting of four of the pixel blocks arranged in two rows and two columns, outputs to the sensor a control signal for switching among the first mode, the second mode, and the third mode.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020247003314A KR20240050322A (ko) | 2021-08-18 | 2022-07-06 | 촬상 장치, 센서 및 촬상 제어 장치 |
CN202280055956.4A CN117859341A (zh) | 2021-08-18 | 2022-07-06 | 成像装置、传感器及成像控制装置 |
JP2023542256A JPWO2023021871A1 (ja) | 2021-08-18 | 2022-07-06 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021-133379 | 2021-08-18 | ||
JP2021133379 | 2021-08-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023021871A1 true WO2023021871A1 (ja) | 2023-02-23 |
Family
ID=85240535
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2022/026817 WO2023021871A1 (ja) | 2021-08-18 | 2022-07-06 | 撮像装置、センサ及び撮像制御装置 |
Country Status (5)
Country | Link |
---|---|
JP (1) | JPWO2023021871A1 (ja) |
KR (1) | KR20240050322A (ja) |
CN (1) | CN117859341A (ja) |
TW (1) | TW202315389A (ja) |
WO (1) | WO2023021871A1 (ja) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2010147816A (ja) * | 2008-12-18 | 2010-07-01 | Canon Inc | 撮像装置及びその制御方法 |
JP2014007454A (ja) * | 2012-06-21 | 2014-01-16 | Nikon Corp | 撮像装置および撮像方法 |
JP2014228671A (ja) | 2013-05-22 | 2014-12-08 | ソニー株式会社 | 信号処理装置および信号処理方法、固体撮像装置、並びに、電子機器 |
WO2017126326A1 (ja) * | 2016-01-20 | 2017-07-27 | ソニー株式会社 | 固体撮像装置およびその駆動方法、並びに電子機器 |
WO2018003502A1 (ja) * | 2016-06-28 | 2018-01-04 | ソニー株式会社 | 撮像装置、撮像方法、プログラム |
WO2019102887A1 (ja) * | 2017-11-22 | 2019-05-31 | ソニーセミコンダクタソリューションズ株式会社 | 固体撮像素子および電子機器 |
2022
- 2022-07-06 KR KR1020247003314A patent/KR20240050322A/ko unknown
- 2022-07-06 CN CN202280055956.4A patent/CN117859341A/zh active Pending
- 2022-07-06 WO PCT/JP2022/026817 patent/WO2023021871A1/ja active Application Filing
- 2022-07-06 JP JP2023542256A patent/JPWO2023021871A1/ja active Pending
- 2022-07-22 TW TW111127472A patent/TW202315389A/zh unknown
Also Published As
Publication number | Publication date |
---|---|
TW202315389A (zh) | 2023-04-01 |
KR20240050322A (ko) | 2024-04-18 |
JPWO2023021871A1 (ja) | 2023-02-23 |
CN117859341A (zh) | 2024-04-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11444115B2 (en) | Solid-state imaging device and electronic apparatus | |
US20190109165A1 (en) | Solid-state image sensor, electronic apparatus, and imaging method | |
JP5132102B2 (ja) | 光電変換装置および光電変換装置を用いた撮像システム | |
US8049293B2 (en) | Solid-state image pickup device, electronic apparatus using such solid-state image pickup device and method of manufacturing solid-state image pickup device | |
JP4839990B2 (ja) | 固体撮像素子及びこれを用いた撮像装置 | |
KR101679867B1 (ko) | 고체 촬상 장치, 및, 그 제조 방법, 전자 기기 | |
TW201537983A (zh) | 固態成像器件及其驅動方法,以及電子裝置 | |
JP2011029835A (ja) | 固体撮像装置とその駆動方法、及び電子機器 | |
JP2013145779A (ja) | 固体撮像装置及び電子機器 | |
JP2007081142A (ja) | Mos型固体撮像装置及びその製造方法 | |
US9461087B2 (en) | Solid-state imaging device, imaging apparatus, and method of driving the solid-state imaging device | |
US10785431B2 (en) | Image sensors having dark pixels and imaging pixels with different sensitivities | |
JP2009016432A (ja) | 固体撮像素子および撮像装置 | |
JP2006147816A (ja) | 物理量分布検知装置および物理情報取得装置 | |
WO2023021871A1 (ja) | 撮像装置、センサ及び撮像制御装置 | |
JP5539458B2 (ja) | 光電変換装置および光電変換装置を用いた撮像システム | |
US20220247950A1 (en) | Image capture element and image capture apparatus | |
TWI806991B (zh) | 攝像元件及攝像元件之製造方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WWE | Wipo information: entry into national phase | Ref document number: 2023542256; Country of ref document: JP |
| WWE | Wipo information: entry into national phase | Ref document number: 202280055956.4; Country of ref document: CN |
| WWE | Wipo information: entry into national phase | Ref document number: 2022858194; Country of ref document: EP |
| ENP | Entry into the national phase | Ref document number: 2022858194; Country of ref document: EP; Effective date: 20240318 |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 22858194; Country of ref document: EP; Kind code of ref document: A1 |