WO2020017337A1 - Light receiving element and distance measuring module - Google Patents
Light receiving element and distance measuring module
- Publication number
- WO2020017337A1 (PCT/JP2019/026572, JP2019026572W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- pixel
- semiconductor region
- substrate
- receiving element
- tap
- Prior art date
Classifications
- H01L31/02019 — Circuit arrangements of general character for devices characterised by at least one potential jump barrier or surface barrier
- H01L27/14627 — Microlenses
- G01S17/08 — Systems determining position data of a target for measuring distance only
- G01S17/36 — Systems determining position data of a target for measuring distance only using transmission of continuous waves, with phase comparison between the received signal and the contemporaneously transmitted signal
- G01S17/89 — Lidar systems specially adapted for mapping or imaging
- G01S17/894 — 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
- G01S17/931 — Lidar systems specially adapted for anti-collision purposes of land vehicles
- G01S7/481 — Constructional features, e.g. arrangements of optical elements
- G01S7/4816 — Constructional features of receivers alone
- G01S7/4863 — Detector arrays, e.g. charge-transfer gates
- G01S7/487 — Extracting wanted echo signals, e.g. pulse detection
- G01S7/4913 — Circuits for detection, sampling, integration or read-out
- G01S7/4914 — Circuits for detection, sampling, integration or read-out of detector arrays, e.g. charge-transfer gates
- G01S7/499 — Details of systems according to group G01S17/00 using polarisation effects
- H01L27/14603 — Special geometry or disposition of pixel-elements, address-lines or gate-electrodes
- H01L27/1461 — Pixel-elements with integrated switching, control, storage or amplification elements characterised by the photosensitive area
- H01L27/1462 — Coatings
- H01L27/14623 — Optical shielding
- H01L27/14625 — Optical elements or arrangements associated with the device
- H01L27/14629 — Reflectors
- H01L27/14636 — Interconnect structures
- H01L31/02164 — Coatings for filtering or shielding light, e.g. light blocking layers, cold shields for infrared detectors
- H01L31/101 — Devices sensitive to infrared, visible or ultraviolet radiation
- H04N25/70 — SSIS architectures; Circuits associated therewith
Definitions
- The present technology relates to a light receiving element and a distance measuring module, and more particularly to a light receiving element and a distance measuring module capable of improving characteristics.
- In recent years, distance measuring systems using the indirect ToF (Time of Flight) method have become known. Such a system requires a sensor that can distribute, at high speed to different regions, the signal charges obtained by receiving active light, emitted with a certain phase by an LED (Light Emitting Diode) or a laser and reflected by an object.
- A technique has therefore been proposed in which a voltage is applied directly to the substrate of the sensor to generate a current in the substrate, so that a wide area of the substrate can be modulated at high speed (for example, see Patent Document 1). Such a sensor is also called a CAPD (Current Assisted Photonic Demodulator) sensor.
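As general background, the distance calculation underlying the indirect ToF method can be sketched as follows: the received light is sampled with the demodulation signal shifted by several phase offsets, and the phase delay recovered from those samples is converted to distance. This is a generic textbook illustration, not the circuit claimed in this publication; the four-phase sampling scheme and the function name are assumptions.

```python
import math

C = 299_792_458.0  # speed of light (m/s)

def indirect_tof_distance(q0, q90, q180, q270, mod_freq_hz):
    """Estimate distance from four charge samples taken with the
    demodulation signal shifted by 0, 90, 180 and 270 degrees.

    The differences cancel the constant ambient-light offset, and
    atan2 recovers the phase delay of the reflected light.
    """
    phi = math.atan2(q90 - q270, q0 - q180) % (2 * math.pi)
    # The light travels to the object and back, hence the factor
    # 4*pi (2*pi per period times 2 for the round trip).
    return C * phi / (4 * math.pi * mod_freq_hz)
```

For example, at a 20 MHz modulation frequency a phase delay of pi/2 corresponds to roughly 1.87 m in this model.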
- However, the above-mentioned CAPD sensor is a front-illuminated sensor in which wiring and the like are arranged on the surface of the substrate on the side that receives external light. To secure a photoelectric conversion region, it is desirable that there be nothing, such as wiring, blocking the optical path on the light-receiving surface side of the PD (Photodiode), that is, the photoelectric conversion unit. In the front-illuminated CAPD sensor, however, wiring for extracting charge, various control lines, and signal lines must be arranged on the light-receiving surface side of the PD, which limits the photoelectric conversion region. That is, a sufficient photoelectric conversion region cannot be secured, and characteristics such as pixel sensitivity may be degraded.
- Furthermore, the external light component is a noise component for the indirect ToF method, in which distance measurement is performed using active light; to obtain distance information with a sufficient signal-to-noise ratio, a sufficient saturation signal amount (Qs) must be secured. However, since the wiring layout is limited in the front-illuminated CAPD sensor, methods other than relying on wiring capacitance must be used, such as providing an additional transistor to secure the capacitance.
- Moreover, in the front-illuminated CAPD sensor, a signal extraction unit called a Tap is arranged on the side of the substrate on which light is incident. Although the attenuation rate differs depending on the wavelength of the light, photoelectric conversion in the substrate occurs at a high rate on the light incident surface side. Therefore, in the front-illuminated CAPD sensor, there is a high probability that photoelectric conversion occurs in an Inactive Tap region, that is, a Tap region to which signal charges are not being distributed. Since the indirect ToF sensor obtains distance measurement information using the signals distributed to each charge storage region according to the phase of the active light, components photoelectrically converted directly in the Inactive Tap region become noise, and distance measurement accuracy may deteriorate. That is, the characteristics of the CAPD sensor may be degraded.
- The present technology has been made in view of such a situation, and aims to improve such characteristics.
- The light receiving element according to a first aspect of the present technology includes an on-chip lens, a wiring layer, and a semiconductor layer disposed between the on-chip lens and the wiring layer. The semiconductor layer includes a first tap having a first voltage application unit and a first charge detection unit disposed therearound, and a second tap having a second voltage application unit and a second charge detection unit disposed therearound. A phase difference is detected using a signal detected by the first tap and a signal detected by the second tap.
- That is, in the first aspect of the present technology, an on-chip lens, a wiring layer, and a semiconductor layer disposed between the on-chip lens and the wiring layer are provided; the semiconductor layer is provided with a first tap including a first voltage application unit and a first charge detection unit disposed therearound, and a second tap including a second voltage application unit and a second charge detection unit disposed therearound; and a phase difference is detected using the signals detected by the first tap and the second tap.
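To illustrate how two taps yield a phase difference, consider an idealized model in which charge generated during the first half of the modulation period is steered to the first tap and charge generated during the second half to the second tap; the split of a delayed echo between the two taps then encodes its delay. This is a simplified sketch (square-wave light at 50% duty, ideal charge transfer, delays within half a period), not the pixel circuit of this publication, and the function names are assumptions.

```python
def tap_charges(delay_fraction):
    """Charge collected by tap A (active during [0, 0.5) of the
    period) and tap B (active during [0.5, 1.0)) for an echo pulse
    occupying [d, d + 0.5) of the period, in normalized units."""
    d = delay_fraction % 1.0
    # Overlap of the echo pulse with tap A's collection window.
    a = max(0.0, 0.5 - d) + max(0.0, d - 0.5)
    b = 0.5 - a
    return a, b

def delay_from_taps(a, b):
    """Recover the delay (as a fraction of the period) from the
    charge split; valid for delays within the first half-period."""
    return 0.5 * b / (a + b)
```

For a delay of 0.2 periods, tap A collects 0.3 and tap B 0.2 units of charge, from which the delay is recovered exactly in this noise-free model.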
- the light receiving element includes: On-chip lens, A wiring layer, A semiconductor layer disposed between the on-chip lens and the wiring layer; A polarizer disposed between the on-chip lens and the semiconductor layer, The semiconductor layer, A first tap having a first voltage application unit and a first charge detection unit disposed therearound; A second tap having a second voltage application unit and a second charge detection unit disposed therearound.
- an on-chip lens, a wiring layer, a semiconductor layer disposed between the on-chip lens and the wiring layer, and a polarizer disposed between the on-chip lens and the semiconductor layer are provided. In the semiconductor layer, a first tap having a first voltage application unit and a first charge detection unit disposed therearound, and a second tap having a second voltage application unit and a second charge detection unit disposed therearound, are provided.
- the light receiving element includes an on-chip lens, a wiring layer, a semiconductor layer disposed between the on-chip lens and the wiring layer, and a color filter disposed between the on-chip lens and the semiconductor layer. The semiconductor layer includes a first tap having a first voltage application unit and a first charge detection unit disposed therearound, and a second tap having a second voltage application unit and a second charge detection unit disposed therearound.
- an on-chip lens, a wiring layer, a semiconductor layer disposed between the on-chip lens and the wiring layer, and a color filter disposed between the on-chip lens and the semiconductor layer are provided. In the semiconductor layer, a first tap having a first voltage application unit and a first charge detection unit disposed therearound, and a second tap having a second voltage application unit and a second charge detection unit disposed therearound, are provided.
- the ranging module includes a light receiving element according to any one of the first to third aspects, a light source that emits irradiation light whose brightness varies periodically, and a light emission control unit that controls the irradiation timing of the irradiation light.
- a light receiving element according to any one of the first to third aspects, a light source that emits irradiation light whose brightness varies periodically, and a light emission control unit that controls the irradiation timing of the irradiation light are provided.
- FIG. 3 is a block diagram illustrating a configuration example of a light receiving element.
- FIG. 3 is a diagram illustrating a configuration example of a pixel.
- FIG. 3 is a diagram illustrating a configuration example of a signal extraction unit of a pixel.
- It is a diagram explaining sensitivity improvement.
- FIG. 4 is a diagram for describing improvement in charge separation efficiency.
- FIG. 5 is a diagram for describing an improvement in electron extraction efficiency.
- FIG. 4 is a diagram illustrating a moving speed of a signal carrier in a front-side irradiation type.
- FIG. 3 is a diagram illustrating a moving speed of a signal carrier in a backside illumination type.
- FIG. 9 is a diagram illustrating another configuration example of a signal extraction unit of a pixel.
- FIG. 9 is a diagram illustrating another configuration example of a signal extraction unit of a pixel.
- FIG. 9 is a diagram illustrating another configuration example of a signal extraction unit of a pixel.
- FIG. 9 is a diagram illustrating another configuration example of a signal extraction unit of a pixel.
- FIG. 9 is a diagram illustrating another configuration example of a signal extraction unit of a pixel.
- FIG. 9 is a diagram illustrating another configuration example of a signal extraction unit of a pixel.
- FIG. 9 is a diagram illustrating another configuration example of a signal extraction unit of a pixel.
- FIG. 6 is a diagram illustrating another configuration example of a pixel.
- FIG. 6 is a diagram illustrating another configuration example of a pixel.
- FIG. 6 is a diagram illustrating another configuration example of a pixel.
- FIG. 6 is a diagram illustrating another configuration example of a pixel.
- FIG. 6 is a diagram illustrating another configuration example of a pixel.
- FIG. 6 is a diagram illustrating another configuration example of a pixel.
- FIG. 6 is a diagram illustrating another configuration example of a pixel.
- FIG. 6 is a diagram illustrating another configuration example of a pixel.
- FIG. 6 is a diagram illustrating another configuration example of a pixel.
- FIG. 6 is a diagram illustrating another configuration example of a pixel.
- FIG. 6 is a diagram illustrating another configuration example of a pixel.
- FIG. 6 is a diagram illustrating another configuration example of a pixel.
- FIG. 6 is a diagram illustrating another configuration example of a pixel.
- FIG. 6 is a diagram illustrating another configuration example of a pixel.
- FIG. 6 is a diagram illustrating another configuration example of a pixel.
- FIG. 6 is a diagram illustrating another configuration example of a pixel.
- FIG. 6 is a diagram illustrating another configuration example of a pixel.
- FIG. 6 is a diagram illustrating another configuration example of a pixel.
- FIG. 6 is a diagram illustrating another configuration example of a pixel.
- FIG. 3 is a diagram illustrating an equivalent circuit of a pixel.
- FIG. 4 is a diagram illustrating another equivalent circuit of a pixel.
- FIG. 4 is a diagram illustrating an example of the arrangement of voltage supply lines employing a periodic arrangement.
- FIG. 3 is a diagram illustrating an example of a voltage supply line arrangement employing a mirror arrangement.
- FIG. 3 is a diagram illustrating characteristics of a periodic arrangement and a mirror arrangement.
- FIG. 21 is a sectional view of a plurality of pixels according to a fourteenth embodiment.
- FIG. 21 is a sectional view of a plurality of pixels according to a fourteenth embodiment.
- FIG. 21 is a sectional view of a plurality of pixels according to a fourteenth embodiment.
- It is a cross-sectional view of a plurality of pixels according to a ninth embodiment.
- FIG. 33 is a cross-sectional view of a plurality of pixels according to a first modification of the ninth embodiment.
- FIG. 37 is a cross-sectional view of a plurality of pixels according to a fifteenth embodiment.
- It is a cross-sectional view of a plurality of pixels according to a tenth embodiment.
- FIG. 5 is a diagram illustrating five metal films of a multilayer wiring layer.
- FIG. 5 is a diagram illustrating five metal films of a multilayer wiring layer.
- FIG. 3 is a diagram illustrating a polysilicon layer.
- It is a diagram showing a modification of the reflection member formed in the metal film.
- FIG. 3 is a diagram illustrating a substrate configuration of a light receiving element.
- FIG. 4 is a diagram illustrating noise around a pixel transistor region.
- FIG. 3 is a diagram illustrating a noise suppression structure around a pixel transistor region.
- FIG. 4 is a diagram illustrating a charge discharging structure around a pixel transistor region.
- FIG. 4 is a diagram illustrating a charge discharging structure around a pixel transistor region.
- FIG. 4 is a diagram for explaining charge discharge around an effective pixel area.
- FIG. 3 is a plan view illustrating a configuration example of a charge discharging region provided on an outer periphery of an effective pixel region.
- FIG. 4 is a cross-sectional view in a case where a charge discharging region is configured by a light-shielding pixel region and an N-type region.
- FIG. 3 is a diagram illustrating a current flow when a pixel transistor is arranged on a substrate having a photoelectric conversion region.
- FIG. 62 is a sectional view of a plurality of pixels according to an eighteenth embodiment.
- It is a diagram explaining the allocation of circuits between two substrates.
- FIG. 39 is a diagram illustrating a substrate configuration according to an eighteenth embodiment.
- It is a top view which shows an arrangement.
- FIG. 4 is a diagram for explaining the problem of an increase in current consumption.
- It is a top view and sectional view of a pixel according to a first configuration example of a nineteenth embodiment.
- It is a top view and sectional view of a pixel according to a second configuration example of the nineteenth embodiment.
- FIG. 53 is a diagram illustrating another planar shape of the first configuration example and the second configuration example of the nineteenth embodiment.
- FIG. 53 is a diagram illustrating another planar shape of the first configuration example and the second configuration example of the nineteenth embodiment.
- FIG. 39 is a diagram illustrating another planar shape of the third configuration example of the nineteenth embodiment.
- FIG. 39 is a diagram illustrating another planar shape of the third configuration example of the nineteenth embodiment.
- FIG. 4 is a diagram illustrating a circuit configuration example of a pixel array unit when a 4-tap pixel signal is output simultaneously.
- FIG. 9 is a diagram illustrating a wiring layout for arranging four vertical signal lines.
- FIG. 14 is a diagram illustrating a first modification of a wiring layout in which four vertical signal lines are arranged.
- FIG. 14 is a diagram illustrating a second modification of the wiring layout in which four vertical signal lines are arranged.
- It is a diagram showing a modification of the arrangement example of the pixel transistors.
- FIG. 74 is a diagram showing a connection layout in the pixel transistor layout of B in FIG. 73.
- FIG. 74 is a diagram showing a connection layout in the pixel transistor layout of B in FIG. 73.
- FIG. 74 is a diagram showing a wiring layout in the pixel transistor layout of B in FIG. 73.
- FIG. 3 is a diagram illustrating a wiring layout in which two power lines are provided in one pixel column.
- FIG. 3 is a plan view showing a wiring example of a VSS wiring.
- FIG. 3 is a plan view showing a wiring example of a VSS wiring.
- FIG. 4 is a diagram illustrating a first method of pupil correction.
- FIG. 4 is a diagram illustrating a first method of pupil correction.
- FIG. 4 is a diagram illustrating a first method of pupil correction.
- FIG. 4 is a diagram illustrating a first method of pupil correction.
- FIG. 5 is a diagram illustrating a shift amount of an on-chip lens in a first method of pupil correction.
- FIG. 4 is a diagram illustrating a wiring example of a voltage supply line.
- It is a sectional view and a plan view of a pixel according to a first configuration example of a twentieth embodiment.
- It is a diagram showing an arrangement example of the first and second taps.
- FIG. 3 is a diagram illustrating an example of an arrangement of a phase difference light shielding film and an on-chip lens.
- FIG. 34 is a sectional view of a pixel according to a twenty-first embodiment.
- FIG. 39 is a plan view of a pixel according to a twenty-first embodiment.
- FIG. 33 is a sectional view of a pixel according to a twenty-second embodiment.
- FIG. 62 is a plan view of a pixel according to a twenty-second embodiment.
- FIG. 3 is a block diagram illustrating a configuration example of a distance measuring module.
- It is a block diagram showing an example of a schematic configuration of a vehicle control system.
- It is an explanatory diagram showing an example of the installation positions of a vehicle exterior information detection unit and an imaging unit.
- the present technology is intended to improve characteristics such as pixel sensitivity by using a back-illuminated configuration of a CAPD sensor.
- the present technology can be applied to, for example, a light receiving element included in a distance measuring system that performs a distance measurement by an indirect ToF method, an imaging device having such a light receiving element, and the like.
- the present technology can be applied to, for example, an in-vehicle system that is mounted on a vehicle and measures the distance to an object outside the vehicle, or a gesture recognition system that measures the distance to an object such as a user's hand and recognizes the user's gesture based on the measurement result.
- the result of the gesture recognition can be used, for example, for operating a car navigation system.
- FIG. 1 is a block diagram illustrating a configuration example of an embodiment of a light receiving element to which the present technology is applied.
- the light receiving element 1 shown in FIG. 1 is a back-illuminated CAPD sensor, and is provided, for example, in an imaging device having a distance measuring function.
- the light receiving element 1 has a configuration including a pixel array section 20 formed on a semiconductor substrate (not shown) and a peripheral circuit section integrated on the same semiconductor substrate as the pixel array section 20.
- the peripheral circuit unit includes, for example, a tap drive unit 21, a vertical drive unit 22, a column processing unit 23, a horizontal drive unit 24, and a system control unit 25.
- the light receiving element 1 is further provided with a signal processing unit 31 and a data storage unit 32.
- the signal processing unit 31 and the data storage unit 32 may be mounted on the same substrate as the light receiving element 1 or may be arranged on a different substrate from the light receiving element 1 in the imaging device.
- the pixel array section 20 has a configuration in which pixels 51 that generate electric charges according to the amount of received light and output signals according to the electric charges are two-dimensionally arranged in rows and columns in a matrix. That is, the pixel array unit 20 has a plurality of pixels 51 that photoelectrically convert incident light and output a signal corresponding to the resulting charge.
- the row direction refers to the arrangement direction of the pixels 51 in the horizontal direction
- the column direction refers to the arrangement direction of the pixels 51 in the vertical direction.
- the row direction is the horizontal direction in the figure
- the column direction is the vertical direction in the figure.
- the pixel 51 receives light incident from the outside, particularly infrared light, performs photoelectric conversion, and outputs a pixel signal corresponding to the obtained electric charge.
- the pixel 51 has a first tap TA to which a predetermined voltage MIX0 (first voltage) is applied and which detects photoelectrically converted charge, and a second tap TB to which a predetermined voltage MIX1 (second voltage) is applied and which detects photoelectrically converted charge.
- the tap drive unit 21 supplies the predetermined voltage MIX0 to the first tap TA of each pixel 51 of the pixel array unit 20 via a predetermined voltage supply line 30, and supplies the predetermined voltage MIX1 to the second tap TB via another voltage supply line 30. Therefore, in one pixel column of the pixel array unit 20, two voltage supply lines 30 are wired: one that transmits the voltage MIX0 and one that transmits the voltage MIX1.
- in the pixel array unit 20, for the matrix-like pixel array, a pixel drive line 28 is wired along the row direction for each pixel row, and two vertical signal lines 29 are wired along the column direction for each pixel column.
- the pixel drive line 28 transmits a drive signal for driving when reading a signal from a pixel.
- the pixel drive line 28 is shown as one line, but is not limited to one line.
- One end of the pixel drive line 28 is connected to an output end corresponding to each row of the vertical drive unit 22.
- the vertical drive unit 22 is configured by a shift register, an address decoder, and the like, and drives each pixel of the pixel array unit 20 simultaneously for all pixels or in units of rows. That is, the vertical drive unit 22 constitutes a drive unit that controls the operation of each pixel of the pixel array unit 20, together with the system control unit 25 that controls the vertical drive unit 22.
- the signal output from each pixel 51 in the pixel row according to the drive control by the vertical drive unit 22 is input to the column processing unit 23 through the vertical signal line 29.
- the column processing unit 23 performs predetermined signal processing on a pixel signal output from each pixel 51 through the vertical signal line 29, and temporarily holds the pixel signal after the signal processing.
- the column processing unit 23 performs noise removal processing, AD (Analog to Digital) conversion processing, and the like as signal processing.
- the horizontal drive unit 24 is configured by a shift register, an address decoder, and the like, and sequentially selects unit circuits corresponding to the pixel columns of the column processing unit 23. By the selective scanning by the horizontal driving unit 24, the pixel signals subjected to the signal processing for each unit circuit in the column processing unit 23 are sequentially output.
- the system control unit 25 includes a timing generator or the like that generates various timing signals, and performs drive control of the tap drive unit 21, the vertical drive unit 22, the column processing unit 23, the horizontal drive unit 24, and the like based on the various timing signals generated by the timing generator.
- the signal processing unit 31 has at least an arithmetic processing function, and performs various signal processing such as arithmetic processing based on the pixel signal output from the column processing unit 23.
- the data storage unit 32 temporarily stores data necessary for the signal processing in the signal processing unit 31.
- FIG. 2 shows a cross section of one pixel 51 provided in the pixel array unit 20.
- the pixel 51 receives light incident from the outside, particularly infrared light, performs photoelectric conversion, and outputs a signal corresponding to the resulting charge.
- the pixel 51 has a substrate 61 made of a P-type semiconductor layer such as a silicon substrate, for example, and an on-chip lens 62 formed on the substrate 61.
- the substrate 61 has a thickness in the vertical direction in the drawing, that is, in the direction perpendicular to the surface of the substrate 61, of, for example, 20 μm or less.
- the thickness of the substrate 61 may of course be 20 μm or more, and the thickness may be determined according to the target characteristics of the light receiving element 1 and the like.
- the substrate 61 is, for example, a high-resistance P-Epi substrate having a substrate concentration on the order of 1E+13 or less, and the resistance (resistivity) of the substrate 61 is, for example, 500 [Ωcm] or more.
- as for the relationship between the substrate concentration of the substrate 61 and its resistance, for example, the resistance is 2000 [Ωcm] at a substrate concentration of 6.48E+12 [cm⁻³], 1000 [Ωcm] at 1.30E+13 [cm⁻³], 500 [Ωcm] at 2.59E+13 [cm⁻³], and 100 [Ωcm] at 1.30E+14 [cm⁻³].
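- the tabulated concentration/resistivity pairs above follow a roughly inverse relation, so the resistivity at an intermediate substrate concentration can be estimated by interpolating between them. The sketch below is illustrative only; the `estimate_resistivity` helper and the log-log interpolation scheme are assumptions, not part of the present technology.

```python
# Illustrative only: log-log interpolation over the tabulated
# (substrate concentration [cm^-3], resistivity [ohm*cm]) pairs above.
import math

TABLE = [(6.48e12, 2000.0), (1.30e13, 1000.0), (2.59e13, 500.0), (1.30e14, 100.0)]

def estimate_resistivity(concentration: float) -> float:
    """Estimate resistivity by log-log linear interpolation (clamped at the ends)."""
    pts = sorted(TABLE)
    if concentration <= pts[0][0]:
        return pts[0][1]
    if concentration >= pts[-1][0]:
        return pts[-1][1]
    for (c0, r0), (c1, r1) in zip(pts, pts[1:]):
        if c0 <= concentration <= c1:
            t = (math.log(concentration) - math.log(c0)) / (math.log(c1) - math.log(c0))
            return math.exp(math.log(r0) + t * (math.log(r1) - math.log(r0)))

print(round(estimate_resistivity(1.30e13)))  # 1000 (tabulated point)
```
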
- the upper surface of the substrate 61 is the back surface of the substrate 61, and is a light incident surface on which light from the outside is incident on the substrate 61.
- the lower surface of the substrate 61 is the surface of the substrate 61, on which a multilayer wiring layer (not shown) is formed.
- on the light incident surface side of the substrate 61, a fixed charge film 66 made of a single-layer film or a laminated film having a positive fixed charge is formed, and the on-chip lens 62, which condenses light incident from the outside and causes it to enter the substrate 61, is formed on the upper surface of the fixed charge film 66.
- the fixed charge film 66 makes the light incident surface side of the substrate 61 a hole accumulation state, and suppresses the generation of dark current.
- furthermore, at the end portions of the pixel 51 on the fixed charge film 66, an inter-pixel light-shielding film 63-1 and an inter-pixel light-shielding film 63-2 for preventing crosstalk between adjacent pixels are formed.
- the inter-pixel light-shielding film 63-1 and the inter-pixel light-shielding film 63-2 are also simply referred to as the inter-pixel light-shielding film 63 when it is not particularly necessary to distinguish them.
- the inter-pixel light-shielding film 63 is provided so that light entering from the outside does not enter the region of another pixel adjacent to the pixel 51 on the substrate 61. That is, light that enters the on-chip lens 62 from the outside and travels toward another pixel adjacent to the pixel 51 is blocked by the inter-pixel light-shielding film 63-1 or the inter-pixel light-shielding film 63-2, and is thereby prevented from entering the adjacent pixel.
- the light receiving element 1 is a back-side illuminated CAPD sensor
- the light incident surface of the substrate 61 is a so-called back surface, and no wiring layer such as wiring is formed on the back surface.
- wiring for driving the transistors and the like formed in the pixel 51, wiring for reading a signal from the pixel 51, and the like are formed by lamination as a multilayer wiring layer in the portion of the substrate 61 on the side opposite to the light incident surface.
- An oxide film 64, a signal extraction portion 65-1, and a signal extraction portion 65-2 are formed in the portion of the substrate 61 on the side opposite to the light incident surface, that is, in the inner side of the lower surface in the drawing.
- the signal extracting unit 65-1 corresponds to the first tap TA described in FIG. 1, and the signal extracting unit 65-2 corresponds to the second tap TB described in FIG.
- in this example, an oxide film 64 is formed in the central portion of the pixel 51 near the surface of the substrate 61 opposite to the light incident surface, and the signal extraction portion 65-1 and the signal extraction portion 65-2 are formed at the respective ends of the oxide film 64.
- the signal extraction unit 65-1 includes an N+ semiconductor region 71-1, an N− semiconductor region 72-1 having a donor impurity concentration lower than that of the N+ semiconductor region 71-1, a P+ semiconductor region 73-1, and a P− semiconductor region 74-1 having an acceptor impurity concentration lower than that of the P+ semiconductor region 73-1.
- examples of the donor impurity include elements belonging to Group 5 of the periodic table, such as phosphorus (P) and arsenic (As), with respect to Si, and examples of the acceptor impurity include elements belonging to Group 3 of the periodic table, such as boron (B), with respect to Si.
- An element that becomes a donor impurity is called a donor element, and an element that becomes an acceptor impurity is called an acceptor element.
- an N + semiconductor region 71-1 is formed at a position adjacent to the right side of the oxide film 64 on the inner side of the surface of the substrate 61 opposite to the light incident surface. Further, an N- semiconductor region 72-1 is formed on the upper side of the N + semiconductor region 71-1 so as to cover (surround) the N + semiconductor region 71-1.
- a P + semiconductor region 73-1 is formed on the right side of the N + semiconductor region 71-1. Further, a P- semiconductor region 74-1 is formed on the upper side of the P + semiconductor region 73-1 so as to cover (surround) the P + semiconductor region 73-1.
- an N + semiconductor region 71-1 is formed on the right side of the P + semiconductor region 73-1. Further, an N- semiconductor region 72-1 is formed on the upper side of the N + semiconductor region 71-1 so as to cover (surround) the N + semiconductor region 71-1.
- the signal extraction unit 65-2 includes an N+ semiconductor region 71-2, an N− semiconductor region 72-2 having a donor impurity concentration lower than that of the N+ semiconductor region 71-2, a P+ semiconductor region 73-2, and a P− semiconductor region 74-2 having an acceptor impurity concentration lower than that of the P+ semiconductor region 73-2.
- an N + semiconductor region 71-2 is formed at a position adjacent to the left side of the oxide film 64 in an inner portion of the surface of the substrate 61 opposite to the light incident surface. Further, an N- semiconductor region 72-2 is formed on the upper side of the N + semiconductor region 71-2 so as to cover (surround) the N + semiconductor region 71-2.
- a P + semiconductor region 73-2 is formed on the left side of the N + semiconductor region 71-2. Further, a P- semiconductor region 74-2 is formed on the upper side of the P + semiconductor region 73-2 so as to cover (surround) the P + semiconductor region 73-2.
- an N + semiconductor region 71-2 is formed on the left side of the P + semiconductor region 73-2. Further, an N- semiconductor region 72-2 is formed on the upper side of the N + semiconductor region 71-2 so as to cover (surround) the N + semiconductor region 71-2.
- An oxide film 64 similar to the central portion of the pixel 51 is formed at an end portion of the pixel 51 at an inner portion of the surface of the substrate 61 opposite to the light incident surface.
- the signal extracting unit 65-1 and the signal extracting unit 65-2 will be simply referred to as the signal extracting unit 65 unless it is particularly necessary to distinguish them.
- similarly, when there is no particular need to distinguish them, the N+ semiconductor region 71-1 and the N+ semiconductor region 71-2 are simply referred to as the N+ semiconductor region 71, and the N− semiconductor region 72-1 and the N− semiconductor region 72-2 are simply referred to as the N− semiconductor region 72.
- likewise, the P+ semiconductor region 73-1 and the P+ semiconductor region 73-2 are simply referred to as the P+ semiconductor region 73, and the P− semiconductor region 74-1 and the P− semiconductor region 74-2 are simply referred to as the P− semiconductor region 74.
- in the substrate 61, an isolation part 75-1 for separating the N+ semiconductor region 71-1 and the P+ semiconductor region 73-1 is formed between those regions by an oxide film or the like.
- similarly, an isolation portion 75-2 for separating the N+ semiconductor region 71-2 and the P+ semiconductor region 73-2 is formed between those regions by an oxide film or the like.
- the N+ semiconductor region 71 provided in the substrate 61 functions as a charge detection unit for detecting the amount of light incident on the pixel 51 from the outside, that is, the amount of signal carriers generated by photoelectric conversion in the substrate 61.
- the N- semiconductor region 72 having a low donor impurity concentration can be regarded as a charge detection unit.
- the P + semiconductor region 73 functions as a voltage application unit for injecting majority carrier current into the substrate 61, that is, for applying a voltage directly to the substrate 61 and generating an electric field in the substrate 61.
- the P- semiconductor region 74 having a low acceptor impurity concentration can be regarded as a voltage application unit.
- the N+ semiconductor region 71-1 is directly connected to a floating diffusion (FD) portion (not shown; hereinafter also referred to in particular as the FD portion A), which is a floating diffusion region, and the FD portion A is in turn connected to the vertical signal line 29 via an amplification transistor or the like (not shown).
- similarly, another FD portion different from the FD portion A (hereinafter also referred to in particular as the FD portion B) is directly connected to the N+ semiconductor region 71-2, and the FD portion B is connected to the vertical signal line 29 via an amplification transistor or the like (not shown).
- the FD section A and the FD section B are connected to different vertical signal lines 29.
- infrared light is emitted from the imaging device provided with the light receiving element 1 toward the target.
- the substrate 61 of the light receiving element 1 receives the reflected light (infrared light) that has entered and performs photoelectric conversion.
- the tap drive unit 21 drives the first tap TA and the second tap TB of the pixel 51 and distributes a signal corresponding to the charge DET obtained by photoelectric conversion to the FD unit A and the FD unit B.
- the infrared light is photoelectrically converted in the substrate 61 to generate electrons and holes.
- the obtained electrons are guided toward the P + semiconductor region 73-1 by the electric field between the P + semiconductor regions 73, and move into the N + semiconductor region 71-1.
- the electrons generated by the photoelectric conversion are used as signal carriers for detecting a signal corresponding to the amount of infrared light incident on the pixel 51, that is, the amount of infrared light received.
- the accumulated charge DET0 of the N + semiconductor region 71-1 is transferred to the FD portion A directly connected to the N + semiconductor region 71-1.
- a signal corresponding to the charge DET0 transferred to the FD portion A is amplified by an amplification transistor or the like and read out by the column processing unit 23 via the vertical signal line 29. The read signal is then subjected to processing such as AD conversion in the column processing unit 23, and the resulting pixel signal is supplied to the signal processing unit 31.
- This pixel signal is a signal indicating the amount of charge corresponding to the electrons detected by the N + semiconductor region 71-1, that is, the amount of charge DET0 stored in the FD section A.
- the pixel signal is a signal indicating the amount of infrared light received by the pixel 51.
- the pixel signal corresponding to the electrons detected in the N + semiconductor region 71-2 may be appropriately used for distance measurement.
- next, a voltage is applied to the two P+ semiconductor regions 73 via contacts and the like by the tap drive unit 21 so that an electric field in the direction opposite to the electric field previously generated in the substrate 61 is generated.
- specifically, for example, a voltage of 1.5 V (MIX1) is applied to the P+ semiconductor region 73-2, which is the second tap TB, and a voltage of 0 V (MIX0) is applied to the P+ semiconductor region 73-1, which is the first tap TA.
- the infrared light is photoelectrically converted in the substrate 61 to form a pair of electrons and holes.
- the obtained electrons are guided toward the P + semiconductor region 73-2 by the electric field between the P + semiconductor regions 73, and move into the N + semiconductor region 71-2.
- the accumulated charge DET1 in the N+ semiconductor region 71-2 is transferred to the FD portion B directly connected to the N+ semiconductor region 71-2, and a signal corresponding to the charge DET1 transferred to the FD portion B is amplified by an amplification transistor or the like and read out by the column processing unit 23 via the vertical signal line 29. The read signal is then subjected to processing such as AD conversion in the column processing unit 23, and the resulting pixel signal is supplied to the signal processing unit 31.
- a pixel signal corresponding to the electrons detected in the N + semiconductor region 71-1 may be used for distance measurement as appropriate.
- the signal processing unit 31 calculates distance information indicating the distance to the target based on those pixel signals, and outputs it to the subsequent stage.
- a method of distributing signal carriers to different N + semiconductor regions 71 and calculating distance information based on signals corresponding to the signal carriers is called an indirect ToF method.
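- as a rough illustration of the indirect ToF principle described above, the following sketch shows how a distance could be derived from the two tap charges DET0 and DET1. This assumes a simple pulsed-light model with a rectangular pulse; the model and the function name are illustrative assumptions, not taken from the present description.

```python
# Illustrative only (pulsed-light model, an assumption for this sketch):
# DET0 accumulates while the first tap TA is active, DET1 while the second
# tap TB is active; their ratio encodes the round-trip delay of the light.
C = 299_792_458.0  # speed of light [m/s]

def indirect_tof_distance(det0: float, det1: float, pulse_width_s: float) -> float:
    """Distance [m] estimated from the two tap charges for a rectangular pulse."""
    total = det0 + det1
    if total <= 0:
        raise ValueError("no signal detected")
    round_trip_delay = pulse_width_s * det1 / total  # round-trip time estimate
    return C * round_trip_delay / 2.0

# Equal charges in both taps -> delay equals half the pulse width.
print(round(indirect_tof_distance(100.0, 100.0, 100e-9), 3))  # 7.495
```

- in practice the signal processing unit 31 would also compensate for ambient light and sensor offsets before forming this ratio; those corrections are omitted here for brevity.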
- in the pixel 51, the periphery of the P+ semiconductor region 73 is surrounded by the N+ semiconductor region 71, as shown in FIG. 3. Note that in FIG. 3, portions corresponding to those in FIG. 2 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- an oxide film 64 (not shown) is formed in the central portion of the pixel 51, and the signal extraction units 65 are formed at positions slightly offset from the center of the pixel 51.
- two signal extraction portions 65 are formed in the pixel 51.
- in the signal extraction unit 65, a rectangular P+ semiconductor region 73 is formed at the center position, and the P+ semiconductor region 73 is surrounded by a rectangular, more specifically rectangular-frame-shaped, N+ semiconductor region 71 centered on the P+ semiconductor region 73. That is, the N+ semiconductor region 71 is formed so as to surround the periphery of the P+ semiconductor region 73.
- the on-chip lens 62 is formed so that infrared light incident from the outside is focused on the central portion of the pixel 51, that is, the portion indicated by the arrow A11.
- the infrared light incident on the on-chip lens 62 from the outside is condensed by the on-chip lens 62 at the position shown by the arrow A11, that is, the upper position of the oxide film 64 in FIG.
- the infrared light is converged at a position between the signal extraction unit 65-1 and the signal extraction unit 65-2. This makes it possible to suppress crosstalk caused by infrared light entering a pixel adjacent to the pixel 51, and also to suppress infrared light from directly entering the signal extraction unit 65.
- if infrared light is directly incident on the signal extraction unit 65, the charge separation efficiency, that is, Cmod (Contrast between active and inactive tap) and the modulation contrast, is reduced.
- the signal extraction unit 65 from which a signal corresponding to the charge DET obtained by the photoelectric conversion is read, that is, the signal extraction unit 65 in which the charge DET obtained by the photoelectric conversion is to be detected, is referred to as an active tap.
- conversely, the signal extraction unit 65 from which a signal corresponding to the charge DET obtained by the photoelectric conversion is not read, that is, the signal extraction unit 65 that is not the active tap, is referred to as an inactive tap.
- the signal extraction unit 65 in which a voltage of 1.5 V is applied to the P + semiconductor region 73 is the active tap,
- and the signal extraction unit 65 in which a voltage of 0 V is applied to the P + semiconductor region 73 is the inactive tap.
- Cmod is calculated by the following equation (1). It is an index indicating what percentage of the charge generated by photoelectric conversion of incident infrared light can be detected in the N + semiconductor region 71 of the signal extraction unit 65 serving as the active tap, that is, whether a signal corresponding to the charge can be extracted, and it represents the charge separation efficiency.
- I0 is a signal detected by one of the two charge detection units (N + semiconductor regions 71)
- I1 is a signal detected by the other.
- Cmod = |I0 - I1| / (I0 + I1) × 100 ... (1)
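- Assuming the standard definition Cmod = |I0 - I1| / (I0 + I1) × 100 [%], the charge separation efficiency can be computed as in the following sketch (the function name is hypothetical):

```python
def cmod(i0: float, i1: float) -> float:
    """Charge separation efficiency, assuming the standard definition
    Cmod = |I0 - I1| / (I0 + I1) * 100 [%], where I0 and I1 are the
    signals detected by the two charge detection units."""
    return abs(i0 - i1) / (i0 + i1) * 100.0

# Example: 90% of the charge reaches the active tap and 10% leaks to the
# inactive tap -> Cmod = 80.0 [%]
separation = cmod(90.0, 10.0)
```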
- the infrared light is condensed in the vicinity of the center of the pixel 51, located at substantially the same distance from the two signal extraction units 65. Therefore, the probability that infrared light incident from the outside is photoelectrically converted in the inactive tap region can be reduced, and the charge separation efficiency can be improved.
- the modulation contrast can also be improved. In other words, electrons obtained by photoelectric conversion can be easily guided to the N + semiconductor region 71 in the active tap.
- the quantum efficiency (QE) × the aperture ratio (FF (Fill Factor)) can be maximized, and the distance measurement characteristics of the light receiving element 1 can be improved.
- a normal front-illuminated image sensor has a structure in which a wiring 102 and a wiring 103 are formed on the light incident surface side of a PD 101, which is a photoelectric conversion unit, on which light from the outside enters.
- the back-illuminated image sensor, on the other hand, has a structure in which a wiring 105 and a wiring 106 are formed on the surface of the PD 104, which is the photoelectric conversion unit, opposite to the light incident surface on which light from the outside enters, as shown by an arrow W12, for example.
- a sufficient aperture ratio can be secured as compared with the front-illuminated type. That is, for example, as shown by arrows A23 and A24, light obliquely entering from the outside at a certain angle enters the PD 104 without being blocked by the wiring. Thereby, more light can be received and the sensitivity of the pixel can be improved.
- a signal extraction unit 112 called a tap is provided on a light incident surface side where light from the outside is incident inside the PD 111 which is a photoelectric conversion unit.
- a P + semiconductor region and an N + semiconductor region of a tap are formed.
- the front-illuminated CAPD sensor has a structure in which a wiring 113 and a wiring 114 such as a contact or a metal connected to the signal extraction unit 112 are formed on the light incident surface side.
- a part of the light obliquely incident on the PD 111 at a certain angle is blocked by the wiring 113 and the like, so that the light is not incident on the PD 111.
- as shown by an arrow A27, there is also a case where light that is perpendicularly incident on the PD 111 is blocked by the wiring 114 and does not enter the PD 111.
- the back-illuminated CAPD sensor, on the other hand, has a structure in which a signal extraction unit 116 is formed on the surface of the PD 115, which is a photoelectric conversion unit, opposite to the light incident surface on which light from the outside is incident, as shown by an arrow W14, for example.
- a wiring 117 and a wiring 118 such as a contact or a metal connected to the signal extraction unit 116 are formed.
- the PD 115 corresponds to the substrate 61 shown in FIG. 2
- the signal extracting unit 116 corresponds to the signal extracting unit 65 shown in FIG.
- a back-illuminated CAPD sensor with such a structure can secure a sufficient aperture ratio as compared with the front-illuminated type. Therefore, the quantum efficiency (QE) × the aperture ratio (FF) can be maximized, and the distance measurement characteristics can be improved.
- in the back-illuminated CAPD sensor, not only light incident at a certain angle but also light incident perpendicularly to the PD 115, as well as light reflected by the wiring or the like, can be received. Thereby, more light can be received and the sensitivity of the pixel can be improved.
- the quantum efficiency (QE) × the aperture ratio (FF) can be maximized, and as a result, the ranging characteristics can be improved.
- the front-illuminated CAPD sensor cannot secure a sufficient aperture ratio and lowers the sensitivity of the pixel.
- in contrast, in the light receiving element 1, which is a back-illuminated CAPD sensor, a sufficient aperture ratio can be secured regardless of the arrangement position of the tap, and the sensitivity of the pixel can be improved.
- the signal extraction portion 65 is formed near the surface of the substrate 61 opposite to the light incident surface on which infrared light from the outside is incident, so that the occurrence of photoelectric conversion of infrared light in the region of the inactive tap can be reduced. Thereby, Cmod, that is, the charge separation efficiency, can be improved.
- FIG. 5 is a cross-sectional view of a pixel of a front-illuminated and back-illuminated CAPD sensor.
- the upper side of the substrate 141 in the figure is the light incident surface, and a wiring layer 152 including a plurality of layers of wiring, an inter-pixel light-shielding portion 153, and an on-chip lens 154 are stacked on the light incident surface side.
- a wiring layer 152 including a plurality of wiring layers is formed under the substrate 142 opposite to the light incident surface in the drawing.
- the inter-pixel light shielding portion 153 and the on-chip lens 154 are stacked on the upper side of the substrate 142, which is the light incident surface.
- the gray trapezoidal shape indicates a region where the light intensity is strong due to the infrared light being condensed by the on-chip lens 154.
- the probability of photoelectric conversion of infrared light in the region R11 increases. That is, since the amount of infrared light incident near the inactive tap is large, the number of signal carriers that cannot be detected by the active tap increases, and the charge separation efficiency decreases.
- the region R12 where the inactive tap and the active tap exist is located at a position far from the light incident surface of the substrate 142, that is, at a position near the surface opposite to the light incident surface side.
- the substrate 142 corresponds to the substrate 61 shown in FIG.
- since the region R12 is located far from the light incident surface, the intensity of the condensed infrared light in its vicinity is relatively weak.
- the intensity of the incident infrared light is relatively weak in the vicinity of the region R12 including the inactive tap, the probability that the infrared light is photoelectrically converted in the region R12 is reduced. That is, since the amount of infrared light incident near the inactive tap is small, the number of signal carriers (electrons) generated by photoelectric conversion near the inactive tap and moving to the N + semiconductor region of the inactive tap. And the charge separation efficiency can be improved. As a result, the ranging characteristics can be improved.
- the thickness of the substrate 61 can be reduced, so that the efficiency of taking out electrons (charges) as signal carriers can be improved.
- the substrate 171 needs to be thick to some extent in order to secure a higher quantum efficiency and suppress a decrease in the quantum efficiency × the aperture ratio.
- the potential gradient becomes gentle in a region near the surface opposite to the light incident surface in the substrate 171, for example, in a region R21, and the electric field in a direction substantially perpendicular to the substrate 171 is weakened.
- the moving speed of the signal carrier becomes slow, so that the time required after the photoelectric conversion is performed and before the signal carrier is detected in the N + semiconductor region of the active tap becomes long.
- arrows in the substrate 171 indicate electric fields in a direction perpendicular to the substrate 171 in the substrate 171.
- FIG. 7 shows the relationship between the position in the thickness direction of the substrate 171 and the moving speed of the signal carrier.
- Region R21 corresponds to the diffusion current region.
- when the driving frequency is high, that is, when switching between the active and inactive states of the taps (signal extraction units) is performed at high speed, electrons generated at a position far from the active tap, such as in the region R21, cannot be completely drawn into the N + semiconductor region of the active tap. That is, if the time during which the tap is active is short, electrons (charges) generated in the region R21 or the like cannot be detected in the N + semiconductor region of the active tap, and the electron extraction efficiency decreases.
- the substrate 172 corresponds to the substrate 61 in FIG. 2, and an arrow in the substrate 172 indicates an electric field in a direction perpendicular to the substrate 172.
- FIG. 8 shows the relationship between the position in the thickness direction of the substrate 172 and the moving speed of the signal carrier.
- the electric field in the direction substantially perpendicular to the substrate 172 becomes strong, so that only electrons (charges) in the drift current region, where the moving speed of the signal carrier is fast, are used, while electrons in the diffusion current region, where the moving speed of the signal carrier is slow, are not used.
- the time required from when the photoelectric conversion is performed to when the signal carrier is detected in the N + semiconductor region of the active tap is reduced. Also, as the thickness of the substrate 172 decreases, the moving distance of the signal carrier to the N + semiconductor region in the active tap also decreases.
- a voltage can be applied directly to the substrate 172, that is, the substrate 61, so that the response speed of switching between active and inactive taps is fast, and driving at a high driving frequency is possible.
- since a voltage can be directly applied to the substrate 61, the modulatable region in the substrate 61 is widened.
- in the back-illuminated light receiving element 1 (CAPD sensor), a sufficient aperture ratio can be obtained, so the pixels can be miniaturized accordingly, and the resistance of the pixels to miniaturization can be improved.
- further, the back-illuminated structure allows freedom in BEOL (Back-End-Of-Line) design, whereby the saturation signal amount (Qs) can be improved.
- the N + semiconductor region 71 and the P + semiconductor region 73 may have a circular shape.
- portions corresponding to those in FIG. 3 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- FIG. 9 shows the N + semiconductor region 71 and the P + semiconductor region 73 when the portion of the signal extraction section 65 in the pixel 51 is viewed from a direction perpendicular to the substrate 61.
- an oxide film 64 (not shown) is formed at the central portion of the pixel 51, and the signal extraction portions 65 are formed at portions slightly offset from the center of the pixel 51.
- two signal extraction portions 65 are formed in the pixel 51.
- in each signal extraction section 65, a circular P + semiconductor region 73 is formed at the center position, and the periphery of the P + semiconductor region 73 is surrounded by a circular, more specifically annular, N + semiconductor region 71 centered on the P + semiconductor region 73.
- FIG. 10 is a plan view in which the on-chip lens 62 is superimposed on a part of the pixel array unit 20 in which the pixels 51 having the signal extraction unit 65 shown in FIG. 9 are two-dimensionally arranged in a matrix.
- the on-chip lens 62 is formed for each pixel as shown in FIG. In other words, a unit area in which one on-chip lens 62 is formed corresponds to one pixel.
- the separation portion 75 formed of an oxide film or the like is disposed between the N + semiconductor region 71 and the P + semiconductor region 73, but the separation portion 75 may or may not be provided.
- FIG. 11 is a plan view showing a modification of the planar shape of the signal extraction unit 65 in the pixel 51.
- the signal extracting unit 65 may have a planar shape other than the rectangular shape shown in FIG. 3 and the circular shape shown in FIG. 9, for example, an octagonal shape as shown in FIG. 11.
- FIG. 11 is a plan view showing a case where an isolation portion 75 made of an oxide film or the like is formed between the N + semiconductor region 71 and the P + semiconductor region 73.
- the line A-A' shown in FIG. 11 indicates the sectional line of FIG. 37 described later, and the line B-B' indicates the sectional line of FIG. 36 described later.
- the pixel 51 is configured, for example, as shown in FIG. In FIG. 12, parts corresponding to those in FIG. 3 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- FIG. 12 shows the arrangement of the N + semiconductor region and the P + semiconductor region when the portion of the signal extraction section 65 in the pixel 51 is viewed from a direction perpendicular to the substrate 61.
- an oxide film 64 (not shown) is formed at the central portion of the pixel 51, a signal extraction portion 65-1 is formed in an upper portion in the figure slightly offset from the center of the pixel 51,
- and a signal extraction portion 65-2 is formed in a lower portion in the figure slightly offset from the center.
- the formation position of the signal extraction unit 65 in the pixel 51 is the same as that in FIG.
- a rectangular N + semiconductor region 201-1 corresponding to the N + semiconductor region 71-1 shown in FIG. 3 is formed at the center of the signal extraction unit 65-1.
- the periphery of the N + semiconductor region 201-1 is surrounded by a P + semiconductor region 202-1 having a rectangular shape corresponding to the P + semiconductor region 73-1 shown in FIG. 3, more specifically, a rectangular frame shape. That is, the P + semiconductor region 202-1 is formed so as to surround the periphery of the N + semiconductor region 201-1.
- a rectangular N + semiconductor region 201-2 corresponding to the N + semiconductor region 71-2 shown in FIG. 3 is formed at the center of the signal extraction unit 65-2.
- the periphery of the N + semiconductor region 201-2 is surrounded by a P + semiconductor region 202-2 having a rectangular shape corresponding to the P + semiconductor region 73-2 shown in FIG. 3, more specifically, a rectangular frame shape.
- the N + semiconductor region 201-1 and the N + semiconductor region 201-2 are simply referred to as the N + semiconductor region 201 unless it is particularly necessary to distinguish them.
- the P + semiconductor region 202-1 and the P + semiconductor region 202-2 are simply referred to as the P + semiconductor region 202 unless it is particularly necessary to distinguish them.
- the N + semiconductor region 201 functions as a charge detection unit for detecting the amount of signal carriers.
- the P + semiconductor region 202 functions as a voltage application unit for applying a voltage directly to the substrate 61 to generate an electric field.
- the N + semiconductor region 201 and the P + semiconductor region 202 may be formed in a circular shape as shown in FIG.
- parts corresponding to those in FIG. 12 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- FIG. 13 shows the N + semiconductor region 201 and the P + semiconductor region 202 when the portion of the signal extraction section 65 in the pixel 51 is viewed from a direction perpendicular to the substrate 61.
- an oxide film 64 (not shown) is formed at the central portion of the pixel 51, and the signal extraction portions 65 are formed at portions slightly offset from the center of the pixel 51.
- two signal extraction portions 65 are formed in the pixel 51.
- in each signal extraction section 65, a circular N + semiconductor region 201 is formed at the center position, and the periphery of the N + semiconductor region 201 is surrounded by a circular, more specifically annular, P + semiconductor region 202 centered on the N + semiconductor region 201.
- the N + semiconductor region and the P + semiconductor region formed in the signal extraction unit 65 may have a line shape (rectangular shape).
- the pixel 51 is configured as shown in FIG. Note that, in FIG. 14, the portions corresponding to those in FIG. 3 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- FIG. 14 shows the arrangement of the N + semiconductor region and the P + semiconductor region when the portion of the signal extraction section 65 in the pixel 51 is viewed from a direction perpendicular to the substrate 61.
- an oxide film 64 (not shown) is formed at the central portion of the pixel 51, a signal extraction portion 65-1 is formed in an upper portion in the figure slightly offset from the center of the pixel 51,
- and a signal extraction portion 65-2 is formed in a lower portion in the figure slightly offset from the center.
- the formation position of the signal extraction unit 65 in the pixel 51 is the same as that in FIG.
- a line-shaped P + semiconductor region 231 corresponding to the P + semiconductor region 73-1 shown in FIG. 3 is formed at the center of the signal extracting portion 65-1.
- the N + semiconductor region 232-1 and the N + semiconductor region 232-2 will be simply referred to as the N + semiconductor region 232 unless it is particularly necessary to distinguish them.
- in FIG. 3, the P + semiconductor region 73 is configured to be surrounded by the N + semiconductor region 71,
- whereas here the P + semiconductor region 231 has a structure sandwiched between the two N + semiconductor regions 232 provided adjacent to it.
- a line-shaped P + semiconductor region 233 corresponding to the P + semiconductor region 73-2 shown in FIG. 3 is formed at the center of the signal extraction portion 65-2.
- line-shaped N + semiconductor regions 234-1 and 234-2 corresponding to the N + semiconductor region 71-2 shown in FIG. 3 are formed so as to sandwich the P + semiconductor region 233.
- the N + semiconductor region 234-1 and the N + semiconductor region 234-2 will be simply referred to as the N + semiconductor region 234 unless it is particularly necessary to distinguish them.
- the P + semiconductor region 231 and the P + semiconductor region 233 function as voltage application units corresponding to the P + semiconductor region 73 shown in FIG. 3, and the N + semiconductor region 232 and the N + semiconductor region 234 function as charge detection units corresponding to the N + semiconductor region 71 shown in FIG. 3. In this case, for example, both the N + semiconductor region 232-1 and the N + semiconductor region 232-2 are connected to the FD portion A.
- the line-shaped P + semiconductor region 231, N + semiconductor region 232, P + semiconductor region 233, and N + semiconductor region 234 may have any length in the horizontal direction in the drawing, and these regions do not have to be the same length.
- the pixel 51 is configured as shown in FIG.
- portions corresponding to those in FIG. 3 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- FIG. 15 shows an arrangement of the N + semiconductor region and the P + semiconductor region when the portion of the signal extraction section 65 in the pixel 51 is viewed from a direction perpendicular to the substrate 61.
- an oxide film 64 (not shown) is formed at the central portion of the pixel 51, and the signal extraction portions 65 are formed at portions slightly offset from the center of the pixel 51.
- the formation positions of the two signal extraction portions 65 in the pixel 51 are the same as those in FIG.
- a line-shaped N + semiconductor region 261 corresponding to the N + semiconductor region 71-1 shown in FIG. 3 is formed at the center of the signal extraction portion 65-1. Then, line-shaped P + semiconductor regions 262-1 and 262-2 corresponding to the P + semiconductor region 73-1 shown in FIG. 3 are formed around the N + semiconductor region 261 so as to sandwich it. That is, the N + semiconductor region 261 is formed at a position between the P + semiconductor region 262-1 and the P + semiconductor region 262-2.
- the P + semiconductor region 262-1 and the P + semiconductor region 262-2 will be simply referred to as the P + semiconductor region 262 unless it is particularly necessary to distinguish them.
- a line-shaped N + semiconductor region 263 corresponding to the N + semiconductor region 71-2 shown in FIG. 3 is formed at the center of the signal extraction unit 65-2. Then, line-shaped P + semiconductor regions 264-1 and 264-2 corresponding to the P + semiconductor region 73-2 shown in FIG. 3 are formed around the N + semiconductor region 263 so as to sandwich it.
- the P + semiconductor region 264-1 and the P + semiconductor region 264-2 will be simply referred to as the P + semiconductor region 264 unless it is particularly necessary to distinguish them.
- the P + semiconductor region 262 and the P + semiconductor region 264 function as voltage application units corresponding to the P + semiconductor region 73 shown in FIG. 3, and the N + semiconductor region 261 and the N + semiconductor region 263 function as charge detection units corresponding to the N + semiconductor region 71 shown in FIG. 3.
- the line-shaped N + semiconductor region 261, P + semiconductor region 262, N + semiconductor region 263, and P + semiconductor region 264 may have any length in the horizontal direction in the drawing, and these regions do not have to be the same length.
- the pixel is configured, for example, as shown in FIG. 16.
- in FIG. 16, portions corresponding to those in FIG. 3 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- FIG. 16 shows the arrangement of the N + semiconductor region and the P + semiconductor region when the signal extraction portion of some of the pixels provided in the pixel array section 20 is viewed from a direction perpendicular to the substrate.
- a pixel 51 provided in the pixel array unit 20 and pixels 291-1 to 291-3 indicated by reference numerals as pixels 51 adjacent to the pixel 51 are shown.
- One signal extraction portion is formed in the pixel.
- one signal extraction unit 65 is formed in the center of the pixel 51.
- a circular P + semiconductor region 301 is formed at the center position, and the periphery of the P + semiconductor region 301 is surrounded by a circular, more specifically annular, N + semiconductor region 302 centered on the P + semiconductor region 301.
- the P + semiconductor region 301 corresponds to the P + semiconductor region 73 shown in FIG. 3 and functions as a voltage application unit.
- the N + semiconductor region 302 corresponds to the N + semiconductor region 71 shown in FIG. 3 and functions as a charge detection unit. Note that the P + semiconductor region 301 and the N + semiconductor region 302 may have any shape.
- the pixels 291-1 to 291-3 around the pixel 51 have the same structure as the pixel 51.
- one signal extraction unit 303 is formed at the center of the pixel 291-1.
- a circular P + semiconductor region 304 is formed at the center position, and the periphery of the P + semiconductor region 304 is surrounded by a circular, more specifically annular, N + semiconductor region 305 centered on the P + semiconductor region 304.
- the P + semiconductor region 304 and the N + semiconductor region 305 correspond to the P + semiconductor region 301 and the N + semiconductor region 302, respectively.
- the pixels 291-1 to 291-3 are also simply referred to as the pixels 291 unless it is necessary to particularly distinguish them.
- each pixel is driven such that the signal extraction units 303 of some pixels 291 adjacent to the pixel 51, including the pixel 291-1, become inactive taps.
- the signal extraction units of the pixels adjacent to the pixel 51 are driven to be inactive taps.
- then, when the active and inactive taps are switched, the signal extraction units 303 of some pixels 291 adjacent to the pixel 51, including the pixel 291-1, are this time set to be active taps.
- each pixel of the pixel array unit 20 is configured as shown in FIG.
- portions corresponding to those in FIG. 16 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- FIG. 17 shows the arrangement of the N + semiconductor region and the P + semiconductor region when the signal extraction portion of some of the pixels provided in the pixel array section 20 is viewed from a direction perpendicular to the substrate.
- a sectional view taken along the line C-C 'shown in FIG. 17 is as shown in FIG. 36 described later.
- each pixel has four signal extraction units.
- in the pixel 51, a signal extraction unit 331-1, a signal extraction unit 331-2, a signal extraction unit 331-3, and a signal extraction unit 331-4 are formed at four positions.
- These signal extracting units 331-1 to 331-4 correspond to the signal extracting unit 65 shown in FIG.
- a circular P + semiconductor region 341 is formed at the center position, and the periphery of the P + semiconductor region 341 is surrounded by a circular, more specifically annular, N + semiconductor region 342 centered on the P + semiconductor region 341.
- the P + semiconductor region 341 corresponds to the P + semiconductor region 301 shown in FIG. 16 and functions as a voltage application unit.
- the N + semiconductor region 342 corresponds to the N + semiconductor region 302 shown in FIG. 16 and functions as a charge detection unit. Note that the P + semiconductor region 341 and the N + semiconductor region 342 may have any shape.
- the signal extraction units 331-2 to 331-4 have the same configuration as the signal extraction unit 331-1, and each includes a P + semiconductor region functioning as a voltage application unit and an N + semiconductor region functioning as a charge detection unit. Further, the pixel 291 formed around the pixel 51 has the same structure as the pixel 51.
- the signal extraction units 331-1 to 331-4 will be simply referred to as the signal extraction unit 331 unless it is particularly necessary to distinguish them.
- each pixel is provided with four signal extraction units as described above, for example, at the time of distance measurement using the indirect ToF method, the distance information is calculated using the four signal extraction units in the pixel.
- for example, the pixel 51 is driven such that the signal extraction unit 331-1 and the signal extraction unit 331-3 are active taps and the signal extraction unit 331-2 and the signal extraction unit 331-4 are inactive taps.
- thereafter, the state of each signal extraction unit 331 is switched. That is, the pixel 51 is driven such that the signal extraction unit 331-1 and the signal extraction unit 331-3 are inactive taps, and the signal extraction unit 331-2 and the signal extraction unit 331-4 are active taps.
- then, the distance information is calculated based on the pixel signals read from the signal extraction unit 331-1 and the signal extraction unit 331-3 in a state where those units are active taps, and on the pixel signals read from the signal extraction unit 331-2 and the signal extraction unit 331-4 in a state where the signal extraction unit 331-2 and the signal extraction unit 331-4 are active taps.
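- As an illustrative sketch (not from the patent text), one common way to compute distance from four tap signals is four-phase continuous-wave demodulation; the 0°/90°/180°/270° phase assignment and the function name below are assumptions:

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def four_phase_distance(q0: float, q90: float, q180: float,
                        q270: float, f_mod: float) -> float:
    """Distance from four tap samples taken at demodulation phases of
    0, 90, 180 and 270 degrees with modulation frequency f_mod [Hz]."""
    phase = math.atan2(q90 - q270, q0 - q180)   # phase shift of reflected light
    if phase < 0.0:
        phase += 2.0 * math.pi                  # map into [0, 2*pi)
    return C * phase / (4.0 * math.pi * f_mod)  # convert phase to distance

# Example: synthetic tap values for a 2.0 m target at 20 MHz modulation
phi = 4.0 * math.pi * 20e6 * 2.0 / C
d = four_phase_distance(math.cos(phi), math.sin(phi),
                        -math.cos(phi), -math.sin(phi), 20e6)  # recovers ~2.0 m
```

The pairwise differences cancel background light common to opposite phases, which is one motivation for reading the taps in both switched states as described above.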
- a signal extraction unit may be shared between mutually adjacent pixels of the pixel array unit 20.
- each pixel of the pixel array section 20 is configured as shown in FIG. 18, for example. Note that, in FIG. 18, portions corresponding to those in FIG. 16 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- FIG. 18 shows the arrangement of the N + semiconductor region and the P + semiconductor region when the signal extraction portion of some of the pixels provided in the pixel array section 20 is viewed from a direction perpendicular to the substrate.
- each pixel is formed with two signal extraction units.
- the signal extraction unit 371 is formed at the upper end of the pixel 51 in the drawing, and the signal extraction unit 372 is formed at the lower end of the pixel 51 in the drawing.
- the signal extraction unit 371 is shared by the pixel 51 and the pixel 291-1. That is, the signal extraction unit 371 is used as a tap of the pixel 51 and also as a tap of the pixel 291-1.
- the signal extraction unit 372 is shared by the pixel 51 and a pixel (not shown) adjacent to the pixel 51 on the lower side in the drawing.
- the P + semiconductor region 381 is formed at the boundary between the pixel 51 and the pixel 291-1.
- the N + semiconductor region 382-1 is formed in a region inside the pixel 51, and the N + semiconductor region 382-2 is formed in a region inside the pixel 291-1.
- the P + semiconductor region 381 functions as a voltage application unit
- the N + semiconductor region 382-1 and the N + semiconductor region 382-2 function as charge detection units.
- the N + semiconductor region 382-1 and the N + semiconductor region 382-2 will be simply referred to as the N + semiconductor region 382 unless it is particularly necessary to distinguish them.
- the P + semiconductor region 381 and the N + semiconductor region 382 may have any shape. Further, the N + semiconductor region 382-1 and the N + semiconductor region 382-2 may be connected to the same FD unit, or may be connected to different FD units.
- a line-shaped P + semiconductor region 383, an N + semiconductor region 384-1, and an N + semiconductor region 384-2 are formed.
- the P + semiconductor region 383, the N + semiconductor region 384-1, and the N + semiconductor region 384-2 correspond to the P + semiconductor region 381, the N + semiconductor region 382-1, and the N + semiconductor region 382-2, respectively, and have the same arrangement, shape, and function.
- the N + semiconductor region 384-1 and the N + semiconductor region 384-2 will be simply referred to as the N + semiconductor region 384 unless it is particularly necessary to distinguish them.
- the distance measurement by the indirect ToF method can be performed by the same operation as the example shown in FIG.
- the distance between the P + semiconductor region 381 and the P + semiconductor region 383, which form a pair of P + regions for generating an electric field, that is, a current, increases.
- the distance between the P + semiconductor regions can be maximized.
- one signal extraction unit may be shared by three or more pixels adjacent to each other.
- the charge detection unit for detecting signal carriers in the signal extraction unit may be shared, or only the voltage application unit for generating an electric field may be shared.
- the on-chip lens and the inter-pixel light-shielding portion provided for each pixel, such as the pixel 51 of the pixel array section 20, need not necessarily be provided.
- the pixel 51 can be configured as shown in FIG. 19. In FIG. 19, parts corresponding to those in FIG. 2 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- the configuration of the pixel 51 shown in FIG. 19 is different from the pixel 51 shown in FIG. 2 in that the on-chip lens 62 is not provided, and has the same configuration as the pixel 51 in FIG. 2 in other points.
- Since the pixel 51 shown in FIG. 19 is not provided with the on-chip lens 62 on the light incident surface side of the substrate 61, the attenuation of infrared light entering the substrate 61 from the outside can be further reduced. Accordingly, the amount of infrared light that can be received by the substrate 61 increases, and the sensitivity of the pixel 51 can be improved.
- the configuration of the pixel 51 may be, for example, the configuration illustrated in FIG. 20. In FIG. 20, portions corresponding to those in FIG. 2 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- the configuration of the pixel 51 shown in FIG. 20 is different from the pixel 51 shown in FIG. 2 in that the inter-pixel light-shielding film 63-1 and the inter-pixel light-shielding film 63-2 are not provided, and otherwise has the same configuration as the pixel 51 in FIG. 2.
- Since the inter-pixel light-shielding film 63 is not provided on the light incident surface side of the substrate 61, the effect of suppressing crosstalk is reduced, but light that would otherwise have been blocked by the inter-pixel light-shielding film 63 also enters the substrate 61, so the sensitivity of the pixel 51 can be improved.
- <Modification 2 of the eighth embodiment> <Configuration example of pixel>
- the thickness of the on-chip lens in the optical axis direction may be optimized.
- In FIG. 21, the same reference numerals are given to the portions corresponding to those in FIG. 2, and description thereof will be omitted as appropriate.
- the configuration of the pixel 51 illustrated in FIG. 21 is different from the pixel 51 illustrated in FIG. 2 in that an on-chip lens 411 is provided instead of the on-chip lens 62, and the other configurations are the same as those of the pixel 51 in FIG. 2.
- the on-chip lens 411 is formed on the light incident surface side of the substrate 61, that is, on the upper side in the figure.
- the thickness of the on-chip lens 411 in the optical axis direction, that is, the thickness in the vertical direction in the drawing is smaller than that of the on-chip lens 62 shown in FIG.
- a thicker on-chip lens provided on the surface of the substrate 61 is advantageous for condensing light incident on the on-chip lens.
- On the other hand, when the on-chip lens 411 is made thinner, the transmittance is increased by that amount, and the sensitivity of the pixel 51 can be improved. Therefore, the thickness of the on-chip lens 411 may be determined appropriately depending on the thickness of the substrate 61, the position where infrared light is to be condensed, and the like.
- <Ninth embodiment> <Configuration example of pixel> Further, between the pixels formed in the pixel array section 20, a separation region for improving the separation characteristics between adjacent pixels and suppressing crosstalk may be provided.
- the pixel 51 is configured as shown in FIG. 22, for example.
- portions corresponding to those in FIG. 2 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- the configuration of the pixel 51 illustrated in FIG. 22 is different from the pixel 51 illustrated in FIG. 2 in that a separation region 441-1 and a separation region 441-2 are provided in the substrate 61, and otherwise has the same configuration as the pixel 51 in FIG. 2.
- a separation region 441-1 and a separation region 441-2 that separate adjacent pixels are provided at the boundary portions between the pixel 51 and other pixels adjacent to the pixel 51 in the substrate 61, that is, at the left and right end portions of the pixel 51 in the drawing.
- The separation region 441-1 and the separation region 441-2 are formed of a light-shielding film or the like.
- Hereinafter, when there is no need to particularly distinguish them, the separation region 441-1 and the separation region 441-2 will be simply referred to as the separation region 441.
- When forming the separation region 441, a long groove (trench) is formed in the substrate 61 to a predetermined depth from the light incident surface side of the substrate 61, that is, downward from the upper surface in the drawing (in the direction perpendicular to the surface of the substrate 61), and a light-shielding film is embedded in the groove to form the separation region 441.
- the separation region 441 functions as a pixel separation region that blocks infrared light that enters the substrate 61 from the light incident surface and travels to another pixel adjacent to the pixel 51.
- By forming the buried separation region 441 in this manner, the infrared light separation characteristics between pixels can be improved, and the occurrence of crosstalk can be suppressed.
- <Modification 1 of the ninth embodiment> <Configuration example of pixel>
- a separation region 471-1 and a separation region 471-2 penetrating the entire substrate 61 may be provided.
- portions corresponding to those in FIG. 2 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- the configuration of the pixel 51 shown in FIG. 23 is different from the pixel 51 shown in FIG. 2 in that a separation region 471-1 and a separation region 471-2 are provided in the substrate 61, and otherwise has the same configuration as the pixel 51 in FIG. 2. That is, the pixel 51 shown in FIG. 23 has a configuration in which the separation region 471-1 and the separation region 471-2 are provided instead of the separation region 441 of the pixel 51 shown in FIG. 22.
- a separation region 471-1 and a separation region 471-2 penetrating the entire substrate 61 are provided at the boundary portions between the pixel 51 and other pixels adjacent to the pixel 51 in the substrate 61, that is, at the left and right end portions of the pixel 51 in the drawing.
- The separation region 471-1 and the separation region 471-2 are formed of a light-shielding film or the like.
- Hereinafter, when there is no need to particularly distinguish the separation region 471-1 and the separation region 471-2, they are simply referred to as the separation region 471.
- When forming the separation region 471, a long groove (trench) is formed upward from the surface opposite to the light incident surface of the substrate 61, that is, from the lower surface in the drawing. At this time, the groove is formed so as to penetrate the substrate 61 until reaching the light incident surface of the substrate 61. Then, a light-shielding film is embedded in the groove formed in this way to form the separation region 471.
- the thickness of the substrate on which the signal extraction section 65 is formed can be determined according to various characteristics of the pixel and the like.
- the substrate 501 forming the pixel 51 can be made thicker than the substrate 61 shown in FIG. 2. In FIG. 24, parts corresponding to those in FIG. 2 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- the configuration of the pixel 51 illustrated in FIG. 24 differs from the pixel 51 illustrated in FIG. 2 in that a substrate 501 is provided instead of the substrate 61, and has the same configuration as the pixel 51 in FIG. 2 in other points.
- the on-chip lens 62, the fixed charge film 66, and the inter-pixel light-shielding film 63 are formed on the light incident surface side of the substrate 501.
- an oxide film 64, a signal extraction unit 65, and a separation unit 75 are formed near the surface of the substrate 501 opposite to the light incident surface.
- the substrate 501 is made of, for example, a P-type semiconductor substrate having a thickness of 20 μm or more.
- the substrate 501 and the substrate 61 differ only in the thickness of the substrate, and the positions where the oxide film 64, the signal extraction unit 65, and the separation unit 75 are formed are the same in the substrate 501 and the substrate 61.
- the thickness of various layers (films) appropriately formed on the light incident surface side of the substrate 501 or the substrate 61 may be optimized according to the characteristics of the pixels 51 and the like.
- Although the substrate constituting the pixel 51 has been described above as being formed of a P-type semiconductor substrate, the substrate may be formed of, for example, an N-type semiconductor substrate as shown in FIG. 25.
- portions corresponding to those in FIG. 2 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- the configuration of the pixel 51 illustrated in FIG. 25 differs from the pixel 51 illustrated in FIG. 2 in that a substrate 531 is provided instead of the substrate 61, and has the same configuration as the pixel 51 in FIG. 2 in other points.
- an on-chip lens 62, a fixed charge film 66, and an inter-pixel light-shielding film 63 are formed on the light incident surface side of a substrate 531 made of an N-type semiconductor layer such as a silicon substrate.
- An oxide film 64, a signal extraction section 65, and a separation section 75 are formed near the surface of the substrate 531 opposite to the light incident surface.
- the positions where the oxide film 64, the signal extraction section 65, and the separation section 75 are formed are the same in the substrate 531 and the substrate 61, and the configuration of the signal extraction section 65 is also the same in the substrate 531 and the substrate 61.
- the thickness of the substrate 531 in the vertical direction in the drawing, that is, in the direction perpendicular to the surface of the substrate 531, is, for example, 20 μm or less.
- the substrate 531 is, for example, a high-resistance N-Epi substrate having a substrate concentration on the order of 1E+13 or less, and the substrate 531 has a resistance (resistivity) of, for example, 500 [Ωcm] or more. Thereby, power consumption in the pixel 51 can be reduced.
- As for the relationship between the substrate concentration and the resistivity of the substrate 531, for example, the resistance is 2000 [Ωcm] when the substrate concentration is 2.15E+12 [cm³], 1000 [Ωcm] when the substrate concentration is 4.30E+12 [cm³], 500 [Ωcm] when the substrate concentration is 8.61E+12 [cm³], and 100 [Ωcm] when the substrate concentration is 4.32E+13 [cm³].
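The concentration-to-resistivity figures above follow from the standard relation ρ = 1/(q·μn·Nd); the sketch below checks them, with the electron mobility value being an assumption (about 1350 cm²/V·s for lightly doped silicon) rather than a number taken from this document:

```python
Q_E = 1.602e-19   # elementary charge [C]
MU_N = 1350.0     # electron mobility of lightly doped Si [cm^2/(V*s)] -- assumed, not from this document

def resistivity_ohm_cm(n_d):
    """Approximate resistivity of an N-type substrate for donor concentration n_d [cm^-3]."""
    return 1.0 / (Q_E * MU_N * n_d)

# Values from the text: substrate concentration [cm^-3] -> resistance [ohm*cm]
for n_d, rho_table in [(2.15e12, 2000), (4.30e12, 1000), (8.61e12, 500), (4.32e13, 100)]:
    rho = resistivity_ohm_cm(n_d)
    print(f"{n_d:.2e} cm^-3: computed {rho:.0f} ohm*cm, table {rho_table} ohm*cm")
```

With the assumed mobility, the computed values land within roughly 10% of the tabulated ones, consistent with the rounded figures in the text.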
- the same effect can be obtained by the same operation as the example shown in FIG.
- the thickness of the N-type semiconductor substrate can be determined according to various characteristics of the pixel and the like.
- the substrate 561 forming the pixel 51 can be made thicker than the substrate 531 shown in FIG. 25. In FIG. 26, parts corresponding to those in FIG. 25 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- the configuration of the pixel 51 illustrated in FIG. 26 is different from the pixel 51 illustrated in FIG. 25 in that a substrate 561 is provided instead of the substrate 531, and has the same configuration as the pixel 51 in FIG. 25 in other points.
- the on-chip lens 62, the fixed charge film 66, and the inter-pixel light-shielding film 63 are formed on the light incident surface side of the substrate 561.
- An oxide film 64, a signal extraction section 65, and a separation section 75 are formed near the surface of the substrate 561 on the side opposite to the light incident surface side.
- the substrate 561 is, for example, an N-type semiconductor substrate having a thickness of 20 μm or more.
- the substrate 561 differs from the substrate 531 only in the thickness of the substrate, and the positions where the oxide film 64, the signal extraction unit 65, and the separation unit 75 are formed are the same in the substrate 561 and the substrate 531.
- <Thirteenth embodiment> <Configuration example of pixel> Further, for example, by applying a bias to the light incident surface side of the substrate 61, the electric field in the direction perpendicular to the surface of the substrate 61 (hereinafter, also referred to as the Z direction) in the substrate 61 may be enhanced.
- the pixel 51 has, for example, the configuration shown in FIG. In FIG. 27, portions corresponding to those in FIG. 2 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- FIG. 27A shows the pixel 51 shown in FIG. 2, and the arrow in the substrate 61 of the pixel 51 indicates the strength of the electric field in the Z direction in the substrate 61.
- FIG. 27B shows the configuration of the pixel 51 when a bias (voltage) is applied to the light incident surface of the substrate 61.
- the configuration of the pixel 51 in FIG. 27B is basically the same as the configuration of the pixel 51 shown in FIG. 2, but a P + semiconductor region 601 is additionally formed at the light incident surface side interface of the substrate 61.
- the configuration for applying a voltage to the light incident surface side of the substrate 61 is not limited to the configuration in which the P + semiconductor region 601 is provided, but may be any other configuration.
- for example, a transparent electrode film may be laminated between the light incident surface of the substrate 61 and the on-chip lens 62, and a negative bias may be applied by applying a voltage to the transparent electrode film.
- <Fourteenth embodiment> <Configuration example of pixel> Further, a large-area reflecting member may be provided on the surface of the substrate 61 opposite to the light incident surface in order to improve the sensitivity of the pixel 51 to infrared light.
- the pixel 51 is configured, for example, as shown in FIG. In FIG. 28, portions corresponding to those in FIG. 2 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- the configuration of the pixel 51 shown in FIG. 28 differs from the pixel 51 of FIG. 2 in that a reflecting member 631 is provided on the surface of the substrate 61 opposite to the light incident surface, and otherwise has the same configuration as the pixel 51 of FIG. 2.
- a reflecting member 631 that reflects infrared light is provided so as to cover the entire surface of the substrate 61 opposite to the light incident surface.
- the reflecting member 631 may be of any type as long as it has a high infrared light reflectance.
- a metal such as copper or aluminum provided in a multilayer wiring layer laminated on the surface of the substrate 61 opposite to the light incident surface may be used as the reflection member 631,
- the reflection member 631 may be formed by forming a reflection structure such as polysilicon or an oxide film on the surface opposite to the light incidence surface.
- By providing the reflection member 631 in the pixel 51 in this way, infrared light that has entered the substrate 61 from the light incident surface via the on-chip lens 62 and has passed through the substrate 61 without being photoelectrically converted in the substrate 61 can be reflected by the reflection member 631 and made to reenter the substrate 61. Accordingly, the amount of infrared light photoelectrically converted in the substrate 61 can be increased, and the quantum efficiency (QE), that is, the sensitivity of the pixel 51 to infrared light can be improved.
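The sensitivity gain from giving transmitted light a second pass through the substrate can be estimated with the Beer-Lambert law. A rough sketch; the absorption coefficient and substrate thickness below are illustrative assumptions, not values stated in this document:

```python
import math

ALPHA = 535.0        # absorption coefficient of Si near 850 nm [1/cm] -- assumed value
THICKNESS_UM = 20.0  # substrate thickness [um] -- illustrative

def absorbed_fraction(thickness_um, n_passes):
    """Fraction of incident infrared light absorbed after n_passes traversals of the substrate."""
    path_cm = n_passes * thickness_um * 1e-4
    return 1.0 - math.exp(-ALPHA * path_cm)

single = absorbed_fraction(THICKNESS_UM, 1)  # without a reflecting member: one pass
double = absorbed_fraction(THICKNESS_UM, 2)  # ideal reflecting member 631: second pass
print(f"one pass: {single:.2f}, two passes: {double:.2f}")
```

Under these assumptions the absorbed fraction rises from roughly two thirds to nearly 90%, which is the mechanism behind the QE improvement described above.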
- <Fifteenth embodiment> <Configuration example of pixel> Further, a light shielding member having a large area may be provided on the surface of the substrate 61 opposite to the light incident surface.
- the pixel 51 can have a configuration in which, for example, the reflection member 631 illustrated in FIG. 28 is replaced with a light shielding member. That is, in the pixel 51 shown in FIG. 28, the reflection member 631 that covers the entire surface of the substrate 61 opposite to the light incident surface is replaced with a light shielding member 631' that shields infrared light.
- This light shielding member 631 ′ may be any material as long as it has a high infrared light shielding ratio.
- a metal such as copper or aluminum provided in a multilayer wiring layer laminated on the surface of the substrate 61 opposite to the light incident surface may be used as the light shielding member 631′,
- or a light-shielding structure such as polysilicon or an oxide film may be formed on the surface of the substrate 61 opposite to the light incident surface to serve as the light shielding member 631′.
- By providing the light shielding member 631′ in the pixel 51 in this manner, infrared light that enters the substrate 61 from the light incident surface via the on-chip lens 62 and passes through the substrate 61 without being photoelectrically converted in the substrate 61 can be suppressed from being scattered by the wiring layer and entering neighboring pixels. Thus, it is possible to prevent light from being erroneously detected in the neighboring pixels.
- the light shielding member 631 ′ can also serve as the reflection member 631 by being formed of, for example, a material containing metal.
- <Sixteenth embodiment> <Configuration example of pixel>
- In the substrate 61 of the pixel 51, a P-well region formed of a P-type semiconductor region may be provided instead of the oxide film 64.
- the pixel 51 is configured as shown in FIG. 29, for example. In FIG. 29, portions corresponding to those in FIG. 2 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- the configuration of the pixel 51 shown in FIG. 29 differs from the pixel 51 shown in FIG. 2 in that a P-well region 671, a separation portion 672-1, and a separation portion 672-2 are provided instead of the oxide film 64. In other respects, the configuration is the same as that of the pixel 51 of FIG. 2.
- In the pixel 51 shown in FIG. 29, a P-well region 671 made of a P-type semiconductor region is formed at the center of the surface of the substrate 61 opposite to the light incident surface, that is, inside the lower surface in the figure. Further, between the P-well region 671 and the N + semiconductor region 71-1, an isolation portion 672-1 for separating these regions is formed of an oxide film or the like. Similarly, between the P-well region 671 and the N + semiconductor region 71-2, an isolation portion 672-2 for isolating those regions is formed of an oxide film or the like. In the pixel 51 shown in FIG. 29, the P- semiconductor region 74 is wider in the upward direction in the figure than the N- semiconductor region 72.
- In the substrate 61 of the pixel 51, a P-well region formed of a P-type semiconductor region may be further provided in addition to the oxide film 64.
- the pixel 51 is configured, for example, as shown in FIG. In FIG. 30, portions corresponding to those in FIG. 2 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- the configuration of the pixel 51 shown in FIG. 30 is different from the pixel 51 shown in FIG. 2 in that a P-well region 701 is newly provided, and has the same configuration as the pixel 51 in FIG. 2 in other points. That is, in the example shown in FIG. 30, a P-well region 701 made of a P-type semiconductor region is formed above the oxide film 64 in the substrate 61.
- the characteristics such as pixel sensitivity can be improved by using a back-illuminated configuration for the CAPD sensor.
- FIG. 31 shows an equivalent circuit of the pixel 51.
- the pixel 51 includes a transfer transistor 721A, an FD 722A, a reset transistor 723A, an amplification transistor 724A, and a selection transistor 725A for the signal extraction unit 65-1 including the N + semiconductor region 71-1 and the P + semiconductor region 73-1.
- the pixel 51 includes a transfer transistor 721B, an FD 722B, a reset transistor 723B, an amplification transistor 724B, and a selection transistor 725B for the signal extraction unit 65-2 including the N + semiconductor region 71-2 and the P + semiconductor region 73-2.
- the tap drive section 21 applies a predetermined voltage MIX0 (first voltage) to the P + semiconductor region 73-1 and applies a predetermined voltage MIX1 (second voltage) to the P + semiconductor region 73-2.
- one of the voltages MIX0 and MIX1 is 1.5V and the other is 0V.
- the P + semiconductor regions 73-1 and 73-2 are voltage applying sections to which the first voltage or the second voltage is applied.
- the N + semiconductor regions 71-1 and 71-2 are charge detection units that detect and accumulate charges generated by photoelectrically converting light incident on the substrate 61.
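How the charges accumulated in these two charge detection units translate into a distance can be sketched with a simple pulsed two-tap model. The pulsed scheme and timing values below are illustrative assumptions, not the document's actual drive waveforms:

```python
C_LIGHT = 299_792_458.0  # speed of light [m/s]

def depth_from_taps(q0, q1, pulse_width_s):
    """Estimate distance from the charges accumulated in the two taps.

    q0: charge collected while the first tap (voltage MIX0 active) is on
    q1: charge collected while the second tap (voltage MIX1 active) is on
    """
    delay = pulse_width_s * q1 / (q0 + q1)  # estimated round-trip time of the light
    return C_LIGHT * delay / 2.0            # distance = c * t / 2

# Equal charges in both taps -> echo delayed by half the pulse width (about 1.5 m here).
print(depth_from_taps(q0=1000.0, q1=1000.0, pulse_width_s=20e-9))
```

The charge ratio, rather than either charge alone, carries the timing information, which is why ambient-independent operation requires both taps.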
- the transfer transistor 721A becomes conductive when the drive signal TRG supplied to its gate electrode is activated, thereby transferring the charge accumulated in the N + semiconductor region 71-1 to the FD 722A.
- the transfer transistor 721B becomes conductive in response to the drive signal TRG, and thereby transfers the electric charge accumulated in the N + semiconductor region 71-2 to the FD 722B.
- The FD 722A temporarily holds the charge DET0 supplied from the N + semiconductor region 71-1.
- the FD 722B temporarily holds the charge DET1 supplied from the N + semiconductor region 71-2.
- the FD 722A corresponds to the FD unit A described with reference to FIG. 2, and the FD 722B corresponds to the FD unit B.
- the reset transistor 723A becomes conductive when the drive signal RST supplied to its gate electrode is activated, thereby resetting the potential of the FD 722A to a predetermined level (the power supply voltage VDD).
- the reset transistor 723B resets the potential of the FD 722B to a predetermined level (the power supply voltage VDD) by being turned on in response to the drive signal RST supplied to the gate electrode being activated. Note that when the reset transistors 723A and 723B are activated, the transfer transistors 721A and 721B are also activated at the same time.
- the source electrode of the amplification transistor 724A is connected to the vertical signal line 29A via the selection transistor 725A, thereby forming a source follower circuit with the load MOS of the constant current source circuit section 726A connected to one end of the vertical signal line 29A.
- the source electrode of the amplification transistor 724B is connected to the vertical signal line 29B via the selection transistor 725B, thereby forming a source follower circuit with the load MOS of the constant current source circuit section 726B connected to one end of the vertical signal line 29B.
- the selection transistor 725A is connected between the source electrode of the amplification transistor 724A and the vertical signal line 29A.
- the selection transistor 725A becomes conductive in response to the selection signal SEL, and outputs the pixel signal output from the amplification transistor 724A to the vertical signal line 29A.
- the selection transistor 725B is connected between the source electrode of the amplification transistor 724B and the vertical signal line 29B.
- the selection transistor 725B becomes conductive in response to the selection signal SEL, and outputs the pixel signal output from the amplification transistor 724B to the vertical signal line 29B.
- the transfer transistors 721A and 721B, the reset transistors 723A and 723B, the amplification transistors 724A and 724B, and the selection transistors 725A and 725B of the pixel 51 are controlled by, for example, the vertical driving unit 22.
- FIG. 32 shows another equivalent circuit of the pixel 51.
- In FIG. 32, portions corresponding to those in FIG. 31 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- the equivalent circuit of FIG. 32 is different from the equivalent circuit of FIG. 31 in that an additional capacitor 727 and a switching transistor 728 for controlling the connection are added to both the signal extraction units 65-1 and 65-2.
- an additional capacitance 727A is connected between the transfer transistor 721A and the FD 722A via the switching transistor 728A, and an additional capacitance 727B is connected between the transfer transistor 721B and the FD 722B via the switching transistor 728B.
- the switching transistor 728A is turned on in response to the drive signal FDG supplied to the gate electrode being activated, thereby connecting the additional capacitance 727A to the FD 722A.
- the switching transistor 728B becomes conductive in response to the drive signal FDG, thereby connecting the additional capacitor 727B to the FD 722B.
- the vertical drive unit 22 activates the switching transistors 728A and 728B to connect the FD 722A and the additional capacitance 727A and also connects the FD 722B and the additional capacitance 727B. Thereby, more charges can be accumulated at the time of high illuminance.
- the vertical drive unit 22 deactivates the switching transistors 728A and 728B to separate the additional capacitors 727A and 727B from the FDs 722A and 722B, respectively.
- the additional capacitance 727 may be omitted, but a high dynamic range can be secured by providing the additional capacitance 727 and selectively using it according to the amount of incident light.
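The dynamic-range trade-off can be made concrete: the conversion gain of the floating diffusion is q/C, so switching in the additional capacitance lowers the gain per electron while raising the charge that can be held. The capacitance values below are illustrative assumptions, not values from this document:

```python
Q_E = 1.602e-19  # elementary charge [C]

def conversion_gain_uv_per_e(c_fd_farad, c_add_farad=0.0, fdg_on=False):
    """Conversion gain [uV per electron] of the FD, optionally with the additional capacitance connected."""
    c_total = c_fd_farad + (c_add_farad if fdg_on else 0.0)
    return Q_E / c_total * 1e6

C_FD, C_ADD = 1e-15, 3e-15  # 1 fF FD, 3 fF additional capacitance -- assumed
high_gain = conversion_gain_uv_per_e(C_FD)                       # low illuminance: FDG off
low_gain  = conversion_gain_uv_per_e(C_FD, C_ADD, fdg_on=True)   # high illuminance: FDG on
print(f"FDG off: {high_gain:.1f} uV/e-, FDG on: {low_gain:.1f} uV/e-")
```

Selecting between the two gains according to the amount of incident light is what secures the high dynamic range mentioned above.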
- A of FIG. 33 is a plan view showing a first arrangement example of the voltage supply lines.
- In the first arrangement example, a voltage supply line 741-1 or 741-2 is wired along the vertical direction between (at the boundary of) two pixels adjacent in the horizontal direction.
- the voltage supply line 741-1 is connected to the P + semiconductor region 73-1 of the signal extraction unit 65-1 which is one of the two signal extraction units 65 in the pixel 51.
- the voltage supply line 741-2 is connected to the P + semiconductor region 73-2 of the signal extraction unit 65-2 which is the other of the two signal extraction units 65 in the pixel 51.
- the number of voltage supply lines 741 arranged is approximately equal to the number of columns of the pixels 51.
- B of FIG. 33 is a plan view showing a second arrangement example of the voltage supply lines.
- In the second arrangement example, two voltage supply lines 741-1 and 741-2 are wired in the vertical direction for each pixel column of the plurality of pixels 51 two-dimensionally arranged in a matrix.
- the voltage supply line 741-1 is connected to the P + semiconductor region 73-1 of the signal extraction unit 65-1 which is one of the two signal extraction units 65 in the pixel 51.
- the voltage supply line 741-2 is connected to the P + semiconductor region 73-2 of the signal extraction unit 65-2 which is the other of the two signal extraction units 65 in the pixel 51.
- the number of voltage supply lines 741 arranged is about twice the number of columns of the pixels 51.
- In the first and second arrangement examples, the configuration in which the voltage supply line 741-1 is connected to the P + semiconductor region 73-1 of the signal extraction unit 65-1 and the voltage supply line 741-2 is connected to the P + semiconductor region 73-2 of the signal extraction unit 65-2 is a periodic arrangement (Periodic arrangement) that is periodically repeated for the pixels arranged in the vertical direction.
- the number of voltage supply lines 741-1 and 741-2 wired to the pixel array unit 20 can be reduced.
- In the second arrangement example, the number of wirings is larger than in the first arrangement example, but the number of signal extraction units 65 connected to one voltage supply line 741 is halved, so the load on the wiring can be reduced, which is effective when driving at high speed or when the total number of pixels of the pixel array section 20 is large.
- A of FIG. 34 is a plan view showing a third arrangement example of the voltage supply lines.
- the third arrangement example is an example in which two voltage supply lines 741-1 and 741-2 are arranged for two columns of pixels, as in the first arrangement example of FIG. 33A.
- the third arrangement example is different from the first arrangement example of FIG. 33A in that the connection destinations of the signal extraction units 65-1 and 65-2 differ between two pixels adjacent in the vertical direction.
- Specifically, for example, in one pixel 51, the voltage supply line 741-1 is connected to the P + semiconductor region 73-1 of the signal extraction unit 65-1 and the voltage supply line 741-2 is connected to the P + semiconductor region 73-2 of the signal extraction unit 65-2, whereas in the pixel 51 adjacent below it, the voltage supply line 741-1 is connected to the P + semiconductor region 73-2 of the signal extraction unit 65-2 and the voltage supply line 741-2 is connected to the P + semiconductor region 73-1 of the signal extraction unit 65-1.
- B of FIG. 34 is a plan view illustrating a fourth arrangement example of the voltage supply lines.
- the fourth arrangement example is an example in which two voltage supply lines 741-1 and 741-2 are arranged for two columns of pixels, as in the second arrangement example of FIG. 33B.
- the fourth arrangement example is different from the second arrangement example of FIG. 33B in that the connection destinations of the signal extraction units 65-1 and 65-2 differ between two pixels adjacent in the vertical direction.
- Specifically, for example, in one pixel 51, the voltage supply line 741-1 is connected to the P + semiconductor region 73-1 of the signal extraction unit 65-1 and the voltage supply line 741-2 is connected to the P + semiconductor region 73-2 of the signal extraction unit 65-2, whereas in the pixel 51 adjacent below it, the voltage supply line 741-1 is connected to the P + semiconductor region 73-2 of the signal extraction unit 65-2 and the voltage supply line 741-2 is connected to the P + semiconductor region 73-1 of the signal extraction unit 65-1.
- the number of voltage supply lines 741-1 and 741-2 to be wired to the pixel array unit 20 can be reduced.
- In the fourth arrangement example, the number of wirings is larger than in the third arrangement example, but the number of signal extraction units 65 connected to one voltage supply line 741 is halved, so the load on the wiring can be reduced, which is effective when driving at high speed or when the total number of pixels of the pixel array section 20 is large.
- the arrangement examples of A and B in FIG. 34 are both mirror arrangements (Mirror arrangement) in which the connection destinations of two vertically adjacent pixels are mirror-inverted.
- In the mirror arrangement, the voltages applied to the two signal extraction units 65 adjacent to each other across the pixel boundary become the same, so the current flowing between adjacent pixels is suppressed. Therefore, the charge transfer efficiency is inferior to that of the periodic arrangement, but the crosstalk characteristics of adjacent pixels are better than those of the periodic arrangement.
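The difference between the periodic and mirror arrangements can be checked with a toy model of one pixel column; the point is that in the mirror arrangement the two taps facing each other across every pixel boundary carry the same voltage. The voltage values match those given earlier in the text; the helper names are hypothetical:

```python
def tap_voltages(n_rows, arrangement, mix0=1.5, mix1=0.0):
    """Applied voltages (tap 65-1, tap 65-2) for each pixel down one column."""
    rows = []
    for r in range(n_rows):
        if arrangement == "periodic" or r % 2 == 0:
            rows.append((mix0, mix1))
        else:  # mirror arrangement: odd rows have mirror-inverted connections
            rows.append((mix1, mix0))
    return rows

def boundary_pairs(rows):
    """Voltages of the two taps that face each other across each pixel boundary."""
    return [(rows[i][1], rows[i + 1][0]) for i in range(len(rows) - 1)]

mirror = boundary_pairs(tap_voltages(4, "mirror"))      # every pair equal -> boundary current suppressed
periodic = boundary_pairs(tap_voltages(4, "periodic"))  # every pair differs -> field across the boundary
print(mirror, periodic)
```

Equal boundary-pair voltages mean no field, hence no current, between adjacent pixels, which is the crosstalk advantage described above.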
- FIGS. 36 and 37 are cross-sectional views of a plurality of pixels of the fourteenth embodiment shown in FIG. 28.
- the fourteenth embodiment shown in FIG. 28 has a configuration of a pixel provided with a large-area reflecting member 631 on the opposite side of the light incident surface of the substrate 61.
- FIG. 36 corresponds to a cross-sectional view taken along line B-B ′ in FIG. 11
- FIG. 37 corresponds to a cross-sectional view taken along line A-A ′ in FIG.
- a cross-sectional view taken along line C-C ′ in FIG. 17 can be shown as in FIG.
- In the substrate 61 of each pixel 51, an oxide film 64 is formed at the central portion, and signal extraction portions 65-1 and 65-2 are formed on both sides of the oxide film 64, respectively.
- In the signal extraction portion 65-1, the P + semiconductor region 73-1 and the P- semiconductor region 74-1 are located at the center, and the N + semiconductor region 71-1 and the N- semiconductor region 72-1 are formed so as to surround the P + semiconductor region 73-1 and the P- semiconductor region 74-1.
- P + semiconductor region 73-1 and N + semiconductor region 71-1 are in contact with multilayer wiring layer 811.
- the P- semiconductor region 74-1 is disposed above the P + semiconductor region 73-1 (on the on-chip lens 62 side) so as to cover the P + semiconductor region 73-1, and the N- semiconductor region 72-1 is disposed above the N + semiconductor region 71-1 (on the on-chip lens 62 side) so as to cover the N + semiconductor region 71-1.
- In other words, the P + semiconductor region 73-1 and the N + semiconductor region 71-1 are arranged on the multilayer wiring layer 811 side in the substrate 61, and the N- semiconductor region 72-1 and the P- semiconductor region 74-1 are arranged on the on-chip lens 62 side. Further, between the N + semiconductor region 71-1 and the P + semiconductor region 73-1, an isolation portion 75-1 for isolating these regions is formed of an oxide film or the like.
- Similarly, in the signal extraction portion 65-2, the P+ semiconductor region 73-2 and the P- semiconductor region 74-2 are located at the center, and the N+ semiconductor region 71-2 and the N- semiconductor region 72-2 are formed so as to surround the P+ semiconductor region 73-2 and the P- semiconductor region 74-2.
- The P+ semiconductor region 73-2 and the N+ semiconductor region 71-2 are in contact with the multilayer wiring layer 811.
- The P- semiconductor region 74-2 is disposed above the P+ semiconductor region 73-2 (on the on-chip lens 62 side) so as to cover it, and the N- semiconductor region 72-2 is disposed above the N+ semiconductor region 71-2 (on the on-chip lens 62 side) so as to cover it.
- In other words, the P+ semiconductor region 73-2 and the N+ semiconductor region 71-2 are arranged on the multilayer wiring layer 811 side in the substrate 61, and the N- semiconductor region 72-2 and the P- semiconductor region 74-2 are arranged on the on-chip lens 62 side. Also, between the N+ semiconductor region 71-2 and the P+ semiconductor region 73-2, an isolation portion 75-2 for isolating those regions is formed by an oxide film or the like.
- An oxide film 64 is also formed in the boundary region between adjacent pixels 51, that is, between the N+ semiconductor region 71-1 of the signal extraction unit 65-1 of a given pixel 51 and the N+ semiconductor region 71-2 of the signal extraction unit 65-2 of the adjacent pixel 51.
- a fixed charge film 66 is formed on the interface of the substrate 61 on the light incident surface side (the upper surface in FIGS. 36 and 37).
- The on-chip lens 62 formed for each pixel on the light incident surface side of the substrate 61 can be divided in the height direction into a raised portion 821, whose thickness is uniform over the entire area within the pixel, and a curved surface portion 822.
- the thickness of the raised portion 821 is formed smaller than the thickness of the curved surface portion 822.
- As the raised portion 821 becomes thicker, obliquely incident light is more likely to be reflected by the inter-pixel light-shielding film 63; forming the raised portion 821 thin therefore allows oblique incident light to be taken into the substrate 61. Also, the thicker the curved surface portion 822, the more the incident light can be condensed at the center of the pixel.
- a multilayer wiring layer 811 is formed on the opposite side of the light incident surface side of the substrate 61 on which the on-chip lens 62 is formed for each pixel.
- the substrate 61 as a semiconductor layer is disposed between the on-chip lens 62 and the multilayer wiring layer 811.
- The multilayer wiring layer 811 includes five metal films M1 to M5 and an interlayer insulating film 812 between them. In the cross section of FIG. 36, the outermost metal film M5 of the five metal films M1 to M5 is not shown because it lies at a position that cannot be seen there, but it is shown in FIG. 37.
- a pixel transistor Tr is formed in a pixel boundary region at an interface between the multilayer wiring layer 811 and the substrate 61.
- the pixel transistor Tr is one of the transfer transistor 721, the reset transistor 723, the amplification transistor 724, and the selection transistor 725 shown in FIGS.
- In the metal film M1 closest to the substrate 61, a power supply line 813 for supplying a power supply voltage, a voltage application wiring 814 for applying a predetermined voltage to the P+ semiconductor region 73-1 or 73-2, and a reflection member 815 that reflects incident light are formed.
- In the metal film M1, wirings other than the power supply line 813 and the voltage application wiring 814 serve as reflection members 815, but some reference numerals are omitted to avoid cluttering the drawing.
- the reflecting member 815 is a dummy wiring provided for the purpose of reflecting incident light, and corresponds to the reflecting member 631 shown in FIG.
- the reflecting member 815 is arranged below the N + semiconductor regions 71-1 and 71-2 so as to overlap the N + semiconductor regions 71-1 and 71-2, which are charge detection units, in a plan view.
- When the light shielding member 631' of the fifteenth embodiment is provided instead of the reflection member 631 of the fourteenth embodiment shown in FIG. 28, the portion corresponding to the reflection member 815 in FIG. 36 becomes the light shielding member 631'.
- a charge extraction wiring (not shown in FIG. 36) for connecting the N + semiconductor region 71 and the transfer transistor 721 is formed to transfer the charges accumulated in the N + semiconductor region 71 to the FD 722.
- In FIG. 36, the reflection member 815 (reflection member 631) and the charge extraction wiring are arranged in the same layer, the metal film M1, but they are not necessarily limited to being arranged in the same layer.
- In the metal film M2, which is the second layer of the multilayer wiring layer 811, a voltage application wiring 816 connected to the voltage application wiring 814 of the metal film M1, a control line 817 for transmitting the drive signal TRG, the drive signal RST, the selection signal SEL, the drive signal FDG, and the like, and a ground line and the like are formed.
- In the metal film M2, an FD 722B and an additional capacitance 727A are also formed.
- In the metal film M3, which is the third layer of the multilayer wiring layer 811, the vertical signal line 29, a VSS wiring for shielding, and the like are formed.
- In the fourth metal film M4 and the fifth metal film M5, voltage supply lines 741-1 and 741-2 (FIGS. 33 and 34) for applying the predetermined voltage MIX0 or MIX1 to the P+ semiconductor regions 73-1 and 73-2, which are the voltage application portions of the signal extraction units 65, are formed.
- FIG. 38 is a cross-sectional view showing the pixel structure of the ninth embodiment shown in FIG. 22 for a plurality of pixels without omitting a multilayer wiring layer.
- In the ninth embodiment shown in FIG. 22, a light-shielding film is embedded in a long groove (trench) formed from the back surface (light incident surface) side of the substrate 61 to a predetermined depth at the pixel boundary portions in the substrate 61.
- FIG. 39 is a cross-sectional view showing the pixel structure of Modification Example 1 of the ninth embodiment shown in FIG. 23 for a plurality of pixels without omitting a multilayer wiring layer.
- Modification 1 of the ninth embodiment shown in FIG. 23 has a pixel configuration including a separation region 471 that penetrates the entire substrate 61 at the pixel boundary portions in the substrate 61.
- FIG. 40 is a cross-sectional view showing the pixel structure of the sixteenth embodiment shown in FIG. 29 for a plurality of pixels without omitting a multilayer wiring layer.
- The sixteenth embodiment shown in FIG. 29 has a configuration in which a P-well region 671 is provided on the surface of the substrate 61 opposite to the light incident surface, that is, at the central portion on the inner side of the lower surface in the drawing.
- An isolation portion 672-1 is formed between the P-well region 671 and the N+ semiconductor region 71-1 by an oxide film or the like, and an isolation portion 672-2 is likewise formed between the P-well region 671 and the N+ semiconductor region 71-2.
- a P-well region 671 is also formed at the pixel boundary on the lower surface of the substrate 61.
- FIG. 41 is a cross-sectional view showing the pixel structure of the tenth embodiment shown in FIG. 24 for a plurality of pixels without omitting a multilayer wiring layer.
- the tenth embodiment shown in FIG. 24 is a configuration of a pixel in which a thick substrate 501 is provided instead of the substrate 61.
- FIG. 42A shows an example of a planar arrangement of the first metal film M1 of the five metal films M1 to M5 of the multilayer wiring layer 811.
- FIG. 42B shows an example of a planar arrangement of the second metal film M2 of the five metal films M1 to M5 of the multilayer wiring layer 811.
- FIG. 42C illustrates an example of a planar arrangement of the third metal film M3 of the five metal films M1 to M5 of the multilayer wiring layer 811.
- FIG. 43A shows a planar arrangement example of the fourth metal film M4 among the five metal films M1 to M5 of the multilayer wiring layer 811.
- FIG. 43B shows an example of a plane layout of the fifth metal film M5 of the five metal films M1 to M5 of the multilayer wiring layer 811.
- In FIGS. 42A to 42C and FIGS. 43A and 43B, the area of the pixel 51 and the areas of the octagonal signal extraction units 65-1 and 65-2 shown in FIG. 11 are indicated by broken lines.
- the vertical direction in the drawing is the vertical direction of the pixel array unit 20
- the horizontal direction in the drawing is the horizontal direction of the pixel array unit 20.
- A reflection member 631 that reflects infrared light is formed in the metal film M1, which is the first layer of the multilayer wiring layer 811.
- In the region of each pixel, two reflection members 631 are formed for each of the signal extraction units 65-1 and 65-2, and the two reflection members 631 of the signal extraction unit 65-1 and the two reflection members 631 of the signal extraction unit 65-2 are formed symmetrically in the vertical direction.
- A pixel transistor wiring region 831 is arranged between the reflection member 631 of a pixel 51 and the reflection member 631 of the horizontally adjacent pixel 51.
- In the pixel transistor wiring region 831, wirings connecting the pixel transistors Tr, that is, the transfer transistor 721, the reset transistor 723, the amplification transistor 724, and the selection transistor 725, are formed.
- the wiring for the pixel transistor Tr is also formed symmetrically in the vertical direction with reference to an intermediate line (not shown) between the two signal extraction units 65-1 and 65-2.
- Wirings such as a ground line 832, a power supply line 833, and a ground line 834 are formed between the reflection members 631 of pixels 51 adjacent in the vertical direction. These wirings are also formed symmetrically in the vertical direction with reference to the intermediate line between the two signal extraction units 65-1 and 65-2.
- Since the wiring is laid out symmetrically in this way, the wiring load is adjusted equally between the signal extraction units 65-1 and 65-2. As a result, drive variations between the signal extraction units 65-1 and 65-2 are reduced.
- In the metal film M1, a large-area reflecting member 631 is formed below the signal extraction portions 65-1 and 65-2 formed in the substrate 61, so infrared light that has entered the substrate 61 and passed through it without being photoelectrically converted can be reflected by the reflecting member 631 and made to enter the substrate 61 again. Accordingly, the amount of infrared light photoelectrically converted in the substrate 61 increases, and the quantum efficiency (QE), that is, the sensitivity of the pixel 51 to infrared light, can be improved.
- When the light shielding member 631' is arranged in the same region of the first metal film M1 instead of the reflection member 631, infrared light that has entered the substrate 61 from the light incident surface via the on-chip lens 62 and passed through the substrate 61 without being photoelectrically converted can be kept from being scattered by the wiring layer and entering neighboring pixels. Erroneous detection of light in the neighboring pixels can thus be prevented.
- In the metal film M2, which is the second layer of the multilayer wiring layer 811, a control line region 851, in which control lines 841 to 844 for transmitting predetermined signals in the horizontal direction are formed, is arranged at a position between the signal extraction units 65-1 and 65-2.
- the control lines 841 to 844 are lines that transmit, for example, the drive signal TRG, the drive signal RST, the selection signal SEL, or the drive signal FDG.
- By arranging the control line region 851 between the two signal extraction units 65, the influence on each of the signal extraction units 65-1 and 65-2 is equalized, and drive variations between the signal extraction units 65-1 and 65-2 can be reduced.
- a capacitance region 852 in which the FD 722B and the additional capacitance 727A are formed is arranged in a predetermined region different from the control line region 851 of the second metal film M2.
- the FD 722B or the additional capacitance 727A is formed by patterning the metal film M2 in a comb shape.
- Since the FD 722B or the additional capacitance 727A is formed in the metal film M2, its pattern can be freely arranged according to the wiring capacitance desired in the design, so the degree of design freedom can be improved.
- In the metal film M3, which is the third layer of the multilayer wiring layer 811, at least the vertical signal lines 29 for transmitting the pixel signals output from each pixel 51 to the column processing unit 23 are formed. Three or more vertical signal lines 29 may be arranged per pixel column in order to improve the pixel-signal readout speed. In addition to the vertical signal lines 29, shield wirings may also be arranged to reduce the coupling capacitance.
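As a rough, hypothetical illustration of why multiple vertical signal lines 29 per pixel column improve the readout speed (the function name and all numbers below are illustrative assumptions, not values from this disclosure): with k lines per column, k rows can be read out in parallel, so the frame readout time scales roughly as 1/k.

```python
# Hypothetical sketch: frame readout time when k vertical signal lines
# per pixel column allow k rows to be read out in parallel.
# All numbers are illustrative assumptions, not values from this disclosure.

def frame_readout_time_us(n_rows: int, row_time_us: float, lines_per_column: int) -> float:
    # Rows are read in groups of `lines_per_column`; ceil division counts the groups.
    groups = -(-n_rows // lines_per_column)
    return groups * row_time_us

t1 = frame_readout_time_us(480, 10.0, 1)  # one line per column   -> 4800.0 us
t3 = frame_readout_time_us(480, 10.0, 3)  # three lines per column -> 1600.0 us
```

In this simplified model, tripling the number of vertical signal lines per column cuts the frame readout time to one third, which is the motivation for arranging three or more lines per pixel column.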
- In the fourth metal film M4 and the fifth metal film M5 of the multilayer wiring layer 811, voltage supply lines 741-1 and 741-2 for applying the predetermined voltage MIX0 or MIX1 to the P+ semiconductor regions 73-1 and 73-2 of the signal extraction units 65 of each pixel 51 are formed.
- the metal films M4 and M5 shown in FIGS. 43A and B show an example in which the voltage supply line 741 of the first arrangement example shown in FIG. 33A is adopted.
- The voltage supply line 741-1 of the metal film M4 is connected to the voltage application wiring 814 (for example, FIG. 36) of the metal film M1 via the metal films M3 and M2, and that voltage application wiring 814 is connected to the P+ semiconductor region 73-1 of the signal extraction unit 65-1 of the pixel 51.
- Similarly, the voltage supply line 741-2 of the metal film M4 is connected to the voltage application wiring 814 (for example, FIG. 36) of the metal film M1 via the metal films M3 and M2, and that voltage application wiring 814 is connected to the P+ semiconductor region 73-2 of the signal extraction unit 65-2.
- the voltage supply lines 741-1 and 741-2 of the metal film M5 are connected to the tap drive unit 21 around the pixel array unit 20.
- the voltage supply line 741-1 of the metal film M4 and the voltage supply line 741-1 of the metal film M5 are connected by a via or the like (not shown) at a predetermined position where both metal films are present in the plane area.
- the predetermined voltage MIX0 or MIX1 from the tap drive unit 21 is transmitted to the voltage supply lines 741-1 and 741-2 of the metal film M5 and supplied to the voltage supply lines 741-1 and 741-2 of the metal film M4.
- Since the light-receiving element 1 is a back-illuminated CAPD sensor, the wiring width and layout of the drive wiring can be designed freely: for example, as shown in FIGS. 43A and 43B, the voltage supply lines 741-1 and 741-2 for applying the predetermined voltage MIX0 or MIX1 to the signal extraction units 65 of each pixel 51 can be wired in the vertical direction. Wiring suitable for high-speed driving and wiring designed with load reduction in mind are also possible.
- FIG. 44 is a plan view in which the first-layer metal film M1 shown in FIG. 42A and a polysilicon layer forming a gate electrode and the like of the pixel transistor Tr formed thereon are overlapped.
- FIG. 44A is a plan view in which the metal film M1 of FIG. 44C and the polysilicon layer of FIG. 44B are overlapped, and FIG. 44B is a plan view of only the polysilicon layer.
- FIG. 44C is a plan view of only the metal film M1.
- the plan view of the metal film M1 in FIG. 44C is the same as the plan view shown in FIG. 42A, but hatching is omitted.
- the pixel transistor wiring region 831 is formed between the reflection members 631 of each pixel.
- The pixel transistors Tr corresponding to the signal extraction units 65-1 and 65-2 are arranged, for example, as shown in FIG. 44B.
- From the side near the intermediate line (not shown) between the two signal extraction units 65-1 and 65-2, the gate electrodes of the reset transistors 723A and 723B, the transfer transistors 721A and 721B, the switching transistors 728A and 728B, the selection transistors 725A and 725B, and the amplification transistors 724A and 724B are formed in this order.
- The wirings of the metal film M1 shown in FIG. 44C that connect the pixel transistors Tr are also formed symmetrically in the vertical direction with reference to the intermediate line (not shown) between the two signal extraction units 65-1 and 65-2.
- By forming the pixel transistors and their wirings symmetrically in this way, drive variations between the signal extraction units 65-1 and 65-2 can be reduced.
- a large-area reflecting member 631 is arranged in a region around the signal extraction unit 65 in the pixel 51.
- The reflection members 631 can be arranged in a lattice-shaped pattern, for example, as shown in FIG. 45A.
- the pattern anisotropy can be eliminated, and the XY anisotropy of the reflection ability can be reduced.
- By arranging the reflection members 631 in a lattice-shaped pattern, reflection of incident light biased toward a particular partial area can be reduced and the light is more easily reflected isotropically, so the distance measurement accuracy improves.
- the reflection members 631 may be arranged in a stripe pattern, for example, as shown in FIG. 45B.
- When the reflection members 631 are arranged in a stripe pattern, the pattern of the reflection member 631 can also be used as wiring capacitance, so a configuration that maximizes the dynamic range can be realized.
- FIG. 45B shows an example of vertical stripes, but horizontal stripes may be used instead.
- the reflection member 631 may be disposed only in the pixel central region, more specifically, only between the two signal extraction units 65, as shown in FIG. 45C.
- In this case, since the reflection member 631 is formed in the pixel central region and not at the pixel ends, reflection of obliquely incident light toward adjacent pixels is suppressed while the sensitivity-improving effect of the reflection member 631 is still obtained in the pixel central region, so a configuration emphasizing crosstalk suppression can be realized.
- In A of FIG. 46, a part of the reflection member 631 is arranged in a comb-like pattern, so that a part of the metal film M1 is allocated to the wiring capacitance of the FD 722 or the additional capacitance 727.
- The comb shapes in the regions 861 to 864 surrounded by the solid-line circles constitute at least a part of the FD 722 or the additional capacitance 727.
- the FD 722 or the additional capacitor 727 may be appropriately allocated to the metal film M1 and the metal film M2.
- With this arrangement, the pattern of the metal film M1 can be allocated between the reflection member 631 and the capacitance of the FD 722 or the additional capacitance 727 in a well-balanced manner.
- B of FIG. 46 shows the pattern of the metal film M1 when the reflection member 631 is not arranged.
- The light receiving element 1 of FIG. 1 can adopt any one of the substrate configurations shown in FIGS. 47A to 47C.
- FIG. 47A shows an example in which the light receiving element 1 is composed of one semiconductor substrate 911 and a supporting substrate 912 thereunder.
- On the upper semiconductor substrate 911, a pixel array region 951 corresponding to the above-described pixel array unit 20, a control circuit 952 for controlling each pixel of the pixel array region 951, and a logic circuit 953 including a signal processing circuit for pixel signals are formed.
- the control circuit 952 includes the tap drive unit 21, the vertical drive unit 22, the horizontal drive unit 24, and the like described above.
- The logic circuit 953 includes the column processing unit 23, which performs AD conversion of the pixel signals, and the signal processing unit 31, which performs a distance calculation process of calculating the distance from the ratio of the pixel signals obtained by the two or more signal extraction units 65 in a pixel, a calibration process, and the like.
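The distance calculation from the ratio of the two taps' signals can be sketched as follows. This is a generic pulsed indirect-ToF example under assumed conditions (a rectangular light pulse, with tap A and tap B active in complementary windows equal to the pulse width); the function name and numbers are illustrative and not taken from this disclosure.

```python
# Illustrative sketch of a two-tap indirect-ToF distance calculation:
# the fraction of the returning light pulse collected by tap B (the delayed
# window) relative to the total charge gives the round-trip delay.
# Assumes a rectangular pulse and complementary tap windows equal to the
# pulse width; not the exact method of this disclosure.

C = 299_792_458.0  # speed of light [m/s]

def distance_from_tap_ratio(q_a: float, q_b: float, pulse_width_s: float) -> float:
    total = q_a + q_b
    if total == 0.0:
        raise ValueError("no detected signal")
    delay_s = pulse_width_s * (q_b / total)  # round-trip delay of the pulse
    return C * delay_s / 2.0                 # halve for the round trip

# Equal charge on both taps -> delay of half the pulse width (10 ns pulse).
d = distance_from_tap_ratio(1.0, 1.0, 10e-9)  # roughly 0.75 m
```

In practice the signal processing unit would also use calibration data and background-light correction, but the core idea of computing the distance from the tap ratio is as above.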
- Alternatively, the light receiving element 1 may have a structure in which a first semiconductor substrate 921, on which the pixel array region 951 and the control circuit 952 are formed, and a second semiconductor substrate 922, on which the logic circuit 953 is formed, are stacked. Note that the first semiconductor substrate 921 and the second semiconductor substrate 922 are electrically connected to each other by, for example, through vias or Cu-Cu metal bonding.
- The light receiving element 1 may also have a structure in which a first semiconductor substrate 931, on which only the pixel array region 951 is formed, and a second semiconductor substrate 932, provided with an area control circuit 954 that has, for each pixel or for each area of a plurality of pixels, a control circuit for controlling each pixel and a signal processing circuit for processing pixel signals, are stacked.
- the first semiconductor substrate 931 and the second semiconductor substrate 932 are electrically connected, for example, by through vias or Cu-Cu metal bonding.
- the optimal drive timing and gain can be set for each division control unit.
- optimized distance information can be obtained regardless of the distance and the reflectance.
- Since the distance information can be calculated by driving only a part of the pixel array region 951 instead of the entire surface, power consumption can be suppressed according to the operation mode.
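The area-by-area control described above can be pictured with a small data-structure sketch. The structure and field names below are hypothetical, intended only to illustrate holding an independent drive timing and gain per division control unit and driving only part of the array:

```python
# Hypothetical sketch of per-area control: each division control unit of the
# pixel array holds its own drive timing and gain, and only the areas needed
# for the current operation mode are driven (saving power).
from dataclasses import dataclass

@dataclass
class AreaControl:
    drive_timing_ns: float  # modulation/drive timing for this area (assumed unit)
    gain: float             # gain applied to this area's pixel signals
    active: bool = True     # inactive areas are not driven

def configure_areas(n_areas: int, active_indices: set) -> list:
    # Each area gets independent settings; defaults are used here for brevity.
    return [
        AreaControl(drive_timing_ns=10.0, gain=1.0, active=(i in active_indices))
        for i in range(n_areas)
    ]

areas = configure_areas(4, {0, 1})
driven = sum(a.active for a in areas)  # only part of the array is driven
```

Setting `drive_timing_ns` and `gain` independently per area corresponds to optimizing the drive for each division control unit, and leaving areas inactive corresponds to the partial driving that suppresses power consumption.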
- Pixel transistors Tr such as the reset transistor 723, the amplification transistor 724, and the selection transistor 725 are arranged at the boundaries between the pixels 51 arranged in the horizontal direction in the pixel array unit 20, as shown in the cross-sectional view of FIG. 37.
- In FIG. 48, the pixel transistor arrangement region at the pixel boundary portion shown in FIG. 37 is shown in more detail.
- The pixel transistors Tr such as the reset transistor 723, the amplification transistor 724, and the selection transistor 725 are formed in a P-well region 1011 formed on the surface side of the substrate 61.
- the P well region 1011 is formed so as to be separated from the oxide film 64 such as STI (Shallow Trench Isolation) formed around the N + semiconductor region 71 of the signal extraction unit 65 by a predetermined distance in the plane direction.
- an oxide film 1012 also serving as a gate insulating film of the pixel transistor Tr is formed on the back surface side interface of the substrate 61.
- In contrast, the P-well region 1021 can be formed to extend in the plane direction until it comes into contact with the adjacent oxide film 64, so that the gap region 1013 does not exist at the back surface side interface of the substrate 61. This prevents electrons from accumulating in the gap region 1013 shown in FIG. 48, so noise can be suppressed.
- the P well region 1021 is formed with a higher impurity concentration than the P type semiconductor region 1022 of the substrate 61 which is a photoelectric conversion region.
- Alternatively, the oxide film 1032 formed around the N+ semiconductor region 71 of the signal extraction portion 65 may be extended in the plane direction to the P-well region 1031 so that the gap region 1013 does not exist at the back surface side interface of the substrate 61.
- the oxide film 1033 also isolates the pixel transistors Tr such as the reset transistor 723, the amplification transistor 724, and the selection transistor 725 in the P well region 1031.
- the oxide film 1033 is formed of, for example, STI, and can be formed in the same step as the oxide film 1032.
- By bringing the insulating film (the oxide film 64 or the oxide film 1032) and the P-well region (the P-well region 1021 or the P-well region 1031) at the pixel boundary into contact with each other, the gap region 1013 can be eliminated, so accumulation of electrons can be prevented and noise can be suppressed.
- the configuration of A or B in FIG. 49 can be applied to any of the embodiments described in this specification.
- the accumulation of electrons generated in the gap region 1013 can be suppressed by adopting a configuration as shown in FIG. 50 or 51.
- FIG. 50 is a plan view in which two-tap pixels 51, each having two signal extraction portions 65-1 and 65-2 in one pixel, are two-dimensionally arranged, showing the oxide film 64, the P-well region 1011, and the gap region 1013.
- As shown in FIG. 50, the P-well region 1011 is formed so as to be connected across a plurality of pixels arranged in the column direction.
- An N-type diffusion layer 1061 is provided as a drain for discharging charges, so that electrons can be discharged.
- the N-type diffusion layer 1061 is formed on the back surface side interface of the substrate 61, and GND (0 V) or a positive voltage is applied to the N-type diffusion layer 1061. Electrons generated in the gap region 1013 of each pixel 51 move in the vertical direction (column direction) to the N-type diffusion layer 1061 in the invalid pixel region 1052, and are collected by the N-type diffusion layer 1061 shared by the pixel columns. Therefore, noise can be suppressed.
- Alternatively, as shown in FIG. 51, the N-type diffusion layer 1061 may be provided in the gap region 1013 of each pixel 51.
- In this case, electrons generated in the gap region 1013 of each pixel 51 are discharged from the N-type diffusion layer 1061 of that pixel, so noise can be suppressed.
- The configurations of FIGS. 50 and 51 can be applied to any of the embodiments described in this specification.
- In the light-shielded pixel 51X, the signal extraction units 65 and the like are formed in the same manner as in the pixels 51 of the effective pixel area, but the inter-pixel light-shielding film 63 is formed over the entire pixel area, so light does not enter. In many cases, no drive signal is applied to the light-shielded pixel 51X.
- Into the light-shielded pixel area adjacent to the effective pixel area, however, obliquely incident light from the lens, diffracted light from the inter-pixel light-shielding film 63, and reflected light from the multilayer wiring layer 811 enter and generate photoelectrons. Since the generated photoelectrons have no discharge destination, they accumulate in the light-shielded pixel area, diffuse into the effective pixel area along the density gradient, mix with the signal charges, and become noise. This noise around the effective pixel area appears as so-called frame unevenness.
- the light receiving element 1 can provide any one of the charge discharge areas 1101 of A to D in FIG. 53 around the effective pixel area 1051.
- FIGS. 53A to 53D are plan views illustrating configuration examples of the charge discharging region 1101 provided on the outer periphery of the effective pixel region 1051.
- A charge discharging region 1101 is provided on the outer periphery of the effective pixel region 1051 arranged at the center of the substrate 61, and an OPB region 1102 is further provided outside the charge discharging region 1101.
- the charge discharging region 1101 is a region with hatching between the inner broken rectangle and the outer broken rectangle.
- The OPB region 1102 is a region in which the inter-pixel light-shielding film 63 is formed over the entire surface, and in which OPB pixels for detecting the black-level signal are arranged and driven in the same manner as the pixels 51 in the effective pixel region.
- gray areas indicate areas shielded from light by forming the inter-pixel light-shielding film 63.
- the charge discharging region 1101 in FIG. 53A is composed of an opening pixel region 1121 in which opening pixels are arranged and a light-shielding pixel region 1122 in which light-shielding pixels 51X are arranged.
- the aperture pixels in the aperture pixel area 1121 have the same pixel structure as the pixels 51 in the effective pixel area 1051, and are pixels that perform predetermined driving.
- The light-shielded pixel 51X in the light-shielded pixel area 1122 has the same pixel structure as the pixel 51 in the effective pixel area 1051 except that the inter-pixel light-shielding film 63 is formed over the entire pixel area, and is a pixel that performs predetermined driving.
- the aperture pixel region 1121 has one or more pixel columns or rows in each column or each row on the four sides on the outer periphery of the effective pixel region 1051.
- the light-shielded pixel region 1122 also has one or more pixel columns or rows in each column or each row on the four sides on the outer periphery of the aperture pixel region 1121.
- the charge discharging region 1101 in FIG. 53B is composed of a light-shielded pixel region 1122 in which the light-shielded pixels 51X are arranged, and an N-type region 1123 in which the N-type diffusion layer is arranged.
- FIG. 54 is a cross-sectional view in the case where the charge discharging region 1101 includes the light-shielding pixel region 1122 and the N-type region 1123.
- The N-type region 1123 is a region whose entire surface is shielded from light by the inter-pixel light-shielding film 63, and in which a high-concentration N-type diffusion layer 1131 is formed in the P-type semiconductor region 1022 of the substrate 61 instead of the signal extraction portions 65.
- To the N-type diffusion layer 1131, 0 V or a positive voltage is constantly or intermittently applied from the metal film M1 of the multilayer wiring layer 811.
- The N-type diffusion layer 1131 may be formed, for example, over the entire P-type semiconductor region 1022 of the N-type region 1123 in a continuous, substantially annular shape in plan view, or a plurality of N-type diffusion layers 1131 may be arranged in a substantially annular manner in plan view within the P-type semiconductor region 1022 of the N-type region 1123.
- the light-shielded pixel region 1122 has one or more pixel columns or rows in each column or each row on the four sides on the outer periphery of the effective pixel region 1051.
- the N-type region 1123 also has a predetermined column width or row width in each column or each row on the four sides on the outer periphery of the light-shielded pixel region 1122.
- the charge discharging region 1101 of C in FIG. 53 is constituted by a light-shielded pixel region 1122 in which light-shielded pixels are arranged.
- the light-shielded pixel region 1122 has one or more pixel columns or rows in each column or each row on the four sides on the outer periphery of the effective pixel region 1051.
- the charge discharging region 1101 in FIG. 53D includes an opening pixel region 1121 in which opening pixels are arranged and an N-type region 1123 in which an N-type diffusion layer is arranged.
- the predetermined driving performed by the opening pixel in the opening pixel region 1121 and the light-shielding pixel 51X in the light-shielding pixel region 1122 includes an operation in which a positive voltage is constantly or intermittently applied to the N-type semiconductor region of the pixel.
- the configuration example of the charge discharging region 1101 shown in FIGS. 53A to 53D is an example, and is not limited to these examples.
- It is sufficient for the charge discharging region 1101 to include at least one of an aperture pixel that performs predetermined driving, a light-shielded pixel that performs predetermined driving, and an N-type region having an N-type diffusion layer to which 0 V or a positive voltage is constantly or intermittently applied. Therefore, for example, aperture pixels, light-shielded pixels, and N-type regions may be mixed within one pixel column or pixel row, or different types among the aperture pixels, light-shielded pixels, and N-type regions may be arranged in different columns or rows.
- FIG. 55B is a plan view showing the arrangement of the pixel transistor wiring region 831 shown in FIG. 42A.
- the area of the signal extraction unit 65 can be reduced by changing the layout, whereas the area of the pixel transistor wiring region 831 is determined by the occupied area of one pixel transistor, the number of pixel transistors, and the wiring area.
- the area of the pixel transistor wiring region 831 is a major limiting factor. In order to increase the resolution while maintaining the optical size of the sensor, the pixel size must be reduced, but that reduction is restricted by the area of the pixel transistor wiring region 831.
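To make the resolution/optical-size trade-off concrete, a toy calculation (the sensor and pixel dimensions below are hypothetical, not taken from this document) shows how many pixels fit in a fixed optical size at a given pixel pitch:

```python
# Hypothetical fixed optical size: a 4.8 mm x 3.6 mm active area.
WIDTH_MM, HEIGHT_MM = 4.8, 3.6

def resolution(pixel_pitch_um: float) -> tuple[int, int]:
    """Pixel counts that fit in the fixed optical size at a given pitch."""
    pitch_mm = pixel_pitch_um / 1000.0
    return round(WIDTH_MM / pitch_mm), round(HEIGHT_MM / pitch_mm)

print(resolution(10.0))  # (480, 360)
print(resolution(5.0))   # (960, 720): halving the pitch quadruples the pixel count
```

This is why the pixel transistor wiring region 831, which sets a floor on the pixel pitch, directly limits the achievable resolution.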
- <Configuration example of pixel> Therefore, as shown in FIG. 56, the light receiving element 1 can adopt a laminated structure in which two substrates are stacked, with all the pixel transistors arranged on a substrate different from the substrate having the photoelectric conversion region.
- FIG. 56 is a sectional view of a pixel according to the eighteenth embodiment.
- FIG. 56 shows a cross-sectional view of a plurality of pixels corresponding to the line B-B ′ in FIG. 11, as in FIG. 36 and the like described above.
- in FIG. 56, portions corresponding to the cross-sectional views of a plurality of pixels of the fourteenth embodiment shown in FIG. 36 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- the light receiving element 1 is configured by laminating two substrates, a substrate 1201 and a substrate 1211.
- the substrate 1201 corresponds to the substrate 61 in the fourteenth embodiment shown in FIG. 36, and is formed of, for example, a silicon substrate having a P-type semiconductor region 1204 as a photoelectric conversion region.
- the substrate 1211 is also formed of a silicon substrate or the like.
- the substrate 1201 having the photoelectric conversion region may be formed not only of a silicon substrate but also of a compound semiconductor such as GaAs, InP, or GaSb, a narrow band gap semiconductor such as Ge, a glass substrate coated with an organic photoelectric conversion film, or a plastic substrate.
- when the substrate 1201 is made of a compound semiconductor, an improvement in quantum efficiency and sensitivity due to the direct transition band structure, and a reduction in sensor height due to thinning of the substrate, can be expected.
- in addition, the high electron mobility improves the electron collection efficiency, while the low hole mobility allows the power consumption to be reduced.
- when the substrate 1201 is made of a narrow band gap semiconductor, an improvement in quantum efficiency and sensitivity in the near infrared region due to the narrow band gap can be expected.
- the substrate 1201 and the substrate 1211 are bonded together such that the wiring layer 1202 of the substrate 1201 and the wiring layer 1212 of the substrate 1211 face each other.
- the metal wiring 1203 of the wiring layer 1202 on the substrate 1201 side and the metal wiring 1213 of the wiring layer 1212 on the substrate 1211 side are electrically connected by, for example, Cu-Cu bonding.
- the electrical connection between the wiring layers is not limited to a Cu-Cu junction; for example, a similar-metal junction such as an Au-Au junction or an Al-Al junction, or a dissimilar-metal junction such as a Cu-Au junction, a Cu-Al junction, or an Au-Al junction may be used.
- the reflection member 631 of the fourteenth embodiment or the light shielding member 631' of the fifteenth embodiment can further be provided in one of the wiring layer 1202 of the substrate 1201 and the wiring layer 1212 of the substrate 1211.
- the difference between the substrate 1201 having the photoelectric conversion region and the substrate 61 of the above-described first to seventeenth embodiments is that none of the pixel transistors Tr, such as the reset transistor 723, the amplification transistor 724, and the selection transistor 725, are formed on the substrate 1201.
- the pixel transistors Tr such as the reset transistor 723, the amplification transistor 724, and the selection transistor 725 are formed on the lower substrate 1211 side in the figure. Although FIG. 56 shows the reset transistor 723, the amplification transistor 724, and the selection transistor 725, the transfer transistor 721 is also formed in a region (not shown) of the substrate 1211.
- An insulating film (oxide film) 1214 which also serves as a gate insulating film of the pixel transistor is formed between the substrate 1211 and the wiring layer 1212.
- as shown in FIG. 58, the light receiving element 1 according to the eighteenth embodiment is configured by stacking the substrate 1201 and the substrate 1211.
- on the substrate 1201, a portion of the pixel array region 951 shown in FIG. 47C, excluding the transfer transistor 721, the FD 722, the reset transistor 723, the amplification transistor 724, and the selection transistor 725, is formed.
- in the area control circuit 1232 of the substrate 1211, in addition to the area control circuit 954 shown in FIG. 47C, the transfer transistor 721, FD 722, reset transistor 723, amplification transistor 724, and selection transistor 725 of each pixel of the pixel array unit 20 are formed.
- the tap drive unit 21, the vertical drive unit 22, the column processing unit 23, the horizontal drive unit 24, the system control unit 25, the signal processing unit 31, and the data storage unit 32 illustrated in FIG. 1 are also formed on the substrate 1211.
- FIG. 59 shows a MIX junction that is an electrical junction between the substrate 1201 and the substrate 1211 that exchanges the voltage MIX, and a DET junction that is an electrical junction between the substrate 1201 and the substrate 1211 that exchanges the signal charge DET.
- in FIG. 59, some of the reference numerals of the MIX joining section 1251 and the DET joining section 1252 are omitted to prevent the figure from becoming complicated.
- the MIX junction 1251 for supplying the voltage MIX and the DET junction 1252 for acquiring the signal charge DET are provided for each pixel 51, for example.
- the voltage MIX and the signal charge DET are transferred between the substrate 1201 and the substrate 1211 in pixel units.
- the DET junction 1252 for acquiring the signal charge DET is provided per pixel in the pixel region, but the MIX junction 1251 for supplying the voltage MIX may instead be provided in a peripheral portion 1261 outside the pixel array unit 20. In the peripheral portion 1261, the voltage MIX supplied from the substrate 1211 is supplied to the P+ semiconductor region 73, the voltage application unit of each pixel 51, via a voltage supply line 1253 wired in the vertical direction in the substrate 1201. By sharing the MIX junction 1251 among a plurality of pixels in this way, the number of MIX junctions 1251 on the entire substrate can be reduced, which makes it easier to miniaturize the pixel size and the chip size.
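As a rough illustration of that saving, the junction counts of a per-pixel layout and a column-shared layout can be compared; the array dimensions below are hypothetical, not taken from this document:

```python
# Hypothetical pixel-array dimensions, for illustration only.
rows, cols = 480, 640
taps_per_pixel = 2  # two voltage application units (MIX0, MIX1) per pixel 51

# Per-pixel layout: one MIX junction 1251 per tap in every pixel.
per_pixel_junctions = rows * cols * taps_per_pixel

# Column-shared layout: one MIX junction per tap per pixel column, placed in
# the peripheral portion 1261 and routed by a vertical voltage supply line 1253.
column_shared_junctions = cols * taps_per_pixel

print(per_pixel_junctions)      # 614400
print(column_shared_junctions)  # 1280
```

In this sketch the column-shared layout needs a factor of `rows` fewer MIX junctions, which is why sharing eases miniaturization of the pixel and chip size.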
- although FIG. 60 shows an example in which the voltage supply lines 1253 are wired in the vertical direction and shared by the pixel columns, the voltage supply lines 1253 may instead be wired in the horizontal direction and shared by the pixel rows.
- the light receiving element 1 is configured as a laminated structure of the substrate 1201 and the substrate 1211, and all the pixel transistors that perform the read operation of the signal charge DET of the N+ semiconductor region 71 serving as the detection unit, that is, the transfer transistor 721, the reset transistor 723, the amplification transistor 724, and the selection transistor 725, are arranged on the substrate 1211, which is different from the substrate 1201 having the P-type semiconductor region 1204 as the photoelectric conversion region.
- the problem described with reference to FIG. 55 can be solved.
- the area of the pixel 51 can be reduced irrespective of the area of the pixel transistor wiring region 831, and high resolution can be achieved without changing the optical size. Further, an increase in current from the signal extraction unit 65 to the pixel transistor wiring region 831 is avoided, so that current consumption can be reduced.
- in order to obtain the charge distribution effect also for charges photoelectrically converted at deep positions, it is necessary either to extend the P+ semiconductor region 73 or the P- semiconductor region 74 to a deep position in the semiconductor layer, or to raise the applied positive voltage from the voltage VA1 to a higher voltage VA2.
- in that case, the current Imix flows more easily because of the reduced resistance between the voltage application units, so an increase in current consumption becomes a problem.
- likewise, when the distance between the voltage application units is shortened, the resistance is lowered and the current consumption increases.
- FIG. 62A is a plan view of a pixel according to the first configuration example of the nineteenth embodiment, and FIG. 62B is a cross-sectional view of the pixel according to the first configuration example of the nineteenth embodiment.
- FIG. 62A is a plan view taken along line B-B 'in FIG. 62B, and FIG. 62B is a cross-sectional view taken along line A-A' in FIG. 62A.
- FIG. 62 shows only the portion of the pixel 51 formed in the substrate 61; illustration of, for example, the on-chip lens 62 formed on the light incident surface side and the multilayer wiring layer 811 formed on the side opposite to the light incident surface is omitted. The portions not shown can be configured in the same manner as in the other embodiments described above; for example, the reflective member 631 or the light blocking member 631' can be provided in the multilayer wiring layer 811 on the side opposite to the light incident surface.
- the electrode portion 1311-1 functions as a voltage application unit that applies a predetermined voltage MIX0 to a predetermined position of the P-type semiconductor region 1301, which is the photoelectric conversion region of the substrate 61, and the electrode portion 1311-2 functions as a voltage application unit that applies a predetermined voltage MIX1.
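For context on how the two taps are used: driving MIX0 and MIX1 alternately demodulates the returning light, and the distance is then typically recovered with the standard indirect-ToF phase calculation. A minimal sketch follows; the four-phase sampling scheme and the 20 MHz modulation frequency are illustrative assumptions, not taken from this document:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def itof_depth(q0: float, q90: float, q180: float, q270: float,
               f_mod: float) -> float:
    """Distance from four phase samples of the tap charges.

    q0..q270 are charges accumulated with the tap drive (MIX0/MIX1)
    shifted by 0/90/180/270 degrees relative to the emitted light,
    a common indirect-ToF scheme assumed here for illustration.
    """
    phase = math.atan2(q90 - q270, q0 - q180)
    if phase < 0:
        phase += 2.0 * math.pi
    return C * phase / (4.0 * math.pi * f_mod)

# Quadrature example at 20 MHz modulation: phase pi/2 -> about 1.874 m
print(round(itof_depth(100.0, 200.0, 100.0, 0.0, 20e6), 3))
```

The better the two taps separate the charge, the larger the phase-dependent differences q0 - q180 and q90 - q270, and the more robust this calculation is against noise.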
- the electrode portion 1311-1 includes a buried portion 1311A-1 embedded in the P-type semiconductor region 1301 of the substrate 61 and a protruding portion 1311B-1 protruding above the first surface 1321 of the substrate 61.
- similarly, the electrode portion 1311-2 includes a buried portion 1311A-2 embedded in the P-type semiconductor region 1301 of the substrate 61 and a protruding portion 1311B-2 protruding above the first surface 1321 of the substrate 61.
- the electrode portions 1311-1 and 1311-2 are formed of, for example, a metal material such as tungsten (W), aluminum (Al), or copper (Cu), or a conductive material such as silicon or polysilicon.
- the electrode portion 1311-1 (buried portion 1311A-1) and the electrode portion 1311-2 (buried portion 1311A-2), each having a circular planar shape, are arranged point-symmetrically with the center of the pixel as the point of symmetry.
- an N+ semiconductor region 1312-1 that functions as a charge detection unit is formed on the outer periphery of the electrode portion 1311-1, and an insulating film 1313-1 and a hole concentration enhancement layer 1314-1 are interposed between the electrode portion 1311-1 and the N+ semiconductor region 1312-1.
- similarly, an N+ semiconductor region 1312-2 functioning as a charge detection unit is formed on the outer periphery of the electrode portion 1311-2, and an insulating film 1313-2 and a hole concentration enhancement layer 1314-2 are interposed between the electrode portion 1311-2 and the N+ semiconductor region 1312-2.
- the electrode portion 1311-1 and the N+ semiconductor region 1312-1 constitute the above-described signal extraction portion 65-1, and the electrode portion 1311-2 and the N+ semiconductor region 1312-2 constitute the above-described signal extraction portion 65-2.
- as shown in FIG. 62B, within the substrate 61 the electrode portion 1311-1 is covered with the insulating film 1313-1, and the insulating film 1313-1 is in turn covered with the hole concentration enhancement layer 1314-1. The same applies to the relationship among the electrode portion 1311-2, the insulating film 1313-2, and the hole concentration enhancement layer 1314-2.
- the insulating films 1313-1 and 1313-2 are made of, for example, an oxide film (SiO2), and are formed in the same step as the insulating film 1322 formed on the first surface 1321 of the substrate 61. Note that an insulating film 1332 is also formed on the second surface 1331 of the substrate 61, opposite to the first surface 1321.
- the hole concentration enhancement layers 1314-1 and 1314-2 are formed of a P-type semiconductor region, and can be formed by, for example, an ion implantation method, a solid-phase diffusion method, or a plasma doping method.
- hereinafter, when the electrode portions 1311-1 and 1311-2 do not need to be particularly distinguished, they are also simply referred to as the electrode portion 1311, and when the N+ semiconductor regions 1312-1 and 1312-2 do not need to be particularly distinguished, they are simply referred to as the N+ semiconductor region 1312.
- similarly, when the hole concentration enhancement layers 1314-1 and 1314-2 do not need to be particularly distinguished, they are simply referred to as the hole concentration enhancement layer 1314, and when the insulating films 1313-1 and 1313-2 do not need to be particularly distinguished, they are simply referred to as the insulating film 1313.
- the electrode portion 1311, the insulating film 1313, and the hole concentration enhancement layer 1314 can be formed in the following procedure. First, a trench is formed to a predetermined depth by etching the P-type semiconductor region 1301 of the substrate 61 from the first surface 1321 side. Next, a hole concentration enhancement layer 1314 is formed on the inner periphery of the formed trench by an ion implantation method, a solid phase diffusion method, a plasma doping method, or the like, and then an insulating film 1313 is formed. Next, a buried portion 1311A is formed by burying a conductive material inside the insulating film 1313.
- the depth of the electrode portion 1311 is set at least to a position deeper than the N+ semiconductor region 1312 serving as the charge detection unit, and preferably to a position deeper than half the thickness of the substrate 61.
- by forming a trench in the depth direction of the substrate 61 and using the electrode portion 1311, in which a conductive material is buried, as the voltage application unit, the charge distribution effect is obtained for charges photoelectrically converted over a wide area in the depth direction of the substrate 61, so the charge separation efficiency Cmod for long-wavelength light can be increased.
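The charge separation efficiency Cmod mentioned here is conventionally computed from the charges collected at the two taps; the document gives no explicit formula, so the common definition |A - B| / (A + B) is assumed in this sketch:

```python
def cmod(q_tap_a: float, q_tap_b: float) -> float:
    """Demodulation contrast from the charges at the two taps.

    Uses the conventional indirect-ToF definition |A - B| / (A + B),
    assumed here; the original document does not state a formula.
    """
    total = q_tap_a + q_tap_b
    if total <= 0.0:
        raise ValueError("no charge collected")
    return abs(q_tap_a - q_tap_b) / total

# A deep buried electrode steers more of the charge generated deep in
# the substrate (long-wavelength light) to the active tap:
print(cmod(900.0, 100.0))  # 0.8 -> good separation
print(cmod(600.0, 400.0))  # 0.2 -> poor separation
```

A higher Cmod means less photocharge leaks to the inactive tap, which directly improves the depth precision of the ranging module.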
- since the outer periphery of the electrode portion 1311 is covered with the insulating film 1313, the current flowing between the voltage application units is suppressed, so current consumption can be reduced.
- a high voltage can be applied to the voltage application unit.
- the current consumption can be suppressed, so that the resolution can be increased by miniaturizing the pixel size and increasing the number of pixels.
- the protruding portion 1311B of the electrode portion 1311 may be omitted, but providing the protruding portion 1311B strengthens the electric field in the direction perpendicular to the substrate 61 and makes it easier to collect charges.
- the hole concentration enhancement layer 1314 may also be omitted; providing it, however, makes it possible to suppress electrons generated by damage during the trench etching or by contaminants.
- either the first surface 1321 or the second surface 1331 of the substrate 61 may serve as the light incident surface, so both the back-illuminated type and the front-illuminated type are possible, although the back-illuminated type is more preferable.
- FIG. 63A is a plan view of a pixel according to the second configuration example of the nineteenth embodiment, and FIG. 63B is a cross-sectional view of the pixel according to the second configuration example of the nineteenth embodiment.
- FIG. 63A is a plan view taken along line B-B 'in FIG. 63B, and FIG. 63B is a cross-sectional view taken along line A-A' in FIG. 63A.
- compared with the first configuration example of FIG. 62, the second configuration example of FIG. 63 differs in that the buried portion 1311A of the electrode portion 1311 penetrates the substrate 61, which is the semiconductor layer, and is common in the other respects.
- the buried portion 1311A of the electrode portion 1311 is formed from the first surface 1321 to the second surface 1331 of the substrate 61, and the insulating film 1313 and the hole concentration enhancement layer 1314 are likewise formed on the outer periphery of the electrode portion 1311.
- the second surface 1331, on which the N+ semiconductor region 1312 serving as the charge detection unit is not formed, is entirely covered with the insulating film 1332.
- the buried portion 1311A of the electrode portion 1311 as a voltage applying portion may be configured to penetrate the substrate 61. Also in this case, an effect of distributing charges can be obtained for charges photoelectrically converted in a wide area in the depth direction of the substrate 61, so that the charge separation efficiency Cmod for long-wavelength light can be increased.
- since the outer periphery of the electrode portion 1311 is covered with the insulating film 1313, the current flowing between the voltage application units is suppressed, so current consumption can be reduced.
- a high voltage can be applied to the voltage application unit.
- the current consumption can be suppressed, so that the resolution can be increased by miniaturizing the pixel size and increasing the number of pixels.
- either the first surface 1321 or the second surface 1331 of the substrate 61 may serve as the light incident surface, so both the back-illuminated type and the front-illuminated type are possible, although the back-illuminated type is more preferable.
- in the above examples, the planar shapes of the electrode portion 1311 serving as the voltage application unit and the N+ semiconductor region 1312 serving as the charge detection unit are circular. However, the planar shapes of the electrode portion 1311 and the N+ semiconductor region 1312 are not limited to a circle, and may be the octagon shown in FIG. 11, the rectangle shown in FIG. 12, or a square.
- the number of signal extraction units 65 (tap) arranged in one pixel is not limited to two, but may be four as shown in FIG.
- FIGS. 64A to 64C are plan views corresponding to the line B-B' of FIG. 62B, showing examples in which the number of signal extraction units 65 is two and the planar shapes of the electrode portion 1311 and the N+ semiconductor region 1312 constituting the signal extraction unit 65 are shapes other than a circle.
- A of FIG. 64 is an example in which the planar shapes of the electrode portion 1311 and the N+ semiconductor region 1312 are vertically long rectangles.
- the electrode units 1311-1 and 1311-2 are arranged point-symmetrically with the center point of the pixel as the symmetric point. Further, the electrode portion 1311-1 and the electrode portion 1311-2 are arranged to face each other.
- the shape and positional relationship of the insulating film 1313, the hole concentration enhancement layer 1314, and the N + semiconductor region 1312 formed on the outer periphery of the electrode portion 1311 are the same as those of the electrode portion 1311.
- B of FIG. 64 is an example in which the planar shapes of the electrode portion 1311 and the N+ semiconductor region 1312 are L-shaped.
- C of FIG. 64 is an example in which the planar shapes of the electrode portion 1311 and the N+ semiconductor region 1312 are comb-shaped.
- the electrode units 1311-1 and 1311-2 are arranged point-symmetrically with the center point of the pixel as the symmetric point. Further, the electrode portion 1311-1 and the electrode portion 1311-2 are arranged to face each other. The same applies to the shape and positional relationship of the insulating film 1313, the hole concentration enhancement layer 1314, and the N + semiconductor region 1312 formed on the outer periphery of the electrode portion 1311.
- FIGS. 65A to 65C are plan views corresponding to the line B-B' of FIG. 62B, showing examples in which the number of signal extraction units 65 is four and the planar shapes of the electrode portion 1311 and the N+ semiconductor region 1312 constituting the signal extraction unit 65 are shapes other than a circle.
- A of FIG. 65 is an example in which the planar shapes of the electrode portion 1311 and the N+ semiconductor region 1312 are vertically long rectangles.
- the vertically long electrode portions 1311-1 to 1311-4 are arranged at predetermined intervals in the horizontal direction, point-symmetrically with the center point of the pixel as the point of symmetry. The electrode portions 1311-1 and 1311-2 are arranged to face each other, as are the electrode portions 1311-3 and 1311-4.
- the electrode unit 1311-1 and the electrode unit 1311-3 are electrically connected by a wiring 1351 and constitute, for example, a voltage application unit of a signal extraction unit 65-1 (first tap TA) to which the voltage MIX0 is applied.
- the N+ semiconductor region 1312-1 and the N+ semiconductor region 1312-3 are electrically connected by a wiring 1352 and constitute a charge detection unit of the signal extraction unit 65-1 (first tap TA) for detecting the signal charge DET1.
- the electrode portion 1311-2 and the electrode portion 1311-4 are electrically connected by a wiring 1353, and constitute, for example, a voltage application unit of a signal extraction unit 65-2 (second tap TB) to which the voltage MIX1 is applied.
- the N+ semiconductor region 1312-2 and the N+ semiconductor region 1312-4 are electrically connected by a wiring 1354 and constitute a charge detection unit of the signal extraction unit 65-2 (second tap TB) for detecting the signal charge DET2.
- in A of FIG. 65, the set of the voltage application unit and the charge detection unit of the signal extraction unit 65-1 having a rectangular planar shape and the set of the voltage application unit and the charge detection unit of the signal extraction unit 65-2 having a rectangular planar shape are arranged alternately in the horizontal direction.
- B of FIG. 65 is an example in which the planar shapes of the electrode portion 1311 and the N+ semiconductor region 1312 are square.
- in B of FIG. 65, the set of the voltage application unit and the charge detection unit of the signal extraction unit 65-1, having a square planar shape, is arranged to face each other along one diagonal direction of the pixel 51, and the set of the voltage application unit and the charge detection unit of the square signal extraction unit 65-2 is arranged to face each other along the other diagonal direction.
- C of FIG. 65 is an example in which the planar shapes of the electrode portion 1311 and the N+ semiconductor region 1312 are triangular.
- in C of FIG. 65, the set of the voltage application unit and the charge detection unit of the signal extraction unit 65-1, having a triangular planar shape, is arranged to face each other in a first direction (horizontal direction) of the pixel 51, and the set of the voltage application unit and the charge detection unit of the triangular signal extraction unit 65-2 is arranged to face each other in a second direction (vertical direction) orthogonal to the first direction.
- the four electrode portions 1311-1 to 1311-4 are arranged point-symmetrically with respect to the center point of the pixel; the electrode portions 1311-1 and 1311-3 are electrically connected by the wiring 1351, the N+ semiconductor regions 1312-1 and 1312-3 are electrically connected by the wiring 1352, the electrode portions 1311-2 and 1311-4 are electrically connected by the wiring 1353, and the N+ semiconductor regions 1312-2 and 1312-4 are electrically connected by the wiring 1354.
- the shape and positional relationship of the insulating film 1313 and the hole concentration enhancement layer 1314 formed on the outer periphery of the electrode portion 1311 are the same as those of the electrode portion 1311.
- FIG. 66A is a plan view of a pixel according to the third configuration example of the nineteenth embodiment, and FIG. 66B is a cross-sectional view of the pixel according to the third configuration example of the nineteenth embodiment.
- FIG. 66A is a plan view taken along line B-B 'in FIG. 66B, and FIG. 66B is a cross-sectional view taken along line A-A' in FIG. 66A.
- in the first configuration example described above, the electrode portion 1311 serving as the voltage application unit and the N+ semiconductor region 1312 serving as the charge detection unit are arranged on the same surface side of the substrate 61, that is, near the first surface 1321.
- in the third configuration example of FIG. 66, on the other hand, the electrode portion 1311 serving as the voltage application unit is arranged on the surface side opposite to the first surface 1321 of the substrate 61 on which the N+ semiconductor region 1312 serving as the charge detection unit is formed, that is, on the second surface 1331 side.
- the protruding part 1311B of the electrode part 1311 is formed above the second surface 1331 of the substrate 61.
- the electrode portion 1311 is arranged so that its center position overlaps with the N+ semiconductor region 1312 in plan view.
- the example of FIG. 66 is one in which the circular planar regions of the electrode portion 1311 and the N+ semiconductor region 1312 coincide completely, but they need not coincide completely; one region may be larger than the other, and the center positions need not match exactly as long as they can be regarded as substantially matching.
- the third configuration example is the same as the above-described first configuration example, except for the positional relationship between the electrode portion 1311 and the N + semiconductor region 1312.
- in the third configuration example, the buried portion 1311A of the electrode portion 1311 serving as the voltage application unit is formed to a deep position, near the N+ semiconductor region 1312 serving as the charge detection unit on the first surface 1321 opposite to the second surface 1331 on which the electrode portion 1311 is formed. Also in this case, the charge distribution effect is obtained for charges photoelectrically converted over a wide area in the depth direction of the substrate 61, so the charge separation efficiency Cmod for long-wavelength light can be increased.
- since the outer periphery of the electrode portion 1311 is covered with the insulating film 1313, the current flowing between the voltage application units is suppressed, so current consumption can be reduced.
- a high voltage can be applied to the voltage application unit.
- the current consumption can be suppressed, so that the resolution can be increased by miniaturizing the pixel size and increasing the number of pixels.
- either the first surface 1321 or the second surface 1331 of the substrate 61 may serve as the light incident surface, so both the back-illuminated type and the front-illuminated type are possible, although the back-illuminated type is more preferable.
- when the second surface 1331 is the surface on which the on-chip lens 62 is formed, the voltage supply line 1253 that supplies the voltage MIX can be wired in the vertical direction of the pixel array section 20 and connected, in the peripheral portion 1261 outside the pixel array section 20, to the wiring on the front surface side by a through electrode penetrating the substrate 61.
- in the above examples, the planar shapes of the electrode portion 1311 serving as the voltage application unit and the N+ semiconductor region 1312 serving as the charge detection unit are circular. However, the planar shapes of the electrode portion 1311 and the N+ semiconductor region 1312 are not limited to a circle, and may be the octagon shown in FIG. 11, the rectangle shown in FIG. 12, or a square.
- the number of signal extraction units 65 (tap) arranged in one pixel is not limited to two, but may be four as shown in FIG.
- FIGS. 67A to 67C are plan views corresponding to the line B-B' of FIG. 66B, showing examples in which the number of signal extraction units 65 is two and the planar shapes of the electrode portion 1311 and the N+ semiconductor region 1312 constituting the signal extraction unit 65 are shapes other than a circle.
- A of FIG. 67 is an example in which the planar shapes of the electrode portion 1311 and the N+ semiconductor region 1312 are vertically long rectangles.
- the N + semiconductor region 1312-1 and the N + semiconductor region 1312-2 which are charge detection units, are arranged point-symmetrically with the center point of the pixel as the symmetric point. Further, the N + semiconductor region 1312-1 and the N + semiconductor region 1312-2 are arranged to face each other.
- the shapes and positional relationships of the electrode portion 1311 arranged on the second surface 1331 opposite to the surface on which the N+ semiconductor region 1312 is formed, and of the insulating film 1313 and the hole concentration enhancement layer 1314 formed on its outer periphery, are similar to those of the N+ semiconductor region 1312.
- B of FIG. 67 is an example in which the planar shapes of the electrode portion 1311 and the N+ semiconductor region 1312 are L-shaped.
- C of FIG. 67 is an example in which the planar shapes of the electrode portion 1311 and the N+ semiconductor region 1312 are comb-shaped.
- the N + semiconductor region 1312-1 and the N + semiconductor region 1312-2 are arranged point-symmetrically with the center point of the pixel as the symmetric point. Further, the N + semiconductor region 1312-1 and the N + semiconductor region 1312-2 are arranged to face each other.
- the shapes and positional relationships of the electrode portion 1311 arranged on the second surface 1331 opposite to the surface on which the N+ semiconductor region 1312 is formed, and of the insulating film 1313 and the hole concentration enhancement layer 1314 formed on its outer periphery, are similar to those of the N+ semiconductor region 1312.
- FIGS. 68A to 68C are plan views corresponding to the line B-B' of FIG. 66B, showing examples in which the number of signal extraction units 65 is four and the planar shapes of the electrode portion 1311 and the N+ semiconductor region 1312 constituting the signal extraction unit 65 are shapes other than a circle.
- A of FIG. 68 is an example in which the planar shapes of the electrode portion 1311 and the N+ semiconductor region 1312 are vertically long rectangles.
- N + semiconductor regions 1312-1 to 1312-4 are arranged at predetermined intervals in the horizontal direction, and are arranged point-symmetrically with the center point of the pixel as a symmetry point. Further, N + semiconductor regions 1312-1 and 1312-2 and N + semiconductor regions 1312-3 and 1312-4 are arranged to face each other.
- the electrode portion 1311-1 (not shown) and the electrode portion 1311-3, formed on the second surface 1331 side, are electrically connected by the wiring 1351 and constitute, for example, the voltage application unit of the signal extraction unit 65-1 (first tap TA) to which the voltage MIX0 is applied.
- the N+ semiconductor region 1312-1 and the N+ semiconductor region 1312-3 are electrically connected by a wiring 1352 and constitute a charge detection unit of the signal extraction unit 65-1 (first tap TA) for detecting the signal charge DET1.
- the electrode portion 1311-2 (not shown) and the electrode portion 1311-4, formed on the second surface 1331 side, are electrically connected by a wiring 1353 and constitute the voltage application unit of the signal extraction unit 65-2 (second tap TB), to which, for example, the voltage MIX1 is applied.
- the N + semiconductor region 1312-2 and the N + semiconductor region 1312-4 are electrically connected by a wiring 1354 and constitute the charge detection unit of the signal extraction unit 65-2 (second tap TB) for detecting the signal charge DET2.
- the set of the voltage application unit and the charge detection unit of the signal extraction unit 65-1, each having a rectangular planar shape, and the corresponding set of the signal extraction unit 65-2 are alternately arranged in the horizontal direction.
- FIG. 68B is an example in which the planar shapes of the electrode portion 1311 and the N + semiconductor region 1312 are square.
- the pair consisting of the voltage application unit and the charge detection unit of the signal extraction unit 65-1, each having a square planar shape, is arranged along one diagonal of the pixel 51, and the pair of the voltage application unit and the charge detection unit of the signal extraction unit 65-2 is arranged along the other diagonal, facing the signal extraction unit 65-1.
- FIG. 68C is an example in which the planar shapes of the electrode portion 1311 and the N + semiconductor region 1312 are triangular.
- the pair consisting of the voltage application unit and the charge detection unit of the signal extraction unit 65-1, each having a triangular planar shape, faces each other in a first direction (the horizontal direction), while the pair of the voltage application unit and the charge detection unit of the signal extraction unit 65-2 faces each other in a second direction (the vertical direction) orthogonal to the first direction.
- In FIGS. 68B and 68C as well, the four electrode portions 1311-1 to 1311-4 are arranged point-symmetrically with respect to the center point of the pixel; the electrode portion 1311-1 and the electrode portion 1311-3 are electrically connected by the wiring 1351, the N + semiconductor region 1312-1 and the N + semiconductor region 1312-3 by the wiring 1352, the electrode portion 1311-2 and the electrode portion 1311-4 by the wiring 1353, and the N + semiconductor region 1312-2 and the N + semiconductor region 1312-4 by the wiring 1354.
- the shapes and positional relationships of the insulating film 1313 and the hole concentration enhancement layer 1314 formed on the outer periphery of each electrode portion 1311 follow those of the electrode portion 1311.
- FIG. 69 shows an example of a circuit configuration of the pixel array unit 20 in the case where pixel signals of a total of four taps of two pixels adjacent in the vertical direction are simultaneously output.
- FIG. 69 shows the circuit configuration of 2 x 2 = 4 pixels among the plurality of pixels 51 arranged two-dimensionally in a matrix in the pixel array unit 20. When the four 2 x 2 pixels 51 in FIG. 69 are distinguished, they are denoted as pixels 51-1 to 51-4.
- each pixel 51 has the circuit configuration that includes the additional capacitor 727 and the switching transistor 728 controlling its connection, described earlier; the description of the circuit configuration is omitted here to avoid repetition.
- Voltage supply lines 30A and 30B are wired in the vertical direction for each pixel column of the pixel array section 20. A predetermined voltage MIX0 is supplied to the first taps TA of the plurality of pixels 51 arranged in the vertical direction via the voltage supply line 30A, and a predetermined voltage MIX1 is supplied to the second taps TB via the voltage supply line 30B.
- the vertical signal line 29A transmits the pixel signal of the first tap TA of the pixel 51-1 to the column processing unit 23 (FIG. 1)
- the vertical signal line 29B transmits the pixel signal of the second tap TB of the pixel 51-1 to the column processing unit 23
- the vertical signal line 29C transmits the pixel signal of the first tap TA of the pixel 51-2, adjacent to the pixel 51-1 in the same column, to the column processing unit 23
- the vertical signal line 29D transmits the pixel signal of the second tap TB of the pixel 51-2 to the column processing unit 23.
- similarly, the vertical signal line 29A transmits the pixel signal of the first tap TA of the pixel 51-3 to the column processing unit 23 (FIG. 1)
- the vertical signal line 29B transmits the pixel signal of the second tap TB of the pixel 51-3 to the column processing unit 23
- the vertical signal line 29C transmits the pixel signal of the first tap TA of the pixel 51-4, adjacent to the pixel 51-3 in the same column, to the column processing unit 23
- the vertical signal line 29D transmits the pixel signal of the second tap TB of the pixel 51-4 to the column processing unit 23.
- a control line 841 for transmitting the drive signal RST to the reset transistor 723, a control line 842 for transmitting the drive signal TRG to the transfer transistor 721, a control line 843 for transmitting the drive signal FDG to the switching transistor 728, and a control line 844 for transmitting the selection signal SEL to the selection transistor 725 are arranged in units of pixel rows.
- the same drive signal RST, drive signal FDG, drive signal TRG, and selection signal SEL are supplied from the vertical drive unit 22 to the pixels 51 of two vertically adjacent rows, so pixel signals can be read out simultaneously in units of two rows.
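The two-row readout described above can be sketched as a simple mapping (our own illustration, not from the patent; the dictionary keys name the vertical signal lines, and the (row, tap) tuples are hypothetical labels):

```python
# Illustrative sketch (not from the patent): with four vertical signal lines
# (29A-29D) per pixel column, the four taps of two vertically adjacent
# pixels can be read out simultaneously, two rows at a time.
def readout_plan(n_rows):
    """Map each two-row read to the (pixel row, tap) carried by each line."""
    plan = []
    for top_row in range(0, n_rows, 2):
        plan.append({
            "29A": (top_row, "TA"),      # first tap of the upper pixel
            "29B": (top_row, "TB"),      # second tap of the upper pixel
            "29C": (top_row + 1, "TA"),  # first tap of the lower pixel
            "29D": (top_row + 1, "TB"),  # second tap of the lower pixel
        })
    return plan
```

For a four-row array this yields two simultaneous reads instead of four single-row reads.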
- FIG. 70 shows a layout of the metal film M3, which is the third layer of the multilayer wiring layer 811 when four vertical signal lines 29A to 29D are arranged in one pixel column.
- FIG. 70 is a modification of the layout of the metal film M3 shown in FIG. 42C.
- four vertical signal lines 29A to 29D are arranged in one pixel column.
- four power supply lines 1401A to 1401D for supplying a power supply voltage VDD are arranged in one pixel column.
- In FIG. 70, the area of the pixel 51 and the areas of the octagonal signal extraction units 65-1 and 65-2 shown in FIG. 11 are indicated by broken lines for reference. The same applies to FIGS. 71 to 76 described later.
- a VSS wiring (ground wiring) 1411 of the GND potential is arranged next to the vertical signal lines 29A to 29D and the power supply lines 1401A to 1401D.
- the VSS wiring 1411 includes narrow VSS wirings 1411B disposed adjacent to the vertical signal lines 29A to 29D, and wide VSS wirings 1411A disposed at the pixel boundary portions, for example between the vertical signal line 29B and the power supply line 1401C.
- FIG. 70 shows an example in which two VSS wirings 1411A are provided symmetrically in a pixel region for one pixel column.
- a VSS wiring 1411 (1411A or 1411B) is arranged next to each of the vertical signal lines 29A to 29D, which makes the vertical signal lines 29 less susceptible to external potential changes.
- in the metal films of the other layers as well, the wiring adjacent to a signal line, a power supply line, or a control line can similarly be a VSS wiring.
- for example, VSS wirings can be arranged on both sides of each of the control lines 841 to 844, which reduces the influence of external potential fluctuations on the control lines 841 to 844.
- FIG. 71 shows a first modification of the layout of the metal film M3, which is the third layer of the multilayer wiring layer 811 when four vertical signal lines 29A to 29D are arranged in one pixel column.
- the layout of the metal film M3 in FIG. 71 differs from the layout shown in FIG. 70 in that the VSS wirings 1411 adjacent to the four vertical signal lines 29A to 29D all have the same line width.
- in the layout of FIG. 70, the wide VSS wiring 1411A and the narrow VSS wiring 1411B were arranged on the two sides of the vertical signal line 29C, and a wide VSS wiring 1411A and a narrow VSS wiring 1411B were likewise arranged on the two sides of the vertical signal line 29B.
- in the layout of FIG. 71, by contrast, both sides of the vertical signal line 29C are provided with the narrow VSS wirings 1411B, and both sides of the vertical signal line 29B are also provided with the narrow VSS wirings 1411B.
- both sides of each of the other vertical signal lines 29A and 29D are also narrow VSS wirings 1411B, so the line widths of the VSS wirings 1411B on both sides of the four vertical signal lines 29A to 29D are the same, making the crosstalk conditions of the signal lines uniform.
- FIG. 72 shows a second modification of the layout of the metal film M3 which is the third layer of the multilayer wiring layer 811 in the case where four vertical signal lines 29A to 29D are arranged in one pixel column.
- the layout of the metal film M3 in FIG. 72 differs from the layout shown in FIG. 70 in that the wide VSS wiring 1411A is replaced by a VSS wiring 1411C in which a plurality of gaps 1421 are regularly provided inside.
- the VSS wiring 1411C has a line width larger than that of the power supply line 1401, and a plurality of gaps 1421 are repeatedly arranged in the vertical direction at a predetermined cycle.
- the shape of the gap 1421 is a rectangle, but is not limited to a rectangle, and may be a circle or a polygon.
- FIG. 72 shows a layout in which the VSS wiring 1411A of the metal film M3 shown in FIG. 70 is replaced with the VSS wiring 1411C, but a layout in which the VSS wiring 1411A of the metal film M3 shown in FIG. 71 is replaced with the VSS wiring 1411C is, of course, also possible.
- FIG. 73A is a diagram showing again the arrangement of the pixel transistors shown in FIG. 44B.
- FIG. 73B shows a modification of the arrangement of the pixel transistors.
- In FIG. 73A, as described with reference to FIG. 44B, the gate electrodes of the reset transistors 723A and 723B, the transfer transistors 721A and 721B, the switching transistors 728A and 728B, the selection transistors 725A and 725B, and the amplification transistors 724A and 724B are formed in this order from the side close to an intermediate line (not shown) between the two signal extraction units 65-1 and 65-2 toward the outside.
- a contact 1451 for the first power supply voltage VDD (VDD_1) is arranged between the reset transistors 723A and 723B, and contacts 1452 and 1453 for the second power supply voltage VDD (VDD_2) are arranged outside the gate electrodes of the amplification transistors 724A and 724B, respectively.
- a contact 1461 to the first VSS wiring (VSS_A) is arranged between the gate electrodes of the selection transistor 725A and the switching transistor 728A, and a contact 1462 to the second VSS wiring (VSS_B) is arranged between the gate electrodes of the selection transistor 725B and the switching transistor 728B.
- with this layout, the four power supply lines 1401A to 1401D are required for one pixel column.
- In FIG. 73B, the gate electrodes of the switching transistors 728A and 728B, the transfer transistors 721A and 721B, the reset transistors 723A and 723B, the amplification transistors 724A and 724B, and the selection transistors 725A and 725B are arranged in this order from the side close to the intermediate line (not shown) between the two signal extraction units 65-1 and 65-2 toward the outside.
- a contact 1471 to the first VSS wiring (VSS_1) is arranged between the switching transistors 728A and 728B, and contacts 1472 and 1473 to the second VSS wiring (VSS_2) are arranged outside the gate electrodes of the selection transistors 725A and 725B, respectively.
- a contact 1481 for the first power supply voltage VDD (VDD_A) is arranged between the gate electrodes of the amplification transistor 724A and the reset transistor 723A, and a contact 1482 for the second power supply voltage VDD (VDD_B) is arranged between the gate electrodes of the amplification transistor 724B and the reset transistor 723B.
- with the pixel transistor layout of FIG. 73B, the number of power supply voltage contacts can be reduced compared with the layout of FIG. 73A, so the circuit can be simplified, and the number of power supply lines 1401 wired in the pixel array unit 20 can be reduced from four to two per pixel column.
- further, the contact 1471 to the first VSS wiring (VSS_1) between the switching transistors 728A and 728B can be omitted. This reduces the density of the pixel transistors in the vertical direction, reduces the current flowing between the voltage supply line 741 (FIGS. 33 and 34) for applying the voltage MIX0 or MIX1 and the VSS wiring, and allows the amplification transistors 724A and 724B to be formed larger in the vertical direction, so that pixel transistor noise and signal variation can be reduced.
- alternatively, the contacts 1472 and 1473 to the second VSS wiring (VSS_2) may be omitted, with the same effects.
- FIG. 74 shows a wiring layout for connecting the pixel transistors Tr of the metal film M1 in the pixel transistor layout of FIG. 73B.
- FIG. 74 corresponds to the wiring connecting the pixel transistors Tr of the metal film M1 shown in C of FIG.
- the wiring connecting the pixel transistors Tr may be connected across other wiring layers such as the metal films M2 and M3.
- FIG. 75 shows the layout of the metal film M3 which is the third layer of the multilayer wiring layer 811 when the pixel transistor layout of FIG. 73B is used and two power supply lines 1401 are provided in one pixel column.
- In FIG. 75, the portions corresponding to FIG. 70 are given the same reference numerals, and their description is omitted as appropriate.
- the current density can be further reduced, and the reliability of the wiring can be improved.
- FIG. 76 shows another layout of the metal film M3, the third layer of the multilayer wiring layer 811, when the pixel transistor layout of FIG. 73B is used and two power supply lines 1401 are provided in one pixel column.
- In FIG. 76, the portions corresponding to FIG. 70 are given the same reference numerals, and their description is omitted as appropriate.
- the current density can be further reduced, and the reliability of the wiring can be improved.
- the layouts of the metal film M3 shown in FIGS. 75 and 76 are examples in which the layout of FIG. 70 is changed to two power supply lines 1401, but the layouts shown in FIGS. 71 and 72 can similarly be changed to two power supply lines 1401.
- in that case as well, the degree of influence of crosstalk can be made uniform, and variations in characteristics can be reduced.
- when the wide VSS wiring 1411C of FIG. 72 is formed, the effect of improved stability is additionally obtained.
- FIG. 77 is a plan view showing a wiring example of the VSS wiring in the multilayer wiring layer 811.
- the VSS wiring can be formed in a plurality of wiring layers of the multilayer wiring layer 811, such as a first wiring layer 1521, a second wiring layer 1522, and a third wiring layer 1523.
- in the first wiring layer 1521, for example, a plurality of vertical wirings 1511 extending in the vertical direction in the pixel array section 20 are arranged at predetermined intervals in the horizontal direction
- in the second wiring layer 1522, for example, a plurality of horizontal wirings 1512 extending in the horizontal direction in the pixel array section 20 are arranged at predetermined intervals in the vertical direction
- in the third wiring layer 1523, for example, a wiring 1513 having a line width larger than those of the vertical wirings 1511 and the horizontal wirings 1512 is arranged so as to extend in the vertical or horizontal direction and surround at least the outside of the pixel array section 20, and is connected to the GND potential.
- the wiring 1513 is also wired in the pixel array unit 20 so as to connect the wirings 1513 facing each other on the outer periphery.
- the vertical wiring 1511 of the first wiring layer 1521 and the horizontal wiring 1512 of the second wiring layer 1522 are connected by a via or the like in each of the overlapping portions 1531 where both overlap in plan view.
- the vertical wiring 1511 of the first wiring layer 1521 and the wiring 1513 of the third wiring layer 1523 are connected by a via or the like in each of the overlapping portions 1532 where both overlap in plan view.
- the horizontal wiring 1512 of the second wiring layer 1522 and the wiring 1513 of the third wiring layer 1523 are connected by a via or the like at each of the overlapping portions 1533 where they overlap in plan view.
- in this way, the VSS wiring is formed in a plurality of wiring layers of the multilayer wiring layer 811 and can be wired in the pixel array unit 20 such that the vertical wirings 1511 and the horizontal wirings 1512 form a lattice in plan view. This reduces propagation delay in the pixel array unit 20 and suppresses variation in characteristics.
- FIG. 78 is a plan view showing another wiring example of the VSS wiring in the multilayer wiring layer 811.
- In FIG. 78, portions corresponding to those in FIG. 77 are denoted by the same reference numerals, and description thereof will be omitted as appropriate.
- in FIG. 77, the vertical wirings 1511 of the first wiring layer 1521 and the horizontal wirings 1512 of the second wiring layer 1522 are not formed outside the wiring 1513 on the outer periphery of the pixel array section 20, whereas in FIG. 78 they extend beyond the wiring 1513 on the outer periphery of the pixel array section 20.
- each vertical wiring 1511 is connected to the GND potential at an outer peripheral portion 1542 of the substrate 1541 outside the pixel array section 20, and each horizontal wiring 1512 is connected to the GND potential at an outer peripheral portion 1543 of the substrate 1541 outside the pixel array section 20.
- that is, in FIG. 77 the vertical wirings 1511 and the horizontal wirings 1512 are connected to the GND potential via the outer wiring 1513, whereas in FIG. 78 the vertical wirings 1511 and the horizontal wirings 1512 are themselves directly connected to the GND potential.
- the region where the vertical wirings 1511 and the horizontal wirings 1512 are themselves connected to the GND potential may be all four sides of the substrate 1541, as in the outer peripheral portions 1542 and 1543 of FIG. 78, or fewer sides, such as three.
- in this case as well, the VSS wiring is formed in a plurality of wiring layers of the multilayer wiring layer 811 and can be wired in the pixel array unit 20 so as to form a lattice in plan view, reducing propagation delay in the pixel array unit 20 and suppressing variation in characteristics.
- FIGS. 77 and 78 have been described as wiring examples of the VSS wiring, but the power supply line can be similarly wired.
- the VSS wiring 1411 and the power supply line 1401 described with reference to FIGS. 70 to 76 can be arranged in a plurality of wiring layers of the multilayer wiring layer 811, like the VSS wiring and the power supply line shown in FIGS. 77 and 78.
- the VSS wiring 1411 and the power supply line 1401 described in FIGS. 70 to 76 can be applied to any of the embodiments described in this specification.
- the light receiving element 1 as a CAPD sensor can perform pupil correction in which the on-chip lens 62 and the inter-pixel light-shielding film 63 are shifted toward the center of the plane of the pixel array section 20 in accordance with the difference in the incident angle of the principal ray at each in-plane position of the pixel array section 20.
- in the pixel 51 at the position 1701-5 at the center of the pixel array section 20, the center of the on-chip lens 62 coincides with the center between the signal extraction units 65-1 and 65-2 formed on the substrate 61, whereas in the pixels 51 at the peripheral positions 1701-1 to 1701-4 and 1701-6 to 1701-9, the center of the on-chip lens 62 is shifted toward the center of the plane of the pixel array section 20.
- similarly, the inter-pixel light-shielding films 63-1 and 63-2 are also shifted toward the center of the plane of the pixel array unit 20.
- in pixels 51 in which DTIs 1711-1 and 1711-2, trenches (grooves) formed at the pixel boundaries to a predetermined depth in the substrate depth direction from the back surface side of the substrate 61 (the on-chip lens 62 side), are provided, the DTIs 1711-1 and 1711-2 at the peripheral positions 1701-1 to 1701-4 and 1701-6 to 1701-9 are also shifted toward the center of the plane of the pixel array section 20.
- likewise, in pixels 51 in which DTIs 1712-1 and 1712-2 are formed at the pixel boundaries to a predetermined depth in the substrate depth direction from the front surface side of the substrate 61 (the multilayer wiring layer 811 side), the DTIs 1712-1 and 1712-2 at the peripheral positions 1701-1 to 1701-4 and 1701-6 to 1701-9 are also shifted toward the center of the plane of the pixel array section 20.
- as a pixel separating unit that separates the substrate 61 between adjacent pixels to prevent incident light from entering the adjacent pixels, a through-separation portion that penetrates the substrate 61 and separates adjacent pixels may also be provided. In this case too, in the pixels 51 at the peripheral positions 1701-1 to 1701-4 and 1701-6 to 1701-9, the through-separation portion is arranged shifted toward the center of the plane of the pixel array portion 20.
- with such pupil correction, the principal ray can be made to enter the center of each pixel.
- however, in the light receiving element 1, which is a CAPD sensor, modulation is performed by applying a voltage between the two signal extraction units 65 (taps) and passing a current, so the optimum incident position in each pixel is different. Therefore, unlike the optical pupil correction performed by an image sensor, the light receiving element 1 requires a pupil correction technique optimized for distance measurement.
- in each of FIGS. 82A to 82C, the 3 x 3 set of nine pixels 51 corresponds to the positions 1701-1 to 1701-9 of the pixel array unit 20 in FIGS. 79 to 81.
- FIG. 82A illustrates the positions of the on-chip lenses 62 and the positions 1721 of the principal rays on the substrate surface side when pupil correction is not performed.
- without pupil correction, the on-chip lens 62 is arranged so that its center coincides with the center of the two taps in the pixel, that is, the center between the first tap TA (signal extraction unit 65-1) and the second tap TB (signal extraction unit 65-2), in the pixel 51 at every position 1701-1 to 1701-9 in the pixel array unit 20.
- in this case, the position 1721 of the principal ray on the substrate surface side differs depending on the positions 1701-1 to 1701-9 in the pixel array section 20, as shown in FIG. 82A.
- in the pupil correction of the first method, as shown in FIG. 82B, the on-chip lens 62 is arranged so that the position 1721 of the chief ray coincides with the center of the first tap TA and the second tap TB in the pixel 51 at every position 1701-1 to 1701-9 in the pixel array unit 20. More specifically, the on-chip lens 62 is shifted toward the center of the plane of the pixel array section 20, as described with reference to FIGS. 79 to 81.
- further, as shown in FIG. 82C, the position 1721 of the principal ray may be shifted toward the first tap TA between the first tap TA and the second tap TB; in this case, the on-chip lens 62 is arranged shifted further to the first tap TA side from the center position of FIG. 82B.
- the displacement of the position 1721 of the principal ray between FIG. 82B and FIG. 82C increases from the center of the pixel array unit 20 toward the outer periphery.
- FIG. 83 is a view for explaining the shift amount of the on-chip lens 62 when the position 1721 of the principal ray is shifted to the first tap TA side.
- the shift amount LD between the position 1721C of the principal ray at the center position 1701-5 of the pixel array unit 20 and the position 1721X of the principal ray at the peripheral position 1701-4 is equal to the optical path difference LD for pupil correction at the peripheral position 1701-4.
- that is, the position 1721 of the principal ray is shifted from the center position between the first tap TA (signal extraction unit 65-1) and the second tap TB (signal extraction unit 65-2) toward the first tap TA so that the optical path length of the principal ray matches in each pixel of the pixel array unit 20.
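As a purely illustrative geometric sketch (our own model and parameter names, not taken from the patent), the lens shift needed at a given image height can be related to the chief ray angle and the height of the optical stack above the tap plane:

```python
import math

# Purely illustrative geometry (our own model, not the patent's): a chief
# ray entering at angle cra_deg and crossing an optical stack of height
# stack_height_um lands about stack_height * tan(CRA) away from the lens
# axis, so the on-chip lens is shifted by roughly that amount toward the
# array center.
def lens_shift_um(stack_height_um, cra_deg):
    return stack_height_um * math.tan(math.radians(cra_deg))
```

The shift is zero at the array center (where the chief ray is normal) and grows monotonically toward the periphery, matching the qualitative behavior described above.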
- the reason for shifting to the first tap TA side is that a method is assumed in which the light reception timing is set to 4Phase, only the output value of the first tap TA is used, and the phase shift (Phase) corresponding to the delay time ΔT according to the distance to the object is calculated.
- FIG. 84 is a timing chart for explaining a detection method using 2Phase (2Phase method) and a detection method using 4Phase (4Phase method) in the ToF sensor using the indirect ToF method.
- in the 2Phase method, the light receiving element 1 receives light at timings shifted by 180 degrees between the first tap TA and the second tap TB, and the phase shift amount θ corresponding to the delay time ΔT can be detected from the distribution ratio between the signal value q_A received by the first tap TA and the signal value q_B received by the second tap TB.
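The 2Phase distribution-ratio idea can be sketched for a pulsed light source as follows; this is our own illustrative model with hypothetical variable names, not the patent's equations:

```python
# Illustrative 2Phase sketch (our own model and names, not the patent's
# equations): for a pulsed light source, the delay dT is recovered from how
# the returning pulse's charge is split between the two taps, which are
# driven 180 degrees out of phase.
C = 299_792_458.0  # speed of light, m/s

def two_phase_distance(q_a, q_b, pulse_width_s):
    """q_a: charge collected by tap TA, q_b: charge collected by tap TB."""
    delay = pulse_width_s * q_b / (q_a + q_b)  # delay time dT
    return C * delay / 2.0                     # light travels out and back
```

With a 10 ns pulse and an equal charge split, the estimated delay is 5 ns, i.e. roughly 0.75 m of range.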
- in the 4Phase method, light is received at four phases: the same phase as the irradiation light (Phase0), a phase shifted by 90 degrees (Phase90), a phase shifted by 180 degrees (Phase180), and a phase shifted by 270 degrees (Phase270).
- the signal value TA_Phase180 detected at the phase shifted by 180 degrees is the same as the signal value q_B received by the second tap TB in the 2Phase method. Therefore, with 4Phase detection, the phase shift amount θ corresponding to the delay time ΔT can be detected using only the signal values of one of the first tap TA and the second tap TB.
- the tap used for detecting the phase shift amount θ is referred to as the phase shift detection tap.
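For continuous-wave indirect ToF, one standard way to recover the phase shift from the four samples of a single tap is the four-bucket arctangent formula sketched below. This formulation and its variable names are ours, offered as a hedged illustration rather than the patent's own equation:

```python
import math

# Standard continuous-wave 4Phase sketch (a hedged illustration, not quoted
# from the patent): the phase shift theta is recovered from the four samples
# q0..q3 of a single tap, taken at 0, 90, 180 and 270 degrees relative to
# the irradiation light, then converted to distance.
C = 299_792_458.0  # speed of light, m/s

def four_phase_distance(q0, q1, q2, q3, f_mod_hz):
    theta = math.atan2(q1 - q3, q0 - q2) % (2 * math.pi)
    return C * theta / (4 * math.pi * f_mod_hz)
```

For example, samples (1, 2, 1, 0) give θ = π/2, which at a 20 MHz modulation frequency corresponds to about 1.87 m.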
- in the pupil correction of the first method, the position 1721 is shifted to the first tap TA side so that the optical path length of the principal ray substantially matches for each pixel in the plane of the pixel array unit 20.
- the contrast Cmod_A of the 4Phase method when detection is performed by the first tap TA is calculated by the following equation (3).
- Cmod_A in the 4Phase method is the larger of (q_0A - q_2A) / (q_0A + q_2A) and (q_1A - q_3A) / (q_1A + q_3A).
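The Cmod_A rule described for equation (3) can be written directly as code (variable names are ours):

```python
# Sketch of the Cmod_A rule described for equation (3): the larger of the
# two normalized contrasts formed from opposing 4Phase samples of tap TA.
def cmod_a(q0a, q1a, q2a, q3a):
    return max((q0a - q2a) / (q0a + q2a),
               (q1a - q3a) / (q1a + q3a))
```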
- in the pupil correction of the first method, the light receiving element 1 shifts the positions of the on-chip lens 62 and the inter-pixel light-shielding film 63 so that the optical path length of the principal ray becomes substantially the same for each pixel in the plane of the pixel array unit 20. In other words, the light receiving element 1 performs pupil correction so that the phase shift amount θ_A at the first tap TA, which is the phase shift detection tap, is substantially the same for each pixel in the plane of the pixel array unit 20. As a result, the in-plane dependence of the chip can be eliminated, and the distance measurement accuracy can be improved.
- here, "substantially the same" means the same not only when completely identical but also within a predetermined range that can be regarded as identical.
- the first method of pupil correction can be applied to any of the embodiments described in this specification.
- the pupil correction of the first method is preferable when it has been determined that the phase shift (Phase) is calculated using the signal of the first tap TA out of the first tap TA and the second tap TB, but it may not be possible to determine which tap will be used. In such a case, pupil correction can be performed by the following second method.
- the DC contrast DC_A of the first tap TA and the DC contrast DC_B of the second tap TB are calculated by the following equations (4) and (5).
- here, A_H represents the signal value detected at the first tap TA, to which a positive voltage is applied, when the light receiving element 1 is directly irradiated with continuous light emitted without interruption, and A_L represents the signal value detected at that time at the second tap TB, to which 0 V or a negative voltage is applied.
- similarly, B_H represents the signal value detected at the second tap TB, to which a positive voltage is applied, when the light receiving element 1 is directly irradiated with the continuous light, and B_L represents the signal value detected at that time at the first tap TA, to which 0 V or a negative voltage is applied.
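Equations (4) and (5) themselves are not reproduced in this text. A commonly used DC-contrast form consistent with the A_H/A_L and B_H/B_L definitions above is H / (H + L); the sketch below therefore rests on that assumption rather than on the patent's own equations:

```python
# Hedged sketch: equations (4) and (5) are not reproduced in this text. A
# commonly used DC-contrast form consistent with the A_H/A_L and B_H/B_L
# definitions above is H / (H + L); treat this exact form as an assumption.
def dc_contrast(sig_high, sig_low):
    """Fraction of the continuous-light charge collected by the active tap."""
    return sig_high / (sig_high + sig_low)

def dc_contrasts(a_h, a_l, b_h, b_l):
    return dc_contrast(a_h, a_l), dc_contrast(b_h, b_l)  # (DC_A, DC_B)
```

A perfectly modulating tap would give a contrast of 1.0; charge leaking to the inactive tap lowers it.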
- ideally, the DC contrast DC_A of the first tap TA is equal to the DC contrast DC_B of the second tap TB, and it is desirable that DC_A and DC_B substantially match at any position in the plane of the pixel array unit 20. In practice, however, DC_A and DC_B differ depending on the in-plane position of the pixel array unit 20: the difference in DC_A of the first tap TA between the center and the outer periphery of the pixel array unit 20 differs from the difference in DC_B of the second tap TB between the center and the outer periphery.
- therefore, in the pupil correction of the second method, the positions of the on-chip lens 62, the inter-pixel light-shielding film 63, and the like are shifted toward the center of the plane so that the DC contrast DC_A of the first tap TA and the DC contrast DC_B of the second tap TB substantially match for each pixel in the plane of the pixel array unit 20.
- here, "substantially match" means matching not only when completely identical but also within a predetermined range that can be regarded as identical.
- the second method of pupil correction can be applied to any of the embodiments described in this specification.
- the light reception timing of the first tap TA and the second tap TB shown in FIG. 84 is controlled by the voltages MIX0 and MIX1 supplied from the tap drive unit 21 via the voltage supply line 30. Since the voltage supply line 30 is wired in the vertical direction of the pixel array unit 20 and shared by one pixel column, a larger delay due to the RC component occurs at pixels farther from the tap drive unit 21.
- therefore, the resistance and capacitance of the voltage supply line 30 may be changed in accordance with the distance from the tap drive unit 21 to make the drive capability of each pixel 51 substantially uniform, so that the phase shift (Phase) or the DC contrast DC is corrected to be substantially uniform in the plane of the pixel array unit 20.
- for example, the voltage supply line 30 is arranged so that its line width increases with the distance from the tap drive unit 21.
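The effect of widening the supply line with distance can be illustrated with a simple RC-ladder model; this model and its numbers are our own illustration, not the patent's:

```python
# Illustrative RC-ladder sketch (our own model): each pixel row adds one
# series-resistance segment and one tap capacitance. A simplified
# Elmore-style estimate shows the delay growing with distance from the tap
# drive unit; lowering far-segment resistance (a wider line) reduces it.
def elmore_delays(r_per_seg, c_per_seg):
    """Approximate delay seen at each tap along the voltage supply line."""
    delays, delay, r_cum = [], 0.0, 0.0
    for r, c in zip(r_per_seg, c_per_seg):
        r_cum += r          # resistance from the driver to this tap
        delay += r_cum * c  # this tap's charge flows through r_cum
        delays.append(delay)
    return delays

uniform = elmore_delays([1.0, 1.0, 1.0], [1.0, 1.0, 1.0])
tapered = elmore_delays([1.0, 0.5, 0.25], [1.0, 1.0, 1.0])  # wider far segments
```

The tapered line reaches the far tap noticeably faster than the uniform one, which is the qualitative behavior the widened voltage supply line 30 exploits.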
- next, a light receiving element 1 that can acquire phase difference information as auxiliary information, in addition to the distance measurement information obtained from the signal distribution ratio between the first tap TA and the second tap TB, will be described.
- FIG. 86A is a cross-sectional view of a pixel according to the first configuration example of the twentieth embodiment
- FIGS. 86B and C are plan views of the pixel according to the first configuration example of the twentieth embodiment.
- In the first configuration example, a phase difference light-shielding film 1801 for detecting a phase difference is newly provided in some of the pixels 51, on the upper surface of the substrate 61, that is, the surface on the on-chip lens 62 side.
- the phase difference light-shielding film 1801 shields one half of the pixel area on either the first tap TA side or the second tap TB side.
- FIG. 86B illustrates an example of the pixel 51 in which the first tap TA and the second tap TB are arranged in the up-down direction (vertical direction).
- FIG. 86C illustrates an example of the pixel 51 in which the first tap TA and the second tap TB are arranged in the left-right direction (horizontal direction).
- The pixels 51 according to the first configuration example of the twentieth embodiment can be arranged in the pixel array unit 20 as shown in any one of A to F of FIG. 87.
- FIG. 87A illustrates an example of an arrangement in which the pixels 51 having the first tap TA and the second tap TB arranged in the up-down direction are arrayed in rows.
- FIG. 87B illustrates an example of an arrangement in which the pixels 51 having the first tap TA and the second tap TB arranged in the left-right direction are arrayed in rows.
- FIG. 87C illustrates an example of an arrangement in which the pixels 51 having the first tap TA and the second tap TB arranged in the up-down direction are arrayed in rows, with the pixel positions in adjacent columns shifted by half a pixel in the up-down direction.
- FIG. 87D illustrates an example of an arrangement in which the pixels 51 having the first tap TA and the second tap TB arranged in the left-right direction are arrayed in rows, with the pixel positions in adjacent columns shifted by half a pixel in the up-down direction.
- FIG. 87E shows an example of an arrangement in which the pixels 51 having the first tap TA and the second tap TB arranged in the up-down direction and the pixels 51 having the first tap TA and the second tap TB arranged in the left-right direction are arranged alternately in the column direction.
- FIG. 87F shows an example of an arrangement in which the pixels 51 having the first tap TA and the second tap TB arranged in the up-down direction and the pixels 51 having the first tap TA and the second tap TB arranged in the left-right direction are arranged alternately in the column direction, with the pixel positions in adjacent columns shifted by half a pixel in the up-down direction.
- the pixels 51 in FIG. 86 are arranged in any one of the arrangements A to F in FIG. 87.
- In the pixel array unit 20, as shown in B or C of FIG. 86, the pixel 51 that shields one half on the first tap TA side and the pixel 51 that shields one half on the second tap TB side are arranged in the vicinity of each other. A plurality of such pairs of the pixel 51 that shields one half on the first tap TA side and the pixel 51 that shields one half on the second tap TB side are scattered in the pixel array unit 20.
- FIG. 86 shows the other configuration in a simplified manner.
- The pixel 51 has a substrate 61 made of a P-type semiconductor layer and an on-chip lens 62 formed on the substrate 61. Between the on-chip lens 62 and the substrate 61, an inter-pixel light-shielding film 63 and a phase difference light-shielding film 1801 are formed. In the pixel 51 on which the phase difference light-shielding film 1801 is formed, the inter-pixel light-shielding film 63 adjacent to the phase difference light-shielding film 1801 is formed continuously (integrally) with the phase difference light-shielding film 1801.
- As shown in A of FIG. 86, the fixed charge film 66 is also formed on the lower surfaces of the inter-pixel light-shielding film 63 and the phase difference light-shielding film 1801.
- A first tap TA and a second tap TB are formed on the surface of the substrate 61 opposite to the light incident surface side on which the on-chip lens 62 is formed.
- the first tap TA corresponds to the above-described signal extracting unit 65-1
- the second tap TB corresponds to the signal extracting unit 65-2.
- a predetermined voltage MIX0 is supplied to the first tap TA from the tap drive unit 21 (FIG. 1) via a voltage supply line 30A formed in the multilayer wiring layer 811.
- To the second tap TB, a predetermined voltage MIX1 is supplied from the tap driving unit 21 via the voltage supply line 30B.
- FIG. 88 is a table summarizing drive modes when the tap drive unit 21 drives the first tap TA and the second tap TB in the first configuration example of the twentieth embodiment.
- the phase difference can be detected by five types of driving methods of Mode 1 to Mode 5 shown in FIG.
- the mode 1 is the same driving as the other pixels 51 not having the phase difference light shielding film 1801.
- That is, during a predetermined light receiving period, the tap driving unit 21 applies a positive voltage (for example, 1.5 V) to the first tap TA serving as the active tap and applies a voltage of 0 V to the second tap TB serving as the inactive tap. During another light receiving period, a positive voltage (for example, 1.5 V) is applied to the second tap TB serving as the active tap, and a voltage of 0 V is applied to the first tap TA serving as the inactive tap.
- In addition, 0 V (VSS potential) is applied to the pixel transistors Tr (FIG. 37) such as the transfer transistor 721 and the reset transistor 723 formed in the pixel boundary region on the multilayer wiring layer 811 side of the substrate 61.
- Thus, a phase difference can be detected from the signal obtained when the second tap TB is the active tap in the pixel 51 whose one half on the first tap TA side is shielded from light and the signal obtained when the first tap TA is the active tap in the pixel 51 whose one half on the second tap TB side is shielded from light.
- In the driving of Mode 2, the tap driving unit 21 applies a positive voltage (for example, 1.5 V) to both the first tap TA and the second tap TB.
- In addition, 0 V (VSS potential) is applied to the pixel transistor Tr formed in the pixel boundary region on the multilayer wiring layer 811 side of the substrate 61.
- In the driving of Mode 2, the signal can be detected evenly at both the first tap TA and the second tap TB. Therefore, the phase difference can be detected from the signal of the pixel 51 whose one half on the first tap TA side is shielded and the signal of the pixel 51 whose one half on the second tap TB side is shielded.
- Mode 3 is a mode in which, in the driving of Mode 2, the voltages applied to the first tap TA and the second tap TB are weighted according to the image height in the pixel array unit 20. More specifically, a potential difference is provided between the voltages applied to the first tap TA and the second tap TB as the image height (distance from the optical center) in the pixel array unit 20 increases, and the driving is performed so that the applied voltage on the tap on the inner side (center side) of the pixel array unit 20 becomes higher as the image height increases. Thus, pupil correction can be performed by the potential difference of the voltages applied to the taps.
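- A minimal sketch of the Mode 3 weighting, assuming a linear dependence on the normalized image height (the gain value is hypothetical; the present disclosure does not specify the weighting function):

```python
# Hedged sketch of Mode 3 voltage weighting. BASE_V follows the 1.5 V example
# given for Mode 2; GAIN_V_PER_UNIT and the linear form are assumptions.
BASE_V = 1.5            # positive voltage applied to both taps in Mode 2
GAIN_V_PER_UNIT = 0.2   # hypothetical extra voltage per normalized image height

def mode3_voltages(image_height):
    """image_height in [0, 1], where 0 is the optical center."""
    inner_tap_v = BASE_V + GAIN_V_PER_UNIT * image_height  # tap on the center side
    outer_tap_v = BASE_V                                   # tap on the outer side
    return inner_tap_v, outer_tap_v

print(mode3_voltages(0.0))  # at the center: no potential difference
print(mode3_voltages(1.0))  # at the periphery: maximum potential difference
```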
- Mode 4 is a mode in which, in the driving of Mode 2, a negative bias (for example, -1.5 V) is applied to the pixel transistor Tr formed in the pixel boundary region of the substrate 61 instead of 0 V (VSS potential).
- Thereby, the electric field from the pixel transistor Tr toward the first tap TA and the second tap TB can be strengthened, and electrons as signal charges can be easily drawn into the tap.
- Mode 5 is a mode in which, in the driving of Mode 3, a negative bias (for example, -1.5 V) is applied to the pixel transistor Tr formed in the pixel boundary region of the substrate 61 instead of 0 V (VSS potential). Thereby, the electric field from the pixel transistor Tr toward the first tap TA and the second tap TB can be strengthened, and electrons as signal charges can be easily drawn into the tap.
- In any of the driving modes, between the pixel 51 whose one half on the first tap TA side is shielded from light and the pixel 51 whose one half on the second tap TB side is shielded from light, a phase difference (image shift) occurs in the read signals due to the difference in the light-shielded region, so that the phase difference can be detected.
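- The image shift between the two kinds of shielded pixels can be estimated, for example, with a simple sum-of-absolute-differences search; this is a generic illustration with hypothetical signal values, not the detection algorithm of the present disclosure.

```python
# Hedged sketch: estimate the shift (in pixels) between the signals of the
# TA-side-shielded pixels and the TB-side-shielded pixels by minimizing the
# mean absolute difference over a small range of candidate shifts.
def estimate_shift(left, right, max_shift=3):
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(left[i], right[i + s])
                 for i in range(len(left)) if 0 <= i + s < len(right)]
        if not pairs:
            continue
        cost = sum(abs(a - b) for a, b in pairs) / len(pairs)
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

left = [0, 1, 5, 9, 5, 1, 0, 0]   # hypothetical TA-side-shielded signals
right = [0, 0, 0, 1, 5, 9, 5, 1]  # the same pattern shifted by two pixels
print(estimate_shift(left, right))
```

A shift of this kind, converted through the lens geometry, is what allows the focal position to be determined from the phase difference information.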
- As described above, in the first configuration example of the twentieth embodiment, the light receiving element 1 includes the pixel array unit 20 in which the plurality of pixels 51 each including the first tap TA and the second tap TB are arranged, and some of the pixels 51 of the pixel array unit 20 include the pixel 51 whose one half on the first tap TA side is shielded from light by the phase difference light-shielding film 1801 and the pixel 51 whose one half on the second tap TB side is shielded from light by the phase difference light-shielding film 1801.
- phase difference information can be obtained as auxiliary information other than the distance measurement information obtained from the signal distribution ratio of the first tap TA and the second tap TB.
- the focal position can be determined based on the detected phase difference information, and the accuracy in the depth direction can be improved.
- FIG. 89 is a cross-sectional view of a pixel according to the second configuration example of the twentieth embodiment.
- In the first configuration example described above, the on-chip lens 62 is formed in units of one pixel, but in the second configuration example in FIG. 89, one on-chip lens 1821 is formed for a plurality of pixels 51.
- a phase difference light shielding film 1811 for detecting a phase difference is newly provided in some of the pixels 51 on the upper surface, which is the surface on the side of the on-chip lens 1821 of the substrate 61.
- the phase difference light-shielding film 1811 is formed in a predetermined pixel 51 among a plurality of pixels 51 sharing the same on-chip lens 1821.
- the inter-pixel light-shielding film 63 adjacent to the phase difference light-shielding film 1811 is formed continuously (integrally) with the phase difference light-shielding film 1811 as in the first configuration example.
- FIGS. 90A to 90F are plan views showing the arrangement of the phase difference light shielding film 1811 and the on-chip lens 1821 that can be taken by the second configuration example of the twentieth embodiment.
- A of FIG. 90 shows a first arrangement example of the phase difference light-shielding film 1811 and the on-chip lens 1821.
- The pixel set 1831 shown in A of FIG. 90 includes two pixels 51 arranged in the up-down direction (vertical direction), and one on-chip lens 1821 is arranged for the two pixels 51 arranged in the up-down direction. The arrangement of the first tap TA and the second tap TB is the same for the two pixels 51 sharing one on-chip lens 1821. A phase difference is detected by using the two pixels 51 in which the phase difference light-shielding film 1811 is not formed, out of the two pixel sets 1831 in which the formation positions of the phase difference light-shielding film 1811 are symmetric.
- B of FIG. 90 shows a second arrangement example of the phase difference light-shielding film 1811 and the on-chip lens 1821.
- The pixel set 1831 shown in B of FIG. 90 includes two pixels 51 arranged in the up-down direction (vertical direction), and one on-chip lens 1821 is arranged for the two pixels 51 arranged in the up-down direction. The arrangement of the first tap TA and the second tap TB is opposite between the two pixels 51 sharing one on-chip lens 1821. A phase difference is detected by using the two pixels 51 in which the phase difference light-shielding film 1811 is not formed, out of the two pixel sets 1831 in which the formation positions of the phase difference light-shielding film 1811 are symmetric.
- C of FIG. 90 shows a third arrangement example of the phase difference light-shielding film 1811 and the on-chip lens 1821.
- The pixel set 1831 shown in C of FIG. 90 includes two pixels 51 arranged in the left-right direction (horizontal direction), and one on-chip lens 1821 is arranged for the two pixels 51 arranged in the left-right direction. The arrangement of the first tap TA and the second tap TB is the same for the two pixels 51 sharing one on-chip lens 1821. A phase difference is detected by using the two pixels 51 in which the phase difference light-shielding film 1811 is not formed, out of the two pixel sets 1831 in which the formation positions of the phase difference light-shielding film 1811 are symmetric.
- D of FIG. 90 shows a fourth arrangement example of the phase difference light-shielding film 1811 and the on-chip lens 1821.
- The pixel set 1831 shown in D of FIG. 90 includes two pixels 51 arranged in the left-right direction (horizontal direction), and one on-chip lens 1821 is arranged for the two pixels 51 arranged in the left-right direction. The arrangement of the first tap TA and the second tap TB is opposite between the two pixels 51 sharing one on-chip lens 1821. A phase difference is detected by using the two pixels 51 in which the phase difference light-shielding film 1811 is not formed, out of the two pixel sets 1831 in which the formation positions of the phase difference light-shielding film 1811 are symmetric.
- E of FIG. 90 shows a fifth arrangement example of the phase difference light-shielding film 1811 and the on-chip lens 1821.
- The pixel set 1831 shown in E of FIG. 90 includes four pixels 51 arranged in 2 × 2, and one on-chip lens 1821 is arranged for the four pixels 51. The arrangement of the first tap TA and the second tap TB is the same for the four pixels 51 sharing one on-chip lens 1821. A phase difference is detected by using the four pixels 51 in which the phase difference light-shielding film 1811 is not formed, out of the two pixel sets 1831 in which the formation positions of the phase difference light-shielding film 1811 are symmetric.
- F of FIG. 90 shows a sixth arrangement example of the phase difference light-shielding film 1811 and the on-chip lens 1821.
- The pixel set 1831 shown in F of FIG. 90 includes four pixels 51 arranged in 2 × 2, and one on-chip lens 1821 is arranged for the four pixels 51. The arrangement of the first tap TA and the second tap TB is opposite between the left and right pixels of the four pixels 51 sharing one on-chip lens 1821. A phase difference is detected by using the four pixels 51 in which the phase difference light-shielding film 1811 is not formed, out of the two pixel sets 1831 in which the formation positions of the phase difference light-shielding film 1811 are symmetric.
- In any of the arrangement examples, the phase difference light-shielding film 1811 shields from light the plurality of pixels on one half side under one on-chip lens 1821.
- As described above, in the second configuration example of the twentieth embodiment, the light receiving element 1 includes, as a part of the pixel array unit 20 in which the plurality of pixels 51 each including the first tap TA and the second tap TB are arranged, two pixel sets 1831 in which the formation positions of the phase difference light-shielding films 1811 are symmetric.
- phase difference information can be obtained as auxiliary information other than the distance measurement information obtained from the signal distribution ratio of the first tap TA and the second tap TB.
- the focal position can be determined based on the detected phase difference information, and the accuracy in the depth direction can be improved.
- The pixel 51 of the first configuration example of the twentieth embodiment and the pixel 51 of the second configuration example of the twentieth embodiment may be mixed in the pixel array unit 20, and in that case as well, phase difference information can be obtained. In the second configuration example, phase difference information can also be obtained by driving the pixels 51 on one half side among the plurality of pixels under one on-chip lens 1821 in Mode 2 to Mode 5.
- Alternatively, the phase difference information may be obtained by performing the driving in Mode 2 to Mode 5 in the pixels 51 having no phase difference light-shielding film 1801 or 1811. In this case as well, the focal position can be determined based on the detected phase difference information, and the accuracy in the depth direction can be improved.
- In this case, the irradiation light emitted from the light source is continuously emitted without interruption, whereby the phase difference information can be obtained.
- FIG. 91 is a sectional view of a pixel according to the twenty-first embodiment.
- In FIG. 91, the same reference numerals are given to the portions corresponding to those in the above-described twentieth embodiment, and the description of those portions will be omitted as appropriate.
- In the twenty-first embodiment, a polarizer filter 1841 is formed between the on-chip lens 62 and the substrate 61.
- The pixel 51 according to the twenty-first embodiment has the same configuration as, for example, the first embodiment shown in FIG. 2 or the fourteenth or fifteenth embodiment described with reference to FIG. 36, except that the polarizer filter 1841 is provided.
- The polarizer filter 1841, the on-chip lens 62, and the first tap TA and the second tap TB are arranged as in either A or B of FIG. 92.
- FIG. 92A is a plan view showing a first arrangement example of the polarizer filter 1841, the on-chip lens 62, and the first tap TA and the second tap TB in the twenty-first embodiment.
- The polarizer filter 1841 has any one of four polarization directions of 0 degrees, 45 degrees, 90 degrees, and 135 degrees, which differ from each other by 45 degrees, and is formed on predetermined pixels 51 in the pixel array unit 20 in units of 2 × 2 pixels.
- the on-chip lens 62 is provided for each pixel, and the positional relationship between the first tap TA and the second tap TB is the same for all pixels.
- FIG. 92B is a plan view showing a second arrangement example of the polarizer filter 1841, the on-chip lens 62, and the first tap TA and the second tap TB in the twenty-first embodiment.
- The polarizer filter 1841 has any one of four polarization directions of 0 degrees, 45 degrees, 90 degrees, and 135 degrees, which differ from each other by 45 degrees, and is formed on predetermined pixels 51 in the pixel array unit 20 in units of 2 × 2 pixels.
- the on-chip lens 62 is provided for each pixel, and the positional relationship between the first tap TA and the second tap TB is opposite for horizontally adjacent pixels. In other words, pixel columns in which the arrangement of the first tap TA and the arrangement of the second tap TB are opposite are alternately arranged in the horizontal direction.
- As described above, in the twenty-first embodiment, some of the plurality of pixels 51 of the pixel array unit 20 include the polarizer filter 1841 as illustrated in A and B of FIG. 92.
- Thereby, polarization degree information can be obtained, from which information on the surface state (unevenness) of the object surface as the subject and the relative distance difference can be acquired and the reflection direction can be calculated, so that distance measurement information on a transparent object itself such as glass and on an object beyond the transparent object can be obtained.
- Further, by setting a plurality of types of frequencies for the irradiation light emitted from the light source and making the polarization direction different for each frequency in accordance with the polarization directions of the polarizer filters 1841, parallel ranging at multiple frequencies becomes possible. For example, four types of irradiation light of 20 MHz, 40 MHz, 60 MHz, and 100 MHz are emitted simultaneously, with their polarization directions set to 0 degrees, 45 degrees, 90 degrees, and 135 degrees in accordance with the polarization directions of the polarizer filter 1841. This makes it possible to simultaneously receive the reflected light of the four types of irradiation light and acquire the distance measurement information.
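- As a sketch of how the four frequencies complement each other, the following assumes the standard indirect ToF relations d = c * phi / (4 * pi * f) and an unambiguous range of c / (2 * f); these relations are common in the literature and are not stated in the present disclosure.

```python
import math

# Hedged sketch: per-frequency phase-to-distance conversion for indirect ToF.
# Separating the four frequencies by polarization direction lets the four
# phases be measured in parallel; lower frequencies extend the unambiguous
# range, higher frequencies improve resolution.
C = 299_792_458.0  # speed of light in m/s

def phase_to_distance(phi_rad, freq_hz):
    return C * phi_rad / (4.0 * math.pi * freq_hz)

def unambiguous_range(freq_hz):
    return C / (2.0 * freq_hz)

for f_mhz in (20, 40, 60, 100):
    f = f_mhz * 1e6
    # A measured phase of pi corresponds to half of the unambiguous range.
    print(f_mhz, round(unambiguous_range(f), 3),
          round(phase_to_distance(math.pi, f), 3))
```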
- all the pixels 51 of the pixel array unit 20 of the light receiving element 1 may be the pixels 51 including the polarizer filter 1841.
- FIG. 93 is a sectional view of a pixel according to the twenty-second embodiment.
- the light receiving element 1 has at least one of the pixels 51 of A or B in FIG. 93 as a part of the pixels 51 of the pixel array section 20.
- In A and B of FIG. 93, parts corresponding to those in the twentieth embodiment described above are denoted by the same reference numerals, and descriptions of those parts will be omitted as appropriate.
- In the pixel 51 shown in A of FIG. 93, a color filter 1861 that transmits any one of R (Red), G (Green), and B (Blue) is formed between the on-chip lens 62 and the substrate 61. The pixel 51 shown in A of FIG. 93 has the same configuration as, for example, the first embodiment shown in FIG. 2 or the fourteenth or fifteenth embodiment described with reference to FIG. 36, except that the color filter 1861 is provided.
- In B of FIG. 93, a pixel 51 in which an IR cut filter 1871 that cuts infrared light and a color filter 1872 are laminated between the on-chip lens 62 and the substrate 61, and a pixel 51 in which the IR cut filter 1871 and the color filter 1872 are not formed are arranged adjacent to each other.
- In the substrate 61 of the pixel 51 in which the IR cut filter 1871 and the color filter 1872 are formed, a photodiode 1881 is formed instead of the first tap TA and the second tap TB. In addition, a pixel separation portion 1882 that separates the pixel from the adjacent pixel is formed in the substrate 61.
- the pixel separating portion 1882 is formed so as to cover the outer periphery of a metal material such as tungsten (W), aluminum (Al), copper (Cu), or a conductive material such as polysilicon with an insulating film. The movement of electrons between adjacent pixels is restricted by the pixel separating section 1882.
- the pixel 51 having the photodiode 1881 is separately driven through a different control wiring from the pixel 51 having the first tap TA and the second tap TB.
- Other configurations are the same as, for example, the first embodiment shown in FIG. 2 and the fourteenth embodiment described with reference to FIG. 36.
- FIG. 94A is a plan view showing the arrangement of the color filters 1861 in a four-pixel area in which the pixels 51 shown in A of FIG. 93 are arranged in a 2 × 2 array.
- The color filter 1861 is composed of four filters arranged in 2 × 2: a filter transmitting G, a filter transmitting R, a filter transmitting B, and a filter transmitting IR.
- FIG. 94B is a plan view taken along line A-A' of A of FIG. 93 for a four-pixel region in which the pixels 51 shown in A of FIG. 93 are arranged in a 2 × 2 array.
- the first tap TA and the second tap TB are arranged in pixel units.
- FIG. 94C is a plan view showing the arrangement of the color filters 1872 in a four-pixel area in which the pixels 51 shown in B of FIG. 93 are arranged in a 2 × 2 array.
- The color filter 1872 is composed of four elements arranged in 2 × 2: a filter transmitting G, a filter transmitting R, a filter transmitting B, and air (no filter). Note that a clear filter that transmits all wavelengths (R, G, B, and IR) may be provided instead of air.
- an IR cut filter 1871 is disposed above a filter that transmits G, a filter that transmits R, and a filter that transmits B.
- FIG. 94D is a plan view taken along line B-B' of B of FIG. 93 for a four-pixel area in which the pixels 51 shown in B of FIG. 93 are arranged in a 2 × 2 array.
- In the substrate 61 portion of the 2 × 2 four-pixel region, a photodiode 1881 is formed in each pixel 51 having a filter that transmits G, R, or B, and a first tap TA and a second tap TB are formed in the pixel 51 with air (no filter). A pixel separation portion 1882 is formed at the pixel boundary portions of the pixels 51 in which the photodiodes 1881 are formed.
- The pixel 51 shown in A of FIG. 93 has a combination of the color filter 1861 shown in A of FIG. 94 and the photoelectric conversion region shown in B of FIG. 94, and the pixel 51 shown in B of FIG. 93 has a combination of the color filter 1872 shown in C of FIG. 94 and the photoelectric conversion region shown in D of FIG. 94.
- However, the combinations of the color filters in A and C of FIG. 94 and the photoelectric conversion regions in B and D of FIG. 94 may be interchanged. That is, the pixel 51 in the twenty-second embodiment may have a configuration in which the color filter 1861 shown in A of FIG. 94 is combined with the photoelectric conversion region shown in D of FIG. 94, or a configuration in which the color filter 1872 shown in C of FIG. 94 is combined with the photoelectric conversion region shown in B of FIG. 94.
- Driving of the pixels 51 including the first tap TA and the second tap TB can be performed in the five driving modes, Mode 1 to Mode 5, described with reference to FIG. 88.
- the driving of the pixel 51 having the photodiode 1881 is performed in the same manner as the driving of the pixel of the normal image sensor, separately from the driving of the pixel 51 having the first tap TA and the second tap TB.
- As described above, the light receiving element 1 can include, as a part of the pixel array unit 20 in which a plurality of pixels 51 each including the first tap TA and the second tap TB are arranged, the pixel 51 shown in A of FIG. 93 having the color filter 1861 on the light incident surface side of the substrate 61 on which the first tap TA and the second tap TB are formed. Thereby, a signal can be acquired for each of the G, R, B, and IR wavelengths, and the object identification power can be improved.
- Further, the light receiving element 1 can include, as a part of the pixel array unit 20 in which a plurality of pixels 51 each including the first tap TA and the second tap TB are arranged, the pixel 51 shown in B of FIG. 93 having the photodiode 1881 in the substrate 61 instead of the first tap TA and the second tap TB and having the color filter 1872 on the light incident surface side. Thereby, the same G, R, and B signals as those of an image sensor can be obtained, and the object identification power can be improved.
- Both the pixel 51 including the first tap TA, the second tap TB, and the color filter 1861 illustrated in A of FIG. 93 and the pixel 51 including the photodiode 1881 and the color filter 1872 illustrated in B of FIG. 93 may be formed in the pixel array unit 20.
- Further, all the pixels 51 of the pixel array unit 20 of the light receiving element 1 may be composed of at least one kind of pixel among pixels formed by combining A and B of FIG. 94, pixels formed by combining C and D of FIG. 94, pixels formed by combining A and D of FIG. 94, and pixels formed by combining C and B of FIG. 94.
- FIG. 95 is a block diagram illustrating a configuration example of a ranging module that outputs ranging information using the light receiving element 1 of FIG.
- the distance measuring module 5000 includes a light emitting unit 5011, a light emission control unit 5012, and a light receiving unit 5013.
- the light emitting unit 5011 has a light source that emits light of a predetermined wavelength, and emits irradiation light whose brightness varies periodically to irradiate the object.
- For example, the light-emitting unit 5011 includes, as a light source, a light-emitting diode that emits infrared light having a wavelength in the range of 780 nm to 1000 nm, and generates irradiation light in synchronization with a rectangular-wave light emission control signal CLKp supplied from the light emission control unit 5012.
- the light emission control signal CLKp is not limited to a rectangular wave as long as it is a periodic signal.
- the light emission control signal CLKp may be a sine wave.
- the light emission control unit 5012 supplies the light emission control signal CLKp to the light emission unit 5011 and the light reception unit 5013, and controls the irradiation timing of the irradiation light.
- the frequency of the light emission control signal CLKp is, for example, 20 megahertz (MHz).
- the frequency of the light emission control signal CLKp is not limited to 20 megahertz (MHz), but may be 5 megahertz (MHz) or the like.
- the light receiving unit 5013 receives the reflected light reflected from the object, calculates distance information for each pixel according to the light reception result, and generates a depth image in which the distance to the object is represented by a gradation value for each pixel. Output.
- The light receiving element 1 described above is used as the light receiving unit 5013.
- For example, the light receiving element 1 serving as the light receiving unit 5013 calculates the distance information for each pixel from the signal intensities detected by the charge detection units (N+ semiconductor regions 71) of the signal extraction units 65-1 and 65-2 of each pixel 51 of the pixel array unit 20, based on the light emission control signal CLKp.
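- A common 4-phase indirect ToF formulation, shown here as an illustrative sketch (the present disclosure does not specify the exact computation), derives the distance from the charges detected at phases of 0, 90, 180, and 270 degrees relative to the light emission control signal:

```python
import math

# Hedged sketch of a standard 4-phase indirect ToF distance calculation.
# The charge values below are hypothetical; in the device, they would come
# from the charge detection units of the two signal extraction units.
C = 299_792_458.0  # speed of light in m/s

def itof_distance(q0, q90, q180, q270, freq_hz):
    phi = math.atan2(q90 - q270, q0 - q180)  # delay phase of the reflected light
    if phi < 0:
        phi += 2.0 * math.pi                 # keep the phase in [0, 2*pi)
    return C * phi / (4.0 * math.pi * freq_hz)

# Hypothetical charge counts for a target at 90 degrees of phase delay,
# measured with the 20 MHz emission control signal used as an example above:
print(round(itof_distance(500, 900, 500, 100, 20e6), 3))
```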
- the light receiving element 1 of FIG. 1 can be incorporated as the light receiving unit 5013 of the distance measuring module 5000 that obtains and outputs distance information to the subject by the indirect ToF method.
- By adopting, as the light receiving unit 5013 of the distance measuring module 5000, the light receiving element 1 of each of the above-described embodiments, specifically, a back-illuminated light receiving element with improved pixel sensitivity, the distance measurement characteristics of the distance measuring module 5000 can be improved.
- the technology (the present technology) according to the present disclosure can be applied to various products.
- For example, the technology according to the present disclosure may be realized as a device mounted on any type of moving object such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
- FIG. 96 is a block diagram illustrating a schematic configuration example of a vehicle control system that is an example of a mobile object control system to which the technology according to the present disclosure may be applied.
- Vehicle control system 12000 includes a plurality of electronic control units connected via communication network 12001.
- the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, an outside information detection unit 12030, an inside information detection unit 12040, and an integrated control unit 12050.
- As the functional configuration of the integrated control unit 12050, a microcomputer 12051, an audio/video output unit 12052, and a vehicle-mounted network I/F (interface) 12053 are illustrated.
- the drive system control unit 12010 controls the operation of the device related to the drive system of the vehicle according to various programs.
- For example, the drive system control unit 12010 functions as a control device for a driving force generating device for generating the driving force of the vehicle, such as an internal combustion engine or a driving motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, and a braking device for generating the braking force of the vehicle.
- the body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs.
- for example, the body system control unit 12020 functions as a control device for a keyless entry system, a smart key system, a power window device, or various lamps such as headlamps, back lamps, brake lamps, turn signals, and fog lamps.
- in this case, radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, can be input to the body system control unit 12020.
- the body system control unit 12020 receives the input of these radio waves or signals and controls the door lock device, the power window device, the lamps, and the like of the vehicle.
- Out-of-vehicle information detection unit 12030 detects information external to the vehicle on which vehicle control system 12000 is mounted.
- an imaging unit 12031 is connected to the outside-of-vehicle information detection unit 12030.
- the out-of-vehicle information detection unit 12030 causes the imaging unit 12031 to capture an image outside the vehicle, and receives the captured image.
- the out-of-vehicle information detection unit 12030 may perform an object detection process or a distance detection process of a person, a vehicle, an obstacle, a sign, a character on a road surface, or the like based on the received image.
- the imaging unit 12031 is an optical sensor that receives light and outputs an electric signal according to the amount of received light.
- the imaging unit 12031 can output the electric signal as an image, or can output it as distance measurement information.
- the light received by the imaging unit 12031 may be visible light or non-visible light such as infrared light.
- the in-vehicle information detection unit 12040 detects information in the vehicle.
- the in-vehicle information detection unit 12040 is connected to, for example, a driver status detection unit 12041 that detects the status of the driver.
- the driver state detection unit 12041 includes, for example, a camera that captures an image of the driver. Based on the detection information input from the driver state detection unit 12041, the in-vehicle information detection unit 12040 may calculate the driver's degree of fatigue or concentration, or may determine whether the driver is dozing off.
- the microcomputer 12051 can calculate a control target value of the driving force generating device, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the out-of-vehicle information detection unit 12030 or the in-vehicle information detection unit 12040, and can output a control command to the drive system control unit 12010.
- for example, the microcomputer 12051 can perform cooperative control for the purpose of realizing the functions of an ADAS (Advanced Driver Assistance System), including collision avoidance or impact mitigation of the vehicle, following travel based on the inter-vehicle distance, vehicle-speed-maintaining travel, vehicle collision warning, lane departure warning, and the like.
- further, the microcomputer 12051 can perform cooperative control for the purpose of automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generating device, the steering mechanism, the braking device, and the like based on information about the surroundings of the vehicle acquired by the out-of-vehicle information detection unit 12030 or the in-vehicle information detection unit 12040.
- the microcomputer 12051 can output a control command to the body system control unit 12020 based on information on the outside of the vehicle acquired by the outside information detection unit 12030.
- for example, the microcomputer 12051 can perform cooperative control for the purpose of preventing glare, such as controlling the headlamps in accordance with the position of a preceding vehicle or an oncoming vehicle detected by the out-of-vehicle information detection unit 12030 and switching from high beam to low beam.
- the audio/image output unit 12052 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying the occupants of the vehicle or the outside of the vehicle of information.
- an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are illustrated as output devices.
- the display unit 12062 may include, for example, at least one of an on-board display and a head-up display.
- FIG. 97 is a diagram illustrating an example of an installation position of the imaging unit 12031.
- the vehicle 12100 includes imaging units 12101, 12102, 12103, 12104, and 12105 as the imaging unit 12031.
- the imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, at positions such as a front nose, a side mirror, a rear bumper, a back door of the vehicle 12100, and an upper portion of a windshield in the vehicle interior.
- the imaging unit 12101 provided on the front nose and the imaging unit 12105 provided above the windshield in the passenger compartment mainly acquire an image in front of the vehicle 12100.
- the imaging units 12102 and 12103 provided in the side mirror mainly acquire images of the side of the vehicle 12100.
- the imaging unit 12104 provided in the rear bumper or the back door mainly acquires an image behind the vehicle 12100.
- the forward images acquired by the imaging units 12101 and 12105 are mainly used for detecting a preceding vehicle, a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, and the like.
- FIG. 97 shows an example of the imaging ranges of the imaging units 12101 to 12104.
- the imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose
- the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively
- the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door.
- for example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 viewed from above can be obtained.
- At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
- at least one of the imaging units 12101 to 12104 may be a stereo camera including a plurality of imaging elements or an imaging element having pixels for detecting a phase difference.
- based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can obtain the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (the relative speed with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, the closest three-dimensional object on the traveling path of the vehicle 12100 that is traveling at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100.
- furthermore, the microcomputer 12051 can set an inter-vehicle distance to be secured in advance in front of the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, it is possible to perform cooperative control for the purpose of automated driving or the like in which the vehicle travels autonomously without depending on the driver's operation.
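The preceding-vehicle extraction described above can be sketched as a simple selection rule. This is only an illustrative sketch, not the actual implementation of the microcomputer 12051; the object fields (`on_path`, `relative_heading_deg`, `speed_kmh`, `distance_m`) and the heading tolerance are assumptions of ours.

```python
def select_preceding_vehicle(objects, heading_tolerance_deg=10.0, min_speed_kmh=0.0):
    """Pick the closest on-path object that is moving in substantially the same
    direction as the own vehicle at a predetermined speed (e.g. 0 km/h or more)."""
    candidates = [
        o for o in objects
        if o["on_path"]
        and abs(o["relative_heading_deg"]) <= heading_tolerance_deg
        and o["speed_kmh"] >= min_speed_kmh
    ]
    # The preceding vehicle is the nearest candidate; None if there is none.
    return min(candidates, key=lambda o: o["distance_m"]) if candidates else None
```

A follow-distance controller would then compare `distance_m` of the returned object against the inter-vehicle distance to be secured and issue brake or acceleration commands accordingly.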
- based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data relating to three-dimensional objects into motorcycles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects, extract them, and use them for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult to see. The microcomputer 12051 then determines a collision risk indicating the degree of risk of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of collision, it can perform driving assistance for collision avoidance by outputting a warning to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
- At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared light.
- the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian exists in the images captured by the imaging units 12101 to 12104. Such pedestrian recognition is performed by, for example, a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on a series of feature points indicating the contour of an object to determine whether or not the object is a pedestrian.
- when the microcomputer 12051 determines that a pedestrian exists in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the audio/image output unit 12052 controls the display unit 12062 so that a rectangular contour line for emphasis is superimposed on the recognized pedestrian. The audio/image output unit 12052 may also control the display unit 12062 so as to display an icon or the like indicating the pedestrian at a desired position.
- the technology according to the present disclosure can be applied to the imaging unit 12031 among the configurations described above. Specifically, for example, by applying the light receiving element 1 illustrated in FIG. 1 to the imaging unit 12031, characteristics such as sensitivity can be improved.
- furthermore, the charge detection unit for detecting the signal carriers may be constituted by a P+ semiconductor region, the voltage application unit for generating an electric field in the substrate may be constituted by an N+ semiconductor region, and holes may be detected as the signal carriers.
- the distance measurement characteristics can be improved by configuring the CAPD sensor as a back-illuminated light receiving element.
- in the above embodiments, a driving method has been described in which a voltage is applied directly to the P+ semiconductor regions 73 formed in the substrate 61 and the photoelectrically converted charges are moved by the generated electric field. However, the present technology is not limited to this driving method and can also be applied to other driving methods.
- for example, a driving method may be used in which a predetermined voltage is applied to the gates of first and second transfer transistors formed in the substrate 61, so that the photoelectrically converted charges are distributed to and accumulated in a first floating diffusion region via the first transfer transistor, or in a second floating diffusion region via the second transfer transistor.
- in that case, the first and second transfer transistors formed in the substrate 61 function as first and second voltage application units to whose gates a predetermined voltage is applied, and the first and second floating diffusion regions formed in the substrate 61 function as first and second charge detection units that detect the charges generated by photoelectric conversion.
- in other words, in the driving method in which a voltage is applied directly to the P+ semiconductor regions 73 formed in the substrate 61 and the photoelectrically converted charges are moved by the generated electric field, the two P+ semiconductor regions 73 serving as the first and second voltage application units are control nodes to which a predetermined voltage is applied, and the two N+ semiconductor regions 71 serving as the first and second charge detection units are detection nodes that detect charges.
- in the driving method in which a predetermined voltage is applied to the gates of the first and second transfer transistors formed in the substrate 61 and the photoelectrically converted charges are distributed to and accumulated in the first floating diffusion region or the second floating diffusion region, the gates of the first and second transfer transistors are control nodes to which a predetermined voltage is applied, and the first and second floating diffusion regions formed in the substrate 61 are detection nodes that detect charges.
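The charge-distribution principle behind both driving methods can be illustrated with a toy numerical model: reflected light, modulated as a square wave and delayed by the time of flight, is integrated alternately by two taps driven in antiphase. This is a sketch under idealized assumptions (50% duty cycle, lossless transfer, unit photocurrent), not the circuit behavior of the actual device.

```python
def integrate_two_taps(n_periods, period_ns, delay_ns):
    """Toy two-tap CAPD/gate-transfer model. Tap A collects charge while the
    first half of each modulation period is active, tap B during the second
    half; the returning light is a 50%-duty square wave delayed by delay_ns.
    Returns the accumulated charges (qa, qb) in arbitrary units."""
    half = period_ns / 2.0
    steps = 1000
    dt = period_ns / steps
    qa = qb = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt  # sample mid-step to avoid boundary ambiguity
        light_on = ((t - delay_ns) % period_ns) < half  # delayed square wave
        if light_on:
            if t < half:
                qa += dt  # MIX0 / first transfer gate active
            else:
                qb += dt  # MIX1 / second transfer gate active
    return qa * n_periods, qb * n_periods
```

For delays up to half a period, the fraction of charge arriving at tap B grows linearly with the delay, which is exactly the distribution ratio that the detection nodes measure.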
- the present technology can also have the following configurations.
- (A1) A light receiving element including: an on-chip lens; a wiring layer; and a semiconductor layer disposed between the on-chip lens and the wiring layer, in which the semiconductor layer includes a first tap having a first voltage application unit and a first charge detection unit disposed around it, and a second tap having a second voltage application unit and a second charge detection unit disposed around it, and a phase difference is detected using a signal detected by the first tap and a signal detected by the second tap.
- (A2) The light receiving element according to (A1), in which the wiring layer has at least one layer including a reflection member, and the reflection member is provided so as to overlap the first charge detection unit or the second charge detection unit in a plan view.
- (A3) The light receiving element according to (A1) or (A2), in which the wiring layer has at least one layer including a light shielding member, and the light shielding member is provided so as to overlap the first charge detection unit or the second charge detection unit in a plan view.
- (A4) The light receiving element according to any one of (A1) to (A3), in which the on-chip lens is provided for each pixel.
- (A5) The light receiving element according to (A4), further including a phase difference light shielding film between the on-chip lens and the semiconductor layer that shields one half of the pixel region from light.
- (A6) The light receiving element according to any one of (A1) to (A5), in which the on-chip lens is provided in units of a plurality of pixels.
- (A7) The light receiving element according to (A6), further including a phase difference light shielding film between the on-chip lens and the semiconductor layer that shields one half of the plurality of pixels below the single on-chip lens from light.
- (A8) The light receiving element according to any one of (A1) to (A7), further including a driving unit that supplies a positive voltage to both the first voltage application unit and the second voltage application unit.
- (A9) The light receiving element according to (A8), in which the positive voltage supplied to the first tap and the second tap is configured such that a potential difference is provided that increases toward the outside of a pixel array unit.
- (A10) The light receiving element according to any one of (A1) to (A9), in which the first and second voltage application units are constituted by first and second P-type semiconductor regions formed in the semiconductor layer, respectively.
- (A11) The light receiving element according to any one of (A1) to (A9), in which the first and second voltage application units are constituted by first and second transfer transistors formed in the semiconductor layer, respectively.
- (B1) A light receiving element including: an on-chip lens; a wiring layer; a semiconductor layer disposed between the on-chip lens and the wiring layer; and a polarizer disposed between the on-chip lens and the semiconductor layer, in which the semiconductor layer includes a first tap having a first voltage application unit and a first charge detection unit disposed around it, and a second tap having a second voltage application unit and a second charge detection unit disposed around it.
- (B2) The light receiving element according to (B1), in which the wiring layer has at least one layer including a reflection member, and the reflection member is provided so as to overlap the first charge detection unit or the second charge detection unit in a plan view.
- (B3) The light receiving element according to (B1) or (B2), in which the wiring layer has at least one layer including a light shielding member, and the light shielding member is provided so as to overlap the first charge detection unit or the second charge detection unit in a plan view.
- (B4) The light receiving element according to any one of (B1) to (B3), further including a driving unit that supplies a positive voltage to both the first voltage application unit and the second voltage application unit.
- (B5) The light receiving element according to (B4), in which the positive voltage supplied to the first tap and the second tap is configured such that a potential difference is provided that increases toward the outside of a pixel array unit.
- (B6) The light receiving element according to any one of (B1) to (B5), including at least a first pixel having the polarizer with a first polarization degree and a second pixel having the polarizer with a second polarization degree.
- (B7) The light receiving element according to (B6), in which the first pixel and the second pixel receive light of different frequencies.
- (B8) The light receiving element according to any one of (B1) to (B7), in which the first and second voltage application units are constituted by first and second P-type semiconductor regions formed in the semiconductor layer, respectively.
- (B9) The light receiving element according to any one of (B1) to (B7), in which the first and second voltage application units are constituted by first and second transfer transistors formed in the semiconductor layer, respectively.
- (C1) A light receiving element including: an on-chip lens; a wiring layer; a semiconductor layer disposed between the on-chip lens and the wiring layer; and a color filter disposed between the on-chip lens and the semiconductor layer, in which the semiconductor layer includes a first tap having a first voltage application unit and a first charge detection unit disposed around it, and a second tap having a second voltage application unit and a second charge detection unit disposed around it.
- (C2) The light receiving element according to (C1), in which the wiring layer has at least one layer including a reflection member, and the reflection member is provided so as to overlap the first charge detection unit or the second charge detection unit in a plan view.
- (C3) The light receiving element according to (C1) or (C2), in which the wiring layer has at least one layer including a light shielding member, and the light shielding member is provided so as to overlap the first charge detection unit or the second charge detection unit in a plan view.
- (C4) The light receiving element according to any one of (C1) to (C3), in which the pixel having the color filter further includes an IR cut filter disposed between the on-chip lens and the semiconductor layer.
- (C5) The light receiving element according to any one of (C1) to (C4), in which the pixel having the color filter includes a photodiode in the semiconductor layer.
- (C6) The light receiving element according to (C5), in which the pixel including the photodiode further includes a pixel separation unit that separates it from adjacent pixels at a pixel boundary portion of the semiconductor layer.
- (C7) The light receiving element according to any one of (C1) to (C6), further including a driving unit that supplies a positive voltage to both the first voltage application unit and the second voltage application unit.
- (C8) The light receiving element according to (C7), in which the positive voltage supplied to the first tap and the second tap is configured such that a potential difference is provided that increases toward the outside of a pixel array unit.
- (C9) The light receiving element according to any one of (C1) to (C8), in which the first and second voltage application units are constituted by first and second P-type semiconductor regions formed in the semiconductor layer, respectively.
- (C10) The light receiving element according to any one of (C1) to (C8), in which the first and second voltage application units are constituted by first and second transfer transistors formed in the semiconductor layer, respectively.
- (D) A distance measuring module including: the light receiving element according to any one of (A1), (B1), and (C1); a light source that emits irradiation light whose brightness varies periodically; and a light emission control unit that controls the irradiation timing of the irradiation light.
Description
A light receiving element includes: an on-chip lens; a wiring layer; and a semiconductor layer disposed between the on-chip lens and the wiring layer. The semiconductor layer includes a first tap having a first voltage application unit and a first charge detection unit disposed around it, and a second tap having a second voltage application unit and a second charge detection unit disposed around it, and is configured such that a phase difference is detected using signals detected by the first tap and the second tap.
A light receiving element includes: an on-chip lens; a wiring layer; a semiconductor layer disposed between the on-chip lens and the wiring layer; and a polarizer disposed between the on-chip lens and the semiconductor layer. The semiconductor layer includes a first tap having a first voltage application unit and a first charge detection unit disposed around it, and a second tap having a second voltage application unit and a second charge detection unit disposed around it.
A light receiving element includes: an on-chip lens; a wiring layer; a semiconductor layer disposed between the on-chip lens and the wiring layer; and a color filter disposed between the on-chip lens and the semiconductor layer. The semiconductor layer includes a first tap having a first voltage application unit and a first charge detection unit disposed around it, and a second tap having a second voltage application unit and a second charge detection unit disposed around it.
A distance measuring module includes: the light receiving element according to any one of the first to third aspects above; a light source that emits irradiation light whose brightness varies periodically; and a light emission control unit that controls the irradiation timing of the irradiation light.
<Configuration example of the light receiving element>
The present technology improves characteristics such as pixel sensitivity by configuring a CAPD sensor as a back-illuminated sensor.
Next, a configuration example of the pixels provided in the pixel array unit 20 will be described. The pixels provided in the pixel array unit 20 are configured, for example, as shown in FIG. 2.
Cmod = {|I0 − I1| / (I0 + I1)} × 100 … (1)
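Equation (1) — the contrast (demodulation contrast) between the charges I0 and I1 detected by the two taps — can be computed directly. A minimal sketch; the function name is ours, not from the specification:

```python
def cmod(i0, i1):
    """Demodulation contrast of a two-tap pixel, as a percentage:
    Cmod = |I0 - I1| / (I0 + I1) * 100  ... equation (1)."""
    return abs(i0 - i1) / (i0 + i1) * 100.0
```

A Cmod of 100% means all photogenerated charge went to one tap; 0% means the two taps collected equal charge and no demodulation took place.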
<Pixel configuration example>
In the above description, the signal extraction portions 65 in the substrate 61 have been described taking as an example the case where the N+ semiconductor region 71 and the P+ semiconductor region 73 are rectangular regions as shown in FIG. 3. However, the N+ semiconductor region 71 and the P+ semiconductor region 73 may have any shape when viewed from the direction perpendicular to the substrate 61.
<Pixel configuration example>
FIG. 11 is a plan view showing a modification of the planar shape of the signal extraction portion 65 in the pixel 51.
<Pixel configuration example>
Furthermore, although a configuration in which the P+ semiconductor region 73 is surrounded by the N+ semiconductor region 71 within the signal extraction portion 65 has been described above as an example, the N+ semiconductor region may instead be surrounded by the P+ semiconductor region.
<Pixel configuration example>
As in the example shown in FIG. 9, also in the case where the N+ semiconductor region 201 is surrounded by the P+ semiconductor region 202, the N+ semiconductor region 201 and the P+ semiconductor region 202 may have any shape.
<Pixel configuration example>
Furthermore, the N+ semiconductor region and the P+ semiconductor region formed in the signal extraction portion 65 may have a line shape (rectangular shape).
<Pixel configuration example>
Furthermore, in the example shown in FIG. 14, a structure in which the P+ semiconductor regions 231 and 233 are sandwiched between the N+ semiconductor regions 232 and 234 has been described as an example, but conversely, the N+ semiconductor regions may be sandwiched between the P+ semiconductor regions.
<Pixel configuration example>
Furthermore, although an example in which two signal extraction portions 65 are provided in each pixel constituting the pixel array unit 20 has been described above, the number of signal extraction portions provided in a pixel may be one, or may be three or more.
<Pixel configuration example>
As described above, three or more signal extraction portions (taps) may be provided in each pixel.
<Pixel configuration example>
Furthermore, a signal extraction portion (tap) may be shared between mutually adjacent pixels of the pixel array unit 20.
<Pixel configuration example>
Furthermore, the on-chip lens and the inter-pixel light shielding portion provided in each pixel, such as the pixel 51 of the pixel array unit 20, may be omitted.
<Pixel configuration example>
The pixel 51 may also be configured, for example, as shown in FIG. 20. In FIG. 20, parts corresponding to those in FIG. 2 are denoted by the same reference numerals, and their description will be omitted as appropriate.
<Pixel configuration example>
In addition, as shown in FIG. 21 for example, the thickness of the on-chip lens in the optical axis direction may also be optimized. In FIG. 21, parts corresponding to those in FIG. 2 are denoted by the same reference numerals, and their description will be omitted as appropriate.
<Pixel configuration example>
Furthermore, a separation region for improving the separation characteristics between adjacent pixels and suppressing crosstalk may be provided between the pixels formed in the pixel array unit 20.
<Pixel configuration example>
Furthermore, when an embedded separation region is formed in the pixel 51, separation regions 471-1 and 471-2 penetrating the entire substrate 61 may be provided, as shown in FIG. 23 for example. In FIG. 23, parts corresponding to those in FIG. 2 are denoted by the same reference numerals, and their description will be omitted as appropriate.
<Pixel configuration example>
Furthermore, the thickness of the substrate in which the signal extraction portions 65 are formed can be determined according to the various characteristics of the pixel and the like.
<Pixel configuration example>
Furthermore, although an example in which the substrate constituting the pixel 51 is a P-type semiconductor substrate has been described above, the substrate may be an N-type semiconductor substrate, as shown in FIG. 25 for example. In FIG. 25, parts corresponding to those in FIG. 2 are denoted by the same reference numerals, and their description will be omitted as appropriate.
<Pixel configuration example>
Furthermore, as in the example described with reference to FIG. 24, the thickness of the N-type semiconductor substrate can also be determined according to the various characteristics of the pixel and the like.
<Pixel configuration example>
In addition, for example, by applying a bias to the light incident surface side of the substrate 61, the electric field in the direction perpendicular to the surface of the substrate 61 (hereinafter also referred to as the Z direction) within the substrate 61 may be strengthened.
<Pixel configuration example>
Furthermore, in order to improve the sensitivity of the pixel 51 to infrared rays, a large-area reflection member may be provided on the surface of the substrate 61 opposite to the light incident surface.
<Pixel configuration example>
Furthermore, in order to suppress erroneous light detection in neighboring pixels, a large-area light shielding member may be provided on the surface of the substrate 61 opposite to the light incident surface.
<Pixel configuration example>
Furthermore, instead of the oxide film 64 in the substrate 61 of the pixel 51, a P-well region made of a P-type semiconductor region may be provided.
<Pixel configuration example>
In addition to the oxide film 64 in the substrate 61 of the pixel 51, a P-well region made of a P-type semiconductor region may further be provided.
FIG. 31 shows an equivalent circuit of the pixel 51.
FIG. 32 shows another equivalent circuit of the pixel 51.
Next, with reference to FIGS. 33 to 35, the arrangement of the voltage supply lines for applying the predetermined voltage MIX0 or MIX1 to the P+ semiconductor regions 73-1 and 73-2, which are the voltage application units of the signal extraction portions 65 of each pixel 51, will be described. The voltage supply line 741 shown in FIGS. 33 and 34 corresponds to the voltage supply line 30 shown in FIG. 1.
In the cross-sectional configurations of the pixel shown in FIG. 2 and elsewhere, illustration of the multilayer wiring layer formed on the front surface side of the substrate 61, opposite to the light incident surface, was omitted.
FIG. 38 is a cross-sectional view showing the pixel structure of the ninth embodiment shown in FIG. 22, for a plurality of pixels, without omitting the multilayer wiring layer.
FIG. 39 is a cross-sectional view showing the pixel structure of Modification 1 of the ninth embodiment shown in FIG. 23, for a plurality of pixels, without omitting the multilayer wiring layer.
FIG. 40 is a cross-sectional view showing the pixel structure of the sixteenth embodiment shown in FIG. 29, for a plurality of pixels, without omitting the multilayer wiring layer.
FIG. 41 is a cross-sectional view showing the pixel structure of the tenth embodiment shown in FIG. 24, for a plurality of pixels, without omitting the multilayer wiring layer.
Next, with reference to FIGS. 42 and 43, planar arrangement examples of the five metal films M1 to M5 of the multilayer wiring layer 811 shown in FIGS. 36 to 41 will be described.
FIG. 44 is a plan view in which the first metal film M1 shown in A of FIG. 42 is superimposed on the polysilicon layer formed thereon, which forms the gate electrodes of the pixel transistors Tr and the like.
Next, modifications of the reflection member 631 formed in the metal film M1 will be described with reference to FIGS. 45 and 46.
The light receiving element 1 of FIG. 1 can adopt any of the substrate configurations shown in A to C of FIG. 47.
In the pixel array unit 20, pixel transistors Tr such as the reset transistor 723, the amplification transistor 724, and the selection transistor 725 are arranged at the boundary portions of the pixels 51 arranged in the horizontal direction, as shown in the cross-sectional view of FIG. 37.
Next, charge discharge around the effective pixel region will be further described.
Next, with reference to FIG. 55, the current flow in the case where pixel transistors are arranged in the substrate 61 having the photoelectric conversion region will be described.
Therefore, as shown in FIG. 56, the light receiving element 1 can adopt a stacked structure in which two substrates are laminated and all the pixel transistors are arranged on a substrate different from the substrate having the photoelectric conversion region.
Next, a nineteenth embodiment will be described.
A of FIG. 62 is a plan view of a pixel according to a first configuration example of the nineteenth embodiment, and B of FIG. 62 is a cross-sectional view of the pixel according to the first configuration example of the nineteenth embodiment.
A of FIG. 63 is a plan view of a pixel according to a second configuration example of the nineteenth embodiment, and B of FIG. 63 is a cross-sectional view of the pixel according to the second configuration example of the nineteenth embodiment.
In the first and second configuration examples of the nineteenth embodiment described above, the planar shapes of the electrode portion 1311 serving as the voltage application unit and the N+ semiconductor region 1312 serving as the charge detection unit were circular.
A of FIG. 66 is a plan view of a pixel according to a third configuration example of the nineteenth embodiment, and B of FIG. 66 is a cross-sectional view of the pixel according to the third configuration example of the nineteenth embodiment.
In the third configuration example of the nineteenth embodiment described above, the planar shapes of the electrode portion 1311 serving as the voltage application unit and the N+ semiconductor region 1312 serving as the charge detection unit were circular.
In the pixel circuits of FIGS. 31 and 32 and the example of the metal film M3 of FIG. 42 described above, a configuration in which two vertical signal lines 29 are arranged for one pixel column, corresponding to the two signal extraction portions 65 (the two taps TA and TB), has been described.
Next, with reference to FIG. 73, a modification of the pixel transistor arrangement example shown in B of FIG. 44 will be described.
FIG. 77 is a plan view showing a wiring example of the VSS wiring in the multilayer wiring layer 811.
Next, a first method of pupil correction in the light receiving element 1 will be described.
Next, a second method of pupil correction in the light receiving element 1 will be described.
In the following twentieth to twenty-second embodiments, configuration examples of the light receiving element 1 capable of acquiring auxiliary information other than the distance measurement information obtained from the distribution ratio of the signals of the first tap TA and the second tap TB will be described.
A of FIG. 86 is a cross-sectional view of a pixel according to a first configuration example of the twentieth embodiment, and B and C of FIG. 86 are plan views of the pixel according to the first configuration example of the twentieth embodiment.
FIG. 89 shows a cross-sectional view of a pixel according to a second configuration example of the twentieth embodiment.
In the first and second configuration examples of the twentieth embodiment described above, configurations in which the phase difference light shielding film 1801 or 1811 is formed between the on-chip lens 62 and the substrate 61 have been described.
Next, a configuration example of the light receiving element 1 capable of acquiring polarization degree information as auxiliary information other than the distance measurement information obtained from the distribution ratio of the signals of the first tap TA and the second tap TB will be described.
Next, a configuration example of the light receiving element 1 capable of acquiring sensitivity information for each of the RGB wavelengths as auxiliary information other than the distance measurement information obtained from the distribution ratio of the signals of the first tap TA and the second tap TB will be described.
FIG. 95 is a block diagram showing a configuration example of a distance measuring module that outputs distance measurement information using the light receiving element 1 of FIG. 1.
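The distance measuring module of FIG. 95 obtains distance from the phase shift between the periodically modulated irradiation light and the received reflected light. As an illustration, the standard four-phase indirect time-of-flight calculation can be sketched as follows; the formula is the common textbook one and is not quoted from this specification:

```python
import math

C_LIGHT = 299_792_458.0  # speed of light, m/s

def itof_depth(a0, a90, a180, a270, f_mod_hz):
    """Depth from four tap measurements taken at 0/90/180/270 degree
    modulation phases: phi = atan2(A90 - A270, A0 - A180),
    d = c * phi / (4 * pi * f_mod)."""
    phi = math.atan2(a90 - a270, a0 - a180) % (2.0 * math.pi)
    return C_LIGHT * phi / (4.0 * math.pi * f_mod_hz)
```

At a 20 MHz modulation frequency, the unambiguous range of this calculation is c / (2·f) ≈ 7.5 m; longer distances alias back into that range.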
The technology according to the present disclosure (the present technology) can be applied to various products. For example, the technology according to the present disclosure may be realized as a device mounted on any type of moving object, such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
Claims (31)
- 1. A light receiving element comprising: an on-chip lens; a wiring layer; and a semiconductor layer disposed between the on-chip lens and the wiring layer, wherein the semiconductor layer comprises a first tap having a first voltage application unit and a first charge detection unit disposed around it, and a second tap having a second voltage application unit and a second charge detection unit disposed around it, and the light receiving element is configured such that a phase difference is detected using a signal detected by the first tap and a signal detected by the second tap.
- 2. The light receiving element according to claim 1, wherein the wiring layer has at least one layer including a reflection member, and the reflection member is provided so as to overlap the first charge detection unit or the second charge detection unit in a plan view.
- 3. The light receiving element according to claim 1, wherein the wiring layer has at least one layer including a light shielding member, and the light shielding member is provided so as to overlap the first charge detection unit or the second charge detection unit in a plan view.
- 4. The light receiving element according to claim 1, wherein the on-chip lens is provided for each pixel.
- 5. The light receiving element according to claim 4, further comprising a phase difference light shielding film between the on-chip lens and the semiconductor layer that shields one half of the pixel region from light.
- 6. The light receiving element according to claim 1, wherein the on-chip lens is provided in units of a plurality of pixels.
- 7. The light receiving element according to claim 6, further comprising a phase difference light shielding film between the on-chip lens and the semiconductor layer that shields one half of the plurality of pixels below the single on-chip lens from light.
- 8. The light receiving element according to claim 1, further comprising a driving unit that supplies a positive voltage to both the first voltage application unit and the second voltage application unit.
- 9. The light receiving element according to claim 8, wherein the positive voltage supplied to the first tap and the second tap is configured such that a potential difference is provided that increases toward the outside of a pixel array unit.
- 10. The light receiving element according to claim 1, wherein the first and second voltage application units are constituted by first and second P-type semiconductor regions formed in the semiconductor layer, respectively.
- 11. The light receiving element according to claim 1, wherein the first and second voltage application units are constituted by first and second transfer transistors formed in the semiconductor layer, respectively.
- 12. A light receiving element comprising: an on-chip lens; a wiring layer; a semiconductor layer disposed between the on-chip lens and the wiring layer; and a polarizer disposed between the on-chip lens and the semiconductor layer, wherein the semiconductor layer comprises a first tap having a first voltage application unit and a first charge detection unit disposed around it, and a second tap having a second voltage application unit and a second charge detection unit disposed around it.
- 13. The light receiving element according to claim 12, wherein the wiring layer has at least one layer including a reflection member, and the reflection member is provided so as to overlap the first charge detection unit or the second charge detection unit in a plan view.
- 14. The light receiving element according to claim 12, wherein the wiring layer has at least one layer including a light shielding member, and the light shielding member is provided so as to overlap the first charge detection unit or the second charge detection unit in a plan view.
- 15. The light receiving element according to claim 12, further comprising a driving unit that supplies a positive voltage to both the first voltage application unit and the second voltage application unit.
- 16. The light receiving element according to claim 15, wherein the positive voltage supplied to the first tap and the second tap is configured such that a potential difference is provided that increases toward the outside of a pixel array unit.
- 17. The light receiving element according to claim 12, comprising at least a first pixel having the polarizer with a first polarization degree and a second pixel having the polarizer with a second polarization degree.
- 18. The light receiving element according to claim 17, wherein the first pixel and the second pixel receive light of different frequencies.
- 19. The light receiving element according to claim 12, wherein the first and second voltage application units are constituted by first and second P-type semiconductor regions formed in the semiconductor layer, respectively.
- 20. The light receiving element according to claim 12, wherein the first and second voltage application units are constituted by first and second transfer transistors formed in the semiconductor layer, respectively.
- 21. A light receiving element comprising: an on-chip lens; a wiring layer; a semiconductor layer disposed between the on-chip lens and the wiring layer; and a color filter disposed between the on-chip lens and the semiconductor layer, wherein the semiconductor layer comprises a first tap having a first voltage application unit and a first charge detection unit disposed around it, and a second tap having a second voltage application unit and a second charge detection unit disposed around it.
- 22. The light receiving element according to claim 21, wherein the wiring layer has at least one layer including a reflection member, and the reflection member is provided so as to overlap the first charge detection unit or the second charge detection unit in a plan view.
- 23. The light receiving element according to claim 21, wherein the wiring layer has at least one layer including a light shielding member, and the light shielding member is provided so as to overlap the first charge detection unit or the second charge detection unit in a plan view.
- 24. The light receiving element according to claim 21, wherein the pixel having the color filter further comprises an IR cut filter disposed between the on-chip lens and the semiconductor layer.
- 25. The light receiving element according to claim 21, wherein the pixel having the color filter comprises a photodiode in the semiconductor layer.
- 26. The light receiving element according to claim 25, wherein the pixel comprising the photodiode further comprises a pixel separation unit that separates it from adjacent pixels at a pixel boundary portion of the semiconductor layer.
- 27. The light receiving element according to claim 21, further comprising a driving unit that supplies a positive voltage to both the first voltage application unit and the second voltage application unit.
- 28. The light receiving element according to claim 27, wherein the positive voltage supplied to the first tap and the second tap is configured such that a potential difference is provided that increases toward the outside of a pixel array unit.
- 29. The light receiving element according to claim 21, wherein the first and second voltage application units are constituted by first and second P-type semiconductor regions formed in the semiconductor layer, respectively.
- 30. The light receiving element according to claim 21, wherein the first and second voltage application units are constituted by first and second transfer transistors formed in the semiconductor layer, respectively.
- 31. A distance measuring module comprising: the light receiving element according to any one of claims 1, 12, and 21; a light source that emits irradiation light whose brightness varies periodically; and a light emission control unit that controls an irradiation timing of the irradiation light.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020207002211A KR102613094B1 (ko) | 2018-07-18 | 2019-07-04 | 수광 소자 및 거리측정 모듈 |
JP2020500763A JP7225195B2 (ja) | 2018-07-18 | 2019-07-04 | 受光素子および測距モジュール |
US16/633,713 US11652175B2 (en) | 2018-07-18 | 2019-07-04 | Light reception device and distance measurement module |
EP19836782.3A EP3644366A4 (en) | 2018-07-18 | 2019-07-04 | LIGHT RECEIVING ELEMENT AND DISTANCE MEASURING MODULE |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018135399 | 2018-07-18 | ||
JP2018-135399 | 2018-07-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020017337A1 true WO2020017337A1 (ja) | 2020-01-23 |
Family
ID=69164029
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2019/026572 WO2020017337A1 (ja) | 2018-07-18 | 2019-07-04 | 受光素子および測距モジュール |
Country Status (7)
Country | Link |
---|---|
US (1) | US11652175B2 (ja) |
EP (1) | EP3644366A4 (ja) |
JP (1) | JP7225195B2 (ja) |
KR (1) | KR102613094B1 (ja) |
CN (2) | CN210325801U (ja) |
TW (1) | TWI816827B (ja) |
WO (1) | WO2020017337A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022130888A1 (ja) * | 2020-12-16 | 2022-06-23 | ソニーセミコンダクタソリューションズ株式会社 | 撮像装置 |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2020161779A (ja) * | 2019-03-28 | 2020-10-01 | ソニーセミコンダクタソリューションズ株式会社 | 受光装置および測距モジュール |
TWI756006B (zh) * | 2021-01-04 | 2022-02-21 | 力晶積成電子製造股份有限公司 | 影像感測器及其製造方法 |
JP2022119377A (ja) * | 2021-02-04 | 2022-08-17 | キヤノン株式会社 | 光電変換装置、光電変換システム、移動体、半導体基板 |
KR20230055605A (ko) * | 2021-10-19 | 2023-04-26 | 에스케이하이닉스 주식회사 | 이미지 센싱 장치 |
CN117812450A (zh) * | 2022-09-30 | 2024-04-02 | 晋城三赢精密电子有限公司 | 图像采集装置及图像采集方法 |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005235893A (ja) * | 2004-02-18 | 2005-09-02 | National Univ Corp Shizuoka Univ | Optical time-of-flight distance sensor |
JP2011086904A (ja) | 2009-10-14 | 2011-04-28 | Optrima Nv | Photonic mixer, use thereof, and system |
JP2015026708A (ja) * | 2013-07-26 | 2015-02-05 | Toshiba Corporation | Solid-state imaging device and method of manufacturing solid-state imaging device |
JP2015060855A (ja) * | 2013-09-17 | 2015-03-30 | Sony Corporation | Solid-state imaging device, manufacturing method thereof, and electronic apparatus |
JP2015076476A (ja) * | 2013-10-08 | 2015-04-20 | Sony Corporation | Solid-state imaging device, manufacturing method thereof, and electronic apparatus |
US20170040362A1 (en) * | 2015-08-04 | 2017-02-09 | Artilux Corporation | Germanium-silicon light sensing apparatus |
WO2017130825A1 (ja) * | 2016-01-29 | 2017-08-03 | FUJIFILM Corporation | Composition, film, near-infrared cut filter, laminate, pattern forming method, solid-state imaging element, image display device, infrared sensor, and color filter |
JP2017522727A (ja) * | 2014-06-27 | 2017-08-10 | Softkinetic Sensors NV | Radiation detector device assisted by majority current |
WO2017169122A1 (ja) * | 2016-03-30 | 2017-10-05 | Sony Corporation | Photoelectric conversion element and photoelectric conversion device |
JP2018063378A (ja) * | 2016-10-14 | 2018-04-19 | Sony Semiconductor Solutions Corporation | Optical device, optical sensor, and imaging apparatus |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP6103301B2 (ja) * | 2013-07-03 | 2017-03-29 | Sony Corporation | Solid-state imaging device, manufacturing method thereof, and electronic apparatus |
KR20160030773A (ko) | 2014-09-11 | 2016-03-21 | SK hynix Inc. | Semiconductor device and method of manufacturing semiconductor device |
US9906706B2 (en) | 2015-12-23 | 2018-02-27 | Visera Technologies Company Limited | Image sensor and imaging device |
CN109155323A (zh) * | 2016-03-31 | 2019-01-04 | Nikon Corporation | Imaging element and imaging apparatus |
US20210204840A1 (en) | 2016-04-25 | 2021-07-08 | Eccrine Systems, Inc. | Eab biosensors for detecting sweat analytes |
KR102632100B1 (ko) | 2016-09-09 | 2024-02-01 | DB HiTek Co., Ltd. | Photodetection device |
CN111830527B (zh) * | 2017-01-19 | 2024-06-18 | Sony Semiconductor Solutions Corporation | Light receiving element, imaging element, and imaging device |
US11573320B2 (en) * | 2019-02-20 | 2023-02-07 | Sony Semiconductor Solutions Corporation | Light receiving element and ranging module |
2019
- 2019-06-28 CN CN201921002226.XU patent/CN210325801U/zh active Active
- 2019-06-28 CN CN201910574977.7A patent/CN110739321A/zh active Pending
- 2019-07-04 JP JP2020500763A patent/JP7225195B2/ja active Active
- 2019-07-04 TW TW108123611A patent/TWI816827B/zh active
- 2019-07-04 EP EP19836782.3A patent/EP3644366A4/en active Pending
- 2019-07-04 KR KR1020207002211A patent/KR102613094B1/ko active IP Right Grant
- 2019-07-04 US US16/633,713 patent/US11652175B2/en active Active
- 2019-07-04 WO PCT/JP2019/026572 patent/WO2020017337A1/ja unknown
Non-Patent Citations (1)
Title |
---|
See also references of EP3644366A4 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022130888A1 (ja) * | 2020-12-16 | 2022-06-23 | Sony Semiconductor Solutions Corporation | Imaging apparatus |
US11769776B2 (en) | 2020-12-16 | 2023-09-26 | Sony Semiconductor Solutions Corporation | Imaging apparatus |
US12062677B2 (en) | 2020-12-16 | 2024-08-13 | Sony Semiconductor Solutions Corporation | Imaging apparatus |
Also Published As
Publication number | Publication date |
---|---|
US11652175B2 (en) | 2023-05-16 |
JP7225195B2 (ja) | 2023-02-20 |
KR20210032298A (ko) | 2021-03-24 |
CN110739321A (zh) | 2020-01-31 |
CN210325801U (zh) | 2020-04-14 |
TW202006939A (zh) | 2020-02-01 |
US20210135022A1 (en) | 2021-05-06 |
JPWO2020017337A1 (ja) | 2021-08-26 |
EP3644366A1 (en) | 2020-04-29 |
EP3644366A4 (en) | 2021-08-04 |
TWI816827B (zh) | 2023-10-01 |
KR102613094B1 (ko) | 2023-12-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN210575955U (zh) | Light receiving element and distance measuring module | |
WO2020017337A1 (ja) | Light receiving element and distance measuring module | |
WO2020017338A1 (ja) | Light receiving element and distance measuring module | |
KR20200043545A (ko) | Light receiving element, imaging element, and imaging device | |
WO2020017345A1 (ja) | Light receiving element and distance measuring module | |
WO2020017340A1 (ja) | Light receiving element and distance measuring module | |
JP7395462B2 (ja) | Light receiving element and distance measuring module | |
WO2020017342A1 (ja) | Light receiving element and distance measuring module | |
WO2020017344A1 (ja) | Light receiving element and distance measuring module | |
US12123974B2 (en) | Light-receiving element and distance-measuring module |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2020500763 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 2019836782 Country of ref document: EP Effective date: 20200124 |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19836782 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |