WO2024057470A1 - Photodetection device, method for producing same, and electronic apparatus

Photodetection device, method for producing same, and electronic apparatus

Info

Publication number
WO2024057470A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
silicon germanium
germanium layer
layer
photodetection device
Prior art date
Application number
PCT/JP2022/034521
Other languages
French (fr)
Japanese (ja)
Inventor
Hiroyuki Hattori
Original Assignee
Sony Semiconductor Solutions Corporation
Priority date
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corporation
Priority to PCT/JP2022/034521
Publication of WO2024057470A1


Classifications

    • H: ELECTRICITY
    • H01: ELECTRIC ELEMENTS
    • H01L: SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L27/00: Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L27/14: Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L27/144: Devices controlled by radiation
    • H01L27/146: Imager structures
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00: Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70: SSIS architectures; Circuits associated therewith

Definitions

  • the present disclosure relates to a photodetection device, a method of manufacturing the same, and an electronic device, and particularly relates to a photodetection device, a method of manufacturing the same, and an electronic device that can realize a photoelectric conversion unit with high quantum efficiency in the infrared region.
  • In silicon, the absorption edge in the near-infrared region is about 1 um, and the absorption coefficient decreases toward the near-infrared region. Therefore, in order to increase the quantum efficiency at wavelengths of 900 nm and above, which is difficult to achieve with silicon, replacing silicon (Si) with germanium (Ge) or silicon germanium (SiGe) in the photodiode is being considered.
  • In SiGe, the absorption wavelength band can be extended up to about 1.9 um by changing the Ge composition, and by designing the composition according to the purpose, a sensor with a high absorption coefficient in the infrared region can be manufactured.
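As a rough illustrative check of these numbers (a sketch added here for reference, not part of the patent text), the absorption edge can be estimated from the bandgap via the well-known relation lambda_c [nm] ~ 1240 / Eg [eV]. The SiGe value below assumes a simple linear mix of the Si and Ge bandgaps, which is only a crude approximation; the Ge fraction x is a made-up example.

```python
# Illustrative estimate of the absorption-edge (cutoff) wavelength from the bandgap.
def cutoff_wavelength_nm(bandgap_ev: float) -> float:
    return 1240.0 / bandgap_ev

EG_SI, EG_GE = 1.12, 0.66  # approximate bandgaps of Si and Ge [eV]
print(cutoff_wavelength_nm(EG_SI))   # ~1107 nm -> the "about 1 um" figure for Si
print(cutoff_wavelength_nm(EG_GE))   # ~1879 nm -> the "about 1.9 um" figure for Ge-rich SiGe

x = 0.8  # hypothetical Ge fraction, for illustration only
eg_sige = (1 - x) * EG_SI + x * EG_GE  # crude linear interpolation (assumption)
print(cutoff_wavelength_nm(eg_sige))   # rough estimate for this composition
```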
  • Patent Document 1 discloses a front-illuminated sensor structure provided with such a SiGe stress relaxation layer.
  • In this structure, a high-concentration P-type layer, in which a high-concentration P-type impurity is implanted into the SiGe layer, is needed to suppress dark current, so a process of implanting high-concentration P-type impurities is required. Furthermore, it is desirable to remove the SiGe stress relaxation layer from the viewpoint of suppressing dark current.
  • the present disclosure has been made in view of this situation, and is intended to make it possible to realize a photoelectric conversion unit with high quantum efficiency in the infrared region.
  • A photodetection device of the present disclosure includes: a silicon germanium layer in which a photoelectric conversion section is formed; an inter-pixel light-shielding film formed on a first surface side that is a light incident surface side of the silicon germanium layer; and a MOS transistor formed on a second surface side opposite to the first surface of the silicon germanium layer, and the silicon germanium layer has a constant germanium concentration.
  • A method for manufacturing a photodetection device of the present disclosure includes: forming a photoelectric conversion section in a silicon germanium layer; forming an inter-pixel light-shielding film on a first surface side that is a light incident surface side of the silicon germanium layer; and forming a MOS transistor on a second surface side opposite to the first surface of the silicon germanium layer, and the silicon germanium layer has a constant germanium concentration.
  • An electronic device of the present disclosure includes: a silicon germanium layer in which a photoelectric conversion section is formed; an inter-pixel light-shielding film formed on a first surface side that is a light incident surface side of the silicon germanium layer; and a MOS transistor formed on a second surface side opposite to the first surface of the silicon germanium layer, and the silicon germanium layer has a constant germanium concentration.
  • the photodetector and electronic equipment may be independent devices or may be modules incorporated into other devices.
  • FIG. 1 is a cross-sectional view showing a configuration example of a photodetection device according to an embodiment of the present disclosure.
  • FIG. 2 is a diagram illustrating a method of manufacturing the photodetection device of FIG. 1.
  • FIG. 3 is a diagram showing an example of a pixel circuit when the photodetection device is a gate-type ToF sensor.
  • FIG. 4 is a cross-sectional view of a pixel when the photodetection device is a gate-type ToF sensor.
  • FIG. 5 is a diagram showing an example of a pixel circuit when the photodetection device is an IR image sensor.
  • FIG. 6 is a cross-sectional view of a pixel when the photodetection device is an IR image sensor.
  • FIG. 7 is a diagram showing an example of a pixel circuit when the photodetection device is a CAPD type ToF sensor.
  • FIG. 8 is a cross-sectional view of a pixel when the photodetection device is a CAPD type ToF sensor.
  • FIG. 9 is a diagram showing an example of a pixel circuit when the photodetection device is a direct ToF sensor using a SPAD.
  • FIG. 10 is a cross-sectional view of a pixel when the photodetection device is a direct ToF sensor using a SPAD.
  • FIG. 11 is a cross-sectional view of a pixel in which a diffraction structure is added to the back surface of the SiGe layer in FIG. 10.
  • FIG. 12 is a plan view of the diffraction structure of FIG. 11.
  • FIG. 13 is a cross-sectional view of a pixel in which a diffraction structure is added to the front surface of the SiGe layer in FIG. 10.
  • FIG. 14 is a diagram illustrating a modification of the element isolation section.
  • FIG. 15 is a diagram illustrating an example of a diffraction structure formed by the same formation method as the element isolation section.
  • FIG. 16 is a block diagram illustrating a schematic configuration example of a photodetection device according to the present disclosure.
  • FIG. 17 is a diagram explaining examples of use of an image sensor using a photodetection device.
  • FIG. 18 is a block diagram illustrating a configuration example of an imaging device as an electronic device to which the technology of the present disclosure is applied.
  • FIG. 19 is a diagram illustrating an example of a schematic configuration of an endoscopic surgery system.
  • FIG. 20 is a block diagram showing an example of the functional configuration of a camera head and a CCU.
  • FIG. 21 is a block diagram showing an example of a schematic configuration of a vehicle control system.
  • FIG. 22 is an explanatory diagram showing an example of installation positions of an outside-vehicle information detection section and an imaging section.
  • FIG. 1 is a cross-sectional view showing a configuration example of a photodetection device according to an embodiment of the present disclosure.
  • the photodetection device 1 shown in FIG. 1 includes a photoelectric conversion section that has high quantum efficiency (QE) for light in the infrared region.
  • a photodiode (PD) 12 is formed in the SiGe layer 11 as a photoelectric conversion section.
  • the SiGe layer (silicon germanium layer) 11 is a single crystal layer of silicon germanium (SiGe) formed with a constant germanium concentration.
  • the "constant" concentration is defined as the fact that the variation in the germanium concentration within the layer is typically controlled to a value smaller than about ⁇ 1%, or at least 10%, of the target concentration.
  • the absolute value of the germanium concentration is not limited to a predetermined range, and may be, for example, about 20 to 30%, or about 70 to 80%. The higher the germanium concentration, the higher the absorption coefficient in the infrared region.
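As a concrete reading of the tolerance definition above, the following sketch (an illustration only; the helper name and the profile values are made up) checks whether a measured Ge concentration profile stays within a given fractional tolerance of its target value:

```python
# Hypothetical helper illustrating the "constant concentration" tolerance above.
def is_constant(profile_percent, target_percent, tolerance=0.10):
    """True if every sample deviates from the target by at most `tolerance`
    (as a fraction of the target), e.g. 0.10 for +/-10% or 0.01 for +/-1%."""
    return all(abs(c - target_percent) <= tolerance * target_percent
               for c in profile_percent)

profile = [24.8, 25.1, 25.0, 24.9, 25.2]  # example Ge concentration samples [%]
print(is_constant(profile, 25.0, 0.01))   # True: within +/-1% of 25%
print(is_constant(profile, 25.0, 0.10))   # True: well within +/-10%
```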
  • The SiGe layer 11 is composed of a low-concentration P-type semiconductor region, and by forming an N-type semiconductor region doped with an N-type impurity such as phosphorus (P) or arsenic (As) in a part of the SiGe layer 11, a PN-junction photodiode 12 is formed.
  • On the back surface of the SiGe layer 11, which is the upper surface of the SiGe layer 11 in FIG. 1, an anti-reflection film 14, a light-shielding film 15, and a lens layer 16 are formed in this order.
  • The lens layer 16 is formed flat on the portion of the SiGe layer 11 where the photodiode 12 is not formed, such as the outer peripheral portion outside the pixel array section, whereas an on-chip lens 17 is formed on the portion of the SiGe layer 11 where the photodiode 12 is formed.
  • The anti-reflection film 14 can have a laminated structure in which a fixed charge film and an oxide film are stacked, and can be formed, for example, using a high-dielectric-constant (high-k) insulating thin film deposited by the ALD (Atomic Layer Deposition) method. Specifically, hafnium oxide (HfO2), aluminum oxide (Al2O3), titanium oxide (TiO2), STO (strontium titanium oxide), or the like can be used as the fixed charge film.
  • the material of the light shielding film 15 may be any material that blocks light, and for example, a metal material such as tungsten (W), aluminum (Al), or copper (Cu) can be used.
  • the lens layer 16 is formed of a resin material such as styrene resin, acrylic resin, styrene-acrylic copolymer resin, or siloxane resin.
  • the on-chip lens 17 condenses the incident light and makes it enter the photodiode 12 efficiently.
  • An element isolation section 13 is formed around the photodiode 12 in plan view.
  • The element isolation section 13 penetrates the SiGe layer 11 and electrically isolates the photodiode 12 from the adjacent photoelectric conversion sections (photodiodes 12).
  • The element isolation section 13 is formed by filling a trench that penetrates the SiGe layer 11 with a metal material such as tungsten (W), aluminum (Al), titanium (Ti), or titanium nitride (TiN).
  • Alternatively, the element isolation section 13 may be constructed by forming at least one layer of the anti-reflection film 14, which is formed on the back surface of the SiGe layer 11, on the side walls of the trench formed in the SiGe layer 11, and embedding an insulating film such as a silicon oxide film inside the anti-reflection film 14.
  • A Si layer (silicon layer) 21 is formed on the front surface of the SiGe layer 11, which is the lower surface of the SiGe layer 11 in FIG. 1.
  • the Si layer 21 is a silicon (Si) cap layer that covers the SiGe layer 11, and has a function of preventing germanium contamination during manufacturing.
  • the Si layer 21 is formed to have a thickness of, for example, about 10 nm.
  • The surface of the Si layer 21 that is the lower surface in FIG. 1 is a transistor formation surface on which a MOS transistor is formed; in the example of FIG. 1, one MOS transistor Tr is formed.
  • Note that the MOS transistor Tr does not necessarily need to be formed on the surface of the Si layer 21, and two or more MOS transistors Tr may be formed there.
  • the MOS transistor Tr includes a gate insulating film 31 , a gate electrode 32 formed thereon, and a sidewall 33 formed on a side wall of the gate electrode 32 .
  • The gate electrode 32 of the MOS transistor Tr is connected to a bonding electrode 42 via a contact electrode 41, and the bonding electrode 42 is connected by metal bonding to a bonding electrode 64 of a wiring layer 61 of another stacked semiconductor substrate (not shown).
  • An insulating layer 43 is formed on the upper surface of the Si layer 21 where the MOS transistor Tr is not formed.
  • The photodetection device 1 has a stacked structure of a semiconductor substrate (compound semiconductor substrate) 51 including the SiGe layer 11 and the Si layer 21 and a semiconductor substrate (hereinafter referred to as a logic substrate), not shown, on which at least a logic circuit is formed. For the logic substrate, only the wiring layer 61 is illustrated in FIG. 1. In FIG. 1, the light incident surface side of the semiconductor substrate 51 including the SiGe layer 11 and the Si layer 21 is the back surface (first surface) of the semiconductor substrate 51, and the side bonded to the logic substrate is the front surface (second surface) of the semiconductor substrate 51.
  • a plurality of MOS transistors Tr, a drive circuit for the MOS transistors Tr, and a signal processing circuit that processes a signal according to the charge generated by the photodiode 12 are formed on the logic board.
  • a drive circuit for the MOS transistor Tr formed on the Si layer 21 is also formed on the logic substrate side.
  • The anti-reflection film 14, the light-shielding film 15, and the lens layer 16 are formed on the back surface side of the semiconductor substrate 51, while the MOS transistor Tr and the plurality of MOS transistors formed on the logic substrate are located on the front surface side of the semiconductor substrate 51. Note that the anti-reflection film 14, the light-shielding film 15, and the lens layer 16 formed on the back surface side of the semiconductor substrate 51 may be omitted as appropriate depending on design conditions.
  • the wiring layer 61 includes multiple layers of metal wiring 62 and an insulating layer 63.
  • The number of layers of the metal wiring 62 is two here, but the number of layers is not limited to this.
  • a bonding electrode 64 is formed on a bonding surface 66 with the insulating layer 43, which is the upper surface of the wiring layer 61, and the bonding electrode 64 is electrically connected to the bonding electrode 42 by metal bonding.
  • the bonding electrode 64 is connected to a predetermined metal wiring 62 via a contact electrode 65.
  • The contact electrode 41, the bonding electrode 42, the bonding electrode 64, the contact electrode 65, and the metal wiring 62 are made of, for example, copper (Cu), tungsten (W), aluminum (Al), gold (Au), or the like; in this embodiment, they are made of copper. Therefore, the bonding electrode 42 and the bonding electrode 64 form a Cu-Cu bond.
  • the insulating layers 43 and 63 are formed of, for example, a SiO2 film, a low-k film (low dielectric constant insulating film), a SiOC film, or the like.
  • the insulating layers 43 and 63 may be composed of a plurality of insulating films made of different materials.
  • the photodetector 1 has the photodiode 12 as a photoelectric conversion section, and the photodiode 12 is formed only in the SiGe layer 11 with good crystallinity and a constant germanium concentration. Since the photodiode 12 formed in the SiGe layer 11 has a high absorption coefficient in the infrared region, its sensitivity to light in the infrared region is improved.
  • The photodetection device 1 also has a back-illuminated structure in which the on-chip lens 17 is formed on the back surface side of the semiconductor substrate 51 including the SiGe layer 11, and the light condensed by the on-chip lens 17 is photoelectrically converted by the photodiode 12.
  • The back-illuminated structure allows a larger aperture area for the photodiode 12 than a front-illuminated structure and can improve sensitivity. Therefore, according to the photodetection device 1, a photoelectric conversion section with high quantum efficiency in the infrared region can be realized.
  • Since the structure of the semiconductor substrate 51 included in the photodetection device 1 does not include a SiGe stress relaxation layer, dark current caused by crystal defects in a SiGe stress relaxation layer can be suppressed. Further, unlike the sensor structure of Patent Document 1, there is no need to provide a high-concentration P-type layer for suppressing dark current, which is required when a SiGe stress relaxation layer is provided.
  • A SiGe heteroepitaxial substrate is prepared as the semiconductor substrate 51 including the SiGe layer 11 and the Si layer 21.
  • As shown in A of FIG. 2, the SiGe heteroepitaxial substrate prepared as the semiconductor substrate 51 is a substrate in which a SiGe stress relaxation layer 112, the SiGe layer 11, and the Si layer 21 are laminated on a silicon substrate 111.
  • This SiGe heteroepitaxial substrate can be formed by epitaxially growing the SiGe stress relaxation layer 112, the SiGe layer 11, and the Si layer 21 in this order on the silicon substrate 111.
  • An N-type semiconductor region is formed by doping an N-type impurity such as phosphorus into a desired region of the SiGe layer 11, which is composed of a P-type semiconductor region, whereby a PN-junction photodiode 12 is formed.
  • Next, a MOS transistor Tr is formed at a desired position on the upper surface of the Si layer 21.
  • Specifically, an insulating film that will become the gate insulating film 31 is formed on the entire upper surface of the Si layer 21, and the insulating film in regions other than the position where the MOS transistor Tr is to be formed is then removed using lithography technology, thereby forming the gate insulating film 31. After that, the gate electrode 32 and the sidewalls 33 are formed.
  • The gate electrode 32 is made of, for example, polysilicon, and the sidewalls 33 are made of, for example, a silicon nitride film (SiN). Note that in FIG. 2, the reference numerals of the gate insulating film 31, the gate electrode 32, and the sidewalls 33 are omitted.
  • Next, an insulating layer 43 is formed to a predetermined thickness on the upper surface of the Si layer 21 so as to cover the gate electrode 32 of the MOS transistor Tr, an opening is then formed in the insulating layer 43 above the gate electrode 32, and the opening is filled with a predetermined metal material such as copper, thereby forming the contact electrode 41 and the bonding electrode 42.
  • Then, the insulating layer 43 formed on the semiconductor substrate 51 and the wiring layer 61 of the separately manufactured logic substrate are bonded at the bonding surface 66, and the bonded substrates are turned upside down as shown in D of FIG. 2.
  • Next, the silicon substrate 111, which is now the uppermost layer of the semiconductor substrate 51, is removed by CMP (Chemical Mechanical Polishing) or the like.
  • Then, the SiGe stress relaxation layer 112, which has become the uppermost layer of the semiconductor substrate 51, is removed by CMP or the like.
  • As a result, the semiconductor substrate 51 consists of the SiGe layer 11 and the Si layer 21, and the photodiode 12 is formed in the SiGe layer 11.
  • After that, the element isolation section 13 is formed. Subsequently, the light-shielding film 15 and the lens layer 16 including the on-chip lens 17 are formed in this order, thereby completing the cross-sectional structure of the photodetection device 1 shown in FIG. 1.
  • In the manufacturing method described above, the SiGe stress relaxation layer 112, which has many crystal defects, is removed, so the high-concentration P-type layer for suppressing dark current required in the sensor structure of Patent Document 1 is unnecessary. Forming such a layer would require implanting impurities into a deep region with high energy, with a risk that the SiGe layer would be damaged by the high-energy impurity implantation process; since no such layer is required, this damage can also be avoided.
  • Since the surface of the SiGe layer 11 on which the MOS transistor is formed is covered with the Si layer 21, germanium contamination during manufacturing can be prevented. Furthermore, since an oxide film that becomes the gate insulating film 31 of the MOS transistor Tr can be formed on the Si layer 21, the interface states at the interface with the gate insulating film 31 can be reduced when the MOS transistor is formed.
  • The photodetection device 1 described above may be a ToF (Time of Flight) sensor that measures the distance to an object using irradiated light, or an imaging sensor that receives light including at least infrared light and generates an image according to the amount of received light.
  • A ToF sensor is a sensor that measures the distance to an object by emitting irradiation light toward the object and measuring the time it takes for the irradiation light to be reflected by the object's surface and return.
  • ToF sensors include indirect ToF sensors and direct ToF sensors.
  • An indirect ToF sensor calculates the distance to the object by detecting the flight time from the emission of the irradiation light until the reception of the reflected light as a phase difference, whereas a direct ToF sensor calculates the distance to the object by directly measuring the flight time from the emission of the light until the reception of the reflected light.
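For reference, a minimal sketch (not from the patent text) of the basic direct-ToF relation: the distance is half the round-trip flight time multiplied by the speed of light.

```python
# Direct ToF: d = c * t / 2, where t is the measured round-trip flight time.
C = 299_792_458.0  # speed of light [m/s]

def direct_tof_distance_m(flight_time_s: float) -> float:
    return C * flight_time_s / 2.0

print(direct_tof_distance_m(10e-9))  # a 10 ns round trip corresponds to ~1.5 m
```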
  • FIG. 3 shows an example of a pixel circuit when the photodetection device 1 is a gate-type ToF sensor.
  • a gate-type ToF sensor is an indirect ToF sensor that detects a phase difference by distributing the charge generated by the photodiode 12 between two transfer gates (transfer transistors) and calculates the distance to the object.
  • the pixel 200 includes a photodiode 12 as a photoelectric conversion section. Furthermore, the pixel 200 includes two transfer transistors TRG, a floating diffusion region FD, an additional capacitor FDL, a switching transistor FDG, an amplification transistor AMP, a reset transistor RST, and two selection transistors SEL. Furthermore, the pixel 200 includes a charge discharge transistor OFG.
  • Hereinafter, when the two of each element are distinguished, they are referred to as transfer transistors TRG1 and TRG2, floating diffusion regions FD1 and FD2, additional capacitors FDL1 and FDL2, switching transistors FDG1 and FDG2, amplification transistors AMP1 and AMP2, reset transistors RST1 and RST2, and selection transistors SEL1 and SEL2.
  • the transfer transistor TRG, the switching transistor FDG, the amplification transistor AMP, the selection transistor SEL, the reset transistor RST, and the charge discharge transistor OFG are composed of, for example, N-type MOS transistors.
  • When the transfer drive signal TRG1g supplied to its gate electrode becomes active, the transfer transistor TRG1 becomes conductive in response, thereby transferring the charge accumulated in the photodiode 12 to the floating diffusion region FD1.
  • When the transfer drive signal TRG2g supplied to its gate electrode becomes active, the transfer transistor TRG2 becomes conductive in response, thereby transferring the charge accumulated in the photodiode 12 to the floating diffusion region FD2.
  • the floating diffusion regions FD1 and FD2 are charge holding parts that temporarily hold the charges transferred from the photodiode 12.
  • the switching transistor FDG1 becomes conductive in response to the active state of the FD drive signal FDG1g supplied to the gate electrode, thereby connecting the additional capacitance FDL1 to the floating diffusion region FD1.
  • the switching transistor FDG2 becomes conductive in response to the active state of the FD drive signal FDG2g supplied to the gate electrode, thereby connecting the additional capacitance FDL2 to the floating diffusion region FD2.
  • Additional capacitances FDL1 and FDL2 can be formed by wiring capacitances, for example.
  • When the reset drive signal RSTg supplied to its gate electrode becomes active, the reset transistor RST1 becomes conductive in response, thereby resetting the potential of the floating diffusion region FD1.
  • the reset transistor RST2 becomes conductive in response to the activation of the reset drive signal RSTg supplied to the gate electrode, thereby resetting the potential of the floating diffusion region FD2. Note that when the reset transistors RST1 and RST2 are brought into the active state, the switching transistors FDG1 and FDG2 are also brought into the active state at the same time, and the additional capacitors FDL1 and FDL2 are also reset.
  • For example, at high illuminance when the amount of incident light is large, the vertical drive section (for example, the vertical drive section 522 in FIG. 16) that drives the pixel 200 activates the switching transistors FDG1 and FDG2, connecting the floating diffusion region FD1 to the additional capacitor FDL1 and the floating diffusion region FD2 to the additional capacitor FDL2. This allows more charge to be accumulated at high illuminance.
  • On the other hand, at low illuminance when the amount of incident light is small, the vertical drive section makes the switching transistors FDG1 and FDG2 inactive and separates the additional capacitors FDL1 and FDL2 from the floating diffusion regions FD1 and FD2, respectively. This increases the conversion efficiency.
  • the charge discharge transistor OFG discharges the charge accumulated in the photodiode 12 by becoming conductive in response to the discharge drive signal OFG1g supplied to the gate electrode becoming active.
  • the amplification transistor AMP1 has its source electrode connected to the vertical signal line 211A via the selection transistor SEL1, thereby connecting to a constant current source (not shown) and forming a source follower circuit.
  • the amplification transistor AMP2 has a source electrode connected to the vertical signal line 211B via the selection transistor SEL2, thereby connecting to a constant current source (not shown) and forming a source follower circuit.
  • The selection transistor SEL1 is connected between the source electrode of the amplification transistor AMP1 and the vertical signal line 211A.
  • When the selection drive signal SEL1g supplied to its gate electrode becomes active, the selection transistor SEL1 becomes conductive in response and outputs the pixel signal VSL1 output from the amplification transistor AMP1 to the vertical signal line 211A.
  • The selection transistor SEL2 is connected between the source electrode of the amplification transistor AMP2 and the vertical signal line 211B.
  • When the selection drive signal SEL2g supplied to its gate electrode becomes active, the selection transistor SEL2 becomes conductive in response and outputs the pixel signal VSL2 output from the amplification transistor AMP2 to the vertical signal line 211B.
  • The transfer transistors TRG1 and TRG2, switching transistors FDG1 and FDG2, amplification transistors AMP1 and AMP2, selection transistors SEL1 and SEL2, and charge discharge transistor OFG of the pixel 200 are controlled by the vertical drive section.
  • The additional capacitors FDL1 and FDL2 and the switching transistors FDG1 and FDG2 that control their connection may be omitted, but providing the additional capacitors FDL and using them selectively according to the amount of incident light makes it possible to secure a high dynamic range.
  • In operation, first, a reset operation that resets the charge of the pixels 200 is performed on all pixels of the pixel array section in which the pixels 200 are arranged in a matrix. That is, the charge discharge transistor OFG, the reset transistors RST1 and RST2, and the switching transistors FDG1 and FDG2 are turned on, and the accumulated charges in the photodiode 12, the floating diffusion regions FD1 and FD2, and the additional capacitors FDL1 and FDL2 are discharged.
  • After the accumulated charges are discharged, the transfer transistors TRG1 and TRG2 are driven alternately. That is, in the first period, the transfer transistor TRG1 is controlled to be on and the transfer transistor TRG2 is controlled to be off. During this first period, charges generated in the photodiode 12 are transferred to the floating diffusion region FD1. In the second period following the first period, the transfer transistor TRG1 is controlled to be off and the transfer transistor TRG2 is controlled to be on. In this second period, charges generated in the photodiode 12 are transferred to the floating diffusion region FD2. As a result, charges generated in the photodiode 12 are alternately distributed and accumulated in the floating diffusion regions FD1 and FD2.
  • After the accumulation period ends, each pixel 200 in the pixel array section is selected line-sequentially. In the selected pixel 200, the selection transistors SEL1 and SEL2 are turned on. As a result, the charge accumulated in the floating diffusion region FD1 is output as the pixel signal VSL1 via the vertical signal line 211A, and the charge accumulated in the floating diffusion region FD2 is output as the pixel signal VSL2 via the vertical signal line 211B.
  • The reflected light received by the pixel 200 is delayed, according to the distance to the target object, relative to the timing at which the light source emits the irradiation light.
  • Since the distribution ratio of the charges accumulated in the two floating diffusion regions FD1 and FD2 changes according to this delay time, which depends on the distance to the target object, the distance to the object can be obtained from the distribution ratio.
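A minimal numerical sketch of this idea (an illustration under a simple idealized pulsed-light model, not the patent's actual signal processing; Q1 and Q2 stand for the charges accumulated in FD1 and FD2, and the pulse width is a made-up example):

```python
# 2-tap indirect ToF under an idealized pulsed-light model: the delay of the
# reflected pulse is recovered from how the charge splits between the two taps,
# and the distance follows from d = c * delay / 2.
C = 299_792_458.0  # speed of light [m/s]

def two_tap_distance_m(q1: float, q2: float, pulse_width_s: float) -> float:
    delay_s = pulse_width_s * q2 / (q1 + q2)  # fraction of the pulse arriving in the second tap
    return C * delay_s / 2.0

# Example: with a 20 ns pulse and a 3:1 charge split, the delay is 5 ns -> ~0.75 m
print(two_tap_distance_m(3.0, 1.0, 20e-9))
```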
  • FIG. 4 is a cross-sectional view showing an example of the configuration of the pixel 200 when the photodetection device 1 is a gate-type ToF sensor.
  • In FIG. 4, parts corresponding to the cross-sectional view shown in FIG. 1 are denoted by the same reference numerals, and descriptions of those parts are omitted as appropriate.
  • However, the reference numerals of the gate insulating film 31, gate electrode 32, and sidewalls 33 of the MOS transistor Tr are omitted.
  • Photodiodes 12 are formed in the SiGe layer 11 for each pixel 200.
  • The element isolation section 13 is formed at the boundary between the pixels 200 and functions as a pixel isolation section that isolates the photodiodes 12 into pixel units.
  • The light-shielding film 15 is also formed at the boundary between the pixels 200 and functions as an inter-pixel light-shielding film.
  • In the element isolation section 13 of FIG. 1, a trench is formed from the back surface (the top surface in FIG. 1) of the SiGe layer 11, so the element isolation section 13 has a tapered cross-sectional shape in which the trench width narrows from the back surface toward the front surface of the SiGe layer 11.
  • In contrast, in the element isolation section 13 of FIG. 4, a trench is formed from the front surface side of the SiGe layer 11, so the element isolation section 13 has an inverted tapered cross-sectional shape in which the trench width narrows from the front surface toward the back surface of the SiGe layer 11.
  • Two MOS transistors Tr1 and Tr2 are formed on the MOS transistor formation surface where the Si layer 21 is formed. These two MOS transistors Tr1 and Tr2 correspond to the transfer transistors TRG1 and TRG2 of the pixel circuit in FIG. 3.
  • Since the photodetection device 1 is a gate-type ToF sensor in which the photodiode 12 of the pixel 200 is formed in the SiGe layer 11, a ToF sensor with high quantum efficiency in the infrared region and improved light-receiving sensitivity can be realized.
  • FIG. 5 shows an example of a pixel circuit when the photodetector 1 is an IR image sensor.
  • In the gate-type ToF sensor described above, in order to distribute and accumulate the charge generated in the photodiode 12 in the two floating diffusion regions FD1 and FD2, the pixel 200 had two each of the transfer transistor TRG, floating diffusion region FD, additional capacitor FDL, switching transistor FDG, amplification transistor AMP, reset transistor RST, and selection transistor SEL.
  • When the photodetection device 1 is an IR imaging sensor, only one charge holding section is required to temporarily hold the charge generated in the photodiode 12, so there is one each of the transfer transistor TRG, floating diffusion region FD, additional capacitor FDL, switching transistor FDG, amplification transistor AMP, reset transistor RST, and selection transistor SEL.
  • In other words, as shown in FIG. 5, the pixel 200 has a circuit configuration in which the second set of pixel transistors of the circuit shown in FIG. 3, including the amplification transistor AMP2 and the selection transistor SEL2, is omitted. The floating diffusion region FD2 and the vertical signal line 211B are also omitted.
  • FIG. 6 is a cross-sectional view showing an example of the configuration of the pixel 200 when the photodetector 1 is an IR image sensor.
  • In FIG. 6, the same reference numerals are given to the parts corresponding to the cross-sectional view shown in FIG. 1, and descriptions of those parts are omitted as appropriate. However, the reference numerals of the gate insulating film 31 and sidewalls 33 of the MOS transistor Tr are omitted.
  • When the photodetection device 1 is an IR image sensor, photodiodes 12 are likewise formed in the SiGe layer 11 for each pixel 200.
  • The element isolation section 13 is formed at the boundary between the pixels 200 and functions as a pixel isolation section that isolates the photodiodes 12 into pixel units.
  • The light-shielding film 15 is also formed at the boundary between the pixels 200 and functions as an inter-pixel light-shielding film.
  • The element isolation section 13 is formed by forming a trench from the front surface side of the SiGe layer 11, as in the gate-type ToF sensor shown in FIG. 4, and therefore has an inverted tapered cross-sectional shape in which the trench width narrows from the front surface side toward the back surface side.
  • Two MOS transistors Tr1 and Tr2 are formed on the MOS transistor formation surface where the Si layer 21 is formed. These two MOS transistors Tr1 and Tr2 correspond to, for example, the transfer transistor TRG1 and the charge discharge transistor OFG of the pixel circuit in FIG. 5.
  • The other pixel transistors of the pixel circuit in FIG. 5, such as the switching transistor FDG1, the amplification transistor AMP1, the selection transistor SEL1, and the reset transistor RST1, are formed on the logic substrate side (not shown).
  • In FIGS. 1 and 4, the bonding electrodes 42 and 64 are formed on the bonding surface 66 between the wiring layer 61 and the insulating layer 43, and the voltage applied to the gate electrode 32 of the MOS transistor Tr is supplied via the bonding electrodes 42 and 64.
  • In contrast, in the cross-sectional view of the pixel 200 in FIG. 6, the gate electrode 32 of the MOS transistor Tr1 is connected to a predetermined metal wiring 62 of the wiring layer 61, and the gate electrode 32 of the MOS transistor Tr2 is connected to a predetermined metal wiring 62 of the wiring layer 61 via a contact electrode 242.
  • In this way, the voltage applied to the gate electrodes 32 of the MOS transistors Tr1 and Tr2 may be supplied only through vias (through electrodes) without passing through the bonding electrodes 42 and 64, or may be supplied via the bonding electrodes 42 and 64 as in FIGS. 1 and 4.
  • the photodetection device 1 may be an RGBIR image sensor that receives infrared light and RGB light.
  • In that case, for example, an R pixel that receives R (red) light, a G pixel that receives G (green) light, a B pixel that receives B (blue) light, and an IR pixel that receives IR (infrared) light may be assigned to 2 x 2 = 4 pixels of the pixel array of the photodetection device 1, and this unit may be repeatedly arranged in a matrix.
  • Whether each pixel 200 is an R pixel, G pixel, B pixel, or IR pixel can be controlled, for example, by inserting a color filter layer between the photodiode 12 and the on-chip lens 17.
  • A CAPD type ToF sensor is an indirect ToF sensor that applies a voltage directly to the semiconductor substrate 51 to generate a current within the substrate and rapidly modulates a wide photoelectric conversion region within the substrate, thereby distributing the photoelectrically converted charges.
  • FIG. 7 shows an example of a pixel circuit when the photodetection device 1 is a CAPD type ToF sensor.
  • the pixel 200 in FIG. 7 has signal extraction sections 301-1 and 301-2 within the semiconductor substrate 51.
  • the signal extraction section 301-1 includes at least an N+ semiconductor region 311-1, which is an N-type semiconductor region, and a P+ semiconductor region 312-1, which is a P-type semiconductor region.
  • the signal extraction section 301-2 includes at least an N+ semiconductor region 311-2, which is an N-type semiconductor region, and a P+ semiconductor region 312-2, which is a P-type semiconductor region.
  • the pixel 200 has a transfer transistor 321A, an FD 322A, a reset transistor 323A, an amplification transistor 324A, and a selection transistor 325A for the signal extraction section 301-1.
  • the pixel 200 includes a transfer transistor 321B, an FD 322B, a reset transistor 323B, an amplification transistor 324B, and a selection transistor 325B for the signal extraction section 301-2.
  • the vertical drive section applies a predetermined voltage MIX1 (first voltage) to the P+ semiconductor region 312-1 and a predetermined voltage MIX2 (second voltage) to the P+ semiconductor region 312-2.
  • one of the voltages MIX1 and MIX2 is 1.5V, and the other is 0V.
  • the P+ semiconductor regions 312-1 and 312-2 are voltage application parts to which a first voltage or a second voltage is applied.
  • the N+ semiconductor regions 311-1 and 311-2 are charge detection sections that detect and accumulate charges generated by photoelectric conversion of light incident on the semiconductor substrate 51.
  • When the transfer drive signal supplied to its gate electrode becomes active, the transfer transistor 321A becomes conductive in response, thereby transferring the charge accumulated in the N+ semiconductor region 311-1 to the FD 322A.
  • When the transfer drive signal supplied to its gate electrode becomes active, the transfer transistor 321B becomes conductive in response, thereby transferring the charge accumulated in the N+ semiconductor region 311-2 to the FD 322B.
  • the FD 322A temporarily holds the charge supplied from the N+ semiconductor region 311-1.
  • the FD 322B temporarily holds the charge supplied from the N+ semiconductor region 311-2.
  • When the reset drive signal RSTg supplied to its gate electrode becomes active, the reset transistor 323A becomes conductive in response, thereby resetting the potential of the FD 322A to a predetermined level (reset voltage VDD).
  • the reset transistor 323B becomes conductive in response to the activation of the reset drive signal RSTg supplied to the gate electrode, thereby resetting the potential of the FD 322B to a predetermined level (reset voltage VDD). Note that when reset transistors 323A and 323B are activated, transfer transistors 321A and 321B are also activated at the same time.
  • The amplification transistor 324A has its source electrode connected to the vertical signal line 211A via the selection transistor 325A, thereby forming a source follower circuit together with the load MOS of the constant current source circuit section 326A connected to one end of the vertical signal line 211A.
  • The amplification transistor 324B has its source electrode connected to the vertical signal line 211B via the selection transistor 325B, thereby forming a source follower circuit together with the load MOS of the constant current source circuit section 326B connected to one end of the vertical signal line 211B.
  • The selection transistor 325A is connected between the source electrode of the amplification transistor 324A and the vertical signal line 211A.
  • When the selection drive signal SELg supplied to its gate electrode becomes active, the selection transistor 325A becomes conductive in response and outputs the pixel signal output from the amplification transistor 324A to the vertical signal line 211A.
  • The selection transistor 325B is connected between the source electrode of the amplification transistor 324B and the vertical signal line 211B.
  • When the selection drive signal SELg supplied to its gate electrode becomes active, the selection transistor 325B becomes conductive in response and outputs the pixel signal output from the amplification transistor 324B to the vertical signal line 211B.
  • Transfer transistors 321A and 321B, reset transistors 323A and 323B, amplification transistors 324A and 324B, and selection transistors 325A and 325B of pixel 200 are controlled by, for example, a vertical drive unit that drives pixel 200.
  • FIG. 8 is a cross-sectional view showing an example of the configuration of the pixel 200 when the photodetection device 1 is a CAPD type ToF sensor.
  • the entire SiGe layer 11 of the semiconductor substrate 51 serves as a photoelectric conversion section.
  • the element separation section 13 is formed at the boundary of the pixel 200, and functions as a pixel separation section that separates the photoelectric conversion section into pixel units.
  • the light shielding film 15 is formed at the boundary between the pixels 200, and functions as an inter-pixel light shielding film.
  • The element isolation section 13 is formed by forming a trench from the front surface side of the SiGe layer 11, as in the gate-type ToF sensor shown in FIG. 4, and therefore has an inverted tapered cross-sectional shape in which the trench width narrows from the front surface side toward the back surface side.
  • A P-well 331 is formed in the center of the pixel 200 near the front-surface-side interface of the SiGe layer 11, and a signal extraction section 301-1 and a signal extraction section 301-2 are formed near the same interface.
  • the signal extraction section 301-1 includes at least an N+ semiconductor region 311-1 and a P+ semiconductor region 312-1.
  • the signal extraction section 301-2 includes at least an N+ semiconductor region 311-2 and a P+ semiconductor region 312-2.
  • A predetermined voltage MIX1 is applied to the P+ semiconductor region 312-1 of the signal extraction section 301-1 from the logic substrate side via a predetermined metal wiring 62 of the wiring layer 61, the contact electrode 65, the bonding electrodes 64 and 42, and the contact electrode 41.
  • A signal DET1 corresponding to the charge obtained by photoelectric conversion is output to the metal wiring 62 on the logic substrate side via a predetermined contact electrode 41, the bonding electrodes 42 and 64, and the contact electrode 65.
  • Similarly, a predetermined voltage MIX2 is applied to the P+ semiconductor region 312-2 of the signal extraction section 301-2 from the logic substrate side via a predetermined metal wiring 62 of the wiring layer 61, the contact electrode 65, the bonding electrodes 64 and 42, and the contact electrode 41.
  • A signal DET2 corresponding to the charge obtained by photoelectric conversion is output to the metal wiring 62 on the logic substrate side via a predetermined contact electrode 41, the bonding electrodes 42 and 64, and the contact electrode 65.
  • The vertical drive section drives the pixel 200 and distributes signals corresponding to the charge obtained by photoelectric conversion to the FD 322A and the FD 322B (FIG. 7).
  • To do so, the vertical drive section applies voltages to the two P+ semiconductor regions 312.
  • For example, the vertical drive section applies a voltage of 1.5 V to the P+ semiconductor region 312-1 and a voltage of 0 V to the P+ semiconductor region 312-2.
  • In this state, infrared light (reflected light) from the outside enters the SiGe layer 11 through the on-chip lens 17.
  • The infrared light is photoelectrically converted within the SiGe layer 11 into electron-hole pairs.
  • The obtained electrons are guided toward the P+ semiconductor region 312-1 by the electric field between the two P+ semiconductor regions 312 and move into the N+ semiconductor region 311-1.
  • electrons generated by photoelectric conversion are used as signal charges for detecting a signal corresponding to the amount of infrared light incident on the pixel 200, that is, the amount of received infrared light.
  • The accumulated charge in the N+ semiconductor region 311-1 is transferred to the FD 322A directly connected to the N+ semiconductor region 311-1, and the signal DET1 corresponding to the charge transferred to the FD 322A is read out via the amplification transistor 324A and the vertical signal line 211A.
  • This pixel signal is a signal indicating the amount of charge corresponding to the electrons detected by the N+ semiconductor region 311-1, that is, the amount of charge accumulated in the FD 322A.
  • the pixel signal can also be said to be a signal indicating the amount of infrared light received by the pixel 200.
  • pixel signals corresponding to electrons detected in the N+ semiconductor region 311-2 may also be used for distance measurement as appropriate.
  • In the next period, voltages are applied to the two P+ semiconductor regions 312 by the vertical drive section so that an electric field in the direction opposite to the electric field generated so far is produced in the SiGe layer 11.
  • Specifically, a voltage of 1.5 V is applied to the P+ semiconductor region 312-2, and a voltage of 0 V is applied to the P+ semiconductor region 312-1.
  • In this state, infrared light (reflected light) from the outside enters the SiGe layer 11 through the on-chip lens 17 and is photoelectrically converted within the SiGe layer 11 into electron-hole pairs; the obtained electrons are guided toward the P+ semiconductor region 312-2 by the electric field between the P+ semiconductor regions 312 and move into the N+ semiconductor region 311-2.
  • The accumulated charge in the N+ semiconductor region 311-2 is transferred to the FD 322B directly connected to the N+ semiconductor region 311-2, and the signal DET2 corresponding to the charge transferred to the FD 322B is read out via the amplification transistor 324B and the vertical signal line 211B.
  • pixel signals corresponding to electrons detected in the N+ semiconductor region 311-1 may also be appropriately used for distance measurement in the same manner as in the N+ semiconductor region 311-2.
  • In the CAPD type ToF sensor, the MOS transistor Tr is not formed on the front surface of the SiGe layer 11. Therefore, although the Si layer 21 is formed on the front surface of the SiGe layer 11 in the example of FIG. 8, as in the other cross-sectional configurations described above, this Si layer 21 can be omitted. When it is not necessary to form the Si layer 21, the manufacturing process becomes simpler.
  • FIG. 9 shows an example of a pixel circuit when the photodetection device 1 is a direct ToF sensor using SPAD.
  • the pixel 200 in FIG. 9 includes a SPAD 401 and a readout circuit 402 composed of a transistor 411 and an inverter 412.
  • the pixel 200 also includes a switch 413.
  • the transistor 411 is a P-type MOS transistor.
  • the cathode of the SPAD 401 is connected to the drain of the transistor 411, as well as to the input terminal of the inverter 412 and one end of the switch 413.
  • the anode of the SPAD 401 is connected to a power supply voltage VA (hereinafter also referred to as anode voltage VA).
  • The SPAD 401 is a single-photon avalanche photodiode that, when light is incident, avalanche-multiplies the generated electrons and outputs a signal of the cathode voltage VS.
  • the power supply voltage VA supplied to the anode of the SPAD 401 is, for example, a negative bias (negative potential) of about -20V.
  • the transistor 411 is a constant current source that operates in the saturation region, and performs passive quenching by functioning as a quenching resistor.
  • the source of the transistor 411 is connected to the power supply voltage VE, and the drain is connected to the cathode of the SPAD 401, the input terminal of the inverter 412, and one end of the switch 413.
  • the power supply voltage VE is also supplied to the cathode of the SPAD 401.
  • a pull-up resistor can also be used instead of the transistor 411 connected in series with the SPAD 401.
  • A voltage (excess bias) larger than the breakdown voltage VBD of the SPAD 401 is applied to the SPAD 401.
  • For example, if the breakdown voltage VBD of the SPAD 401 is 20 V and a voltage 3 V higher than that is applied, the power supply voltage VE supplied to the source of the transistor 411 is 3 V.
  • The breakdown voltage VBD of the SPAD 401 varies greatly with temperature and other factors, so the voltage applied to the SPAD 401 is controlled (adjusted) according to changes in the breakdown voltage VBD. For example, if the power supply voltage VE is a fixed voltage, the anode voltage VA is controlled (adjusted).
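The bias numbers quoted above can be checked with a one-line calculation (an illustrative sketch added here, not part of the patent text):

```python
# Reverse bias across the SPAD = VE - VA; the excess bias is the part above VBD.
V_BD, V_A, V_E = 20.0, -20.0, 3.0   # values quoted in the description [V]
reverse_bias = V_E - V_A            # 23 V between cathode and anode
excess_bias = reverse_bias - V_BD   # 3 V above breakdown -> Geiger-mode operation
print(reverse_bias, excess_bias)    # 23.0 3.0
```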
  • the switch 413 has one end connected to the cathode of the SPAD 401, the input terminal of the inverter 412, and the drain of the transistor 411, and the other end connected to the ground (GND).
  • the switch 413 can be composed of, for example, an N-type MOS transistor, and is turned on and off according to the gating control signal VG supplied from the vertical drive section.
  • the vertical drive section supplies a high or low gating control signal VG to the switch 413 of each pixel 200 and turns the switch 413 on and off, thereby setting each pixel 200 of the pixel array section as an active pixel or an inactive pixel.
  • An active pixel is a pixel that detects incident photons
  • an inactive pixel is a pixel that does not detect incident photons.
  • When the pixel 200 is set as an active pixel, the switch 413 is set to off, as described above, and the SPAD 401 is thereby set to Geiger mode.
  • When a photon enters the SPAD 401 in this state, avalanche multiplication occurs and a current flows through the SPAD 401. Since this current also flows through the transistor 411, a voltage drop occurs due to the resistance component of the transistor 411.
  • When the cathode voltage VS of the SPAD 401 becomes lower than 0 V, the anode-cathode voltage of the SPAD 401 falls below the breakdown voltage VBD, so the avalanche multiplication stops.
  • In this way, the current generated by the avalanche multiplication flows through the transistor 411 and causes a voltage drop, and with this voltage drop the anode-cathode voltage falls below the breakdown voltage VBD, stopping the avalanche multiplication. This operation is the quenching operation.
  • The inverter 412 outputs a Lo pixel signal PFout when the cathode voltage VS, which is its input voltage, is equal to or higher than a predetermined threshold voltage Vth, and outputs a Hi pixel signal PFout when the cathode voltage VS is less than the threshold voltage Vth. Therefore, when a photon enters the SPAD 401, avalanche multiplication occurs, and the cathode voltage VS drops below the threshold voltage Vth, the pixel signal PFout is inverted from low level to high level. Conversely, when the avalanche multiplication of the SPAD 401 converges and the cathode voltage VS rises to the threshold voltage Vth or higher, the pixel signal PFout is inverted from high level to low level.
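A minimal behavioral sketch of this thresholding (an illustration only; the threshold value and the VS trace below are made up, not taken from the patent):

```python
# The inverter converts the analog cathode voltage VS into a digital pulse PFout:
# Hi (1) while VS is below the threshold Vth (during avalanche/quench), Lo (0) otherwise.
def pfout(vs: float, vth: float) -> int:
    return 1 if vs < vth else 0

VTH = 1.5  # assumed threshold voltage [V]
vs_trace = [3.0, 3.0, 0.2, -0.5, 0.8, 2.0, 3.0]  # idle -> photon arrival -> recharge
print([pfout(vs, VTH) for vs in vs_trace])        # [0, 0, 1, 1, 1, 0, 0]
```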
  • On the other hand, when the pixel 200 is set as an inactive pixel, the switch 413 is turned on.
  • When the switch 413 is turned on, the cathode voltage VS of the SPAD 401 becomes 0 V.
  • As a result, the anode-cathode voltage of the SPAD 401 no longer exceeds the breakdown voltage VBD, so the SPAD 401 does not react even if a photon enters it.
  • FIG. 10 is a cross-sectional view showing an example of the configuration of the pixel 200 when the photodetector 1 is a direct ToF sensor using SPAD.
  • the SiGe layer 11 of the semiconductor substrate 51 is configured as an N-well region.
  • the element separation section 13 is formed at the boundary of the pixel 200, and functions as a pixel separation section that separates the photoelectric conversion section into pixel units.
  • the light-shielding film 15 is formed at the boundary between the pixels 200, and functions as an inter-pixel light-shielding film.
  • The element isolation section 13 is formed by forming a trench from the front surface side of the SiGe layer 11, as in the gate-type ToF sensor shown in FIG. 4, and therefore has an inverted tapered cross-sectional shape in which the trench width narrows from the front surface side toward the back surface side.
  • a P-type diffusion layer 441 and an N-type diffusion layer 442 are formed in the center of the pixel 200 and near the front surface side interface of the SiGe layer 11.
  • An avalanche multiplication region 443 is formed by a depletion layer formed in a region where the P-type diffusion layer 441 and the N-type diffusion layer 442 are connected.
  • a hole accumulation layer 444 is formed over the entire area from the front surface to the back surface of the SiGe layer 11 in the inner region of the element isolation section 13 that is the peripheral portion of the boundary of the pixel 200.
  • the hole accumulation layer 444 is connected to a high concentration P-type diffusion layer 445 formed at the front surface side interface of the SiGe layer 11.
  • the SiGe layer 11 is controlled to be N-type by implanting N-type impurities, and forms an electric field that transfers electrons generated by photoelectric conversion in the pixel 200 to the avalanche multiplication region 443.
  • the P-type diffusion layer 441 is a dense P-type diffusion layer (P+) formed over almost the entire pixel region in the planar direction.
  • the N-type diffusion layer 442 is a dense N-type diffusion layer (N+) formed near the surface of the semiconductor substrate 51 and covering almost the entire pixel region, like the P-type diffusion layer 441.
  • The N-type diffusion layer 442 is a contact layer connected to the contact electrode 41 serving as a cathode electrode for supplying a voltage for forming the avalanche multiplication region 443, and a part of the N-type diffusion layer 442 has a convex shape extending to the contact electrode 41 on the front surface of the semiconductor substrate 51.
  • a power supply voltage VE is applied to the N-type diffusion layer 442 from the connected contact electrode 41 .
  • The hole accumulation layer 444 is a P-type diffusion layer (P) that accumulates holes, and is connected to the high-concentration P-type diffusion layer 445, which is electrically connected to the contact electrode 41 serving as the anode electrode of the SPAD 401.
  • The high-concentration P-type diffusion layer 445 is a dense P-type diffusion layer (P++) formed near the front surface of the SiGe layer 11 in the peripheral portion of the pixel 200 in plan view, and serves as a contact layer that electrically connects the hole accumulation layer 444 to the contact electrode 41 serving as the anode electrode.
  • A power supply voltage VA is applied to the high-concentration P-type diffusion layer 445 from the connected contact electrode 41.
  • The SPAD 401 as a photoelectric conversion section includes the SiGe layer 11 of the semiconductor substrate 51, the P-type diffusion layer 441, the N-type diffusion layer 442, the hole accumulation layer 444, and the high-concentration P-type diffusion layer 445; the hole accumulation layer 444 is connected to a first contact electrode 41 serving as the anode electrode, and the N-type diffusion layer 442 is connected to a second contact electrode 41 serving as the cathode electrode.
  • Note that the SiGe layer 11 may instead be controlled to be P-type by implanting P-type impurities. In that case, the voltage applied to the N-type diffusion layer 442 becomes the power supply voltage VA, and the voltage applied to the high-concentration P-type diffusion layer 445 becomes the power supply voltage VE.
  • the transistor 411, inverter 412, and switch 413 that constitute the readout circuit 402 of the pixel 200 are formed on the logic substrate side.
  • In the direct ToF sensor using the SPAD 401 as well, the MOS transistor Tr is not formed on the front surface of the SiGe layer 11. Therefore, although the Si layer 21 is formed on the front surface of the SiGe layer 11 in the example of FIG. 10, as in the other cross-sectional configurations described above, this Si layer 21 can be omitted. When it is not necessary to form the Si layer 21, the manufacturing process becomes simpler.
  • FIG. 11 shows an example of a cross-sectional configuration in which a diffraction structure is added to the pixel 200 of the direct ToF sensor shown in FIG. 10.
  • the pixel 200 in FIG. 11 is provided with a diffraction structure 481 in which a recess is formed at a predetermined depth at the interface on the light incident surface side of the SiGe layer 11, and a silicon oxide film as an insulating film is embedded in the recess.
  • the diffraction structure 481 can be formed as STI (Shallow Trench Isolation) using isotropic dry etching, for example.
  • FIG. 12 is a plan view showing the planar shape of the diffraction structure 481 in FIG. 11.
  • the diffraction structure 481 can be formed, for example, in a planar shape of a 4x4 grating pattern, as shown in FIG. 12. Note that the planar shape of the diffraction structure 481 is, of course, not limited to the 4x4 grating pattern and may be any other shape.
  • FIG. 13 shows an example of a cross-sectional configuration in which the same diffraction structure 481 as in FIG. 11 is added to the front surface side of the SiGe layer 11 of the pixel 200 of the direct ToF sensor shown in FIG. 10.
  • the configuration other than the diffraction structure 481 is the same as that of the pixel 200 shown in FIG. 10, so a description thereof will be omitted.
  • By providing the diffraction structure 481, the optical path length can be increased by the light scattering effect, and the light receiving sensitivity can be improved.
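  • The benefit of a longer optical path can be estimated with the Beer-Lambert relation, in which the fraction of light absorbed over a path L with absorption coefficient α is 1 − exp(−αL). The sketch below compares a straight path through the layer with a path effectively lengthened by scattering; the absorption coefficient, layer thickness, and path-length multiplier are hypothetical values used only for illustration.

```python
import math

def absorbed_fraction(alpha_per_um: float, path_um: float) -> float:
    """Beer-Lambert estimate of the fraction of light absorbed over a given path."""
    return 1.0 - math.exp(-alpha_per_um * path_um)

# Hypothetical numbers: a weakly absorbed near-infrared wavelength.
alpha = 0.05          # absorption coefficient [1/um] (assumed)
thickness = 5.0       # physical layer thickness [um] (assumed)
scatter_gain = 2.0    # effective path-length increase from the diffraction structure (assumed)

print("straight path :", absorbed_fraction(alpha, thickness))
print("scattered path:", absorbed_fraction(alpha, thickness * scatter_gain))
```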
  • FIGS. 11 to 13 show examples in which the diffraction structure 481 is added to the pixel structure of the direct ToF sensor shown in FIG. 10, but it goes without saying that the diffraction structure 481 may also be added to the pixel structure of the gate type ToF sensor shown in FIG. 4, the IR imaging sensor shown in FIG. 6, or the CAPD type ToF sensor shown in FIG. 8.
  • FIG. 14 is a diagram illustrating a modification of the element separation section 13 (pixel separation section) in the photodetection device 1.
  • As shown in A of FIG. 14, the element isolation part 13 described above has a structure in which a trench penetrates the SiGe layer 11. However, the element isolation part 13 may instead have a structure in which it is formed only in a part of the SiGe layer 11 in the depth direction, without penetrating the SiGe layer 11.
  • FIG. 14C shows an example in which the element isolation portion 13 is formed to a depth halfway through the SiGe layer 11 by a method of forming a trench from the back surface (the top surface in FIG. 14) of the SiGe layer 11.
  • FIG. 14 also shows an example in which the element isolation part 13 is formed to a depth partway through the SiGe layer 11 by forming a trench from the front surface (the bottom surface in FIG. 14) of the SiGe layer 11.
  • In the examples described above, the pixel 200 has the diffraction structure 481 formed by STI or the like, but a diffraction structure may instead be formed in the pixel 200 by the same formation method as the element isolation part 13, that is, by forming a trench.
  • FIG. 15A shows an example in which a diffraction structure 491 is provided in the region near the center of the pixel 200 using a method of forming a trench from the back side of the SiGe layer 11.
  • FIG. 15B shows an example in which a diffraction structure 491 is provided in a region near the center of the pixel 200 using a method of forming a trench from the front surface side of the SiGe layer 11.
  • the depth of the trench can be formed deeper than the diffraction structure 481 shown in FIGS. 11 to 13.
  • FIG. 15C is a plan view showing the planar shape of the diffraction structure 491.
  • the diffraction structure 491 can be formed, for example, in a cross-shaped planar shape that intersects at the center of the pixel, as shown in FIG. 15C.
  • the planar shape of the diffraction structure 491 is not limited to a cross-shaped planar shape, and may be a grating pattern like the diffraction structure 481 or other shapes.
  • FIG. 16 is a block diagram showing a schematic configuration example of the photodetecting device 1 having the above-mentioned pixel 200.
  • the photodetecting device 1 shown in FIG. 16 includes a pixel array section 521 in which the pixels 200 described above are arranged in rows and columns in the row and column directions, and a peripheral circuit section.
  • the peripheral circuit section includes, for example, a vertical drive section 522, a column processing section 523, a horizontal drive section 524, a system control section 525, and the like.
  • the photodetector 1 is further provided with a signal processing section 526 and a data storage section 527.
  • the signal processing section 526 and the data storage section 527 may be mounted on the same substrate as the photodetection device 1, or may be arranged on a substrate in a module separate from the photodetection device 1.
  • the pixel array section 521 has a configuration in which pixels 200, which generate charges according to the amount of received light and output signals according to the charges, are arranged in a matrix in the row and column directions.
  • the row direction refers to the direction in which the pixels 200 are arranged in the horizontal direction
  • the column direction refers to the direction in which the pixels 200 are arranged in the vertical direction.
  • the row direction is the horizontal direction in the figure
  • the column direction is the vertical direction in the figure. In this way, the pixel array section 521 includes a plurality of pixels 200 that photoelectrically convert incident light and output a signal according to the resulting charge.
  • When the pixel 200 has the configuration explained in FIGS. 3 and 4, the photodetection device 1 is a gate type ToF sensor, and when the pixel 200 has the configuration explained in FIGS. 5 and 6, the photodetection device 1 is an IR imaging sensor. Further, when the pixel 200 has the configuration explained in FIGS. 7 and 8, the photodetection device 1 is a CAPD type ToF sensor, and when the pixel 200 has the configuration explained in FIGS. 9 and 10, the photodetection device 1 is a direct ToF sensor using SPADs.
  • For the matrix-like pixel arrangement, a pixel drive line 528 is wired along the row direction for each pixel row, and a vertical signal line 529 is wired along the column direction for each pixel column.
  • the pixel drive line 528 transmits a drive signal for driving when reading a signal from the pixel 200.
  • Although the pixel drive line 528 is shown as a single wiring in FIG. 16, it is not limited to one wiring.
  • One end of the pixel drive line 528 is connected to an output end corresponding to each row of the vertical drive section 522.
  • the vertical signal line 529 corresponds to the vertical signal lines 211A and 211B described with reference to FIG. 3 and the like.
  • the vertical drive section 522 is composed of a shift register, an address decoder, etc., and drives each pixel 200 of the pixel array section 521 simultaneously or in units of rows. That is, the vertical drive unit 522 forms a control circuit that controls the operation of each pixel 200 of the pixel array unit 521, together with the system control unit 525 that controls the vertical drive unit 522.
  • Pixel signals output from each pixel 200 in a pixel row according to drive control by the vertical drive unit 522 are input to the column processing unit 523 through the vertical signal line 529.
  • the column processing unit 523 performs predetermined signal processing on the pixel signal output from each pixel 200 through the vertical signal line 529, and temporarily holds the pixel signal after the signal processing. Specifically, the column processing unit 523 performs noise removal processing, AD (Analog to Digital) conversion processing, etc. as signal processing.
  • the horizontal drive section 524 is composed of a shift register, an address decoder, etc., and sequentially selects unit circuits corresponding to the pixel columns of the column processing section 523. By this selective scanning by the horizontal driving section 524, pixel signals subjected to signal processing for each unit circuit in the column processing section 523 are sequentially output.
  • the system control unit 525 includes a timing generator that generates various timing signals, and performs drive control of the vertical drive unit 522, the column processing unit 523, the horizontal drive unit 524, and the like, based on the various timing signals generated by the timing generator.
  • the signal processing unit 526 has at least an arithmetic processing function, and performs various signal processing such as arithmetic processing based on the pixel signal output from the column processing unit 523.
  • the data storage section 527 temporarily stores data necessary for signal processing in the signal processing section 526.
  • the photodetector 1 configured as described above has a circuit configuration called a column ADC type in which an AD conversion circuit that performs AD conversion processing in the column processing section 523 is arranged for each pixel column.
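  • To make the row-by-row, column-parallel readout described above concrete, the following sketch models a column-ADC readout: one row is selected at a time, every column converts its analog pixel value in parallel (a simple quantizer stands in here for the noise removal and AD conversion), and the converted values are then scanned out column by column. The array size and quantization step are arbitrary stand-ins, not values from the disclosure.

```python
from typing import List

def quantize(analog: float, lsb: float = 0.01) -> int:
    """Stand-in for the per-column noise removal + AD conversion."""
    return int(round(analog / lsb))

def column_adc_readout(pixel_array: List[List[float]]) -> List[List[int]]:
    """Read out a frame the way a column-ADC sensor does:
    one row is selected at a time, all columns convert in parallel,
    then the horizontal drive scans the converted values out in order."""
    frame = []
    for row in pixel_array:                       # vertical drive: select one row
        converted = [quantize(v) for v in row]    # column processing: parallel per column
        frame.append([converted[c] for c in range(len(converted))])  # horizontal scan-out
    return frame

# Tiny 2x3 example frame of analog pixel values (arbitrary numbers).
print(column_adc_readout([[0.10, 0.25, 0.40], [0.05, 0.30, 0.55]]))
```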
  • the photodetector 1 is a back-illuminated device that has a high quantum efficiency in the infrared region because a photodiode 12 as a photoelectric conversion section is formed in the SiGe layer 11.
  • the photodetection device 1 is mounted on a vehicle, for example, and can measure the distance to an object outside the vehicle and generate a photographed image.
  • the photodetecting device 1 can be installed in a smartphone or the like, and can measure the distance to an object or generate a photographed image.
  • FIG. 17 is a diagram showing an example of use of an image sensor using the above-described photodetection device 1.
  • the above-described photodetection device 1 can be used as an image sensor in various cases for sensing light such as infrared light, visible light, ultraviolet light, and X-rays, for example, as described below.
  • Devices that capture images for viewing, such as digital cameras and mobile devices with camera functions
  • Devices used for transportation, such as in-vehicle sensors that photograph the rear, the surroundings, and the interior of a car, surveillance cameras that monitor traveling vehicles and roads, and distance sensors that measure the distance between vehicles
  • Devices used in home appliances such as TVs, refrigerators, and air conditioners to photograph user gestures and operate the appliances according to those gestures
  • Devices used for medical and healthcare purposes, such as endoscopes and devices that perform blood vessel imaging by receiving infrared light
  • Devices used for security, such as surveillance cameras for crime prevention and cameras for person authentication
  • Devices used for beauty purposes, such as skin measurement devices that photograph the skin and microscopes that photograph the scalp
  • Devices used for sports, such as action cameras and wearable cameras
  • Devices used for agriculture, such as cameras for monitoring the condition of fields and crops
  • Note that the technology of the present disclosure is not limited to application to photodetection devices. That is, the technology of the present disclosure is applicable to all electronic devices that use a solid-state imaging device in an image capture unit (photoelectric conversion unit), such as imaging devices like digital still cameras and video cameras, mobile terminal devices having an imaging function, and copying machines that use a solid-state imaging device in an image reading unit.
  • the solid-state imaging device may be formed as a single chip, or may be a module having an imaging function in which an imaging section and a signal processing section or an optical system are packaged together.
  • FIG. 18 is a block diagram showing a configuration example of an imaging device as an electronic device to which the technology of the present disclosure is applied.
  • the imaging device 1000 in FIG. 18 includes an optical section 1001 consisting of a lens group and the like, a solid-state imaging device (imaging device) 1002 in which the configuration of the photodetection device 1 in FIG. 1 is adopted, and a DSP (Digital Signal Processor) circuit 1003.
  • the imaging apparatus 1000 also includes a frame memory 1004, a display section 1005, a recording section 1006, an operation section 1007, and a power supply section 1008.
  • DSP circuit 1003, frame memory 1004, display section 1005, recording section 1006, operation section 1007, and power supply section 1008 are interconnected via bus line 1009.
  • the optical section 1001 takes in incident light (image light) from a subject and forms an image on the imaging surface of the solid-state imaging device 1002.
  • the solid-state imaging device 1002 converts the amount of incident light imaged onto the imaging surface by the optical section 1001 into an electrical signal for each pixel, and outputs the electric signal as a pixel signal.
  • As the solid-state imaging device 1002, the photodetection device 1 of FIG. 1, that is, a photodetection device that has high quantum efficiency in the infrared region and improved sensitivity to light in the infrared region, can be used.
  • the display unit 1005 is configured with a thin display such as an LCD (Liquid Crystal Display) or an organic EL (Electro Luminescence) display, and displays moving images or still images captured by the solid-state imaging device 1002.
  • a recording unit 1006 records a moving image or a still image captured by the solid-state imaging device 1002 on a recording medium such as a hard disk or a semiconductor memory.
  • the operation unit 1007 issues operation commands regarding various functions of the imaging device 1000 under operation by the user.
  • a power supply unit 1008 appropriately supplies various power supplies that serve as operating power for the DSP circuit 1003, frame memory 1004, display unit 1005, recording unit 1006, and operation unit 1007 to these supply targets.
  • By using the photodetection device 1 described above as the solid-state imaging device 1002, the sensitivity to light in the infrared region can be improved. Therefore, it is possible to improve the quality of captured images also in the imaging device 1000 such as a video camera, a digital still camera, or a camera module for mobile devices such as a mobile phone.
  • Example of application to an endoscopic surgery system: The technology according to the present disclosure (the present technology) can be applied to various products.
  • the technology according to the present disclosure may be applied to an endoscopic surgery system.
  • FIG. 19 is a diagram illustrating an example of a schematic configuration of an endoscopic surgery system to which the technology according to the present disclosure (present technology) can be applied.
  • FIG. 19 shows an operator (doctor) 11131 performing surgery on a patient 11132 on a patient bed 11133 using the endoscopic surgery system 11000.
  • the endoscopic surgery system 11000 includes an endoscope 11100, other surgical instruments 11110 such as a pneumoperitoneum tube 11111 and an energy treatment instrument 11112, a support arm device 11120 that supports the endoscope 11100, and a cart 11200 loaded with various devices for endoscopic surgery.
  • the endoscope 11100 is composed of a lens barrel 11101 whose distal end is inserted into a body cavity of a patient 11132 over a predetermined length, and a camera head 11102 connected to the proximal end of the lens barrel 11101.
  • In the illustrated example, the endoscope 11100 is configured as a so-called rigid scope having a rigid lens barrel 11101, but the endoscope 11100 may also be configured as a so-called flexible scope having a flexible lens barrel.
  • An opening into which an objective lens is fitted is provided at the tip of the lens barrel 11101.
  • a light source device 11203 is connected to the endoscope 11100, and the light generated by the light source device 11203 is guided to the tip of the lens barrel by a light guide extending inside the lens barrel 11101 and is irradiated toward an observation target in the body cavity of the patient 11132 through the objective lens.
  • Note that the endoscope 11100 may be a forward-viewing scope, an oblique-viewing scope, or a side-viewing scope.
  • An optical system and an image sensor are provided inside the camera head 11102, and reflected light (observation light) from an observation target is focused on the image sensor by the optical system.
  • the observation light is photoelectrically converted by the image sensor, and an electric signal corresponding to the observation light, that is, an image signal corresponding to the observation image is generated.
  • the image signal is transmitted as RAW data to a camera control unit (CCU) 11201.
  • the CCU 11201 is configured with a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and the like, and centrally controls the operations of the endoscope 11100 and the display device 11202. Further, the CCU 11201 receives an image signal from the camera head 11102, and performs various image processing on the image signal, such as development processing (demosaic processing), for displaying an image based on the image signal.
  • the display device 11202 displays an image based on an image signal subjected to image processing by the CCU 11201 under control from the CCU 11201.
  • the light source device 11203 is composed of a light source such as an LED (Light Emitting Diode), and supplies irradiation light to the endoscope 11100 when photographing the surgical site or the like.
  • the input device 11204 is an input interface for the endoscopic surgery system 11000.
  • the user can input various information and instructions to the endoscopic surgery system 11000 via the input device 11204.
  • the user inputs an instruction to change the imaging conditions (type of irradiation light, magnification, focal length, etc.) by the endoscope 11100.
  • a treatment tool control device 11205 controls driving of an energy treatment tool 11112 for cauterizing tissue, incising, sealing blood vessels, or the like.
  • the pneumoperitoneum device 11206 injects gas into the body cavity of the patient 11132 via the pneumoperitoneum tube 11111 in order to inflate the body cavity, for the purpose of securing a field of view for the endoscope 11100 and a working space for the operator.
  • the recorder 11207 is a device that can record various information regarding surgery.
  • the printer 11208 is a device that can print various types of information regarding surgery in various formats such as text, images, or graphs.
  • the light source device 11203 that supplies irradiation light to the endoscope 11100 when photographing the surgical site can be configured, for example, from a white light source configured by an LED, a laser light source, or a combination thereof.
  • When the white light source is configured by a combination of RGB laser light sources, the output intensity and output timing of each color (each wavelength) can be controlled with high precision, so the white balance of the captured image can be adjusted in the light source device 11203.
  • In this case, by irradiating the observation target with the laser light from each of the RGB laser light sources in a time-division manner and controlling the driving of the image sensor of the camera head 11102 in synchronization with the irradiation timing, it is also possible to capture images corresponding to each of R, G, and B in a time-division manner. According to this method, a color image can be obtained without providing a color filter in the image sensor.
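  • A minimal sketch of the time-division color capture described above, assuming three monochrome frames captured in synchronization with the R, G, and B irradiation timings: stacking them channel-wise yields a color image without a color filter on the image sensor. The use of NumPy and the tiny frame sizes are assumptions for illustration only.

```python
import numpy as np

def compose_color(frame_r: np.ndarray, frame_g: np.ndarray, frame_b: np.ndarray) -> np.ndarray:
    """Combine three time-division monochrome captures into one RGB image."""
    return np.stack([frame_r, frame_g, frame_b], axis=-1)

# Hypothetical 2x2 monochrome frames captured under R, G, and B illumination.
r = np.array([[10, 20], [30, 40]], dtype=np.uint16)
g = np.array([[11, 21], [31, 41]], dtype=np.uint16)
b = np.array([[12, 22], [32, 42]], dtype=np.uint16)
print(compose_color(r, g, b).shape)   # (2, 2, 3)
```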
  • the driving of the light source device 11203 may be controlled so that the intensity of the light it outputs is changed at predetermined time intervals.
  • By controlling the driving of the image sensor of the camera head 11102 in synchronization with the timing of the change in light intensity to acquire images in a time-division manner, and by compositing the images, it is possible to generate an image with a high dynamic range.
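  • The high-dynamic-range scheme described above can be sketched as follows: frames acquired while the light intensity is stepped are normalized by their relative intensity, saturated pixels are excluded, and the remaining values are merged. The intensity weights and saturation level are hypothetical tuning values, not parameters given in the disclosure.

```python
import numpy as np

def merge_hdr(frames, exposure_weights, saturation=0.95):
    """Merge time-division frames taken at different light intensities.

    frames: list of float arrays scaled to [0, 1].
    exposure_weights: relative light intensity / exposure of each frame.
    Saturated pixels are ignored so highlights come from the low-intensity frame.
    """
    acc = np.zeros_like(frames[0], dtype=np.float64)
    norm = np.zeros_like(frames[0], dtype=np.float64)
    for frame, w in zip(frames, exposure_weights):
        valid = frame < saturation               # drop blown-out pixels
        acc += np.where(valid, frame / w, 0.0)   # scale back to a common radiance
        norm += valid
    return acc / np.maximum(norm, 1)

# Two hypothetical exposures of the same 1x3 scene (bright pixels saturate at high intensity).
low  = np.array([[0.10, 0.40, 0.45]])
high = np.array([[0.40, 1.00, 0.99]])
print(merge_hdr([high, low], exposure_weights=[4.0, 1.0]))
```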
  • the light source device 11203 may be configured to be able to supply light in a predetermined wavelength band compatible with special light observation.
  • In special light observation, for example, so-called narrow band imaging is performed, in which a predetermined tissue such as a blood vessel in the mucosal surface layer is photographed with high contrast by irradiating light in a narrower band than the irradiation light (i.e., white light) used in normal observation, utilizing the wavelength dependence of light absorption in body tissue.
  • fluorescence observation may be performed in which an image is obtained using fluorescence generated by irradiating excitation light.
  • In fluorescence observation, body tissue is irradiated with excitation light and the fluorescence from the body tissue is observed (autofluorescence observation), or a reagent such as indocyanine green (ICG) is locally injected into the body tissue and the body tissue is irradiated with excitation light corresponding to the fluorescence wavelength of the reagent to obtain a fluorescence image.
  • the light source device 11203 may be configured to be able to supply narrowband light and/or excitation light compatible with such special light observation.
  • FIG. 20 is a block diagram showing an example of the functional configuration of the camera head 11102 and CCU 11201 shown in FIG. 19.
  • the camera head 11102 includes a lens unit 11401, an imaging section 11402, a driving section 11403, a communication section 11404, and a camera head control section 11405.
  • the CCU 11201 includes a communication section 11411, an image processing section 11412, and a control section 11413. Camera head 11102 and CCU 11201 are communicably connected to each other by transmission cable 11400.
  • the lens unit 11401 is an optical system provided at the connection part with the lens barrel 11101. Observation light taken in from the tip of the lens barrel 11101 is guided to the camera head 11102 and enters the lens unit 11401.
  • the lens unit 11401 is configured by combining a plurality of lenses including a zoom lens and a focus lens.
  • the imaging unit 11402 is composed of an image sensor.
  • the imaging unit 11402 may include one image sensor (so-called single-plate type) or a plurality of image sensors (so-called multi-plate type).
  • When the imaging unit 11402 is of the multi-plate type, for example, image signals corresponding to R, G, and B are generated by the respective image sensors, and a color image may be obtained by combining them.
  • the imaging unit 11402 may be configured to include a pair of imaging elements for respectively acquiring right-eye and left-eye image signals corresponding to 3D (dimensional) display. By performing 3D display, the operator 11131 can more accurately grasp the depth of the living tissue at the surgical site.
  • a plurality of lens units 11401 may be provided corresponding to each imaging element.
  • the imaging unit 11402 does not necessarily have to be provided in the camera head 11102.
  • the imaging unit 11402 may be provided inside the lens barrel 11101 immediately after the objective lens.
  • the drive unit 11403 is constituted by an actuator, and moves the zoom lens and focus lens of the lens unit 11401 by a predetermined distance along the optical axis under control from the camera head control unit 11405. Thereby, the magnification and focus of the image captured by the imaging unit 11402 can be adjusted as appropriate.
  • the communication unit 11404 is configured by a communication device for transmitting and receiving various information to and from the CCU 11201.
  • the communication unit 11404 transmits the image signal obtained from the imaging unit 11402 to the CCU 11201 via the transmission cable 11400 as RAW data.
  • the communication unit 11404 receives a control signal for controlling the drive of the camera head 11102 from the CCU 11201 and supplies it to the camera head control unit 11405.
  • the control signal includes, for example, information about imaging conditions, such as information specifying the frame rate of the captured image, information specifying the exposure value at the time of imaging, and/or information specifying the magnification and focus of the captured image.
  • the above-mentioned frame rate, exposure value, magnification, focus, and other imaging conditions may be appropriately specified by the user, or may be automatically set by the control unit 11413 of the CCU 11201 based on the acquired image signal.
  • In the latter case, the endoscope 11100 is equipped with a so-called AE (Auto Exposure) function, an AF (Auto Focus) function, and an AWB (Auto White Balance) function.
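  • As a rough sketch of how an AE function of this kind might set the exposure from the acquired image signal, the loop below nudges the exposure value toward a target mean brightness. The target level, damping gain, and exposure limits are hypothetical tuning values, not parameters defined in the disclosure.

```python
def auto_exposure_step(mean_brightness: float, exposure: float,
                       target: float = 0.45, gain: float = 0.5,
                       exp_min: float = 0.001, exp_max: float = 0.1) -> float:
    """One AE iteration: scale the exposure by the ratio of target to measured brightness,
    damped by `gain` to avoid oscillation, and clamp it to the supported range."""
    if mean_brightness <= 0.0:
        return exp_max                       # completely dark frame: open up fully
    ratio = target / mean_brightness
    new_exposure = exposure * (1.0 + gain * (ratio - 1.0))
    return min(max(new_exposure, exp_min), exp_max)

# Example: the frame is too dark, so the exposure is increased.
print(auto_exposure_step(mean_brightness=0.2, exposure=0.01))
```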
  • the camera head control unit 11405 controls the drive of the camera head 11102 based on the control signal from the CCU 11201 received via the communication unit 11404.
  • the communication unit 11411 is configured by a communication device for transmitting and receiving various information to and from the camera head 11102.
  • the communication unit 11411 receives an image signal transmitted from the camera head 11102 via the transmission cable 11400.
  • the communication unit 11411 transmits a control signal for controlling the drive of the camera head 11102 to the camera head 11102.
  • the image signal and control signal can be transmitted by electrical communication, optical communication, or the like.
  • the image processing unit 11412 performs various image processing on the image signal, which is RAW data, transmitted from the camera head 11102.
  • the control unit 11413 performs various controls related to the imaging of the surgical site etc. by the endoscope 11100 and the display of the captured image obtained by imaging the surgical site etc. For example, the control unit 11413 generates a control signal for controlling the drive of the camera head 11102.
  • control unit 11413 causes the display device 11202 to display a captured image showing the surgical site, etc., based on the image signal subjected to image processing by the image processing unit 11412.
  • At this time, the control unit 11413 may recognize various objects in the captured image using various image recognition techniques. For example, by detecting the shape, color, and the like of the edges of objects included in the captured image, the control unit 11413 can recognize surgical tools such as forceps, specific body parts, bleeding, mist during use of the energy treatment tool 11112, and the like.
  • the control unit 11413 may use the recognition result to superimpose and display various types of surgical support information on the image of the surgical site. By displaying the surgical support information in a superimposed manner and presenting it to the surgeon 11131, it becomes possible to reduce the burden on the surgeon 11131 and allow the surgeon 11131 to proceed with the surgery reliably.
  • the transmission cable 11400 connecting the camera head 11102 and the CCU 11201 is an electrical signal cable compatible with electrical signal communication, an optical fiber compatible with optical communication, or a composite cable thereof.
  • communication is performed by wire using the transmission cable 11400, but communication between the camera head 11102 and the CCU 11201 may be performed wirelessly.
  • the technology according to the present disclosure can be applied to the lens unit 11401 and the imaging section 11402 of the camera head 11102 among the configurations described above.
  • Specifically, as the imaging section 11402, the photodetection device 1 as an IR imaging sensor having the configuration described in FIGS. 5 and 6 can be applied.
  • the technology according to the present disclosure (this technology) can be applied to various products.
  • For example, the technology according to the present disclosure may be realized as a device mounted on any type of moving body such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
  • FIG. 21 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile object control system to which the technology according to the present disclosure can be applied.
  • the vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001.
  • the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, an outside vehicle information detection unit 12030, an inside vehicle information detection unit 12040, and an integrated control unit 12050.
  • In FIG. 21, as the functional configuration of the integrated control unit 12050, a microcomputer 12051, an audio/image output section 12052, and an in-vehicle network I/F (interface) 12053 are illustrated.
  • the drive system control unit 12010 controls the operation of devices related to the drive system of the vehicle according to various programs.
  • For example, the drive system control unit 12010 functions as a control device for a driving force generation device that generates the driving force of the vehicle, such as an internal combustion engine or a drive motor, a driving force transmission mechanism that transmits the driving force to the wheels, a steering mechanism that adjusts the steering angle of the vehicle, a braking device that generates the braking force of the vehicle, and the like.
  • the body system control unit 12020 controls the operations of various devices installed in the vehicle body according to various programs.
  • the body system control unit 12020 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as a headlamp, a back lamp, a brake lamp, a turn signal, or a fog lamp.
  • radio waves transmitted from a portable device that replaces a key or signals from various switches may be input to the body control unit 12020.
  • the body system control unit 12020 receives input of these radio waves or signals, and controls the door lock device, power window device, lamp, etc. of the vehicle.
  • the external information detection unit 12030 detects information external to the vehicle in which the vehicle control system 12000 is mounted.
  • an imaging section 12031 is connected to the outside-vehicle information detection unit 12030.
  • the vehicle exterior information detection unit 12030 causes the imaging unit 12031 to capture an image of the exterior of the vehicle, and receives the captured image.
  • Based on the received image, the vehicle exterior information detection unit 12030 may perform object detection processing for detecting a person, a car, an obstacle, a sign, characters on a road surface, or the like, or may perform distance detection processing.
  • the imaging unit 12031 is an optical sensor that receives light and outputs an electrical signal according to the amount of received light.
  • the imaging unit 12031 can output the electrical signal as an image or as distance measurement information.
  • the light received by the imaging unit 12031 may be visible light or non-visible light such as infrared rays.
  • the in-vehicle information detection unit 12040 detects in-vehicle information.
  • a driver condition detection section 12041 that detects the condition of the driver is connected to the in-vehicle information detection unit 12040.
  • the driver condition detection unit 12041 includes, for example, a camera that images the driver, and based on the detection information input from the driver condition detection unit 12041, the in-vehicle information detection unit 12040 may calculate the degree of fatigue or concentration of the driver, or may determine whether the driver is dozing off.
  • the microcomputer 12051 calculates control target values for the driving force generation device, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040, and can output a control command to the drive system control unit 12010.
  • For example, the microcomputer 12051 can perform cooperative control for the purpose of realizing ADAS (Advanced Driver Assistance System) functions, including collision avoidance or impact mitigation of the vehicle, following driving based on the inter-vehicle distance, vehicle speed maintaining driving, vehicle collision warning, vehicle lane departure warning, and the like.
  • Furthermore, by controlling the driving force generation device, the steering mechanism, the braking device, and the like based on information about the surroundings of the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040, the microcomputer 12051 can perform cooperative control for the purpose of automated driving or the like in which the vehicle travels autonomously without relying on the driver's operation.
  • the microcomputer 12051 can output a control command to the body system control unit 12020 based on the information outside the vehicle acquired by the outside information detection unit 12030.
  • For example, the microcomputer 12051 can perform cooperative control for the purpose of preventing glare, such as controlling the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030 and switching from high beam to low beam.
  • the audio and image output unit 12052 transmits an output signal of at least one of audio and images to an output device that can visually or audibly notify information to the occupants of the vehicle or to the outside of the vehicle.
  • an audio speaker 12061, a display section 12062, and an instrument panel 12063 are illustrated as output devices.
  • the display unit 12062 may include, for example, at least one of an on-board display and a head-up display.
  • FIG. 22 is a diagram showing an example of the installation position of the imaging section 12031.
  • the vehicle 12100 has imaging units 12101, 12102, 12103, 12104, and 12105 as the imaging unit 12031.
  • the imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, at positions such as the front nose, side mirrors, rear bumper, back door, and the top of the windshield inside the vehicle 12100.
  • An imaging unit 12101 provided in the front nose and an imaging unit 12105 provided above the windshield inside the vehicle mainly acquire images in front of the vehicle 12100.
  • Imaging units 12102 and 12103 provided in the side mirrors mainly capture images of the sides of the vehicle 12100.
  • An imaging unit 12104 provided in the rear bumper or back door mainly captures images of the rear of the vehicle 12100.
  • the images of the front acquired by the imaging units 12101 and 12105 are mainly used for detecting preceding vehicles, pedestrians, obstacles, traffic lights, traffic signs, lanes, and the like.
  • FIG. 22 shows an example of the imaging range of the imaging units 12101 to 12104.
  • An imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and an imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, an overhead image of the vehicle 12100 viewed from above can be obtained.
  • At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
  • at least one of the imaging units 12101 to 12104 may be a stereo camera including a plurality of image sensors, or may be an image sensor having pixels for phase difference detection.
  • For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 determines the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (the relative speed with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, the closest three-dimensional object that is on the traveling path of the vehicle 12100 and that is traveling at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100.
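  • A minimal sketch of the preceding-vehicle extraction logic described above, assuming that each detected three-dimensional object is already annotated with its distance, speed, heading difference, and whether it lies on the traveling path of the vehicle. The heading tolerance and minimum speed threshold are hypothetical values used only for illustration.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TrackedObject:
    distance_m: float        # distance from the own vehicle
    speed_kmh: float         # speed of the object along the road
    heading_deg: float       # heading difference with respect to the own vehicle
    on_path: bool            # True if the object is on the own traveling path

def extract_preceding_vehicle(objects: List[TrackedObject],
                              min_speed_kmh: float = 0.0,
                              heading_tol_deg: float = 15.0) -> Optional[TrackedObject]:
    """Pick the closest on-path object moving in roughly the same direction
    at or above the minimum speed; return None if there is no such object."""
    candidates = [o for o in objects
                  if o.on_path
                  and o.speed_kmh >= min_speed_kmh
                  and abs(o.heading_deg) <= heading_tol_deg]
    return min(candidates, key=lambda o: o.distance_m, default=None)

objs = [TrackedObject(40.0, 60.0, 3.0, True),    # likely preceding vehicle
        TrackedObject(25.0, 55.0, 170.0, True),  # oncoming, rejected by heading
        TrackedObject(15.0, 50.0, 2.0, False)]   # adjacent lane, rejected by path flag
print(extract_preceding_vehicle(objs))
```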
  • the microcomputer 12051 can set an inter-vehicle distance to be secured in advance in front of the preceding vehicle, and perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, it is possible to perform cooperative control for the purpose of autonomous driving, etc., in which the vehicle travels autonomously without depending on the driver's operation.
  • For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 classifies three-dimensional object data regarding three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects, extracts them, and can use them for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult to see. Then, the microcomputer 12051 determines a collision risk indicating the degree of danger of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of a collision, the microcomputer 12051 can provide driving support for collision avoidance by outputting a warning to the driver via the audio speaker 12061 and the display unit 12062, and by performing forced deceleration and avoidance steering via the drive system control unit 12010.
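  • A rough sketch in the spirit of the collision-risk determination described above: for each obstacle, a time-to-collision is estimated from the distance and closing speed, converted into a simple risk score, and compared with set values to choose between no action, a warning to the driver, and forced deceleration. All thresholds are hypothetical tuning values, not figures from the disclosure.

```python
def collision_risk(distance_m: float, closing_speed_mps: float) -> float:
    """Risk score in [0, 1]: 0 when not closing, approaching 1 as the
    estimated time-to-collision (distance / closing speed) goes to zero."""
    if closing_speed_mps <= 0.0:
        return 0.0
    ttc_s = distance_m / closing_speed_mps
    return max(0.0, min(1.0, 1.0 / (1.0 + ttc_s)))

def decide_action(risk: float, warn_level: float = 0.25, brake_level: float = 0.5) -> str:
    """Map the risk score to an action: warn the driver first, brake if it keeps rising."""
    if risk >= brake_level:
        return "forced deceleration / avoidance steering"
    if risk >= warn_level:
        return "audio + display warning to driver"
    return "no action"

risk = collision_risk(distance_m=12.0, closing_speed_mps=8.0)   # TTC = 1.5 s
print(risk, decide_action(risk))
```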
  • At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays.
  • the microcomputer 12051 can recognize a pedestrian by determining whether the pedestrian is present in the images captured by the imaging units 12101 to 12104.
  • Such pedestrian recognition is performed, for example, by a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on a series of feature points indicating the outline of an object to determine whether or not the object is a pedestrian.
  • When the microcomputer 12051 determines that a pedestrian is present in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the audio image output unit 12052 controls the display unit 12062 so that a rectangular contour line for emphasis is superimposed on the recognized pedestrian.
  • the audio image output unit 12052 may also control the display unit 12062 to display an icon or the like indicating a pedestrian at a desired position.
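  • The recognition-and-highlight flow described above can be sketched as follows, with deliberately simplified stand-ins: extract_feature_points and matches_pedestrian_outline represent the feature-point extraction and contour pattern matching steps (their real implementations are not given in the disclosure), and the result is the bounding rectangle to be superimposed on the display.

```python
from typing import List, Optional, Tuple

Point = Tuple[int, int]

def extract_feature_points(ir_image) -> List[Point]:
    """Stand-in for feature-point extraction from an infrared frame.
    Here every pixel above a brightness threshold is treated as a feature point."""
    return [(x, y) for y, row in enumerate(ir_image)
                   for x, v in enumerate(row) if v > 200]

def matches_pedestrian_outline(points: List[Point]) -> bool:
    """Stand-in for pattern matching on the series of contour points:
    accept shapes that are clearly taller than they are wide."""
    if len(points) < 4:
        return False
    xs, ys = zip(*points)
    width, height = max(xs) - min(xs) + 1, max(ys) - min(ys) + 1
    return height >= 2 * width

def pedestrian_box(ir_image) -> Optional[Tuple[int, int, int, int]]:
    """Return (x0, y0, x1, y1) of a rectangle to superimpose, or None."""
    pts = extract_feature_points(ir_image)
    if not matches_pedestrian_outline(pts):
        return None
    xs, ys = zip(*pts)
    return (min(xs), min(ys), max(xs), max(ys))

# Toy 6x4 IR frame with a bright, tall blob in one column.
frame = [[0, 0, 255, 0],
         [0, 0, 255, 0],
         [0, 0, 255, 0],
         [0, 0, 255, 0],
         [0, 0, 0,   0],
         [0, 0, 0,   0]]
print(pedestrian_box(frame))   # (2, 0, 2, 3)
```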
  • the technology according to the present disclosure can be applied to the imaging unit 12031 among the configurations described above.
  • Specifically, as the imaging unit 12031, the photodetection device 1 as an IR imaging sensor having the configuration described in FIGS. 5 and 6 can be applied.
  • the photodetecting device 1 as a direct ToF sensor having the configuration described in FIGS. 9 and 10 can be applied as the imaging unit 12031.
  • In the example described above, the first conductivity type is the P type, the second conductivity type is the N type, and the photodetection device uses electrons as the signal charge; however, the present disclosure can also be applied to a photodetection device that uses holes as the signal charge. That is, the first conductivity type can be the N type, the second conductivity type can be the P type, and each of the semiconductor regions described above can be configured with a semiconductor region of the opposite conductivity type.
  • Furthermore, the present disclosure is not limited to application to a photodetection device that detects the distribution of the amount of incident visible light and captures it as an image; it can also be applied to a photodetection device that captures the distribution of the incident amount of infrared rays, X-rays, particles, or the like as an image, and, in a broad sense, to general photodetection devices (physical quantity distribution detection devices) such as fingerprint detection sensors that detect the distribution of other physical quantities such as pressure and capacitance and capture it as an image.
  • the technology of the present disclosure can take the following configuration.
  • the silicon germanium layer is formed with a constant germanium concentration.
  • The photodetection device according to any one of (1) to (4) above, which is a gate-type ToF sensor having, in each pixel, the photoelectric conversion section, a transfer transistor, an amplification transistor, a selection transistor, a reset transistor, and a charge discharge transistor.
  • The photodetection device according to any one of (1) to (5) above, which is a CAPD type ToF sensor that includes, in each pixel, the photoelectric conversion section, a voltage application section that applies a predetermined voltage to the photoelectric conversion section, and a charge detection section that detects the charge generated in the photoelectric conversion section.
  • the photodetection device according to any one of (1) to (5) which is a direct ToF sensor having an avalanche photodiode including an avalanche multiplication region in each pixel as the photoelectric conversion section.
  • the photodetection device according to any one of (1) to (13), further comprising a diffraction structure on the first surface side of the silicon germanium layer.
  • a silicon germanium layer in which a photoelectric conversion section is formed; an inter-pixel light shielding film formed on a first surface side that is a light incident surface side of the silicon germanium layer; and a MOS transistor formed on a second surface side opposite to the first surface of the silicon germanium layer;
  • the germanium concentration of the silicon germanium layer is formed at a constant concentration.

Abstract

The present disclosure relates to: a photodetection device which enables the achievement of a photoelectric conversion unit that has high quantum efficiency in an infrared region; a method for producing this photodetection device; and an electronic apparatus. This photodetection device comprises: a silicon germanium layer that is provided with a photoelectric conversion unit; an inter-pixel light-blocking film that is formed on a first surface side of the silicon germanium layer, the first surface side being the light incident surface side; and a MOS transistor that is formed on a second surface side of the silicon germanium layer, the second surface being on the reverse side of the first surface. The silicon germanium layer is formed so as to have a constant germanium concentration. This technology can be applied, for example, to a ToF sensor or an imaging sensor.

Description

Photodetection device and its manufacturing method, and electronic equipment
 The present disclosure relates to a photodetection device, a method of manufacturing the same, and an electronic device, and particularly relates to a photodetection device, a method of manufacturing the same, and an electronic device that can realize a photoelectric conversion unit with high quantum efficiency in the infrared region.
 Conventionally, when a silicon substrate is used as the substrate material for a CMOS image sensor, the absorption wavelength band in the near-infrared region is about 1 um, and the absorption coefficient decreases toward the near-infrared region. Therefore, in order to increase the quantum efficiency at wavelengths of 900 nm and above, which cannot be achieved with silicon, consideration is being given to replacing silicon (Si) with germanium (Ge) or silicon germanium (SiGe) in the photodiode composition. When a compound semiconductor substrate such as silicon germanium is used, the absorption coefficient in the infrared region becomes high, so sensitivity is improved. In particular, with silicon germanium substrates, the absorption wavelength band can be changed up to about 1.9 um by changing the Ge composition, and by designing the composition according to the purpose, a sensor with a high absorption coefficient in the infrared region can be manufactured.
 When a SiGe layer is formed on a Si substrate, a mismatch in the lattice constants of Si and Ge causes distortion and stress, which causes crystal defects. To prevent this, a SiGe stress relaxation layer with a Ge concentration gradient must be formed between the SiGe epitaxial layer and the Si substrate. For example, Patent Document 1 discloses a front-illuminated sensor structure provided with this SiGe stress relaxation layer.
 Patent Document 1: Japanese Patent Application Publication No. 2007-13177
 In the sensor structure of Patent Document 1, in order to suppress dark current, a high-concentration P-type layer in which a high-concentration P-type impurity is implanted into the SiGe layer is provided, and a process of implanting the P-type impurity is required to form the high-concentration P-type layer. Furthermore, it is desirable to remove the SiGe stress relaxation layer from the viewpoint of suppressing dark current.
 The present disclosure has been made in view of this situation, and is intended to make it possible to realize a photoelectric conversion unit with high quantum efficiency in the infrared region.
 The photodetection device according to the first aspect of the present disclosure includes:
 a silicon germanium layer in which a photoelectric conversion section is formed;
 an inter-pixel light shielding film formed on a first surface side that is a light incident surface side of the silicon germanium layer; and
 a MOS transistor formed on a second surface side opposite to the first surface of the silicon germanium layer,
 wherein the silicon germanium layer is formed at a constant germanium concentration.
 The method for manufacturing a photodetection device according to the second aspect of the present disclosure includes:
 forming a photoelectric conversion section in a silicon germanium layer;
 forming an inter-pixel light shielding film on a first surface side that is a light incident surface side of the silicon germanium layer; and
 forming a MOS transistor on a second surface side opposite to the first surface of the silicon germanium layer,
 wherein the silicon germanium layer is formed at a constant germanium concentration.
 The electronic apparatus according to the third aspect of the present disclosure includes a photodetection device including:
 a silicon germanium layer in which a photoelectric conversion section is formed;
 an inter-pixel light shielding film formed on a first surface side that is a light incident surface side of the silicon germanium layer; and
 a MOS transistor formed on a second surface side opposite to the first surface of the silicon germanium layer,
 wherein the silicon germanium layer is formed at a constant germanium concentration.
 In the first to third aspects of the present disclosure, there are provided a silicon germanium layer in which a photoelectric conversion section is formed, an inter-pixel light shielding film formed on a first surface side that is a light incident surface side of the silicon germanium layer, and a MOS transistor formed on a second surface side of the silicon germanium layer opposite to the first surface, and the germanium concentration of the silicon germanium layer is formed at a constant concentration.
 The photodetection device and the electronic apparatus may be independent devices or may be modules incorporated into other devices.
FIG. 1 is a cross-sectional view showing a configuration example of a photodetection device according to an embodiment of the present disclosure.
FIG. 2 is a diagram illustrating a method of manufacturing the photodetection device of FIG. 1.
FIG. 3 is a diagram showing an example of a pixel circuit when the photodetection device is a gate-type ToF sensor.
FIG. 4 is a cross-sectional view of a pixel when the photodetection device is a gate-type ToF sensor.
FIG. 5 is a diagram showing an example of a pixel circuit when the photodetection device is an IR imaging sensor.
FIG. 6 is a cross-sectional view of a pixel when the photodetection device is an IR imaging sensor.
FIG. 7 is a diagram showing an example of a pixel circuit when the photodetection device is a CAPD type ToF sensor.
FIG. 8 is a cross-sectional view of a pixel when the photodetection device is a CAPD type ToF sensor.
FIG. 9 is a diagram showing an example of a pixel circuit when the photodetection device is a direct ToF sensor using SPAD.
FIG. 10 is a cross-sectional view of a pixel when the photodetection device is a direct ToF sensor using SPAD.
FIG. 11 is a cross-sectional view of a pixel in which a diffraction structure is added to the back surface of the SiGe layer in FIG. 10.
FIG. 12 is a plan view of the diffraction structure of FIG. 11.
FIG. 13 is a cross-sectional view of a pixel in which a diffraction structure is added to the front surface of the SiGe layer in FIG. 10.
FIG. 14 is a diagram illustrating a modification of the element isolation section.
FIG. 15 is a diagram illustrating an example of a diffraction structure formed by the same formation method as the element isolation section.
FIG. 16 is a block diagram showing a schematic configuration example of the photodetection device of the present disclosure.
FIG. 17 is a diagram illustrating an example of use of an image sensor using the photodetection device.
FIG. 18 is a block diagram showing a configuration example of an imaging device as an electronic device to which the technology of the present disclosure is applied.
FIG. 19 is a diagram showing an example of a schematic configuration of an endoscopic surgery system.
FIG. 20 is a block diagram showing an example of the functional configuration of a camera head and a CCU.
FIG. 21 is a block diagram showing an example of a schematic configuration of a vehicle control system.
FIG. 22 is an explanatory diagram showing an example of installation positions of a vehicle exterior information detection section and an imaging section.
 Hereinafter, embodiments for implementing the technology of the present disclosure (hereinafter referred to as embodiments) will be described with reference to the accompanying drawings. The explanation will be given in the following order.
1. Cross-sectional view of the photodetection device of the present disclosure
2. Manufacturing method of the photodetection device
3. Pixel configuration example when the photodetection device is a gate type ToF sensor
4. Pixel configuration example when the photodetection device is an imaging sensor
5. Pixel configuration example when the photodetection device is a CAPD type ToF sensor
6. Pixel configuration example when the photodetection device is a direct ToF sensor using SPAD
7. Modification with an added diffraction structure
8. Modification of the element isolation section
9. Configuration example of the photodetection device
10. Example of use of the photodetection device as an image sensor
11. Application example to electronic equipment
12. Application example to an endoscopic surgery system
13. Application example to mobile objects
 In the drawings referred to in the following description, the same or similar parts are given the same or similar reference numerals, and redundant explanation is omitted as appropriate. The drawings are schematic, and the relationship between thickness and planar dimensions, the ratio of the thickness of each layer, and the like differ from the actual ones. Furthermore, the drawings may include portions whose dimensional relationships and ratios differ from one another.
 Further, the definitions of directions such as up and down in the following description are merely definitions for convenience of explanation, and do not limit the technical idea of the present disclosure. For example, if an object is rotated 90° and observed, up and down are read as left and right, and if it is rotated 180° and observed, up and down are read as inverted.
<1. Cross-sectional view of the photodetection device of the present disclosure>
 FIG. 1 is a cross-sectional view showing a configuration example of a photodetection device according to an embodiment of the present disclosure.
 The photodetection device 1 shown in FIG. 1 includes a photoelectric conversion section that has high quantum efficiency (QE) for light in the infrared region. Specifically, a photodiode (PD) 12 is formed in the SiGe layer 11 as the photoelectric conversion section.
 The SiGe layer (silicon germanium layer) 11 is a single-crystal layer of silicon germanium (SiGe) formed with a constant germanium concentration. Here, a "constant" concentration is defined as the variation in the germanium concentration within the layer being controlled to, typically, about ±1% of the target concentration, and at least to a value smaller than 10%. The absolute value of the germanium concentration is not limited to a predetermined range, and may be, for example, about 20 to 30%, or about 70 to 80%. The higher the germanium concentration, the higher the absorption coefficient that can be obtained in the infrared region. The SiGe layer 11 is composed of a low-concentration P-type semiconductor region, and a PN-junction photodiode 12 is formed by forming, in a part of the SiGe layer 11, an N-type semiconductor region doped with N-type impurities such as phosphorus (P) or arsenic (As).
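 As a small illustration of the uniformity criterion just stated, the helper below checks a measured germanium concentration depth profile against a target value: it reports whether every sample stays within a tight tolerance of about ±1% of the target and, at minimum, within a 10% bound. The sample profile and the interpretation of the 10% bound as a relative deviation are assumptions for illustration only.

```python
def is_constant_concentration(profile, target, tight_tol=0.01, loose_tol=0.10):
    """Check a depth profile of Ge concentration against the 'constant' criterion:
    the relative deviation from the target should typically stay within ~1 %,
    and in any case below 10 %.  Returns (meets_tight, meets_loose)."""
    deviations = [abs(c - target) / target for c in profile]
    worst = max(deviations)
    return worst <= tight_tol, worst < loose_tol

# Hypothetical measured profile for a nominal 30 % Ge layer (atomic fraction in %).
profile = [29.8, 30.1, 30.0, 29.9, 30.2]
print(is_constant_concentration(profile, target=30.0))   # (True, True)
```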
An antireflection film 14 is formed on the back surface of the SiGe layer 11, which is the upper surface of the SiGe layer 11 in FIG. 1, and a light shielding film 15 and a lens layer 16 are formed in this order above the antireflection film 14. The lens layer 16 is flat above portions of the SiGe layer 11 where no photodiode 12 is formed, such as the peripheral region outside the pixel array section, whereas on-chip lenses 17 are formed above portions of the SiGe layer 11 where the photodiodes 12 are formed. The antireflection film 14 can have, for example, a laminated structure in which a fixed charge film and an oxide film are stacked, and a high dielectric constant (high-k) insulating thin film formed by ALD (Atomic Layer Deposition) can be used. Specifically, hafnium oxide (HfO2), aluminum oxide (Al2O3), titanium oxide (TiO2), STO (Strontium Titan Oxide), or the like can be used as the fixed charge film. The material of the light shielding film 15 may be any material that blocks light; for example, a metal material such as tungsten (W), aluminum (Al), or copper (Cu) can be used. The lens layer 16 is formed of a resin material such as a styrene resin, an acrylic resin, a styrene-acrylic copolymer resin, or a siloxane resin. The on-chip lens 17 condenses incident light so that it enters the photodiode 12 efficiently.
In the SiGe layer 11, an element isolation section 13 is formed around the photodiode 12 in plan view. The element isolation section 13 penetrates the SiGe layer 11 and electrically isolates the photodiode 12 from the adjacent photoelectric conversion sections (photodiodes 12). The element isolation section 13 is formed by embedding a metal material such as tungsten (W), aluminum (Al), titanium (Ti), or titanium nitride (TiN) in a trench formed through the SiGe layer 11. Alternatively, the element isolation section 13 may be formed by providing at least one layer of the antireflection film 14 formed on the back surface of the SiGe layer 11 on the side wall of the trench formed in the SiGe layer 11, and embedding an insulating film such as a silicon oxide film inside that antireflection film 14.
A Si layer (silicon layer) 21 is formed on the front surface of the SiGe layer 11, which is the lower surface of the SiGe layer 11 in FIG. 1. The Si layer 21 is a silicon (Si) cap layer covering the SiGe layer 11 and has the function of preventing germanium contamination during manufacturing. The Si layer 21 is formed with a thickness of, for example, about 10 nm.
The upper surface of the Si layer 21 (the lower surface in FIG. 1) is the transistor formation surface on which MOS transistors are formed; in the example of FIG. 1, one MOS transistor Tr is formed. However, the MOS transistor Tr does not necessarily have to be formed on the upper surface of the Si layer 21. Alternatively, two or more MOS transistors Tr may be formed on the upper surface of the Si layer 21. The MOS transistor Tr includes a gate insulating film 31, a gate electrode 32 formed thereon, and sidewalls 33 formed on the side walls of the gate electrode 32. The gate electrode 32 of the MOS transistor Tr is connected to a bonding electrode 42 via a contact electrode 41, and the bonding electrode 42 is connected by metal bonding to a bonding electrode 64 of a wiring layer 61 of another stacked semiconductor substrate (not shown). An insulating layer 43 is formed on the portions of the upper surface of the Si layer 21 where the MOS transistor Tr is not formed.
The photodetection device 1 has a stacked structure of a semiconductor substrate (compound semiconductor substrate) 51 including the SiGe layer 11 and the Si layer 21 and a semiconductor substrate (not shown; hereinafter referred to as a logic substrate) on which at least a logic circuit is formed. Of the logic substrate, only the wiring layer 61 is illustrated in FIG. 1. In FIG. 1, the light incident surface side of the semiconductor substrate 51 including the SiGe layer 11 and the Si layer 21 is the back surface (first surface) of the semiconductor substrate 51, and the side connected to the logic substrate is the front surface (second surface) of the semiconductor substrate 51. On the logic substrate, a plurality of MOS transistors Tr, a drive circuit for the MOS transistors Tr, and a signal processing circuit that processes signals corresponding to the charges generated by the photodiodes 12 are formed. The drive circuit for the MOS transistor Tr formed on the Si layer 21 is also formed on the logic substrate side. The antireflection film 14, the light shielding film 15, and the lens layer 16 are formed on the back surface side of the semiconductor substrate 51, and the plurality of MOS transistors Tr (including those formed on the logic substrate) are formed on the front surface side of the semiconductor substrate 51. Note that the antireflection film 14, the light shielding film 15, and the lens layer 16 formed on the back surface side of the semiconductor substrate 51 may be omitted as appropriate depending on design conditions.
 配線層61は、複数層の金属配線62と絶縁層63とを含む。図1の例では、金属配線62の層数が2層で形成されているが、金属配線62の層数は問わない。また、配線層61の上面となる絶縁層43との接合面66には接合電極64が形成されており、接合電極64は、金属接合により接合電極42と電気的に接続されている。接合電極64は、コンタクト電極65を介して、所定の金属配線62に接続されている。コンタクト電極41、接合電極42、接合電極64、コンタクト電極65、及び、金属配線62の材料としては、例えば銅(Cu)、タングステン(W)、アルミニウム(Al)、金(Au)などを採用することができるが、本実施の形態では、銅で形成されている。したがって、接合電極42と接合電極64は、Cu-Cu接合となる。絶縁層43及び63は、例えば、SiO2膜、Low-k膜(低誘電率絶縁膜)、SiOC膜等で形成される。絶縁層43及び63は、異なる材料からなる複数の絶縁膜で構成されてもよい。 The wiring layer 61 includes multiple layers of metal wiring 62 and an insulating layer 63. In the example of FIG. 1, the number of layers of the metal wiring 62 is two, but the number of layers of the metal wiring 62 is not limited. Further, a bonding electrode 64 is formed on a bonding surface 66 with the insulating layer 43, which is the upper surface of the wiring layer 61, and the bonding electrode 64 is electrically connected to the bonding electrode 42 by metal bonding. The bonding electrode 64 is connected to a predetermined metal wiring 62 via a contact electrode 65. The contact electrode 41, the bonding electrode 42, the bonding electrode 64, the contact electrode 65, and the metal wiring 62 are made of, for example, copper (Cu), tungsten (W), aluminum (Al), gold (Au), or the like. However, in this embodiment, it is made of copper. Therefore, the bonding electrode 42 and the bonding electrode 64 form a Cu-Cu bond. The insulating layers 43 and 63 are formed of, for example, a SiO2 film, a low-k film (low dielectric constant insulating film), a SiOC film, or the like. The insulating layers 43 and 63 may be composed of a plurality of insulating films made of different materials.
As described above, the photodetection device 1 has the photodiode 12 as a photoelectric conversion section, and the photodiode 12 is formed only in the SiGe layer 11, which has good crystallinity and a constant germanium concentration. Since the photodiode 12 formed in the SiGe layer 11 has a high absorption coefficient in the infrared region, its sensitivity to light in the infrared region is improved. The photodetection device 1 also has a back-illuminated structure in which the on-chip lens 17 is formed on the back surface side of the semiconductor substrate 51 including the SiGe layer 11, and light condensed by the on-chip lens 17 is photoelectrically converted by the photodiode 12. Compared with a front-illuminated structure, the back-illuminated structure can enlarge the aperture area of the photodiode 12 and can improve sensitivity. Therefore, according to the photodetection device 1, a photoelectric conversion section having high quantum efficiency in the infrared region can be realized.
 光検出装置1が有する半導体基板51の構造は、SiGe応力緩和層を備えていないため、SiGe応力緩和層の結晶欠陥により発生する暗電流を抑制することができる。また、特許文献1のセンサ構造のように、SiGe応力緩和層を設けた場合における暗電流抑制のための高濃度P型層を設ける必要がない。 Since the structure of the semiconductor substrate 51 included in the photodetector 1 does not include a SiGe stress relaxation layer, dark current generated due to crystal defects in the SiGe stress relaxation layer can be suppressed. Further, unlike the sensor structure of Patent Document 1, there is no need to provide a high concentration P-type layer for suppressing dark current when a SiGe stress relaxation layer is provided.
<2.光検出装置の製造方法>
 次に、図2を参照して、図1の光検出装置1の製造方法について説明する。
<2. Manufacturing method of photodetector>
Next, with reference to FIG. 2, a method for manufacturing the photodetecting device 1 of FIG. 1 will be described.
First, as shown in A of FIG. 2, a SiGe heteroepitaxial substrate is prepared as the semiconductor substrate 51 including the SiGe layer 11 and the Si layer 21. The SiGe heteroepitaxial substrate prepared as the semiconductor substrate 51 in the state of A of FIG. 2 is a substrate in which a SiGe stress relaxation layer (SiGe SRB) 112 for suppressing crystal defects, the SiGe layer 11 free of crystal defects, and the Si layer 21 are laminated on a silicon substrate 111. This SiGe heteroepitaxial substrate can be formed by epitaxially growing the SiGe stress relaxation layer 112, the SiGe layer 11, and the Si layer 21 in this order on the silicon substrate 111.
Next, as shown in B of FIG. 2, an N-type semiconductor region is formed by doping a desired region of the SiGe layer 11, which is composed of a P-type semiconductor region, with an N-type impurity such as phosphorus, whereby the PN-junction photodiode 12 is formed. Further, a MOS transistor Tr is formed at a desired position on the upper surface of the Si layer 21. In forming the MOS transistor Tr, an insulating film that will become the gate insulating film 31 is first formed over the entire upper surface of the Si layer 21, and then the insulating film is removed by lithography except at the position where the MOS transistor Tr is to be formed, so that the gate insulating film 31 is formed. After that, the gate electrode 32 and the sidewalls 33 are formed. The gate electrode 32 is made of, for example, polysilicon, and the sidewalls 33 are made of, for example, a silicon nitride film (SiN). Note that in FIG. 2, the reference numerals of the gate insulating film 31, the gate electrode 32, and the sidewalls 33 are omitted.
Next, as shown in C of FIG. 2, an insulating layer 43 is formed with a predetermined thickness on the upper surface of the Si layer 21 so as to cover the gate electrode 32 and the like of the MOS transistor Tr. An opening is then formed above the gate electrode 32 and filled with a predetermined metal material such as copper, whereby the contact electrode 41 and the bonding electrode 42 are formed.
Next, the insulating layer 43 formed on the semiconductor substrate 51 and the wiring layer 61 of a separately manufactured logic substrate are bonded at the bonding surface 66, and the bonded body is then turned upside down as shown in D of FIG. 2. After the inversion, the silicon substrate 111, which is now the uppermost layer of the semiconductor substrate 51, is removed by CMP (Chemical Mechanical Polishing) or the like.
 次に、図2のEに示されるように、半導体基板51の最上層となるSiGe応力緩和層112が、CMP等により除去される。これにより、半導体基板51は、SiGe層11とSi層21とで構成され、SiGe層11内にフォトダイオード12が形成されている状態となる。 Next, as shown in FIG. 2E, the SiGe stress relaxation layer 112, which is the uppermost layer of the semiconductor substrate 51, is removed by CMP or the like. As a result, the semiconductor substrate 51 is made up of the SiGe layer 11 and the Si layer 21, and the photodiode 12 is formed in the SiGe layer 11.
 最後に、図2のFに示されるように、SiGe層11の上面に反射防止膜14が形成された後、素子分離部13が形成される。続いて、遮光膜15、オンチップレンズ17を含むレンズ層16が順に形成されることにより、図1で示した光検出装置1の断面構造が完成する。 Finally, as shown in FIG. 2F, after the antireflection film 14 is formed on the upper surface of the SiGe layer 11, the element isolation part 13 is formed. Subsequently, a light shielding film 15 and a lens layer 16 including an on-chip lens 17 are formed in this order, thereby completing the cross-sectional structure of the photodetector 1 shown in FIG. 1.
According to the above manufacturing method of the photodetection device 1, the SiGe stress relaxation layer 112, which contains many crystal defects, is removed, so there is no need to provide the high-concentration P-type layer for suppressing dark current that is required in the sensor structure of Patent Document 1. This eliminates the impurity implantation step for forming a high-concentration P-type layer for suppressing dark current. In addition, forming a high-concentration P-type layer requires implanting impurities into a deep region at high energy, and such a high-energy implantation step could damage the SiGe layer; since the high-concentration P-type layer is unnecessary, this damage can also be avoided.
Since the upper surface of the SiGe layer 11 serving as the MOS transistor formation surface is covered with the Si layer 21, germanium contamination during manufacturing can be prevented. In addition, since the oxide film that becomes the gate insulating film 31 of the MOS transistor Tr can be formed on the Si layer 21, the interface state density at the gate insulating film 31 interface during MOS transistor formation can be reduced compared with the case where an oxide film is formed on the upper surface of the SiGe layer 11.
<3.光検出装置がゲート方式のToFセンサである場合の画素構成例>
 上述した光検出装置1は、照射光を用いて物体までの距離を測定するToF(Time of Flight)センサや、赤外光を少なくとも含む光を受光し、受光量に応じた画像を生成する撮像センサであってよい。以下では、光検出装置1が、各種のToFセンサまたは撮像センサである場合の画素構造について順に説明する。
<3. Example of pixel configuration when the photodetection device is a gate-type ToF sensor>
The photodetection device 1 described above includes a ToF (Time of Flight) sensor that measures the distance to an object using irradiated light, and an imaging sensor that receives light that includes at least infrared light and generates an image according to the amount of received light. It may be a sensor. Below, pixel structures in the case where the photodetection device 1 is various ToF sensors or image sensors will be explained in order.
A ToF sensor measures the distance to an object by emitting irradiation light toward the object and measuring the time it takes for the irradiation light to be reflected by the surface of the object and return. ToF sensors include indirect ToF sensors and direct ToF sensors. An indirect ToF sensor detects the time of flight from when the irradiation light is emitted until the reflected light is received as a phase difference and calculates the distance to the object, whereas a direct ToF sensor directly measures the time of flight from when the irradiation light is emitted until the reflected light is received and calculates the distance to the object.
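As a reference, the following is a minimal sketch, not part of this disclosure, of how the two measurement principles convert a measured quantity into distance: a direct ToF sensor converts a measured round-trip time, while an indirect ToF sensor converts a detected phase difference at a known modulation frequency. The function names and the 100 MHz modulation frequency are illustrative assumptions.

```python
import math

C = 299_792_458.0  # speed of light [m/s]

def direct_tof_distance(round_trip_time_s: float) -> float:
    """Direct ToF: distance from a directly measured round-trip flight time."""
    return C * round_trip_time_s / 2.0

def indirect_tof_distance(phase_rad: float, mod_freq_hz: float = 100e6) -> float:
    """Indirect ToF: distance from the phase difference of amplitude-modulated light.
    The unambiguous range is C / (2 * mod_freq_hz)."""
    return (C / (2.0 * mod_freq_hz)) * (phase_rad / (2.0 * math.pi))

print(direct_tof_distance(10e-9))          # ~1.5 m for a 10 ns round trip
print(indirect_tof_distance(math.pi / 2))  # ~0.37 m at 100 MHz modulation
```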
 図3は、光検出装置1がゲート方式のToFセンサである場合の画素回路例を示している。 FIG. 3 shows an example of a pixel circuit when the photodetection device 1 is a gate-type ToF sensor.
 ゲート方式のToFセンサは、フォトダイオード12で生成された電荷を2つの転送ゲート(転送トランジスタ)で振り分けることにより位相差を検出し、物体までの距離を算出する間接ToFセンサである。 A gate-type ToF sensor is an indirect ToF sensor that detects a phase difference by distributing the charge generated by the photodiode 12 between two transfer gates (transfer transistors) and calculates the distance to the object.
 画素200は、光電変換部としてフォトダイオード12を備える。また、画素200は、転送トランジスタTRG、浮遊拡散領域FD、付加容量FDL、切替トランジスタFDG、増幅トランジスタAMP、リセットトランジスタRST、及び、選択トランジスタSELをそれぞれ2個ずつ有する。さらに、画素200は、電荷排出トランジスタOFGを有している。 The pixel 200 includes a photodiode 12 as a photoelectric conversion section. Furthermore, the pixel 200 includes two transfer transistors TRG, a floating diffusion region FD, an additional capacitor FDL, a switching transistor FDG, an amplification transistor AMP, a reset transistor RST, and two selection transistors SEL. Furthermore, the pixel 200 includes a charge discharge transistor OFG.
Here, when the transfer transistors TRG, the floating diffusion regions FD, the additional capacitors FDL, the switching transistors FDG, the amplification transistors AMP, the reset transistors RST, and the selection transistors SEL, two of each of which are provided in the pixel 200, are distinguished from each other, they are referred to as the transfer transistors TRG1 and TRG2, the floating diffusion regions FD1 and FD2, the additional capacitors FDL1 and FDL2, the switching transistors FDG1 and FDG2, the amplification transistors AMP1 and AMP2, the reset transistors RST1 and RST2, and the selection transistors SEL1 and SEL2, as shown in FIG. 3.
 転送トランジスタTRG、切替トランジスタFDG、増幅トランジスタAMP、選択トランジスタSEL、リセットトランジスタRST、及び、電荷排出トランジスタOFGは、例えば、N型のMOSトランジスタで構成される。 The transfer transistor TRG, the switching transistor FDG, the amplification transistor AMP, the selection transistor SEL, the reset transistor RST, and the charge discharge transistor OFG are composed of, for example, N-type MOS transistors.
 転送トランジスタTRG1は、ゲート電極に供給される転送駆動信号TRG1gがアクティブ状態になるとこれに応答して導通状態になることで、フォトダイオード12に蓄積されている電荷を浮遊拡散領域FD1に転送する。転送トランジスタTRG2は、ゲート電極に供給される転送駆動信号TRG2gがアクティブ状態になるとこれに応答して導通状態になることで、フォトダイオード12に蓄積されている電荷を浮遊拡散領域FD2に転送する。 When the transfer drive signal TRG1g supplied to the gate electrode becomes active, the transfer transistor TRG1 becomes conductive in response to this, thereby transferring the charge accumulated in the photodiode 12 to the floating diffusion region FD1. When the transfer drive signal TRG2g supplied to the gate electrode becomes active, the transfer transistor TRG2 becomes conductive in response to this, thereby transferring the charge accumulated in the photodiode 12 to the floating diffusion region FD2.
 浮遊拡散領域FD1およびFD2は、フォトダイオード12から転送された電荷を一時的に保持する電荷保持部である。 The floating diffusion regions FD1 and FD2 are charge holding parts that temporarily hold the charges transferred from the photodiode 12.
 切替トランジスタFDG1は、ゲート電極に供給されるFD駆動信号FDG1gがアクティブ状態になるとこれに応答して導通状態になることで、付加容量FDL1を、浮遊拡散領域FD1に接続させる。切替トランジスタFDG2は、ゲート電極に供給されるFD駆動信号FDG2gがアクティブ状態になるとこれに応答して導通状態になることで、付加容量FDL2を、浮遊拡散領域FD2に接続させる。付加容量FDL1およびFDL2は、例えば、配線容量によって形成することができる。 The switching transistor FDG1 becomes conductive in response to the active state of the FD drive signal FDG1g supplied to the gate electrode, thereby connecting the additional capacitance FDL1 to the floating diffusion region FD1. The switching transistor FDG2 becomes conductive in response to the active state of the FD drive signal FDG2g supplied to the gate electrode, thereby connecting the additional capacitance FDL2 to the floating diffusion region FD2. Additional capacitances FDL1 and FDL2 can be formed by wiring capacitances, for example.
 リセットトランジスタRST1は、ゲート電極に供給されるリセット駆動信号RSTgがアクティブ状態になるとこれに応答して導通状態になることで、浮遊拡散領域FD1の電位をリセットする。リセットトランジスタRST2は、ゲート電極に供給されるリセット駆動信号RSTgがアクティブ状態になるとこれに応答して導通状態になることで、浮遊拡散領域FD2の電位をリセットする。なお、リセットトランジスタRST1およびRST2がアクティブ状態とされるとき、切替トランジスタFDG1およびFDG2も同時にアクティブ状態とされ、付加容量FDL1およびFDL2もリセットされる。 When the reset drive signal RSTg supplied to the gate electrode becomes active, the reset transistor RST1 becomes conductive in response to this, thereby resetting the potential of the floating diffusion region FD1. The reset transistor RST2 becomes conductive in response to the activation of the reset drive signal RSTg supplied to the gate electrode, thereby resetting the potential of the floating diffusion region FD2. Note that when the reset transistors RST1 and RST2 are brought into the active state, the switching transistors FDG1 and FDG2 are also brought into the active state at the same time, and the additional capacitors FDL1 and FDL2 are also reset.
For example, under high illuminance where the amount of incident light is large, the vertical drive section (for example, the vertical drive section 522 in FIG. 16) that drives the pixel 200 puts the switching transistors FDG1 and FDG2 into the active state, connecting the floating diffusion region FD1 to the additional capacitor FDL1 and the floating diffusion region FD2 to the additional capacitor FDL2. This allows more charge to be accumulated under high illuminance.
 一方、入射光の光量が少ない低照度のときには、垂直駆動部は、切替トランジスタFDG1およびFDG2を非アクティブ状態として、付加容量FDL1およびFDL2を、それぞれ、浮遊拡散領域FD1およびFD2から切り離す。これにより、変換効率を上げることができる。 On the other hand, when the intensity of the incident light is low and the illuminance is low, the vertical drive section makes the switching transistors FDG1 and FDG2 inactive, and separates the additional capacitors FDL1 and FDL2 from the floating diffusion regions FD1 and FD2, respectively. Thereby, conversion efficiency can be increased.
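The effect of connecting or disconnecting the additional capacitance can be seen from the conversion gain, which is inversely proportional to the total capacitance of the charge-to-voltage conversion node. The sketch below only illustrates this relationship; the capacitance values are hypothetical and are not taken from the disclosure.

```python
Q_E = 1.602e-19  # elementary charge [C]

def conversion_gain_uv_per_e(c_fd_f: float, c_fdl_f: float = 0.0) -> float:
    """Conversion gain in microvolts per electron for a floating diffusion of
    capacitance c_fd_f, optionally with an additional capacitance c_fdl_f connected."""
    return Q_E / (c_fd_f + c_fdl_f) * 1e6

# Hypothetical values: 2 fF floating diffusion, 6 fF additional capacitance.
print(conversion_gain_uv_per_e(2e-15))          # ~80 uV/e: FDL disconnected (low illuminance)
print(conversion_gain_uv_per_e(2e-15, 6e-15))   # ~20 uV/e: FDL connected (high illuminance)
```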
 電荷排出トランジスタOFGは、ゲート電極に供給される排出駆動信号OFG1gがアクティブ状態になるとこれに応答して導通状態になることで、フォトダイオード12に蓄積された電荷を排出する。 The charge discharge transistor OFG discharges the charge accumulated in the photodiode 12 by becoming conductive in response to the discharge drive signal OFG1g supplied to the gate electrode becoming active.
 増幅トランジスタAMP1は、ソース電極が選択トランジスタSEL1を介して垂直信号線211Aに接続されることにより、不図示の定電流源と接続し、ソースフォロワ回路を構成する。増幅トランジスタAMP2は、ソース電極が選択トランジスタSEL2を介して垂直信号線211Bに接続されることにより、不図示の定電流源と接続し、ソースフォロワ回路を構成する。 The amplification transistor AMP1 has its source electrode connected to the vertical signal line 211A via the selection transistor SEL1, thereby connecting to a constant current source (not shown) and forming a source follower circuit. The amplification transistor AMP2 has a source electrode connected to the vertical signal line 211B via the selection transistor SEL2, thereby connecting to a constant current source (not shown) and forming a source follower circuit.
 選択トランジスタSEL1は、増幅トランジスタAMP1のソース電極と垂直信号線211Aとの間に接続されている。選択トランジスタSEL1は、ゲート電極に供給される選択駆動信号SEL1gがアクティブ状態になるとこれに応答して導通状態となり、増幅トランジスタAMP1から出力される画素信号VSL1を垂直信号線211Aに出力する。 The selection transistor SEL1 is connected between the source electrode of the amplification transistor AMP1 and the vertical signal line 211A. When the selection drive signal SEL1g supplied to the gate electrode becomes active, the selection transistor SEL1 becomes conductive in response to this, and outputs the pixel signal VSL1 output from the amplification transistor AMP1 to the vertical signal line 211A.
 選択トランジスタSEL2は、増幅トランジスタAMP2のソース電極と垂直信号線211Bとの間に接続されている。選択トランジスタSEL2は、ゲート電極に供給される選択駆動信号SEL2gがアクティブ状態になるとこれに応答して導通状態となり、増幅トランジスタAMP2から出力される画素信号VSL2を垂直信号線211Bに出力する。 The selection transistor SEL2 is connected between the source electrode of the amplification transistor AMP2 and the vertical signal line 211B. When the selection drive signal SEL2g supplied to the gate electrode becomes active, the selection transistor SEL2 becomes conductive in response to this, and outputs the pixel signal VSL2 output from the amplification transistor AMP2 to the vertical signal line 211B.
 画素200の転送トランジスタTRG1およびTRG2、切替トランジスタFDG1およびFDG2、増幅トランジスタAMP1およびAMP2、選択トランジスタSEL1およびSEL2、並びに、電荷排出トランジスタOFGは、垂直駆動部によって制御される。 Transfer transistors TRG1 and TRG2, switching transistors FDG1 and FDG2, amplification transistors AMP1 and AMP2, selection transistors SEL1 and SEL2, and charge discharge transistor OFG of the pixel 200 are controlled by the vertical drive section.
In the pixel circuit of FIG. 3, the additional capacitors FDL1 and FDL2 and the switching transistors FDG1 and FDG2 that control their connection may be omitted, but by providing the additional capacitors FDL and selectively using them according to the amount of incident light, a high dynamic range can be secured.
 図3の画素200の動作について簡単に説明する。 The operation of the pixel 200 in FIG. 3 will be briefly described.
 まず、受光を開始する前に、画素200の電荷をリセットするリセット動作が、画素200を行列状に配列した画素アレイ部の全画素で行われる。すなわち、電荷排出トランジスタOFGと、リセットトランジスタRST1およびRST2、並びに、切替トランジスタFDG1およびFDG2がオンされ、フォトダイオード12、浮遊拡散領域FD1およびFD2、並びに、付加容量FDL1およびFDL2の蓄積電荷が排出される。 First, before light reception begins, a reset operation that resets the charge of the pixels 200 is performed on all pixels in the pixel array portion in which the pixels 200 are arranged in a matrix. That is, the charge discharge transistor OFG, the reset transistors RST1 and RST2, and the switching transistors FDG1 and FDG2 are turned on, and the accumulated charge in the photodiode 12, the floating diffusion regions FD1 and FD2, and the additional capacitances FDL1 and FDL2 is discharged.
 蓄積電荷の排出後、全画素で受光が開始される。受光期間では、転送トランジスタTRG1とTRG2とが交互に駆動される。すなわち、第1の期間において、転送トランジスタTRG1がオン、転送トランジスタTRG2がオフに制御される。この第1の期間では、フォトダイオード12で発生した電荷が、浮遊拡散領域FD1に転送される。第1の期間の次の第2の期間では、転送トランジスタTRG1がオフ、転送トランジスタTRG2がオンに制御される。この第2の期間では、フォトダイオード12で発生した電荷が、浮遊拡散領域FD2に転送される。これにより、フォトダイオード12で発生した電荷が、浮遊拡散領域FD1とFD2とに交互に振り分けられて、蓄積される。 After the accumulated charges are discharged, all pixels start receiving light. During the light reception period, transfer transistors TRG1 and TRG2 are driven alternately. That is, in the first period, the transfer transistor TRG1 is controlled to be on and the transfer transistor TRG2 is controlled to be off. During this first period, charges generated in the photodiode 12 are transferred to the floating diffusion region FD1. In the second period following the first period, the transfer transistor TRG1 is controlled to be off and the transfer transistor TRG2 is controlled to be on. In this second period, charges generated in the photodiode 12 are transferred to the floating diffusion region FD2. As a result, charges generated in the photodiode 12 are alternately distributed and accumulated in the floating diffusion regions FD1 and FD2.
 そして、受光期間が終了すると、画素アレイ部の各画素200が、線順次に選択される。選択された画素200では、選択トランジスタSEL1およびSEL2がオンされる。これにより、浮遊拡散領域FD1に蓄積された電荷が、画素信号VSL1として、垂直信号線211Aを介して出力される。浮遊拡散領域FD2に蓄積された電荷は、画素信号VSL2として、垂直信号線211Bを介して出力される。 Then, when the light reception period ends, each pixel 200 in the pixel array section is selected line-sequentially. In the selected pixel 200, selection transistors SEL1 and SEL2 are turned on. Thereby, the charges accumulated in the floating diffusion region FD1 are outputted as the pixel signal VSL1 via the vertical signal line 211A. The charges accumulated in the floating diffusion region FD2 are output as a pixel signal VSL2 via the vertical signal line 211B.
 以上で1回の受光動作が終了し、リセット動作から始まる次の受光動作が実行される。 With this, one light receiving operation is completed, and the next light receiving operation starts from the reset operation.
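The reset, alternating-transfer, and line-sequential readout sequence described above can be summarized in pseudocode form as below. The driver object and its method names are hypothetical placeholders used only to make the ordering of the operations explicit; they do not correspond to any circuit element of the disclosure.

```python
def one_light_receiving_operation(driver, num_modulation_cycles: int):
    """Sketch of one light receiving operation of the gate-type ToF pixel array.
    'driver' is a hypothetical stand-in for the vertical drive section."""
    # Reset: discharge PD, FD1/FD2 and FDL1/FDL2 in all pixels.
    driver.assert_all(["OFG", "RST1", "RST2", "FDG1", "FDG2"])
    driver.deassert_all(["OFG", "RST1", "RST2", "FDG1", "FDG2"])

    # Light reception: alternate the two transfer gates every period.
    for _ in range(num_modulation_cycles):
        driver.drive(TRG1=True, TRG2=False)   # first period: charge goes to FD1
        driver.drive(TRG1=False, TRG2=True)   # second period: charge goes to FD2

    # Readout: select rows line-sequentially and read the two signal lines.
    for row in driver.rows():
        driver.select(row)    # SEL1/SEL2 on
        driver.read(row)      # VSL1/VSL2 output
        driver.deselect(row)
```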
The reflected light received by the pixel 200 is delayed from the emission timing of the light source according to the distance to the object. Because the ratio in which the charge is distributed to the two floating diffusion regions FD1 and FD2 changes with this delay time, which depends on the distance to the object, the distance to the object can be obtained from the ratio of the charges accumulated in the two floating diffusion regions FD1 and FD2.
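As one illustration of this relationship, the sketch below derives a distance estimate from the two accumulated charges for the common case in which the emitted pulse width equals the length of each transfer period. This specific formula is an assumption chosen for illustration and is not taken from the disclosure.

```python
C = 299_792_458.0  # speed of light [m/s]

def gated_tof_distance(q_fd1: float, q_fd2: float, pulse_width_s: float) -> float:
    """Distance estimate from the charges distributed to FD1 and FD2, assuming the
    emitted pulse width equals the length of each transfer period (2-tap gating)."""
    total = q_fd1 + q_fd2
    if total <= 0.0:
        raise ValueError("no signal charge accumulated")
    ratio = q_fd2 / total              # fraction of charge arriving in the second period
    delay = ratio * pulse_width_s      # delay of the reflected pulse
    return C * delay / 2.0

# Example: an equal charge split with a 30 ns pulse corresponds to roughly 2.25 m.
print(gated_tof_distance(1000.0, 1000.0, 30e-9))
```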
 図4は、光検出装置1がゲート方式のToFセンサである場合の画素200の構成例を示す断面図である。 FIG. 4 is a cross-sectional view showing an example of the configuration of the pixel 200 when the photodetection device 1 is a gate-type ToF sensor.
 図4において、図1に示した断面図と対応する部分については同一の符号を付してあり、その部分の説明は適宜省略する。ただし、MOSトランジスタTrのゲート絶縁膜31、ゲート電極32、及び、サイドウォール33の符号については省略されている。 In FIG. 4, parts corresponding to the cross-sectional view shown in FIG. 1 are denoted by the same reference numerals, and explanations of those parts will be omitted as appropriate. However, the symbols of the gate insulating film 31, gate electrode 32, and sidewall 33 of the MOS transistor Tr are omitted.
When the photodetection device 1 is a gate-type ToF sensor, the photodiode 12 is formed in the SiGe layer 11 for each pixel 200. The element isolation section 13 is formed at the boundaries of the pixels 200 and functions as a pixel isolation section that isolates the photodiodes 12 on a pixel-by-pixel basis. The light shielding film 15 is also formed at the boundaries of the pixels 200 and functions as an inter-pixel light shielding film. The element isolation section 13 shown in FIG. 1 had a tapered cross-sectional shape in which the groove width (the opening width of the trench) becomes narrower from the back surface side toward the front surface side of the SiGe layer 11, because the trench was formed from the back surface (the upper surface in FIG. 1) side of the SiGe layer 11. In contrast, the element isolation section 13 in FIG. 4 has an inversely tapered cross-sectional shape in which the groove width (the opening width of the trench) becomes narrower from the front surface side toward the back surface side of the SiGe layer 11, because the trench is formed from the front surface side of the SiGe layer 11.
Two MOS transistors Tr1 and Tr2 are formed on the MOS transistor formation surface on which the Si layer 21 is formed. These two MOS transistors Tr1 and Tr2 correspond to the transfer transistors TRG1 and TRG2 of the pixel circuit in FIG. 3. The other pixel transistors of the pixel circuit in FIG. 3, namely the switching transistors FDG1 and FDG2, the amplification transistors AMP1 and AMP2, the selection transistors SEL1 and SEL2, the reset transistors RST1 and RST2, and the charge discharge transistor OFG, are formed on the logic substrate side (not shown).
As described above, when the photodetection device 1 is a gate-type ToF sensor and the photodiode 12 of the pixel 200 is formed in the SiGe layer 11, a ToF sensor with high quantum efficiency in the infrared region and improved light receiving sensitivity can be realized.
<4.光検出装置が撮像センサである場合の画素構成例>
 次に、光検出装置1が、赤外光を受光し、IR画像を生成するIR撮像センサである場合の画素構造について説明する。
<4. Example of pixel configuration when the photodetection device is an image sensor>
Next, a pixel structure in the case where the photodetecting device 1 is an IR image sensor that receives infrared light and generates an IR image will be described.
 図5は、光検出装置1がIR撮像センサである場合の画素回路例を示している。 FIG. 5 shows an example of a pixel circuit when the photodetector 1 is an IR image sensor.
When the photodetection device 1 is the gate-type ToF sensor described above, the charge generated in the photodiode 12 is distributed to and accumulated in the two floating diffusion regions FD1 and FD2, so the pixel 200 has two each of the transfer transistor TRG, the floating diffusion region FD, the additional capacitor FDL, the switching transistor FDG, the amplification transistor AMP, the reset transistor RST, and the selection transistor SEL.
In contrast, when the photodetection device 1 is an IR imaging sensor, only one charge holding section is needed to temporarily hold the charge generated in the photodiode 12, so there is one each of the transfer transistor TRG, the floating diffusion region FD, the additional capacitor FDL, the switching transistor FDG, the amplification transistor AMP, the reset transistor RST, and the selection transistor SEL.
In other words, when the photodetection device 1 is an IR imaging sensor, the pixel 200 has, as shown in FIG. 5, a configuration equal to the circuit configuration shown in FIG. 3 with the transfer transistor TRG2, the switching transistor FDG2, the reset transistor RST2, the amplification transistor AMP2, and the selection transistor SEL2 omitted. The floating diffusion region FD2 and the vertical signal line 211B are also omitted.
 図6は、光検出装置1がIR撮像センサである場合の画素200の構成例を示す断面図である。 FIG. 6 is a cross-sectional view showing an example of the configuration of the pixel 200 when the photodetector 1 is an IR image sensor.
 図6においても、図1に示した断面図と対応する部分については同一の符号を付してあり、その部分の説明は適宜省略する。ただし、MOSトランジスタTrのゲート絶縁膜31及びサイドウォール33の符号については省略されている。 Also in FIG. 6, the same reference numerals are given to the parts corresponding to the cross-sectional view shown in FIG. 1, and the description of those parts will be omitted as appropriate. However, the numbers of the gate insulating film 31 and sidewalls 33 of the MOS transistor Tr are omitted.
When the photodetection device 1 is an IR imaging sensor, the photodiode 12 is formed in the SiGe layer 11 for each pixel 200. The element isolation section 13 is formed at the boundaries of the pixels 200 and functions as a pixel isolation section that isolates the photodiodes 12 on a pixel-by-pixel basis. The light shielding film 15 is also formed at the boundaries of the pixels 200 and functions as an inter-pixel light shielding film. In FIG. 6 as well, as in the gate-type ToF sensor shown in FIG. 4, the element isolation section 13 has an inversely tapered cross-sectional shape in which the groove width becomes narrower from the front surface side toward the back surface side of the SiGe layer 11, because the trench is formed from the front surface side of the SiGe layer 11.
 Si層21が形成されたMOSトランジスタ形成面には、2つのMOSトランジスタTr1及びTr2が形成されている。この2つのMOSトランジスタTr1及びTr2は、例えば、図5の画素回路の転送トランジスタTRG1と電荷排出トランジスタOFGに相当する。図3の画素回路のその他の画素トランジスタである、切替トランジスタFDG1、増幅トランジスタAMP1、選択トランジスタSEL1、及び、リセットトランジスタRST1は、不図示のロジック基板側に形成されている。 Two MOS transistors Tr1 and Tr2 are formed on the MOS transistor formation surface where the Si layer 21 is formed. These two MOS transistors Tr1 and Tr2 correspond to, for example, the transfer transistor TRG1 and the charge discharge transistor OFG of the pixel circuit in FIG. 5. The other pixel transistors of the pixel circuit in FIG. 3, such as the switching transistor FDG1, the amplification transistor AMP1, the selection transistor SEL1, and the reset transistor RST1, are formed on the logic substrate side (not shown).
In the cross-sectional view shown in FIG. 1, the bonding electrodes 42 and 64 are formed at the bonding surface 66 between the wiring layer 61 and the insulating layer 43, and the voltage applied to the gate electrode 32 of the MOS transistor Tr is supplied via the bonding electrodes 42 and 64. In the cross-sectional view of the pixel 200 in FIG. 6, on the other hand, the bonding electrodes 42 and 64 are omitted at the bonding surface 66, and the gate electrode 32 of the MOS transistor Tr1 is connected to a predetermined metal wiring 62 of the wiring layer 61 via a contact electrode 241. Similarly, the gate electrode 32 of the MOS transistor Tr2 is connected to a predetermined metal wiring 62 of the wiring layer 61 via a contact electrode 242. In this way, the voltages applied to the gate electrodes 32 of the MOS transistors Tr1 and Tr2 may be supplied only through vias (through electrodes) without passing through the bonding electrodes 42 and 64. Of course, the applied voltages may be supplied via the bonding electrodes 42 and 64 as in FIGS. 1 and 4.
Note that the photodetection device 1 may be an RGBIR imaging sensor that receives infrared light and RGB light. In this case, the pixel arrangement of the photodetection device 1 may be a pixel array in which, for each 2x2 group of four pixels, an R pixel that receives R (red) light, a G pixel that receives G (green) light, a B pixel that receives B (blue) light, and an IR pixel that receives IR (infrared) light are assigned, with this group repeated in a matrix. Whether each pixel 200 becomes an R pixel, a B pixel, a G pixel, or an IR pixel can be controlled, for example, by inserting a color filter layer between the photodiode 12 and the on-chip lens 17.
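A minimal sketch of such a 2x2 RGBIR arrangement tiled over a pixel array is shown below; the particular placement of the four pixel types within the 2x2 unit is an assumption chosen only for illustration.

```python
# Assumed 2x2 unit: the actual placement of R, G, B and IR within the unit
# is a design choice and is not specified by the disclosure.
UNIT_2X2 = [["R", "G"],
            ["B", "IR"]]

def rgbir_mosaic(rows: int, cols: int) -> list[list[str]]:
    """Return the pixel-type map of a rows x cols array tiled with the 2x2 RGBIR unit."""
    return [[UNIT_2X2[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

for line in rgbir_mosaic(4, 8):
    print(" ".join(line))
```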
<5.光検出装置がCAPD方式のToFセンサである場合の画素構成例>
 次に、光検出装置1が、CAPD(Current Assisted Photonic Demodulator)方式のToFセンサである場合の画素構造について説明する。
<5. Example of pixel configuration when the photodetection device is a CAPD type ToF sensor>
Next, a pixel structure in the case where the photodetection device 1 is a CAPD (Current Assisted Photonic Demodulator) ToF sensor will be described.
A CAPD-type ToF sensor is an indirect ToF sensor that applies a voltage directly to the semiconductor substrate 51 to generate a current in the substrate and distributes the photoelectrically converted charge by modulating a wide photoelectric conversion region in the substrate at high speed.
 図7は、光検出装置1がCAPD方式のToFセンサである場合の画素回路例を示している。 FIG. 7 shows an example of a pixel circuit when the photodetection device 1 is a CAPD type ToF sensor.
 図7の画素200は、半導体基板51内に、信号取り出し部301-1および301-2を有している。信号取り出し部301-1は、N型半導体領域であるN+半導体領域311-1とP型半導体領域であるP+半導体領域312-1を少なくとも含む。信号取り出し部301-2は、N型半導体領域であるN+半導体領域311-2とP型半導体領域であるP+半導体領域312-2を少なくとも含む。 The pixel 200 in FIG. 7 has signal extraction sections 301-1 and 301-2 within the semiconductor substrate 51. The signal extraction section 301-1 includes at least an N+ semiconductor region 311-1, which is an N-type semiconductor region, and a P+ semiconductor region 312-1, which is a P-type semiconductor region. The signal extraction section 301-2 includes at least an N+ semiconductor region 311-2, which is an N-type semiconductor region, and a P+ semiconductor region 312-2, which is a P-type semiconductor region.
 画素200は、信号取り出し部301-1に対して、転送トランジスタ321A、FD322A、リセットトランジスタ323A、増幅トランジスタ324A、及び、選択トランジスタ325Aを有する。 The pixel 200 has a transfer transistor 321A, an FD 322A, a reset transistor 323A, an amplification transistor 324A, and a selection transistor 325A for the signal extraction section 301-1.
 また、画素200は、信号取り出し部301-2に対して、転送トランジスタ321B、FD322B、リセットトランジスタ323B、増幅トランジスタ324B、及び、選択トランジスタ325Bを有する。 Furthermore, the pixel 200 includes a transfer transistor 321B, an FD 322B, a reset transistor 323B, an amplification transistor 324B, and a selection transistor 325B for the signal extraction section 301-2.
 垂直駆動部は、P+半導体領域312-1に所定の電圧MIX1(第1の電圧)を印加し、P+半導体領域312-2に所定の電圧MIX2(第2の電圧)を印加する。例えば、電圧MIX1およびMIX2の一方が1.5Vで、他方が0Vとされる。P+半導体領域312-1および312-2は、第1の電圧または第2の電圧が印加される電圧印加部である。 The vertical drive section applies a predetermined voltage MIX1 (first voltage) to the P+ semiconductor region 312-1 and a predetermined voltage MIX2 (second voltage) to the P+ semiconductor region 312-2. For example, one of the voltages MIX1 and MIX2 is 1.5V, and the other is 0V. The P+ semiconductor regions 312-1 and 312-2 are voltage application parts to which a first voltage or a second voltage is applied.
 N+半導体領域311-1および311-2は、半導体基板51に入射された光が光電変換されて生成された電荷を検出して、蓄積する電荷検出部である。 The N+ semiconductor regions 311-1 and 311-2 are charge detection sections that detect and accumulate charges generated by photoelectric conversion of light incident on the semiconductor substrate 51.
 転送トランジスタ321Aは、ゲート電極に供給される転送駆動信号TRGgがアクティブ状態になるとこれに応答して導通状態になることで、N+半導体領域311-1に蓄積されている電荷をFD322Aに転送する。転送トランジスタ321Bは、ゲート電極に供給される転送駆動信号TRGgがアクティブ状態になるとこれに応答して導通状態になることで、N+半導体領域311-2に蓄積されている電荷をFD322Bに転送する。 When the transfer drive signal TRGg supplied to the gate electrode becomes active, the transfer transistor 321A becomes conductive in response to this, thereby transferring the charges accumulated in the N+ semiconductor region 311-1 to the FD 322A. When the transfer drive signal TRGg supplied to the gate electrode becomes active, the transfer transistor 321B becomes conductive in response to this, thereby transferring the charges accumulated in the N+ semiconductor region 311-2 to the FD 322B.
 FD322Aは、N+半導体領域311-1から供給された電荷を一時保持する。FD322Bは、N+半導体領域311-2から供給された電荷を一時保持する。 The FD 322A temporarily holds the charge supplied from the N+ semiconductor region 311-1. The FD 322B temporarily holds the charge supplied from the N+ semiconductor region 311-2.
 リセットトランジスタ323Aは、ゲート電極に供給されるリセット駆動信号RSTgがアクティブ状態になるとこれに応答して導通状態になることで、FD322Aの電位を所定のレベル(リセット電圧VDD)にリセットする。リセットトランジスタ323Bは、ゲート電極に供給されるリセット駆動信号RSTgがアクティブ状態になるとこれに応答して導通状態になることで、FD322Bの電位を所定のレベル(リセット電圧VDD)にリセットする。なお、リセットトランジスタ323Aおよび323Bがアクティブ状態とされるとき、転送トランジスタ321Aおよび321Bも同時にアクティブ状態とされる。 When the reset drive signal RSTg supplied to the gate electrode becomes active, the reset transistor 323A becomes conductive in response to this, thereby resetting the potential of the FD 322A to a predetermined level (reset voltage VDD). The reset transistor 323B becomes conductive in response to the activation of the reset drive signal RSTg supplied to the gate electrode, thereby resetting the potential of the FD 322B to a predetermined level (reset voltage VDD). Note that when reset transistors 323A and 323B are activated, transfer transistors 321A and 321B are also activated at the same time.
The amplification transistor 324A has its source electrode connected to the vertical signal line 211A via the selection transistor 325A, and thereby forms a source follower circuit together with the load MOS of a constant current source circuit section 326A connected to one end of the vertical signal line 211A. The amplification transistor 324B has its source electrode connected to the vertical signal line 211B via the selection transistor 325B, and thereby forms a source follower circuit together with the load MOS of a constant current source circuit section 326B connected to one end of the vertical signal line 211B.
 選択トランジスタ325Aは、増幅トランジスタ324Aのソース電極と垂直信号線211Aとの間に接続されている。選択トランジスタ325Aは、ゲート電極に供給される選択駆動信号SELgがアクティブ状態になるとこれに応答して導通状態となり、増幅トランジスタ324Aから出力される画素信号を垂直信号線211Aに出力する。 The selection transistor 325A is connected between the source electrode of the amplification transistor 324A and the vertical signal line 211A. When the selection drive signal SELg supplied to the gate electrode becomes active, the selection transistor 325A becomes conductive in response to this, and outputs the pixel signal output from the amplification transistor 324A to the vertical signal line 211A.
 選択トランジスタ325Bは、増幅トランジスタ324Bのソース電極と垂直信号線211Bとの間に接続されている。選択トランジスタ325Bは、ゲート電極に供給される選択駆動信号SELgがアクティブ状態になるとこれに応答して導通状態となり、増幅トランジスタ324Bから出力される画素信号を垂直信号線211Bに出力する。 The selection transistor 325B is connected between the source electrode of the amplification transistor 324B and the vertical signal line 211B. When the selection drive signal SELg supplied to the gate electrode becomes active, the selection transistor 325B becomes conductive in response to this, and outputs the pixel signal output from the amplification transistor 324B to the vertical signal line 211B.
 画素200の転送トランジスタ321Aおよび321B、リセットトランジスタ323Aおよび323B、増幅トランジスタ324Aおよび324B、並びに、選択トランジスタ325Aおよび325Bは、例えば、画素200を駆動する垂直駆動部によって制御される。 Transfer transistors 321A and 321B, reset transistors 323A and 323B, amplification transistors 324A and 324B, and selection transistors 325A and 325B of pixel 200 are controlled by, for example, a vertical drive unit that drives pixel 200.
 図8は、光検出装置1がCAPD方式のToFセンサである場合の画素200の構成例を示す断面図である。 FIG. 8 is a cross-sectional view showing an example of the configuration of the pixel 200 when the photodetection device 1 is a CAPD type ToF sensor.
 図8においても、図1に示した断面図と対応する部分については同一の符号を付してあり、その部分の説明は適宜省略する。 Also in FIG. 8, the same reference numerals are given to the parts corresponding to the cross-sectional view shown in FIG. 1, and the description of those parts will be omitted as appropriate.
When the photodetection device 1 is a CAPD-type ToF sensor, the entire SiGe layer 11 of the semiconductor substrate 51 serves as the photoelectric conversion section. The element isolation section 13 is formed at the boundaries of the pixels 200 and functions as a pixel isolation section that isolates the photoelectric conversion sections on a pixel-by-pixel basis. The light shielding film 15 is also formed at the boundaries of the pixels 200 and functions as an inter-pixel light shielding film. In FIG. 8 as well, as in the gate-type ToF sensor shown in FIG. 4, the element isolation section 13 has an inversely tapered cross-sectional shape in which the groove width becomes narrower from the front surface side toward the back surface side of the SiGe layer 11, because the trench is formed from the front surface side of the SiGe layer 11.
In plan view, a P-well 331 is formed at the center of the pixel 200 near the front-surface-side interface of the SiGe layer 11, and the signal extraction section 301-1 and the signal extraction section 301-2 are formed so as to sandwich the P-well 331 between them.
 信号取り出し部301-1は、N+半導体領域311-1とP+半導体領域312-1を少なくとも含む。信号取り出し部301-2は、N+半導体領域311-2とP+半導体領域312-2を少なくとも含む。 The signal extraction section 301-1 includes at least an N+ semiconductor region 311-1 and a P+ semiconductor region 312-1. The signal extraction section 301-2 includes at least an N+ semiconductor region 311-2 and a P+ semiconductor region 312-2.
A predetermined voltage MIX1 is applied from the logic substrate side to the P+ semiconductor region 312-1 of the signal extraction section 301-1 via a predetermined metal wiring 62 of the wiring layer 61, the contact electrode 65, the bonding electrodes 64 and 42, and the contact electrode 41.
From the N+ semiconductor region 311-1 of the signal extraction section 301-1, a signal DET1 corresponding to the charge obtained by photoelectric conversion is output to the metal wiring 62 on the logic substrate side via a predetermined contact electrode 41, the bonding electrodes 42 and 64, and the contact electrode 65.
A predetermined voltage MIX2 is applied from the logic substrate side to the P+ semiconductor region 312-2 of the signal extraction section 301-2 via a predetermined metal wiring 62 of the wiring layer 61, the contact electrode 65, the bonding electrodes 64 and 42, and the contact electrode 41.
From the N+ semiconductor region 311-2 of the signal extraction section 301-2, a signal DET2 corresponding to the charge obtained by photoelectric conversion is output to the metal wiring 62 on the logic substrate side via a predetermined contact electrode 41, the bonding electrodes 42 and 64, and the contact electrode 65.
 図8の画素200の動作について説明する。 The operation of the pixel 200 in FIG. 8 will be explained.
 垂直駆動部は、画素200を駆動させ、光電変換により得られた電荷に応じた信号をFD322AとFD322B(図7)とに振り分ける。 The vertical drive unit drives the pixel 200 and distributes a signal according to the charge obtained by photoelectric conversion to FD 322A and FD 322B (FIG. 7).
 垂直駆動部は、2つのP+半導体領域312に電圧を印加する。例えば、垂直駆動部は、P+半導体領域312-1に1.5Vの電圧を印加し、P+半導体領域312-2には0Vの電圧を印加する。 The vertical drive section applies voltage to the two P+ semiconductor regions 312. For example, the vertical drive unit applies a voltage of 1.5V to the P+ semiconductor region 312-1 and a voltage of 0V to the P+ semiconductor region 312-2.
 電圧の印加により、SiGe層11における2つのP+半導体領域312の間に電界が発生し、P+半導体領域312-1からP+半導体領域312-2へと電流が流れる。この場合、SiGe層11内の正孔(ホール)はP+半導体領域312-2の方向へと移動することになり、電子はP+半導体領域312-1の方向へと移動することになる。 By applying the voltage, an electric field is generated between the two P+ semiconductor regions 312 in the SiGe layer 11, and a current flows from the P+ semiconductor region 312-1 to the P+ semiconductor region 312-2. In this case, holes in the SiGe layer 11 will move toward the P+ semiconductor region 312-2, and electrons will move toward the P+ semiconductor region 312-1.
Therefore, when infrared light (reflected light) from the outside enters the SiGe layer 11 via the on-chip lens 17 in this state and is photoelectrically converted in the SiGe layer 11 into electron-hole pairs, the resulting electrons are guided toward the P+ semiconductor region 312-1 by the electric field between the P+ semiconductor regions 312 and move into the N+ semiconductor region 311-1.
 この場合、光電変換で発生した電子が、画素200に入射した赤外光の量、すなわち赤外光の受光量に応じた信号を検出するための信号電荷として用いられることになる。 In this case, electrons generated by photoelectric conversion are used as signal charges for detecting a signal corresponding to the amount of infrared light incident on the pixel 200, that is, the amount of received infrared light.
As a result, charge corresponding to the electrons that have moved into the N+ semiconductor region 311-1 is accumulated in the N+ semiconductor region 311-1, and a signal DET1 corresponding to this charge is output via the FD 322A, the amplification transistor 324A, the vertical signal line 211A, and so on.
That is, the charge accumulated in the N+ semiconductor region 311-1 is transferred to the FD 322A directly connected to the N+ semiconductor region 311-1, and the signal DET1 corresponding to the charge transferred to the FD 322A is read out via the amplification transistor 324A and the vertical signal line 211A.
 この画素信号は、N+半導体領域311-1により検出された電子に応じた電荷量、すなわちFD322Aに蓄積された電荷の量を示す信号となる。換言すれば、画素信号は画素200で受光された赤外光の光量を示す信号であるともいうことができる。 This pixel signal is a signal indicating the amount of charge corresponding to the electrons detected by the N+ semiconductor region 311-1, that is, the amount of charge accumulated in the FD 322A. In other words, the pixel signal can also be said to be a signal indicating the amount of infrared light received by the pixel 200.
 なお、このときN+半導体領域311-1における場合と同様にしてN+半導体領域311-2で検出された電子に応じた画素信号も適宜測距に用いられるようにしてもよい。 Note that at this time, similarly to the case in the N+ semiconductor region 311-1, pixel signals corresponding to electrons detected in the N+ semiconductor region 311-2 may also be used for distance measurement as appropriate.
 また、次のタイミングでは、これまでSiGe層11内で生じていた電界と反対方向の電界が発生するように、垂直駆動部により2つのP+半導体領域312に電圧が印加される。具体的には、例えば、P+半導体領域312-2に1.5Vの電圧が印加され、P+半導体領域312-1には0Vの電圧が印加される。 Also, at the next timing, a voltage is applied to the two P+ semiconductor regions 312 by the vertical drive unit so that an electric field in the opposite direction to the electric field that has been generated in the SiGe layer 11 is generated. Specifically, for example, a voltage of 1.5V is applied to the P+ semiconductor region 312-2, and a voltage of 0V is applied to the P+ semiconductor region 312-1.
 これにより、SiGe層11における2つのP+半導体領域312の間で電界が発生し、P+半導体領域312-2からP+半導体領域312-1へと電流が流れる。 As a result, an electric field is generated between the two P+ semiconductor regions 312 in the SiGe layer 11, and a current flows from the P+ semiconductor region 312-2 to the P+ semiconductor region 312-1.
When infrared light (reflected light) from the outside enters the SiGe layer 11 via the on-chip lens 17 in this state and is photoelectrically converted in the SiGe layer 11 into electron-hole pairs, the resulting electrons are guided toward the P+ semiconductor region 312-2 by the electric field between the P+ semiconductor regions 312 and move into the N+ semiconductor region 311-2.
 これにより、N+半導体領域311-2には、N+半導体領域311-2内へと移動してきた電子に応じた電荷が蓄積されることになり、この電荷がFD322Bや増幅トランジスタ324B、垂直信号線211B等を介して出力される。 As a result, a charge corresponding to the electrons that have moved into the N+ semiconductor region 311-2 is accumulated in the N+ semiconductor region 311-2, and this charge is output via the FD 322B, the amplification transistor 324B, the vertical signal line 211B, etc.
That is, the charge accumulated in the N+ semiconductor region 311-2 is transferred to the FD 322B directly connected to that N+ semiconductor region 311-2, and a signal DET2 corresponding to the charge transferred to the FD 322B is read out via the amplification transistor 324B and the vertical signal line 211B.
 なお、このときN+半導体領域311-2における場合と同様にしてN+半導体領域311-1で検出された電子に応じた画素信号も適宜測距に用いられるようにしてもよい。 Note that at this time, pixel signals corresponding to electrons detected in the N+ semiconductor region 311-1 may also be appropriately used for distance measurement in the same manner as in the N+ semiconductor region 311-2.
 このようにして、同じ画素200において互いに異なる期間の光電変換で得られた画素信号が得られると、それらの画素信号に基づいて対象物までの距離を算出することができる。 In this way, when pixel signals obtained by photoelectric conversion in different periods in the same pixel 200 are obtained, the distance to the object can be calculated based on these pixel signals.
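As a rough illustration of how such two-tap pixel signals can be turned into a distance, the following sketch assumes a simple pulsed-light, two-window demodulation in which the fraction of charge falling into the second tap is proportional to the round-trip delay. The function name, the modulation scheme, and the numbers are assumptions for illustration, not the patent's prescribed method; practical indirect ToF systems often use additional phase measurements and background subtraction.

```python
# Illustrative sketch (not the patent's method): estimating distance from the
# two tap signals DET1 and DET2 of an indirect ToF pixel, assuming a simple
# pulsed-light, two-window demodulation.

C = 299_792_458.0  # speed of light [m/s]

def distance_from_two_taps(det1: float, det2: float, pulse_width_s: float) -> float:
    """Estimate distance from charges accumulated in two alternating windows.

    det1: charge collected while the first tap (N+ region 311-1) is active
    det2: charge collected while the second tap (N+ region 311-2) is active
    pulse_width_s: width of the emitted light pulse in seconds
    """
    total = det1 + det2
    if total <= 0.0:
        raise ValueError("no signal collected")
    # The fraction of charge falling into the second window is proportional
    # to the round-trip delay of the reflected pulse (assumed model).
    delay_s = pulse_width_s * (det2 / total)
    return C * delay_s / 2.0  # divide by 2 for the round trip

if __name__ == "__main__":
    # Example: a 30 ns pulse with 40% of the charge in the second tap -> ~1.8 m
    print(distance_from_two_taps(det1=600.0, det2=400.0, pulse_width_s=30e-9))
```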
 CAPD方式のToFセンサの画素200の構成では、図8に示されるように、SiGe層11のおもて面には、MOSトランジスタTrが形成されない。そのため、図8の例では、上述した他の断面構成と同様に、SiGe層11のおもて面にSi層21が形成されているが、このSi層21を省略することができる。Si層21の形成が不要であるため、製造工程が容易になる。 In the configuration of the pixel 200 of the CAPD type ToF sensor, as shown in FIG. 8, the MOS transistor Tr is not formed on the front surface of the SiGe layer 11. Therefore, in the example of FIG. 8, the Si layer 21 is formed on the front surface of the SiGe layer 11, similar to the other cross-sectional configurations described above, but this Si layer 21 can be omitted. Since it is not necessary to form the Si layer 21, the manufacturing process becomes easier.
<6. Example of pixel configuration when the photodetection device is a direct ToF sensor using SPAD>
 Next, the pixel structure will be described for the case where the photodetection device 1 is a direct ToF sensor using a SPAD (Single Photon Avalanche Diode).
 図9は、光検出装置1がSPADを用いた直接ToFセンサである場合の画素回路例を示している。 FIG. 9 shows an example of a pixel circuit when the photodetection device 1 is a direct ToF sensor using SPAD.
 図9の画素200は、SPAD401と、トランジスタ411およびインバータ412で構成される読み出し回路402とを備える。また、画素200は、スイッチ413も備える。トランジスタ411は、P型のMOSトランジスタで構成される。 The pixel 200 in FIG. 9 includes a SPAD 401 and a readout circuit 402 composed of a transistor 411 and an inverter 412. The pixel 200 also includes a switch 413. The transistor 411 is a P-type MOS transistor.
 SPAD401のカソードは、トランジスタ411のドレインに接続されるとともに、インバータ412の入力端子、及び、スイッチ413の一端に接続されている。SPAD401のアノードは、電源電圧VA(以下では、アノード電圧VAとも称する。)に接続されている。 The cathode of the SPAD 401 is connected to the drain of the transistor 411, as well as to the input terminal of the inverter 412 and one end of the switch 413. The anode of the SPAD 401 is connected to a power supply voltage VA (hereinafter also referred to as anode voltage VA).
 SPAD401は、入射光が入射されたとき、発生する電子をアバランシェ増幅させてカソード電圧VSの信号を出力するフォトダイオード(単一光子アバランシェフォトダイオード)である。SPAD401のアノードに供給される電源電圧VAは、例えば、-20V程度の負バイアス(負の電位)とされる。 The SPAD 401 is a photodiode (single photon avalanche photodiode) that avalanche-amplifies generated electrons when incident light is applied and outputs a signal of cathode voltage VS. The power supply voltage VA supplied to the anode of the SPAD 401 is, for example, a negative bias (negative potential) of about -20V.
 トランジスタ411は、飽和領域で動作する定電流源であり、クエンチング抵抗として働くことにより、パッシブクエンチを行う。トランジスタ411のソースは電源電圧VEに接続され、ドレインがSPAD401のカソード、インバータ412の入力端子、及び、スイッチ413の一端に接続されている。これにより、SPAD401のカソードにも、電源電圧VEが供給される。SPAD401と直列に接続されたトランジスタ411の代わりに、プルアップ抵抗を用いることもできる。 The transistor 411 is a constant current source that operates in the saturation region, and performs passive quenching by functioning as a quenching resistor. The source of the transistor 411 is connected to the power supply voltage VE, and the drain is connected to the cathode of the SPAD 401, the input terminal of the inverter 412, and one end of the switch 413. As a result, the power supply voltage VE is also supplied to the cathode of the SPAD 401. A pull-up resistor can also be used instead of the transistor 411 connected in series with the SPAD 401.
 SPAD401には、十分な効率で光(フォトン)を検出するため、SPAD401の降伏電圧VBDよりも大きな電圧(過剰バイアス(ExcessBias))が印加される。例えば、SPAD401の降伏電圧VBDが20Vであり、それよりも3V大きい電圧を印加することとすると、トランジスタ411のソースに供給される電源電圧VEは、3Vとされる。 In order to detect light (photons) with sufficient efficiency, a voltage (excess bias) larger than the breakdown voltage VBD of the SPAD 401 is applied to the SPAD 401. For example, if the breakdown voltage VBD of the SPAD 401 is 20V and a voltage 3V higher than that is applied, the power supply voltage VE supplied to the source of the transistor 411 is 3V.
 なお、SPAD401の降伏電圧VBDは、温度等によって大きく変化する。そのため、降伏電圧VBDの変化に応じて、SPAD401に印加する印加電圧が制御(調整)される。例えば、電源電圧VEを固定電圧とすると、アノード電圧VAが制御(調整)される。 Note that the breakdown voltage VBD of the SPAD 401 varies greatly depending on temperature and other factors. Therefore, the applied voltage applied to the SPAD 401 is controlled (adjusted) according to changes in the breakdown voltage VBD. For example, if the power supply voltage VE is a fixed voltage, the anode voltage VA is controlled (adjusted).
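The following minimal sketch only restates the bias bookkeeping described above: with the cathode-side supply VE fixed, the anode voltage VA is lowered so that the reverse bias across the SPAD stays a fixed excess bias above the temperature-dependent breakdown voltage VBD. The helper function and the adjustment rule are illustrative assumptions, not the device's actual control loop.

```python
# Sketch: with VE fixed, choose VA so that the SPAD sees VBD + excess bias.
# The cathode sits at VE, so the reverse bias across the SPAD is VE - VA.

def anode_voltage(vbd: float, excess_bias: float, ve: float = 3.0) -> float:
    """Return the anode voltage VA [V] for a target reverse bias of VBD + excess."""
    reverse_bias = vbd + excess_bias   # total voltage wanted across the SPAD
    return ve - reverse_bias           # VA = VE - (VBD + excess bias)

# With VBD = 20 V and a 3 V excess bias, VA = 3 - 23 = -20 V (the example above).
print(anode_voltage(vbd=20.0, excess_bias=3.0))   # -> -20.0
# If VBD drifts to 21 V with temperature, VA is lowered to keep the excess bias.
print(anode_voltage(vbd=21.0, excess_bias=3.0))   # -> -21.0
```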
 スイッチ413は、両端の一端がSPAD401のカソード、インバータ412の入力端子、および、トランジスタ411のドレインに接続され、他端が、グランド(GND)に接続されている。スイッチ413は、例えば、N型のMOSトランジスタで構成することができ、垂直駆動部から供給されるゲーティング制御信号VGに応じてオンオフさせる。 The switch 413 has one end connected to the cathode of the SPAD 401, the input terminal of the inverter 412, and the drain of the transistor 411, and the other end connected to the ground (GND). The switch 413 can be composed of, for example, an N-type MOS transistor, and is turned on and off according to the gating control signal VG supplied from the vertical drive section.
The vertical drive section supplies a High or Low gating control signal VG to the switch 413 of each pixel 200 and turns the switch 413 on and off, thereby setting each pixel 200 of the pixel array section as an active pixel or an inactive pixel. An active pixel is a pixel that detects incident photons, and an inactive pixel is a pixel that does not detect incident photons. When the switch 413 is turned on in accordance with the gating control signal VG and the cathode of the SPAD 401 is held at ground, the pixel 200 becomes an inactive pixel.
 図9の画素200がアクティブ画素に設定された場合の動作について説明する。 The operation when the pixel 200 in FIG. 9 is set as an active pixel will be described.
 まず、画素200がアクティブ画素である場合、上述したように、スイッチ413はオフに設定される。 First, when the pixel 200 is an active pixel, the switch 413 is set to off, as described above.
Since the power supply voltage VE (for example, 3 V) is supplied to the cathode of the SPAD 401 and the power supply voltage VA (for example, -20 V) is supplied to the anode, a reverse voltage larger than the breakdown voltage VBD (= 20 V) is applied to the SPAD 401, which sets the SPAD 401 to Geiger mode.
When a photon enters the SPAD 401 set to Geiger mode, avalanche multiplication occurs and a current flows through the SPAD 401. Since this current also flows through the transistor 411, a voltage drop occurs across the resistance component of the transistor 411.
 電圧降下が発生し、SPAD401のカソード電圧VSが0Vよりも低くなると、SPAD401のアノード・カソード間電圧が降伏電圧VBDよりも低い状態となるので、アバランシェ増幅が停止する。ここで、アバランシェ増幅により発生する電流がトランジスタ411に流れることで電圧降下を発生させ、発生した電圧降下に伴って、カソード電圧VSが降伏電圧VBDよりも低い状態となることで、アバランシェ増幅を停止させる動作がクエンチ動作である。 When a voltage drop occurs and the cathode voltage VS of the SPAD 401 becomes lower than 0V, the anode-cathode voltage of the SPAD 401 becomes lower than the breakdown voltage VBD, so avalanche amplification stops. Here, the current generated by avalanche amplification flows through the transistor 411, causing a voltage drop, and with the generated voltage drop, the cathode voltage VS becomes lower than the breakdown voltage VBD, and the avalanche amplification is stopped. This action is the quench action.
 アバランシェ増幅が停止するとトランジスタ411の抵抗に流れる電流が徐々に減少して、再びカソード電圧VSが元の電源電圧VEまで戻り、次の新たなフォトンを検出できる状態となる(リチャージ動作)。 When the avalanche amplification stops, the current flowing through the resistor of the transistor 411 gradually decreases, and the cathode voltage VS returns to the original power supply voltage VE again, making it possible to detect the next new photon (recharge operation).
The inverter 412 outputs a Lo pixel signal PFout when the cathode voltage VS, which is its input voltage, is equal to or higher than a predetermined threshold voltage Vth, and outputs a Hi pixel signal PFout when the cathode voltage VS is lower than the threshold voltage Vth. Therefore, when a photon is incident on the SPAD 401, avalanche multiplication occurs, and the cathode voltage VS falls below the threshold voltage Vth, the pixel signal PFout is inverted from low level to high level. Conversely, when the avalanche multiplication of the SPAD 401 converges and the cathode voltage VS rises to the threshold voltage Vth or higher, the pixel signal PFout is inverted from high level to low level.
 なお、画素200が非アクティブ画素とされる場合には、スイッチ413がオンされる。スイッチ413がオンされると、SPAD401のカソード電圧VSが0Vとなる。その結果、SPAD401のアノード・カソード間電圧が降伏電圧VBD以下となるので、SPAD401に光子が入ってきても反応しない状態となる。 Note that when the pixel 200 is set as an inactive pixel, the switch 413 is turned on. When the switch 413 is turned on, the cathode voltage VS of the SPAD 401 becomes 0V. As a result, the voltage between the anode and cathode of the SPAD 401 becomes lower than the breakdown voltage VBD, so that even if a photon enters the SPAD 401, it does not react.
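In a direct ToF measurement, each low-to-high transition of PFout marks a photon detection, and the delay of that detection relative to the laser emission gives the distance via d = c·t/2. The sketch below turns PFout rising-edge timestamps into a distance estimate; the histogram-peak step is a common practice assumed here for illustration and is not described in this section.

```python
# Illustrative sketch: converting PFout rising-edge timestamps (over many laser
# cycles) into a distance. The coarse histogram suppresses uncorrelated
# background counts; bin width and peak picking are illustrative assumptions.

from collections import Counter

C = 299_792_458.0  # speed of light [m/s]

def distance_from_timestamps(edge_times_s, laser_emit_time_s, bin_s=250e-12):
    """Estimate distance from PFout rising-edge times relative to laser emission."""
    delays = [t - laser_emit_time_s for t in edge_times_s if t >= laser_emit_time_s]
    if not delays:
        raise ValueError("no photon detections")
    bins = Counter(int(d / bin_s) for d in delays)   # histogram of arrival delays
    peak_bin, _ = bins.most_common(1)[0]             # most populated bin = signal
    time_of_flight = (peak_bin + 0.5) * bin_s
    return C * time_of_flight / 2.0                  # halve for the round trip

# Example: detections clustered around a 20 ns round trip -> about 3 m
print(distance_from_timestamps([20.1e-9, 19.9e-9, 20.0e-9, 35.0e-9], 0.0))
```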
 図10は、光検出装置1がSPADを用いた直接ToFセンサである場合の画素200の構成例を示す断面図である。 FIG. 10 is a cross-sectional view showing an example of the configuration of the pixel 200 when the photodetector 1 is a direct ToF sensor using SPAD.
 図10においても、図1に示した断面図と対応する部分については同一の符号を付してあり、その部分の説明は適宜省略する。 Also in FIG. 10, the same reference numerals are given to the parts corresponding to the cross-sectional view shown in FIG. 1, and the explanation of those parts will be omitted as appropriate.
When the photodetection device 1 is a direct ToF sensor using a SPAD, the SiGe layer 11 of the semiconductor substrate 51 is configured as an N-well region. The element isolation section 13 is formed at the boundary of the pixel 200 and functions as a pixel isolation section that isolates the photoelectric conversion section on a pixel-by-pixel basis. The light-shielding film 15 is also formed at the boundary of the pixel 200 and functions as an inter-pixel light-shielding film. In FIG. 10 as well, as in the case of the gate-type ToF sensor shown in FIG. 4, the element isolation section 13 is formed by forming a trench from the front surface side of the SiGe layer 11, so that it has an inverted tapered cross-sectional shape in which the groove width becomes narrower from the front surface side toward the back surface side of the SiGe layer 11.
 平面視で、画素200の中心部であって、SiGe層11のおもて面側界面近傍には、P型拡散層441とN型拡散層442が形成されている。このP型拡散層441とN型拡散層442とが接続する領域に形成される空乏層によって、アバランシェ増倍領域443が形成される。 In a plan view, a P-type diffusion layer 441 and an N-type diffusion layer 442 are formed in the center of the pixel 200 and near the front surface side interface of the SiGe layer 11. An avalanche multiplication region 443 is formed by a depletion layer formed in a region where the P-type diffusion layer 441 and the N-type diffusion layer 442 are connected.
 画素200の境界周辺部となる素子分離部13の内側領域には、SiGe層11のおもて面から裏面までの全域にわたってホール蓄積層444が形成されている。ホール蓄積層444は、SiGe層11のおもて面側界面に形成された高濃度P型拡散層445と接続されている。 A hole accumulation layer 444 is formed over the entire area from the front surface to the back surface of the SiGe layer 11 in the inner region of the element isolation section 13 that is the peripheral portion of the boundary of the pixel 200. The hole accumulation layer 444 is connected to a high concentration P-type diffusion layer 445 formed at the front surface side interface of the SiGe layer 11.
 SiGe層11は、N型の不純物が注入されることによりN型に制御され、画素200における光電変換により発生する電子をアバランシェ増倍領域443へ転送する電界を形成する。 The SiGe layer 11 is controlled to be N-type by implanting N-type impurities, and forms an electric field that transfers electrons generated by photoelectric conversion in the pixel 200 to the avalanche multiplication region 443.
The P-type diffusion layer 441 is a dense P-type diffusion layer (P+) formed so as to extend over almost the entire pixel region in the planar direction. The N-type diffusion layer 442 is a dense N-type diffusion layer (N+) formed near the surface of the semiconductor substrate 51 and, like the P-type diffusion layer 441, extending over almost the entire pixel region. The N-type diffusion layer 442 is a contact layer connected to the contact electrode 41 serving as a cathode electrode for supplying a negative voltage for forming the avalanche multiplication region 443, and has a convex shape in which a part of it extends to the contact electrode 41 on the front surface of the semiconductor substrate 51. The power supply voltage VE is applied to the N-type diffusion layer 442 from the connected contact electrode 41.
 ホール蓄積層444は、P型の拡散層(P)であり、ホールを蓄積する。また、ホール蓄積層444は、SPAD401のアノード電極としてのコンタクト電極41と電気的に接続される高濃度P型拡散層445と接続されている。 The hole accumulation layer 444 is a P-type diffusion layer (P) and accumulates holes. Further, the hole accumulation layer 444 is connected to a high concentration P-type diffusion layer 445 that is electrically connected to the contact electrode 41 as an anode electrode of the SPAD 401.
The high-concentration P-type diffusion layer 445 is a dense P-type diffusion layer (P++) formed near the surface of the SiGe layer 11 in the peripheral portion of the pixel 200 in a plan view, and constitutes a contact layer for electrically connecting the hole accumulation layer 444 to the contact electrode 41 of the SPAD 401. The power supply voltage VA is applied to the high-concentration P-type diffusion layer 445 from the connected contact electrode 41.
The SPAD 401 serving as the photoelectric conversion section includes the SiGe layer 11 of the semiconductor substrate 51, the P-type diffusion layer 441, the N-type diffusion layer 442, the hole accumulation layer 444, and the high-concentration P-type diffusion layer 445; the hole accumulation layer 444 is connected to a first contact electrode 41 serving as an anode electrode, and the N-type diffusion layer 442 is connected to a second contact electrode 41 serving as a cathode electrode.
 なお、SiGe層11は、P型の不純物が注入されることによりP型に制御されてもよい。SiGe層11がP型領域に形成された場合、N型拡散層442に印加される電圧は電源電圧VAになり、高濃度P型拡散層445に印加される電圧は電源電圧VEになる。 Note that the SiGe layer 11 may be controlled to be P-type by implanting P-type impurities. When the SiGe layer 11 is formed in a P-type region, the voltage applied to the N-type diffusion layer 442 becomes the power supply voltage VA, and the voltage applied to the high concentration P-type diffusion layer 445 becomes the power supply voltage VE.
 画素200の読み出し回路402を構成するトランジスタ411およびインバータ412と、スイッチ413は、ロジック基板側に形成されている。 The transistor 411, inverter 412, and switch 413 that constitute the readout circuit 402 of the pixel 200 are formed on the logic substrate side.
 SPADを用いた直接ToFセンサの画素200の構成では、SiGe層11のおもて面には、MOSトランジスタTrが形成されない。そのため、図10の例では、上述した他の断面構成と同様に、SiGe層11のおもて面にSi層21が形成されているが、このSi層21を省略することができる。Si層21の形成が不要であるため、製造工程が容易になる。 In the configuration of the pixel 200 of the direct ToF sensor using SPAD, the MOS transistor Tr is not formed on the front surface of the SiGe layer 11. Therefore, in the example of FIG. 10, the Si layer 21 is formed on the front surface of the SiGe layer 11, as in the other cross-sectional configurations described above, but this Si layer 21 can be omitted. Since it is not necessary to form the Si layer 21, the manufacturing process becomes easier.
<7. Modified example with an added diffraction structure>
 In the photodetection device 1 described above, a configuration is possible in which a diffraction structure that diffracts incident light is added to one or both of the back surface of the SiGe layer 11, which serves as the light incident surface, and the front surface on the logic substrate side.
 例えば、図11は、図10に示した直接ToFセンサの画素200に回折構造を追加した断面構成例を示している。図11の画素200には、SiGe層11の光入射面側界面に、所定の深さで凹部を形成し、凹部に絶縁膜としてのシリコン酸化膜を埋め込んだ回折構造481が設けられている。回折構造481は、例えば、等方性のドライエッチングを用いて、STI(Shallow Trench Isolation)として形成することができる。 For example, FIG. 11 shows an example of a cross-sectional configuration in which a diffraction structure is added to the pixel 200 of the direct ToF sensor shown in FIG. 10. The pixel 200 in FIG. 11 is provided with a diffraction structure 481 in which a recess is formed at a predetermined depth at the interface on the light incident surface side of the SiGe layer 11, and a silicon oxide film as an insulating film is embedded in the recess. The diffraction structure 481 can be formed as STI (Shallow Trench Isolation) using isotropic dry etching, for example.
 図12は、図11の回折構造481の平面形状を示す平面図である。回折構造481は、例えば、図12に示されるように、4x4の格子パターンの平面形状で形成することができる。なお、勿論、回折構造481の平面形状は、4x4の格子パターンに限定されず、その他の形状であってもよい。 FIG. 12 is a plan view showing the planar shape of the diffraction structure 481 in FIG. 11. The diffraction structure 481 can be formed, for example, in a planar shape of a 4x4 grating pattern, as shown in FIG. Note that, of course, the planar shape of the diffraction structure 481 is not limited to the 4x4 grating pattern, and may be any other shape.
 図13は、図10に示した直接ToFセンサの画素200のSiGe層11のおもて面側に、図11と同じ回折構造481を追加した断面構成例を示している。 FIG. 13 shows an example of a cross-sectional configuration in which the same diffraction structure 481 as in FIG. 11 is added to the front surface side of the SiGe layer 11 of the pixel 200 of the direct ToF sensor shown in FIG. 10.
 図11及び図13の画素200において、回折構造481以外の構成については、図10に示した画素200と同様であるので、その説明は省略する。 In the pixel 200 shown in FIGS. 11 and 13, the configuration other than the diffraction structure 481 is the same as that of the pixel 200 shown in FIG. 10, so a description thereof will be omitted.
By providing the diffraction structure 481 on the back surface or the front surface of the SiGe layer 11, the optical path length can be increased by the light scattering effect, and the light receiving sensitivity can be improved. Although FIGS. 11 to 13 show examples in which the diffraction structure 481 is added to the pixel structure of the direct ToF sensor shown in FIG. 10, it goes without saying that the diffraction structure 481 may also be added to the pixel structure of the gate-type ToF sensor shown in FIG. 4, the IR imaging sensor shown in FIG. 6, and the CAPD type ToF sensor shown in FIG. 8.
<8. Modified examples of the element isolation section>
 FIG. 14 is a diagram illustrating modified examples of the element isolation section 13 (pixel isolation section) in the photodetection device 1.
In the cross-sectional structures of the photodetection devices 1 described above, the element isolation section 13 has adopted either a tapered cross-sectional shape in which the trench penetrates the SiGe layer 11 and the groove width becomes narrower from the back surface side toward the front surface side of the SiGe layer 11, as shown in A of FIG. 14, or an inverted tapered cross-sectional shape in which the groove width becomes narrower from the front surface side toward the back surface side of the SiGe layer 11, as shown in B of FIG. 14.
 しかしながら、素子分離部13は、SiGe層11を貫通せずに、SiGe層11の深さ方向の一部に形成された構造としてもよい。 However, the element isolation part 13 may have a structure in which it is formed in a part of the SiGe layer 11 in the depth direction without penetrating the SiGe layer 11.
 図14のCは、SiGe層11の裏面(図14において上面)側からトレンチを形成する方法によって、素子分離部13がSiGe層11の途中の深さまで形成された例を示している。 FIG. 14C shows an example in which the element isolation portion 13 is formed to a depth halfway through the SiGe layer 11 by a method of forming a trench from the back surface (the top surface in FIG. 14) of the SiGe layer 11.
D of FIG. 14 shows an example in which the element isolation section 13 is formed to a depth partway through the SiGe layer 11 by a method of forming a trench from the front surface (the bottom surface in FIG. 14) side of the SiGe layer 11.
Furthermore, although FIGS. 11 to 13 described above illustrate examples in which the pixel 200 has the diffraction structure 481 formed by STI or the like, a diffraction structure of the pixel 200 may also be formed using the same method as that used to form the trench of the element isolation section 13.
 図15のAは、SiGe層11の裏面側からトレンチを形成する方法を用いて、画素200の中央近傍領域に、回折構造491を持たせた例を示している。 FIG. 15A shows an example in which a diffraction structure 491 is provided in the region near the center of the pixel 200 using a method of forming a trench from the back side of the SiGe layer 11.
 図15のBは、SiGe層11のおもて面側からトレンチを形成する方法を用いて、画素200の中央近傍領域に、回折構造491を持たせた例を示している。素子分離部13のトレンチを形成する方法と同じ方法を用いて形成する回折構造491では、トレンチの深さを、図11ないし図13で示した回折構造481よりも深く形成することができる。 FIG. 15B shows an example in which a diffraction structure 491 is provided in a region near the center of the pixel 200 using a method of forming a trench from the front surface side of the SiGe layer 11. In the diffraction structure 491 formed using the same method as the method for forming the trench of the element isolation section 13, the depth of the trench can be formed deeper than the diffraction structure 481 shown in FIGS. 11 to 13.
 図15のCは、回折構造491の平面形状を示す平面図である。回折構造491は、例えば、図15のCに示されるように、画素中央部で交差する十字状の平面形状で形成することができる。なお、勿論、回折構造491の平面形状は、十字状の平面形状に限定されるものではなく、回折構造481のような格子パターンであってもよいし、その他の形状であってもよい。 FIG. 15C is a plan view showing the planar shape of the diffraction structure 491. The diffraction structure 491 can be formed, for example, in a cross-shaped planar shape that intersects at the center of the pixel, as shown in FIG. 15C. Note that, of course, the planar shape of the diffraction structure 491 is not limited to a cross-shaped planar shape, and may be a grating pattern like the diffraction structure 481 or other shapes.
<9. Configuration example of the photodetection device>
 FIG. 16 is a block diagram showing a schematic configuration example of the photodetection device 1 having the pixels 200 described above.
 図16に示される光検出装置1は、上述した画素200が行方向および列方向の行列状に配列された画素アレイ部521と、周辺回路部とを有する。周辺回路部は、例えば垂直駆動部522、カラム処理部523、水平駆動部524、およびシステム制御部525等から構成されている。 The photodetecting device 1 shown in FIG. 16 includes a pixel array section 521 in which the pixels 200 described above are arranged in rows and columns in the row and column directions, and a peripheral circuit section. The peripheral circuit section includes, for example, a vertical drive section 522, a column processing section 523, a horizontal drive section 524, a system control section 525, and the like.
 光検出装置1には、さらに信号処理部526およびデータ格納部527も設けられている。なお、信号処理部526およびデータ格納部527は、光検出装置1と同じ基板上に搭載してもよいし、光検出装置1とは別のモジュール内の基板上に配置してもよい。 The photodetector 1 is further provided with a signal processing section 526 and a data storage section 527. Note that the signal processing section 526 and the data storage section 527 may be mounted on the same substrate as the photodetection device 1, or may be arranged on a substrate in a module separate from the photodetection device 1.
The pixel array section 521 has a configuration in which the pixels 200, each of which generates a charge according to the amount of received light and outputs a signal according to that charge, are arranged in a matrix in the row direction and the column direction. Here, the row direction refers to the direction in which the pixels 200 are arranged horizontally, and the column direction refers to the direction in which the pixels 200 are arranged vertically; the row direction is the horizontal direction in the figure, and the column direction is the vertical direction. The pixel array section 521 thus includes a plurality of pixels 200 that photoelectrically convert incident light and output a signal according to the resulting charge. When the pixel 200 has the configuration described with reference to FIGS. 3 and 4, the photodetection device 1 is a gate-type ToF sensor, and when it has the configuration described with reference to FIGS. 5 and 6, the photodetection device 1 is an IR imaging sensor. When the pixel 200 has the configuration described with reference to FIGS. 7 and 8, the photodetection device 1 is a CAPD type ToF sensor, and when it has the configuration described with reference to FIGS. 9 and 10, the photodetection device 1 is a direct ToF sensor using a SPAD.
In the pixel array section 521, for the matrix-like pixel arrangement, a pixel drive line 528 is wired along the row direction for each pixel row, and a vertical signal line 529 is wired along the column direction of the pixels. The pixel drive line 528 transmits, for example, a drive signal for driving the pixels 200 when signals are read out from them. Although the pixel drive line 528 is shown as a single wiring line in FIG. 16, it is not limited to one line. One end of the pixel drive line 528 is connected to the output end of the vertical drive section 522 corresponding to each row. The vertical signal line 529 corresponds to the vertical signal lines 211A and 211B described with reference to FIG. 3 and elsewhere.
 垂直駆動部522は、シフトレジスタやアドレスデコーダなどによって構成され、画素アレイ部521の各画素200を全画素同時あるいは行単位等で駆動する。すなわち、垂直駆動部522は、垂直駆動部522を制御するシステム制御部525とともに、画素アレイ部521の各画素200の動作を制御する制御回路を構成している。 The vertical drive section 522 is composed of a shift register, an address decoder, etc., and drives each pixel 200 of the pixel array section 521 simultaneously or in units of rows. That is, the vertical drive unit 522 forms a control circuit that controls the operation of each pixel 200 of the pixel array unit 521, together with the system control unit 525 that controls the vertical drive unit 522.
 垂直駆動部522による駆動制御に応じて画素行の各画素200から出力される画素信号は、垂直信号線529を通してカラム処理部523に入力される。カラム処理部523は、各画素200から垂直信号線529を通して出力される画素信号に対して所定の信号処理を行うとともに、信号処理後の画素信号を一時的に保持する。具体的には、カラム処理部523は、信号処理としてノイズ除去処理やAD(Analog to Digital)変換処理などを行う。 Pixel signals output from each pixel 200 in a pixel row according to drive control by the vertical drive unit 522 are input to the column processing unit 523 through the vertical signal line 529. The column processing unit 523 performs predetermined signal processing on the pixel signal output from each pixel 200 through the vertical signal line 529, and temporarily holds the pixel signal after the signal processing. Specifically, the column processing unit 523 performs noise removal processing, AD (Analog to Digital) conversion processing, etc. as signal processing.
 水平駆動部524は、シフトレジスタやアドレスデコーダなどによって構成され、カラム処理部523の画素列に対応する単位回路を順番に選択する。この水平駆動部524による選択走査により、カラム処理部523において単位回路ごとに信号処理された画素信号が順番に出力される。 The horizontal drive section 524 is composed of a shift register, an address decoder, etc., and sequentially selects unit circuits corresponding to the pixel columns of the column processing section 523. By this selective scanning by the horizontal driving section 524, pixel signals subjected to signal processing for each unit circuit in the column processing section 523 are sequentially output.
The system control section 525 includes a timing generator that generates various timing signals, and performs drive control of the vertical drive section 522, the column processing section 523, the horizontal drive section 524, and so on, based on the various timing signals generated by the timing generator.
 信号処理部526は、少なくとも演算処理機能を有し、カラム処理部523から出力される画素信号に基づいて演算処理等の種々の信号処理を行う。データ格納部527は、信号処理部526での信号処理にあたって、その処理に必要なデータを一時的に格納する。 The signal processing unit 526 has at least an arithmetic processing function, and performs various signal processing such as arithmetic processing based on the pixel signal output from the column processing unit 523. The data storage section 527 temporarily stores data necessary for signal processing in the signal processing section 526.
 以上のように構成される光検出装置1は、カラム処理部523においてAD変換処理を行うAD変換回路を画素列ごとに配置したカラムADC型と呼ばれる回路構成である。 The photodetector 1 configured as described above has a circuit configuration called a column ADC type in which an AD conversion circuit that performs AD conversion processing in the column processing section 523 is arranged for each pixel column.
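To make the column-ADC readout order concrete, the following conceptual sketch follows the flow described above: the vertical drive section selects one row at a time, the column processing section converts the pixels of that row (one converter per column), and the horizontal drive section then scans the converted values out column by column. The function names and interfaces are illustrative assumptions, not the device's actual register-level control.

```python
# Conceptual sketch of row-sequential, column-parallel ADC readout.
from typing import Callable, List

def read_frame(rows: int, cols: int,
               sample_pixel: Callable[[int, int], float],
               adc: Callable[[float], int]) -> List[List[int]]:
    frame = []
    for row in range(rows):                       # vertical drive: select a row
        converted = [adc(sample_pixel(row, col))  # column processing: convert each column
                     for col in range(cols)]
        frame.append(converted)                   # horizontal drive: scan out column by column
    return frame

# Tiny usage example with a dummy pixel source and a 10-bit ADC model.
frame = read_frame(4, 6,
                   sample_pixel=lambda r, c: 0.1 * (r + c),
                   adc=lambda v: max(0, min(1023, int(v * 1023))))
print(frame[0])
```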
 光検出装置1は、光電変換部としてのフォトダイオード12がSiGe層11に形成されることで、赤外領域において高い量子効率を持つ裏面照射型の装置である。光検出装置1は、例えば、車両に搭載され、車外にある対象物までの距離を測定したり、撮影画像を生成することができる。あるいはまた、光検出装置1は、スマートフォン等に搭載され、対象物までの距離を測定したり、撮影画像を生成することができる。 The photodetector 1 is a back-illuminated device that has a high quantum efficiency in the infrared region because a photodiode 12 as a photoelectric conversion section is formed in the SiGe layer 11. The photodetection device 1 is mounted on a vehicle, for example, and can measure the distance to an object outside the vehicle and generate a photographed image. Alternatively, the photodetecting device 1 can be installed in a smartphone or the like, and can measure the distance to an object or generate a photographed image.
<10. Usage examples of the photodetection device as an image sensor>
 FIG. 17 is a diagram showing usage examples of an image sensor using the photodetection device 1 described above.
 上述の光検出装置1は、イメージセンサとして、例えば、以下のように、赤外光や、可視光、紫外光、X線等の光をセンシングする様々なケースに使用することができる。 The above-described photodetection device 1 can be used as an image sensor in various cases for sensing light such as infrared light, visible light, ultraviolet light, and X-rays, for example, as described below.
・Devices that capture images for viewing, such as digital cameras and mobile devices with camera functions
・Devices used for traffic, such as in-vehicle sensors that capture images of the front, rear, surroundings, and interior of an automobile for safe driving such as automatic stopping and for recognizing the driver's state, surveillance cameras that monitor traveling vehicles and roads, and distance measuring sensors that measure distances between vehicles
・Devices used in home appliances such as TVs, refrigerators, and air conditioners, which capture a user's gesture and operate the appliance according to that gesture
・Devices used for medical care and healthcare, such as endoscopes and devices that perform blood vessel imaging by receiving infrared light
・Devices used for security, such as surveillance cameras for crime prevention and cameras for person authentication
・Devices used for beauty care, such as skin measuring instruments that photograph the skin and microscopes that photograph the scalp
・Devices used for sports, such as action cameras and wearable cameras for sports applications
・Devices used for agriculture, such as cameras for monitoring the condition of fields and crops
<11. Example of application to electronic equipment>
 The technology of the present disclosure is not limited to application to photodetection devices. That is, the technology of the present disclosure is applicable to electronic equipment in general that uses a solid-state imaging device in an image capturing section (photoelectric conversion section), such as imaging devices including digital still cameras and video cameras, mobile terminal devices having an imaging function, and copying machines that use a solid-state imaging device in an image reading section. The solid-state imaging device may be formed as a single chip, or may be in the form of a module having an imaging function in which an imaging section and a signal processing section or an optical system are packaged together.
 図18は、本開示の技術を適用した電子機器としての、撮像装置の構成例を示すブロック図である。 FIG. 18 is a block diagram showing a configuration example of an imaging device as an electronic device to which the technology of the present disclosure is applied.
The imaging device 1000 in FIG. 18 includes an optical section 1001 including a lens group and the like, a solid-state imaging device (imaging device) 1002 that adopts the configuration of the photodetection device 1 in FIG. 1, and a DSP (Digital Signal Processor) circuit 1003, which is a camera signal processing circuit. The imaging device 1000 also includes a frame memory 1004, a display section 1005, a recording section 1006, an operation section 1007, and a power supply section 1008. The DSP circuit 1003, the frame memory 1004, the display section 1005, the recording section 1006, the operation section 1007, and the power supply section 1008 are connected to one another via a bus line 1009.
 光学部1001は、被写体からの入射光(像光)を取り込んで固体撮像装置1002の撮像面上に結像する。固体撮像装置1002は、光学部1001によって撮像面上に結像された入射光の光量を画素単位で電気信号に変換して画素信号として出力する。この固体撮像装置1002として、図1の光検出装置1、即ち、赤外領域において高い量子効率を持ち、赤外領域の光に対する感度を向上させた光検出装置を用いることができる。 The optical section 1001 takes in incident light (image light) from a subject and forms an image on the imaging surface of the solid-state imaging device 1002. The solid-state imaging device 1002 converts the amount of incident light imaged onto the imaging surface by the optical section 1001 into an electrical signal for each pixel, and outputs the electric signal as a pixel signal. As this solid-state imaging device 1002, it is possible to use the photodetection device 1 of FIG. 1, that is, a photodetection device that has high quantum efficiency in the infrared region and has improved sensitivity to light in the infrared region.
 表示部1005は、例えば、LCD(Liquid Crystal Display)や有機EL(Electro Luminescence)ディスプレイ等の薄型ディスプレイで構成され、固体撮像装置1002で撮像された動画または静止画を表示する。記録部1006は、固体撮像装置1002で撮像された動画または静止画を、ハードディスクや半導体メモリ等の記録媒体に記録する。 The display unit 1005 is configured with a thin display such as an LCD (Liquid Crystal Display) or an organic EL (Electro Luminescence) display, and displays moving images or still images captured by the solid-state imaging device 1002. A recording unit 1006 records a moving image or a still image captured by the solid-state imaging device 1002 on a recording medium such as a hard disk or a semiconductor memory.
 操作部1007は、ユーザによる操作の下に、撮像装置1000が持つ様々な機能について操作指令を発する。電源部1008は、DSP回路1003、フレームメモリ1004、表示部1005、記録部1006および操作部1007の動作電源となる各種の電源を、これら供給対象に対して適宜供給する。 The operation unit 1007 issues operation commands regarding various functions of the imaging device 1000 under operation by the user. A power supply unit 1008 appropriately supplies various power supplies that serve as operating power for the DSP circuit 1003, frame memory 1004, display unit 1005, recording unit 1006, and operation unit 1007 to these supply targets.
 上述したように、固体撮像装置1002として、上述した光検出装置1を用いることで、赤外領域の光に対する感度を向上させることができる。従って、ビデオカメラやデジタルスチルカメラ、さらには携帯電話機等のモバイル機器向けカメラモジュールなどの撮像装置1000においても、撮像画像の高画質化を図ることができる。 As described above, by using the above-described photodetection device 1 as the solid-state imaging device 1002, the sensitivity to light in the infrared region can be improved. Therefore, it is possible to improve the quality of captured images even in the imaging device 1000 such as a video camera, a digital still camera, or a camera module for mobile devices such as a mobile phone.
<12. Example of application to an endoscopic surgery system>
 The technology according to the present disclosure (the present technology) can be applied to various products. For example, the technology according to the present disclosure may be applied to an endoscopic surgery system.
 図19は、本開示に係る技術(本技術)が適用され得る内視鏡手術システムの概略的な構成の一例を示す図である。 FIG. 19 is a diagram illustrating an example of a schematic configuration of an endoscopic surgery system to which the technology according to the present disclosure (present technology) can be applied.
FIG. 19 shows an operator (doctor) 11131 performing surgery on a patient 11132 on a patient bed 11133 using the endoscopic surgery system 11000. As illustrated, the endoscopic surgery system 11000 includes an endoscope 11100, other surgical instruments 11110 such as a pneumoperitoneum tube 11111 and an energy treatment instrument 11112, a support arm device 11120 that supports the endoscope 11100, and a cart 11200 on which various devices for endoscopic surgery are mounted.
The endoscope 11100 includes a lens barrel 11101, a region of which extending a predetermined length from its distal end is inserted into the body cavity of the patient 11132, and a camera head 11102 connected to the proximal end of the lens barrel 11101. In the illustrated example, the endoscope 11100 is configured as a so-called rigid scope having a rigid lens barrel 11101, but the endoscope 11100 may also be configured as a so-called flexible scope having a flexible lens barrel.
An opening into which an objective lens is fitted is provided at the distal end of the lens barrel 11101. A light source device 11203 is connected to the endoscope 11100, and light generated by the light source device 11203 is guided to the distal end of the lens barrel by a light guide extending inside the lens barrel 11101 and is emitted through the objective lens toward the observation target in the body cavity of the patient 11132. Note that the endoscope 11100 may be a forward-viewing endoscope, an oblique-viewing endoscope, or a side-viewing endoscope.
 カメラヘッド11102の内部には光学系及び撮像素子が設けられており、観察対象からの反射光(観察光)は当該光学系によって当該撮像素子に集光される。当該撮像素子によって観察光が光電変換され、観察光に対応する電気信号、すなわち観察像に対応する画像信号が生成される。当該画像信号は、RAWデータとしてカメラコントロールユニット(CCU: Camera Control Unit)11201に送信される。 An optical system and an image sensor are provided inside the camera head 11102, and reflected light (observation light) from an observation target is focused on the image sensor by the optical system. The observation light is photoelectrically converted by the image sensor, and an electric signal corresponding to the observation light, that is, an image signal corresponding to the observation image is generated. The image signal is transmitted as RAW data to a camera control unit (CCU) 11201.
 CCU11201は、CPU(Central Processing Unit)やGPU(Graphics Processing Unit)等によって構成され、内視鏡11100及び表示装置11202の動作を統括的に制御する。さらに、CCU11201は、カメラヘッド11102から画像信号を受け取り、その画像信号に対して、例えば現像処理(デモザイク処理)等の、当該画像信号に基づく画像を表示するための各種の画像処理を施す。 The CCU 11201 is configured with a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and the like, and centrally controls the operations of the endoscope 11100 and the display device 11202. Further, the CCU 11201 receives an image signal from the camera head 11102, and performs various image processing on the image signal, such as development processing (demosaic processing), for displaying an image based on the image signal.
 表示装置11202は、CCU11201からの制御により、当該CCU11201によって画像処理が施された画像信号に基づく画像を表示する。 The display device 11202 displays an image based on an image signal subjected to image processing by the CCU 11201 under control from the CCU 11201.
 光源装置11203は、例えばLED(Light Emitting Diode)等の光源から構成され、術部等を撮影する際の照射光を内視鏡11100に供給する。 The light source device 11203 is composed of a light source such as an LED (Light Emitting Diode), and supplies irradiation light to the endoscope 11100 when photographing the surgical site or the like.
 入力装置11204は、内視鏡手術システム11000に対する入力インタフェースである。ユーザは、入力装置11204を介して、内視鏡手術システム11000に対して各種の情報の入力や指示入力を行うことができる。例えば、ユーザは、内視鏡11100による撮像条件(照射光の種類、倍率及び焦点距離等)を変更する旨の指示等を入力する。 The input device 11204 is an input interface for the endoscopic surgery system 11000. The user can input various information and instructions to the endoscopic surgery system 11000 via the input device 11204. For example, the user inputs an instruction to change the imaging conditions (type of irradiation light, magnification, focal length, etc.) by the endoscope 11100.
 処置具制御装置11205は、組織の焼灼、切開又は血管の封止等のためのエネルギー処置具11112の駆動を制御する。気腹装置11206は、内視鏡11100による視野の確保及び術者の作業空間の確保の目的で、患者11132の体腔を膨らめるために、気腹チューブ11111を介して当該体腔内にガスを送り込む。レコーダ11207は、手術に関する各種の情報を記録可能な装置である。プリンタ11208は、手術に関する各種の情報を、テキスト、画像又はグラフ等各種の形式で印刷可能な装置である。 A treatment tool control device 11205 controls driving of an energy treatment tool 11112 for cauterizing tissue, incising, sealing blood vessels, or the like. The pneumoperitoneum device 11206 injects gas into the body cavity of the patient 11132 via the pneumoperitoneum tube 11111 in order to inflate the body cavity of the patient 11132 for the purpose of ensuring a field of view with the endoscope 11100 and a working space for the operator. send in. The recorder 11207 is a device that can record various information regarding surgery. The printer 11208 is a device that can print various types of information regarding surgery in various formats such as text, images, or graphs.
Note that the light source device 11203, which supplies irradiation light to the endoscope 11100 when the surgical site is imaged, can be configured from a white light source composed of, for example, an LED, a laser light source, or a combination thereof. When the white light source is configured by a combination of RGB laser light sources, the output intensity and output timing of each color (each wavelength) can be controlled with high precision, so the white balance of the captured image can be adjusted in the light source device 11203. In this case, it is also possible to irradiate the observation target with laser light from each of the RGB laser light sources in a time-division manner and to control the driving of the image sensor of the camera head 11102 in synchronization with the irradiation timing, thereby capturing images corresponding to each of R, G, and B in a time-division manner. According to this method, a color image can be obtained without providing a color filter on the image sensor.
The driving of the light source device 11203 may also be controlled so that the intensity of the output light is changed at predetermined time intervals. By controlling the driving of the image sensor of the camera head 11102 in synchronization with the timing of the change in light intensity to acquire images in a time-division manner and combining those images, an image with a high dynamic range free of so-called blocked-up shadows and blown-out highlights can be generated.
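As a rough illustration of this compositing step, the following sketch merges two frames captured under different illumination intensities so that well-exposed pixels dominate: dark regions come from the brighter exposure and saturated regions are recovered from the darker exposure. The simple thresholded weighting and the gain ratio are assumptions for illustration, not the system's actual algorithm.

```python
# Minimal sketch of high-dynamic-range compositing from two time-division
# exposures taken under different illumination intensities.

def fuse_hdr(frame_bright, frame_dark, gain_ratio, saturation=1.0):
    """Fuse two same-size frames (lists of rows of floats in [0, saturation])."""
    fused = []
    for row_b, row_d in zip(frame_bright, frame_dark):
        out_row = []
        for b, d in zip(row_b, row_d):
            if b < 0.9 * saturation:
                out_row.append(b)               # bright exposure still unsaturated: keep it
            else:
                out_row.append(d * gain_ratio)  # saturated: rescale the darker exposure
        fused.append(out_row)
    return fused

# Example: a pixel saturated in the bright frame is recovered from the dark one.
print(fuse_hdr([[0.3, 1.0]], [[0.1, 0.4]], gain_ratio=3.0))  # -> [[0.3, 1.2]]
```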
 また、光源装置11203は、特殊光観察に対応した所定の波長帯域の光を供給可能に構成されてもよい。特殊光観察では、例えば、体組織における光の吸収の波長依存性を利用して、通常の観察時における照射光(すなわち、白色光)に比べて狭帯域の光を照射することにより、粘膜表層の血管等の所定の組織を高コントラストで撮影する、いわゆる狭帯域光観察(Narrow Band Imaging)が行われる。あるいは、特殊光観察では、励起光を照射することにより発生する蛍光により画像を得る蛍光観察が行われてもよい。蛍光観察では、体組織に励起光を照射し当該体組織からの蛍光を観察すること(自家蛍光観察)、又はインドシアニングリーン(ICG)等の試薬を体組織に局注するとともに当該体組織にその試薬の蛍光波長に対応した励起光を照射し蛍光像を得ること等を行うことができる。光源装置11203は、このような特殊光観察に対応した狭帯域光及び/又は励起光を供給可能に構成され得る。 Additionally, the light source device 11203 may be configured to be able to supply light in a predetermined wavelength band compatible with special light observation. Special light observation uses, for example, the wavelength dependence of light absorption in body tissues to illuminate the mucosal surface layer by irradiating a narrower band of light than the light used for normal observation (i.e., white light). So-called narrow band imaging is performed in which predetermined tissues such as blood vessels are photographed with high contrast. Alternatively, in the special light observation, fluorescence observation may be performed in which an image is obtained using fluorescence generated by irradiating excitation light. Fluorescence observation involves irradiating body tissues with excitation light and observing the fluorescence from the body tissues (autofluorescence observation), or locally injecting reagents such as indocyanine green (ICG) into the body tissues and It is possible to obtain a fluorescence image by irradiating excitation light corresponding to the fluorescence wavelength of the reagent. The light source device 11203 may be configured to be able to supply narrowband light and/or excitation light compatible with such special light observation.
 図20は、図19に示すカメラヘッド11102及びCCU11201の機能構成の一例を示すブロック図である。 FIG. 20 is a block diagram showing an example of the functional configuration of the camera head 11102 and CCU 11201 shown in FIG. 19.
 カメラヘッド11102は、レンズユニット11401と、撮像部11402と、駆動部11403と、通信部11404と、カメラヘッド制御部11405と、を有する。CCU11201は、通信部11411と、画像処理部11412と、制御部11413と、を有する。カメラヘッド11102とCCU11201とは、伝送ケーブル11400によって互いに通信可能に接続されている。 The camera head 11102 includes a lens unit 11401, an imaging section 11402, a driving section 11403, a communication section 11404, and a camera head control section 11405. The CCU 11201 includes a communication section 11411, an image processing section 11412, and a control section 11413. Camera head 11102 and CCU 11201 are communicably connected to each other by transmission cable 11400.
 レンズユニット11401は、鏡筒11101との接続部に設けられる光学系である。鏡筒11101の先端から取り込まれた観察光は、カメラヘッド11102まで導光され、当該レンズユニット11401に入射する。レンズユニット11401は、ズームレンズ及びフォーカスレンズを含む複数のレンズが組み合わされて構成される。 The lens unit 11401 is an optical system provided at the connection part with the lens barrel 11101. Observation light taken in from the tip of the lens barrel 11101 is guided to the camera head 11102 and enters the lens unit 11401. The lens unit 11401 is configured by combining a plurality of lenses including a zoom lens and a focus lens.
 撮像部11402は、撮像素子で構成される。撮像部11402を構成する撮像素子は、1つ(いわゆる単板式)であってもよいし、複数(いわゆる多板式)であってもよい。撮像部11402が多板式で構成される場合には、例えば各撮像素子によってRGBそれぞれに対応する画像信号が生成され、それらが合成されることによりカラー画像が得られてもよい。あるいは、撮像部11402は、3D(Dimensional)表示に対応する右目用及び左目用の画像信号をそれぞれ取得するための1対の撮像素子を有するように構成されてもよい。3D表示が行われることにより、術者11131は術部における生体組織の奥行きをより正確に把握することが可能になる。なお、撮像部11402が多板式で構成される場合には、各撮像素子に対応して、レンズユニット11401も複数系統設けられ得る。 The imaging unit 11402 is composed of an image sensor. The imaging unit 11402 may include one image sensor (so-called single-plate type) or a plurality of image sensors (so-called multi-plate type). When the imaging unit 11402 is configured with a multi-plate type, for example, image signals corresponding to RGB are generated by each imaging element, and a color image may be obtained by combining them. Alternatively, the imaging unit 11402 may be configured to include a pair of imaging elements for respectively acquiring right-eye and left-eye image signals corresponding to 3D (dimensional) display. By performing 3D display, the operator 11131 can more accurately grasp the depth of the living tissue at the surgical site. Note that when the imaging section 11402 is configured with a multi-plate type, a plurality of lens units 11401 may be provided corresponding to each imaging element.
 また、撮像部11402は、必ずしもカメラヘッド11102に設けられなくてもよい。例えば、撮像部11402は、鏡筒11101の内部に、対物レンズの直後に設けられてもよい。 Furthermore, the imaging unit 11402 does not necessarily have to be provided in the camera head 11102. For example, the imaging unit 11402 may be provided inside the lens barrel 11101 immediately after the objective lens.
 駆動部11403は、アクチュエータによって構成され、カメラヘッド制御部11405からの制御により、レンズユニット11401のズームレンズ及びフォーカスレンズを光軸に沿って所定の距離だけ移動させる。これにより、撮像部11402による撮像画像の倍率及び焦点が適宜調整され得る。 The drive unit 11403 is constituted by an actuator, and moves the zoom lens and focus lens of the lens unit 11401 by a predetermined distance along the optical axis under control from the camera head control unit 11405. Thereby, the magnification and focus of the image captured by the imaging unit 11402 can be adjusted as appropriate.
 通信部11404は、CCU11201との間で各種の情報を送受信するための通信装置によって構成される。通信部11404は、撮像部11402から得た画像信号をRAWデータとして伝送ケーブル11400を介してCCU11201に送信する。 The communication unit 11404 is configured by a communication device for transmitting and receiving various information to and from the CCU 11201. The communication unit 11404 transmits the image signal obtained from the imaging unit 11402 to the CCU 11201 via the transmission cable 11400 as RAW data.
 また、通信部11404は、CCU11201から、カメラヘッド11102の駆動を制御するための制御信号を受信し、カメラヘッド制御部11405に供給する。当該制御信号には、例えば、撮像画像のフレームレートを指定する旨の情報、撮像時の露出値を指定する旨の情報、並びに/又は撮像画像の倍率及び焦点を指定する旨の情報等、撮像条件に関する情報が含まれる。 Furthermore, the communication unit 11404 receives a control signal for controlling the drive of the camera head 11102 from the CCU 11201 and supplies it to the camera head control unit 11405. The control signal may include, for example, information specifying the frame rate of the captured image, information specifying the exposure value at the time of capturing, and/or information specifying the magnification and focus of the captured image. Contains information about conditions.
 なお、上記のフレームレートや露出値、倍率、焦点等の撮像条件は、ユーザによって適宜指定されてもよいし、取得された画像信号に基づいてCCU11201の制御部11413によって自動的に設定されてもよい。後者の場合には、いわゆるAE(Auto Exposure)機能、AF(Auto Focus)機能及びAWB(Auto White Balance)機能が内視鏡11100に搭載されていることになる。 The above-mentioned frame rate, exposure value, magnification, focus, and other imaging conditions may be appropriately specified by the user, or may be automatically set by the control unit 11413 of the CCU 11201 based on the acquired image signal. In the latter case, the endoscope 11100 is equipped with so-called AE (Auto Exposure) function, AF (Auto Focus) function, and AWB (Auto White Balance) function.
 カメラヘッド制御部11405は、通信部11404を介して受信したCCU11201からの制御信号に基づいて、カメラヘッド11102の駆動を制御する。 The camera head control unit 11405 controls the drive of the camera head 11102 based on the control signal from the CCU 11201 received via the communication unit 11404.
 通信部11411は、カメラヘッド11102との間で各種の情報を送受信するための通信装置によって構成される。通信部11411は、カメラヘッド11102から、伝送ケーブル11400を介して送信される画像信号を受信する。 The communication unit 11411 is configured by a communication device for transmitting and receiving various information to and from the camera head 11102. The communication unit 11411 receives an image signal transmitted from the camera head 11102 via the transmission cable 11400.
 また、通信部11411は、カメラヘッド11102に対して、カメラヘッド11102の駆動を制御するための制御信号を送信する。画像信号や制御信号は、電気通信や光通信等によって送信することができる。 Furthermore, the communication unit 11411 transmits a control signal for controlling the drive of the camera head 11102 to the camera head 11102. The image signal and control signal can be transmitted by electrical communication, optical communication, or the like.
 画像処理部11412は、カメラヘッド11102から送信されたRAWデータである画像信号に対して各種の画像処理を施す。 The image processing unit 11412 performs various image processing on the image signal, which is RAW data, transmitted from the camera head 11102.
 制御部11413は、内視鏡11100による術部等の撮像、及び、術部等の撮像により得られる撮像画像の表示に関する各種の制御を行う。例えば、制御部11413は、カメラヘッド11102の駆動を制御するための制御信号を生成する。 The control unit 11413 performs various controls related to the imaging of the surgical site etc. by the endoscope 11100 and the display of the captured image obtained by imaging the surgical site etc. For example, the control unit 11413 generates a control signal for controlling the drive of the camera head 11102.
 また、制御部11413は、画像処理部11412によって画像処理が施された画像信号に基づいて、術部等が映った撮像画像を表示装置11202に表示させる。この際、制御部11413は、各種の画像認識技術を用いて撮像画像内における各種の物体を認識してもよい。例えば、制御部11413は、撮像画像に含まれる物体のエッジの形状や色等を検出することにより、鉗子等の術具、特定の生体部位、出血、エネルギー処置具11112の使用時のミスト等を認識することができる。制御部11413は、表示装置11202に撮像画像を表示させる際に、その認識結果を用いて、各種の手術支援情報を当該術部の画像に重畳表示させてもよい。手術支援情報が重畳表示され、術者11131に提示されることにより、術者11131の負担を軽減することや、術者11131が確実に手術を進めることが可能になる。 Furthermore, the control unit 11413 causes the display device 11202 to display a captured image showing the surgical site, etc., based on the image signal subjected to image processing by the image processing unit 11412. At this time, the control unit 11413 may recognize various objects in the captured image using various image recognition techniques. For example, the control unit 11413 detects the shape and color of the edge of an object included in the captured image to detect surgical tools such as forceps, specific body parts, bleeding, mist when using the energy treatment tool 11112, etc. can be recognized. When displaying the captured image on the display device 11202, the control unit 11413 may use the recognition result to superimpose and display various types of surgical support information on the image of the surgical site. By displaying the surgical support information in a superimposed manner and presenting it to the surgeon 11131, it becomes possible to reduce the burden on the surgeon 11131 and allow the surgeon 11131 to proceed with the surgery reliably.
 The transmission cable 11400 connecting the camera head 11102 and the CCU 11201 is an electrical signal cable supporting electrical signal communication, an optical fiber supporting optical communication, or a composite cable of these.
 Here, in the illustrated example, communication is performed by wire using the transmission cable 11400, but communication between the camera head 11102 and the CCU 11201 may be performed wirelessly.
 An example of an endoscopic surgery system to which the technology according to the present disclosure can be applied has been described above. Among the configurations described above, the technology according to the present disclosure can be applied to the lens unit 11401 and the imaging unit 11402 of the camera head 11102. For example, the photodetection device 1 as an IR imaging sensor having the configuration described with reference to FIGS. 5 and 6 can be applied as the lens unit 11401 and the imaging unit 11402. By applying the technology according to the present disclosure to the lens unit 11401 and the imaging unit 11402, a clearer image of the surgical site can be obtained while keeping the camera head 11102 compact.
 Although an endoscopic surgery system has been described here as an example, the technology according to the present disclosure may also be applied to other systems such as a microsurgery system.
<13. Example of application to mobile objects>
 The technology according to the present disclosure (the present technology) can be applied to various products. For example, the technology according to the present disclosure may be realized as a device mounted on any type of mobile body such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
 FIG. 21 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile body control system to which the technology according to the present disclosure can be applied.
 The vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001. In the example shown in FIG. 21, the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, a vehicle exterior information detection unit 12030, a vehicle interior information detection unit 12040, and an integrated control unit 12050. As the functional configuration of the integrated control unit 12050, a microcomputer 12051, an audio/image output unit 12052, and an in-vehicle network I/F (interface) 12053 are illustrated.
 The drive system control unit 12010 controls the operation of devices related to the drive system of the vehicle according to various programs. For example, the drive system control unit 12010 functions as a control device for a driving force generation device, such as an internal combustion engine or a drive motor, that generates the driving force of the vehicle, a driving force transmission mechanism that transmits the driving force to the wheels, a steering mechanism that adjusts the steering angle of the vehicle, and a braking device that generates the braking force of the vehicle.
 The body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs. For example, the body system control unit 12020 functions as a control device for a keyless entry system, a smart key system, a power window device, or various lamps such as headlamps, back lamps, brake lamps, turn signals, and fog lamps. In this case, radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, may be input to the body system control unit 12020. The body system control unit 12020 accepts the input of these radio waves or signals and controls the door lock device, the power window device, the lamps, and the like of the vehicle.
 The vehicle exterior information detection unit 12030 detects information outside the vehicle on which the vehicle control system 12000 is mounted. For example, an imaging unit 12031 is connected to the vehicle exterior information detection unit 12030. The vehicle exterior information detection unit 12030 causes the imaging unit 12031 to capture an image of the outside of the vehicle and receives the captured image. Based on the received image, the vehicle exterior information detection unit 12030 may perform object detection processing or distance detection processing for people, vehicles, obstacles, signs, characters on the road surface, and the like.
 The imaging unit 12031 is an optical sensor that receives light and outputs an electrical signal corresponding to the amount of received light. The imaging unit 12031 can output the electrical signal as an image or as distance measurement information. The light received by the imaging unit 12031 may be visible light or invisible light such as infrared light.
 The vehicle interior information detection unit 12040 detects information inside the vehicle. For example, a driver state detection unit 12041 that detects the state of the driver is connected to the vehicle interior information detection unit 12040. The driver state detection unit 12041 includes, for example, a camera that images the driver, and based on the detection information input from the driver state detection unit 12041, the vehicle interior information detection unit 12040 may calculate the degree of fatigue or the degree of concentration of the driver, or may determine whether the driver is dozing off.
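 To make the dozing check above concrete, here is a minimal PERCLOS-style sketch in Python. The per-frame eye-openness input, the window length, and the thresholds are hypothetical placeholders and do not come from the present disclosure.

```python
from collections import deque

class DrowsinessMonitor:
    """Rough PERCLOS-style check: fraction of recent frames with eyes nearly closed.

    `eye_openness` is assumed to be a 0.0-1.0 estimate supplied per frame by an
    upstream face/eye detector (not shown); the thresholds are placeholders.
    """

    def __init__(self, window: int = 900, closed_thresh: float = 0.2,
                 perclos_thresh: float = 0.4):
        self.history = deque(maxlen=window)   # e.g. 900 frames ≈ 30 s at 30 fps
        self.closed_thresh = closed_thresh
        self.perclos_thresh = perclos_thresh

    def update(self, eye_openness: float) -> bool:
        """Add one frame and return True if the driver appears to be dozing."""
        self.history.append(eye_openness < self.closed_thresh)
        if len(self.history) < self.history.maxlen:
            return False                      # not enough evidence yet
        perclos = sum(self.history) / len(self.history)
        return perclos > self.perclos_thresh
```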
 The microcomputer 12051 calculates control target values for the driving force generation device, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040, and can output control commands to the drive system control unit 12010. For example, the microcomputer 12051 can perform cooperative control aimed at realizing the functions of an ADAS (Advanced Driver Assistance System), including vehicle collision avoidance or impact mitigation, following travel based on inter-vehicle distance, constant-speed travel, vehicle collision warning, vehicle lane departure warning, and the like.
 Further, the microcomputer 12051 can perform cooperative control aimed at automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generation device, the steering mechanism, the braking device, and the like based on information about the surroundings of the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040.
 Further, the microcomputer 12051 can output control commands to the body system control unit 12020 based on the information outside the vehicle acquired by the vehicle exterior information detection unit 12030. For example, the microcomputer 12051 can perform cooperative control aimed at preventing glare, such as controlling the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030 and switching from high beam to low beam.
 The audio/image output unit 12052 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying the occupants of the vehicle or the outside of the vehicle of information. In the example of FIG. 21, an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are illustrated as output devices. The display unit 12062 may include, for example, at least one of an on-board display and a head-up display.
 FIG. 22 is a diagram showing an example of the installation positions of the imaging unit 12031.
 In FIG. 22, the vehicle 12100 has imaging units 12101, 12102, 12103, 12104, and 12105 as the imaging unit 12031.
 The imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, at positions such as the front nose, the side mirrors, the rear bumper, the back door, and the upper part of the windshield in the vehicle cabin of the vehicle 12100. The imaging unit 12101 provided at the front nose and the imaging unit 12105 provided at the upper part of the windshield in the vehicle cabin mainly acquire images of the area ahead of the vehicle 12100. The imaging units 12102 and 12103 provided at the side mirrors mainly acquire images of the areas to the sides of the vehicle 12100. The imaging unit 12104 provided at the rear bumper or the back door mainly acquires images of the area behind the vehicle 12100. The forward images acquired by the imaging units 12101 and 12105 are mainly used for detecting preceding vehicles, pedestrians, obstacles, traffic lights, traffic signs, lanes, and the like.
 FIG. 22 also shows an example of the imaging ranges of the imaging units 12101 to 12104. The imaging range 12111 indicates the imaging range of the imaging unit 12101 provided at the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided at the side mirrors, respectively, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided at the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 as viewed from above can be obtained.
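 The following Python sketch shows one way such a bird's-eye view could be composed: each camera image is warped onto a common ground-plane canvas with a calibration homography and the results are merged. The homographies, the canvas size, and the simple maximum-based merge are assumptions for illustration only.

```python
import cv2
import numpy as np

def birds_eye_view(images, homographies, canvas_size=(800, 800)):
    """Warp each camera image onto a common ground-plane canvas and merge them.

    `images` is a list of BGR frames from the imaging units, and `homographies`
    the corresponding 3x3 image-to-ground matrices, which in practice would come
    from an offline calibration step (assumed here, not derived).
    """
    canvas = np.zeros((canvas_size[1], canvas_size[0], 3), dtype=np.uint8)
    for img, H in zip(images, homographies):
        warped = cv2.warpPerspective(img, H, canvas_size)
        # Keep the brighter of overlapping pixels; a real system would blend
        # the seams more carefully.
        canvas = np.maximum(canvas, warped)
    return canvas
```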
 At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information. For example, at least one of the imaging units 12101 to 12104 may be a stereo camera composed of a plurality of imaging elements, or may be an imaging element having pixels for phase difference detection.
 For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 obtains the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change in this distance (the relative speed with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, the closest three-dimensional object that is on the traveling path of the vehicle 12100 and that is traveling at a predetermined speed (for example, 0 km/h or higher) in substantially the same direction as the vehicle 12100. Furthermore, the microcomputer 12051 can set an inter-vehicle distance to be secured in advance in front of the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, cooperative control aimed at automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, can be performed.
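 A minimal Python sketch of this selection and following logic is given below. The data structure, the time-gap headway, and the controller gains are hypothetical illustrations, not values taken from the present disclosure.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TrackedObject:
    distance_m: float          # distance from own vehicle, from the imaging units
    relative_speed_mps: float  # d(distance)/dt; positive when pulling away
    on_own_path: bool          # True if the object lies on the vehicle's course
    same_direction: bool       # True if it moves roughly the same way we do

def select_preceding_vehicle(objects: List[TrackedObject],
                             own_speed_mps: float,
                             min_speed_mps: float = 0.0) -> Optional[TrackedObject]:
    """Pick the closest on-path object moving our way at or above min_speed."""
    candidates = [o for o in objects
                  if o.on_own_path and o.same_direction
                  and (own_speed_mps + o.relative_speed_mps) >= min_speed_mps]
    return min(candidates, key=lambda o: o.distance_m) if candidates else None

def follow_gap_command(preceding: TrackedObject, own_speed_mps: float,
                       time_gap_s: float = 2.0) -> float:
    """Return a signed speed adjustment (m/s) to hold a time-gap headway."""
    desired_gap_m = own_speed_mps * time_gap_s
    gap_error = preceding.distance_m - desired_gap_m
    return 0.2 * gap_error + 0.5 * preceding.relative_speed_mps  # placeholder gains
```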
 For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data on three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects, extract the data, and use it for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult for the driver to see. The microcomputer 12051 then determines a collision risk indicating the degree of risk of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of a collision, it can provide driving assistance for collision avoidance by outputting a warning to the driver via the audio speaker 12061 and the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
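 As a rough illustration of how such a collision risk could be scored and mapped to the assistance actions above, here is a time-to-collision-based sketch in Python. The 4-second horizon, the visibility weighting, and the thresholds are placeholders, not values defined by the present disclosure.

```python
def collision_risk(distance_m: float, closing_speed_mps: float,
                   visible_to_driver: bool) -> float:
    """Return a 0.0-1.0 risk score based on time-to-collision (TTC).

    `closing_speed_mps` is positive when the gap is shrinking. Obstacles the
    driver is unlikely to see are weighted more heavily (a design assumption).
    """
    if closing_speed_mps <= 0.0:
        return 0.0                               # gap is constant or growing
    ttc = distance_m / closing_speed_mps
    risk = max(0.0, min(1.0, 1.0 - ttc / 4.0))   # 4 s horizon, placeholder
    return min(1.0, risk * (1.5 if not visible_to_driver else 1.0))

def assist_action(risk: float, warn_at: float = 0.5, brake_at: float = 0.8) -> str:
    """Map the risk score to the assistance described above."""
    if risk >= brake_at:
        return "forced_deceleration_and_avoidance_steering"
    if risk >= warn_at:
        return "warn_driver_via_speaker_and_display"
    return "no_action"
```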
 At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared light. For example, the microcomputer 12051 can recognize a pedestrian by determining whether a pedestrian is present in the images captured by the imaging units 12101 to 12104. Such pedestrian recognition is performed, for example, by a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on a series of feature points indicating the contour of an object to determine whether or not it is a pedestrian. When the microcomputer 12051 determines that a pedestrian is present in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the audio/image output unit 12052 controls the display unit 12062 so that a rectangular contour line for emphasis is superimposed on the recognized pedestrian. The audio/image output unit 12052 may also control the display unit 12062 so that an icon or the like indicating a pedestrian is displayed at a desired position.
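 For illustration, the sketch below uses OpenCV's stock HOG + linear-SVM people detector and draws the emphasizing rectangles; this is only a stand-in for the feature-point extraction and pattern-matching procedure described above, and in practice a detector trained on infrared imagery would be needed.

```python
import cv2

def detect_and_outline_pedestrians(frame_bgr):
    """Detect pedestrians and draw emphasizing rectangles on a copy of the frame.

    Uses the default HOG people detector shipped with OpenCV purely as an
    illustrative substitute for the recognition procedure described above.
    """
    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
    rects, _weights = hog.detectMultiScale(frame_bgr, winStride=(8, 8),
                                           padding=(8, 8), scale=1.05)
    out = frame_bgr.copy()
    for (x, y, w, h) in rects:
        cv2.rectangle(out, (x, y), (x + w, y + h), (0, 0, 255), 2)
    return out, len(rects)
```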
 An example of a vehicle control system to which the technology according to the present disclosure can be applied has been described above. Among the configurations described above, the technology according to the present disclosure can be applied to the imaging unit 12031. For example, the photodetection device 1 as an IR imaging sensor having the configuration described with reference to FIGS. 5 and 6 can be applied as the imaging unit 12031. Alternatively, the photodetection device 1 as a direct ToF sensor having the configuration described with reference to FIGS. 9 and 10 can be applied as the imaging unit 12031. By applying the technology according to the present disclosure to the imaging unit 12031, it is possible to obtain captured images that are easier to view and to acquire distance information while reducing the size. Furthermore, the obtained captured images and distance information can be used to reduce driver fatigue and to improve the safety of the driver and the vehicle.
 In the examples described above, a photodetection device in which the first conductivity type is P-type, the second conductivity type is N-type, and electrons are used as the signal charge has been described; however, the present disclosure can also be applied to a photodetection device in which holes are used as the signal charge. That is, the first conductivity type may be N-type and the second conductivity type may be P-type, and each of the semiconductor regions described above may be configured as a semiconductor region of the opposite conductivity type.
 Furthermore, the present disclosure is not limited to application to a photodetection device that detects the distribution of the amount of incident visible light and captures it as an image; it can also be applied to a photodetection device that captures the distribution of the amount of incident infrared light, X-rays, particles, or the like as an image, and, in a broader sense, to photodetection devices (physical quantity distribution detection devices) in general, such as fingerprint detection sensors that detect the distribution of other physical quantities such as pressure and capacitance and capture them as images.
 The embodiments of the present disclosure are not limited to the embodiments described above, and various modifications can be made without departing from the gist of the technology of the present disclosure.
 For example, a form in which all or some of the configurations applied to each of the pixels 200 described above are combined as appropriate can be adopted. Further, in the embodiments described above, an example in which the photoelectric conversion unit is formed in the SiGe layer 11 has been described; however, in order to further increase the absorption coefficient in the infrared region, a germanium layer may be employed instead of the SiGe layer 11.
 Note that the effects described in this specification are merely examples and are not limiting, and there may be effects other than those described in this specification.
 Note that the technology of the present disclosure can take the following configurations.
(1)
 A photodetection device including:
 a silicon germanium layer in which a photoelectric conversion unit is formed;
 an inter-pixel light-shielding film formed on a first surface side, which is the light incident surface side, of the silicon germanium layer; and
 a MOS transistor formed on a second surface side of the silicon germanium layer opposite to the first surface,
 in which the silicon germanium layer is formed with a constant germanium concentration.
(2)
 The photodetection device according to (1), in which the variation in germanium concentration in the silicon germanium layer is smaller than 10%.
(3)
 The photodetection device according to (1) or (2), further including a silicon layer on the upper surface of the second surface side of the silicon germanium layer, in which the MOS transistor is formed on the silicon layer.
(4)
 The photodetection device according to any one of (1) to (3), configured by stacking a first semiconductor substrate including the silicon germanium layer and a second semiconductor substrate on which at least a logic circuit is formed, in which the MOS transistor is formed on the second semiconductor substrate.
(5)
 The photodetection device according to any one of (1) to (4), which is a gate-type ToF sensor including, in each pixel, the photoelectric conversion unit, a transfer transistor, an amplification transistor, a selection transistor, a reset transistor, and a charge discharge transistor.
(6)
 The photodetection device according to any one of (1) to (5), which is an imaging sensor including, in each pixel, the photoelectric conversion unit, a transfer transistor, an amplification transistor, a selection transistor, and a reset transistor.
(7)
 The photodetection device according to any one of (1) to (5), which is a CAPD-type ToF sensor including, in each pixel, the photoelectric conversion unit, a voltage application unit that applies a predetermined voltage to the photoelectric conversion unit, and a charge detection unit that detects the charge generated in the photoelectric conversion unit.
(8)
 The photodetection device according to any one of (1) to (5), which is a direct ToF sensor including, in each pixel, an avalanche photodiode including an avalanche multiplication region as the photoelectric conversion unit.
(9)
 The photodetection device according to any one of (1) to (8), further including an element isolation portion that separates the photoelectric conversion unit on a pixel-by-pixel basis.
(10)
 The photodetection device according to (9), in which the element isolation portion has a tapered cross-sectional shape in which the groove width becomes narrower from the back surface side toward the front surface side of the silicon germanium layer.
(11)
 The photodetection device according to (9), in which the element isolation portion has an inversely tapered cross-sectional shape in which the groove width becomes narrower from the front surface side toward the back surface side of the silicon germanium layer.
(12)
 The photodetection device according to any one of (9) to (11), in which the element isolation portion is formed so as to penetrate the silicon germanium layer.
(13)
 The photodetection device according to any one of (9) to (11), in which the element isolation portion does not penetrate the silicon germanium layer and is formed in a part of the silicon germanium layer in the depth direction.
(14)
 The photodetection device according to any one of (1) to (13), further including a diffraction structure on the first surface side of the silicon germanium layer.
(15)
 The photodetection device according to (14), further including an element isolation portion that separates the photoelectric conversion unit on a pixel-by-pixel basis, in which the diffraction structure is formed from the first surface side of the silicon germanium layer by the same method as the element isolation portion.
(16)
 The photodetection device according to any one of (1) to (13), further including a diffraction structure on the second surface side of the silicon germanium layer.
(17)
 The photodetection device according to (16), further including an element isolation portion that separates the photoelectric conversion unit on a pixel-by-pixel basis, in which the diffraction structure is formed from the second surface side of the silicon germanium layer by the same method as the element isolation portion.
(18)
 The photodetection device according to any one of (14) to (17), in which the diffraction structure is formed by STI.
(19)
 A method for manufacturing a photodetection device, including:
 forming a photoelectric conversion unit in a silicon germanium layer;
 forming an inter-pixel light-shielding film on a first surface side, which is the light incident surface side, of the silicon germanium layer; and
 forming a MOS transistor on a second surface side of the silicon germanium layer opposite to the first surface,
 in which the silicon germanium layer is formed with a constant germanium concentration.
(20)
 An electronic apparatus including a photodetection device, the photodetection device including:
 a silicon germanium layer in which a photoelectric conversion unit is formed;
 an inter-pixel light-shielding film formed on a first surface side, which is the light incident surface side, of the silicon germanium layer; and
 a MOS transistor formed on a second surface side of the silicon germanium layer opposite to the first surface,
 in which the silicon germanium layer is formed with a constant germanium concentration.
 1 photodetection device, 11 SiGe layer, 12 photodiode, 13 element isolation portion, 14 anti-reflection film, 15 light-shielding film, 16 lens layer, 17 on-chip lens, 21 Si layer, 31 gate insulating film, 32 gate electrode, 33 sidewall, 43 insulating layer, 51 semiconductor substrate, 61 wiring layer, 66 bonding surface, 111 silicon substrate, 112 SiGe stress relaxation layer, 200 pixel, 321 transfer transistor, 323 reset transistor, 324 amplification transistor, 325 selection transistor, 331 P-well, 481 diffraction structure, 491 diffraction structure, 521 pixel array unit, Tr MOS transistor, AMP amplification transistor, FD floating diffusion region, FDG switching transistor, FDL additional capacitance, OFG charge discharge transistor, RST reset transistor, SEL selection transistor, TRG transfer transistor

Claims (20)

  1.  A photodetection device comprising:
     a silicon germanium layer in which a photoelectric conversion unit is formed;
     an inter-pixel light-shielding film formed on a first surface side, which is a light incident surface side, of the silicon germanium layer; and
     a MOS transistor formed on a second surface side of the silicon germanium layer opposite to the first surface,
     wherein the silicon germanium layer is formed with a constant germanium concentration.
  2.  The photodetection device according to claim 1, wherein a variation in germanium concentration in the silicon germanium layer is smaller than 10%.
  3.  The photodetection device according to claim 1, further comprising a silicon layer on an upper surface of the second surface side of the silicon germanium layer, wherein the MOS transistor is formed on the silicon layer.
  4.  The photodetection device according to claim 1, configured by stacking a first semiconductor substrate including the silicon germanium layer and a second semiconductor substrate on which at least a logic circuit is formed, wherein the MOS transistor is formed on the second semiconductor substrate.
  5.  The photodetection device according to claim 1, which is a gate-type ToF sensor comprising, in each pixel, the photoelectric conversion unit, a transfer transistor, an amplification transistor, a selection transistor, a reset transistor, and a charge discharge transistor.
  6.  The photodetection device according to claim 1, which is an imaging sensor comprising, in each pixel, the photoelectric conversion unit, a transfer transistor, an amplification transistor, a selection transistor, and a reset transistor.
  7.  The photodetection device according to claim 1, which is a CAPD-type ToF sensor comprising, in each pixel, the photoelectric conversion unit, a voltage application unit that applies a predetermined voltage to the photoelectric conversion unit, and a charge detection unit that detects a charge generated in the photoelectric conversion unit.
  8.  The photodetection device according to claim 1, which is a direct ToF sensor comprising, in each pixel, an avalanche photodiode including an avalanche multiplication region as the photoelectric conversion unit.
  9.  The photodetection device according to claim 1, further comprising an element isolation portion that separates the photoelectric conversion unit on a pixel-by-pixel basis.
  10.  The photodetection device according to claim 9, wherein the element isolation portion has a tapered cross-sectional shape in which a groove width becomes narrower from a back surface side toward a front surface side of the silicon germanium layer.
  11.  The photodetection device according to claim 9, wherein the element isolation portion has an inversely tapered cross-sectional shape in which a groove width becomes narrower from a front surface side toward a back surface side of the silicon germanium layer.
  12.  The photodetection device according to claim 9, wherein the element isolation portion is formed so as to penetrate the silicon germanium layer.
  13.  The photodetection device according to claim 9, wherein the element isolation portion does not penetrate the silicon germanium layer and is formed in a part of the silicon germanium layer in a depth direction.
  14.  The photodetection device according to claim 1, further comprising a diffraction structure on the first surface side of the silicon germanium layer.
  15.  The photodetection device according to claim 14, further comprising an element isolation portion that separates the photoelectric conversion unit on a pixel-by-pixel basis, wherein the diffraction structure is formed from the first surface side of the silicon germanium layer by the same method as the element isolation portion.
  16.  The photodetection device according to claim 1, further comprising a diffraction structure on the second surface side of the silicon germanium layer.
  17.  The photodetection device according to claim 16, further comprising an element isolation portion that separates the photoelectric conversion unit on a pixel-by-pixel basis, wherein the diffraction structure is formed from the second surface side of the silicon germanium layer by the same method as the element isolation portion.
  18.  The photodetection device according to claim 14, wherein the diffraction structure is formed by STI.
  19.  A method for manufacturing a photodetection device, comprising:
     forming a photoelectric conversion unit in a silicon germanium layer;
     forming an inter-pixel light-shielding film on a first surface side, which is a light incident surface side, of the silicon germanium layer; and
     forming a MOS transistor on a second surface side of the silicon germanium layer opposite to the first surface,
     wherein the silicon germanium layer is formed with a constant germanium concentration.
  20.  An electronic apparatus comprising a photodetection device, the photodetection device including:
     a silicon germanium layer in which a photoelectric conversion unit is formed;
     an inter-pixel light-shielding film formed on a first surface side, which is a light incident surface side, of the silicon germanium layer; and
     a MOS transistor formed on a second surface side of the silicon germanium layer opposite to the first surface,
     wherein the silicon germanium layer is formed with a constant germanium concentration.
PCT/JP2022/034521 2022-09-15 2022-09-15 Photodetection device, method for producing same, and electronic apparatus WO2024057470A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/034521 WO2024057470A1 (en) 2022-09-15 2022-09-15 Photodetection device, method for producing same, and electronic apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/034521 WO2024057470A1 (en) 2022-09-15 2022-09-15 Photodetection device, method for producing same, and electronic apparatus

Publications (1)

Publication Number Publication Date
WO2024057470A1 true WO2024057470A1 (en) 2024-03-21

Family

ID=90274547

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/034521 WO2024057470A1 (en) 2022-09-15 2022-09-15 Photodetection device, method for producing same, and electronic apparatus

Country Status (1)

Country Link
WO (1) WO2024057470A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2013033864A (en) * 2011-08-02 2013-02-14 Sony Corp Solid state imaging device manufacturing method, solid state imaging element and electronic apparatus
US20150279894A1 (en) * 2014-03-28 2015-10-01 Taiwan Semiconductor Manufacturing Company, Ltd. CMOS Image Sensor with Epitaxial Passivation Layer
JP2016001633A (en) * 2014-06-11 2016-01-07 ソニー株式会社 Solid state image sensor and electronic equipment
JP2020516200A (en) * 2017-04-04 2020-05-28 アーティラックス・インコーポレイテッド High speed light sensing device III
JP2021158418A (en) * 2020-03-25 2021-10-07 凸版印刷株式会社 Solid-state imaging device and imaging system
WO2022014365A1 (en) * 2020-07-17 2022-01-20 ソニーセミコンダクタソリューションズ株式会社 Light-receiving element, manufacturing method therefor, and electronic device
