WO2024095751A1 - Light-detecting device and electronic apparatus - Google Patents


Info

Publication number
WO2024095751A1
Authority
WO
WIPO (PCT)
Prior art keywords
aspect ratio
dimension
along
semiconductor layer
detection device
Prior art date
Application number
PCT/JP2023/037431
Other languages
French (fr)
Japanese (ja)
Inventor
信達 荒木
Original Assignee
Sony Semiconductor Solutions Corporation
Priority date
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corporation
Publication of WO2024095751A1

Classifications

    • H ELECTRICITY
    • H01 ELECTRIC ELEMENTS
    • H01L SEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L21/00 Processes or apparatus adapted for the manufacture or treatment of semiconductor or solid state devices or of parts thereof
    • H01L21/02 Manufacture or treatment of semiconductor devices or of parts thereof
    • H01L21/04 Manufacture or treatment of semiconductor devices or of parts thereof the devices having potential barriers, e.g. a PN junction, depletion layer or carrier concentration layer
    • H01L21/18 Manufacture or treatment of semiconductor devices or of parts thereof the devices having potential barriers, e.g. a PN junction, depletion layer or carrier concentration layer the devices having semiconductor bodies comprising elements of Group IV of the Periodic Table or AIIIBV compounds with or without impurities, e.g. doping materials
    • H01L21/30 Treatment of semiconductor bodies using processes or apparatus not provided for in groups H01L21/20 - H01L21/26
    • H01L21/31 Treatment of semiconductor bodies using processes or apparatus not provided for in groups H01L21/20 - H01L21/26 to form insulating layers thereon, e.g. for masking or by using photolithographic techniques; After treatment of these layers; Selection of materials for these layers
    • H01L21/3205 Deposition of non-insulating-, e.g. conductive- or resistive-, layers on insulating layers; After-treatment of these layers
    • H01L21/70 Manufacture or treatment of devices consisting of a plurality of solid state components formed in or on a common substrate or of parts thereof; Manufacture of integrated circuit devices or of parts thereof
    • H01L21/71 Manufacture of specific parts of devices defined in group H01L21/70
    • H01L21/768 Applying interconnections to be used for carrying current between separate components within a device comprising conductors and dielectrics
    • H01L23/00 Details of semiconductor or other solid state devices
    • H01L23/52 Arrangements for conducting electric current within the device in operation from one component to another, i.e. interconnections, e.g. wires, lead frames
    • H01L23/522 Arrangements for conducting electric current within the device in operation from one component to another, i.e. interconnections, e.g. wires, lead frames including external interconnections consisting of a multilayer structure of conductive and insulating layers inseparably formed on the semiconductor body
    • H01L23/532 Arrangements for conducting electric current within the device in operation from one component to another, i.e. interconnections, e.g. wires, lead frames including external interconnections consisting of a multilayer structure of conductive and insulating layers inseparably formed on the semiconductor body characterised by the materials
    • H01L27/00 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L27/14 Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L27/144 Devices controlled by radiation
    • H01L27/146 Imager structures

Definitions

  • This technology (the technology disclosed herein) relates to photodetection devices and electronic devices, and in particular to stacked photodetection devices and electronic devices.
  • Patent Document 1 describes a semiconductor device (e.g., a solid-state imaging device) that has three substrate layers.
  • The semiconductor device has connection wiring in the form of a through conductor that penetrates, in the thickness direction, the substrate located at the center of the three substrate layers.
  • The aim of this technology is to provide a photodetection device and an electronic device having a through conductor whose electrical characteristics are kept from degrading even when its aspect ratio is increased in step with miniaturization and the like.
  • The photodetection device includes a layered structure in which a first semiconductor layer having a photoelectric conversion region, a first wiring layer, a second semiconductor layer, a second wiring layer, and a third semiconductor layer are layered in this order, and is provided with a through conductor extending along the thickness direction. The through conductor has a first portion and a second portion that overlap in a plan view; the first portion penetrates the second semiconductor layer along the thickness direction, and its first end, which is one end in the extension direction, protrudes into the first wiring layer; the second portion is provided in the first wiring layer, and its second end, which is one end in the extension direction, is directly connected to the first end.
  • An electronic device includes the above-mentioned light detection device and an optical system that focuses image light from a subject on the above-mentioned light detection device.
  • FIG. 1 is a chip layout diagram showing a configuration example of a photodetection device according to a first embodiment of the present technology.
  • FIG. 2 is a block diagram showing a configuration example of the photodetection device according to the first embodiment of the present technology.
  • FIG. 3 is an equivalent circuit diagram of a pixel of the photodetection device according to the first embodiment of the present technology.
  • FIG. 4A is a longitudinal sectional view of a pixel included in the photodetection device according to the first embodiment of the present technology.
  • FIG. 4B is a partially enlarged view showing a main part of FIG. 4A.
  • FIG. 5A is a cross-sectional view illustrating a step of a manufacturing method for the photodetection device according to the first embodiment of the present technology.
  • FIG. 5B is a cross-sectional view showing a process subsequent to FIG. 5A.
  • FIG. 5C is a cross-sectional view showing a process subsequent to FIG. 5B.
  • FIG. 5D is a cross-sectional view showing a process subsequent to FIG. 5C.
  • FIG. 5E is a cross-sectional view showing a process subsequent to FIG. 5D.
  • FIG. 6 is a vertical cross-sectional view of a pixel showing an example of a through conductor that does not have a multi-stage structure.
  • FIG. 7 is a longitudinal sectional view of a through conductor included in a photodetection device according to a first modified example of the first embodiment of the present technology.
  • FIG. 8 is a longitudinal sectional view of a through conductor included in a photodetection device according to a second modified example of the first embodiment of the present technology.
  • FIG. 9 is a longitudinal sectional view of a through conductor included in a photodetection device according to a third modified example of the first embodiment of the present technology.
  • FIG. 10 is a longitudinal sectional view of a pixel included in a photodetection device according to a fourth modified example of the first embodiment of the present technology.
  • FIG. 11 is a block diagram showing an example of a schematic configuration of an electronic device.
  • FIG. 12 is a block diagram showing an example of a schematic configuration of a vehicle control system.
  • FIG. 13 is an explanatory diagram showing an example of the installation positions of an outside-vehicle information detection unit and an imaging unit.
  • FIG. 14 is a diagram illustrating an example of a schematic configuration of an endoscopic surgery system.
  • FIG. 15 is a block diagram showing an example of the functional configuration of a camera head and a CCU.
  • As shown in FIG. 1, the photodetection device 1 according to the first embodiment of the present technology is mainly composed of a semiconductor chip 2 having a rectangular two-dimensional planar shape in a plan view; that is, the photodetection device 1 is mounted on the semiconductor chip 2. As shown in FIG. 11, the photodetection device 1 takes in image light (incident light 106) from a subject via an optical system (optical lens) 102, converts the amount of incident light 106 imaged on the imaging surface into an electrical signal on a pixel-by-pixel basis, and outputs it as a pixel signal.
  • The semiconductor chip 2 on which the photodetector 1 is mounted has a square pixel region 2A located in the center of a two-dimensional plane including the X and Y directions, which intersect with each other, and a peripheral region 2B located outside the pixel region 2A so as to surround it.
  • The pixel region 2A is a light-receiving surface that receives light collected by the optical system 102 shown in FIG. 11, for example.
  • A plurality of pixels 3 are arranged in a matrix on a two-dimensional plane including the X direction and the Y direction.
  • The pixels 3 are repeatedly arranged in each of the X direction and the Y direction, which intersect with each other on the two-dimensional plane.
  • In this example, the X direction and the Y direction are orthogonal to each other.
  • The direction orthogonal to both the X direction and the Y direction is the Z direction (thickness direction, stacking direction).
  • A direction perpendicular to the Z direction is a horizontal direction.
  • a plurality of bonding pads 14 are arranged in the peripheral region 2B.
  • Each of the plurality of bonding pads 14 is arranged, for example, along each of the four sides of the semiconductor chip 2 in a two-dimensional plane.
  • Each of the plurality of bonding pads 14 is an input/output terminal used when electrically connecting the semiconductor chip 2 to an external device.
  • The semiconductor chip 2 includes a logic circuit 13.
  • The logic circuit 13 includes a vertical drive circuit 4, a column signal processing circuit 5, a horizontal drive circuit 6, an output circuit 7, and a control circuit 8.
  • The logic circuit 13 is configured as a CMOS (Complementary MOS) circuit having, as field-effect transistors, for example an n-channel MOSFET (Metal Oxide Semiconductor Field Effect Transistor) and a p-channel MOSFET.
  • The vertical drive circuit 4 is composed of, for example, a shift register.
  • The vertical drive circuit 4 sequentially selects the desired pixel drive lines 10, supplies pulses for driving the pixels 3 to the selected pixel drive lines 10, and drives the pixels 3 row by row. That is, the vertical drive circuit 4 sequentially selects and scans the pixels 3 in the pixel region 2A vertically, row by row, and supplies pixel signals, based on the signal charges generated by the photoelectric conversion element of each pixel 3 according to the amount of light received, to the column signal processing circuits 5 via the vertical signal lines 11.
  • A column signal processing circuit 5 is arranged, for example, for each column of pixels 3, and performs signal processing such as noise removal, for each pixel column, on the signals output from one row of pixels 3.
  • The column signal processing circuit 5 performs signal processing such as CDS (Correlated Double Sampling), which removes pixel-specific fixed-pattern noise, and AD (Analog-Digital) conversion.
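The fixed-pattern-noise removal by CDS mentioned above can be sketched as a simple subtraction of two samples taken from the same pixel. This is a minimal toy model, not the patent's circuit; all values are invented example numbers in arbitrary digital units.

```python
# Toy sketch of correlated double sampling (CDS): the column circuit samples
# a pixel's reset level and its signal level, and subtracts them so that any
# pixel-specific fixed offset cancels out. Values are hypothetical.

def cds(reset_sample: float, signal_sample: float) -> float:
    """Return the offset-free signal: (signal level) - (reset level)."""
    return signal_sample - reset_sample

# A pixel with a fixed-pattern offset of 12 units:
offset = 12.0
true_signal = 150.0
reset_level = 40.0 + offset                # sampled right after reset
signal_level = reset_level + true_signal   # sampled after charge transfer

print(cds(reset_level, signal_level))      # the fixed offset cancels: 150.0
```

Because both samples carry the same per-pixel offset, the difference depends only on the transferred signal charge, which is why CDS suppresses fixed-pattern noise.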
  • A horizontal selection switch (not shown) is provided at the output stage of each column signal processing circuit 5, connected between that circuit and the horizontal signal line 12.
  • The horizontal drive circuit 6 is composed of, for example, a shift register.
  • The horizontal drive circuit 6 sequentially outputs horizontal scanning pulses to the column signal processing circuits 5, thereby selecting each of them in turn and causing each to output its processed pixel signal to the horizontal signal line 12.
  • The output circuit 7 processes and outputs the pixel signals sequentially supplied from each column signal processing circuit 5 through the horizontal signal line 12.
  • The signal processing may include buffering, black-level adjustment, column variation correction, and various types of digital signal processing.
  • The control circuit 8 generates clock signals and control signals that serve as the basis for the operation of the vertical drive circuit 4, the column signal processing circuits 5, the horizontal drive circuit 6, and so on, based on a vertical synchronization signal, a horizontal synchronization signal, and a master clock signal. The control circuit 8 then outputs the generated clock signals and control signals to the vertical drive circuit 4, the column signal processing circuits 5, the horizontal drive circuit 6, and so on.
  • <Pixels> FIG. 3 is an equivalent circuit diagram showing an example of the configuration of the pixel 3.
  • The pixel 3 includes a photoelectric conversion element PD, a charge accumulation region (floating diffusion) FD that accumulates (holds) the signal charge photoelectrically converted by the photoelectric conversion element PD, and a transfer transistor TR that transfers that signal charge to the charge accumulation region FD.
  • The pixel 3 also includes a readout circuit 15 electrically connected to the charge accumulation region FD.
  • The photoelectric conversion element PD generates a signal charge according to the amount of light received.
  • The photoelectric conversion element PD also temporarily accumulates (holds) the generated signal charge.
  • The cathode side of the photoelectric conversion element PD is electrically connected to the source region of the transfer transistor TR, and the anode side is electrically connected to a reference potential line (e.g., ground).
  • A photodiode is used as the photoelectric conversion element PD.
  • The drain region of the transfer transistor TR is electrically connected to the charge storage region FD.
  • The gate electrode of the transfer transistor TR is electrically connected to the transfer-transistor drive line of the pixel drive lines 10 (see FIG. 2).
  • The charge storage region FD temporarily stores and holds the signal charge transferred from the photoelectric conversion element PD via the transfer transistor TR.
  • The readout circuit 15 reads out the signal charge stored in the charge storage region FD and outputs a pixel signal based on that charge.
  • The readout circuit 15 includes, but is not limited to, an amplification transistor AMP, a selection transistor SEL, and a reset transistor RST as pixel transistors.
  • These transistors are configured as MOSFETs having, for example, a gate insulating film made of a silicon oxide film (SiO2 film), a gate electrode, and a pair of main electrode regions functioning as a source region and a drain region.
  • These transistors may also be MISFETs (Metal Insulator Semiconductor FETs) whose gate insulating film is a silicon nitride film (Si3N4 film) or a laminated film of, for example, a silicon nitride film and a silicon oxide film.
  • The source region of the amplification transistor AMP is electrically connected to the drain region of the selection transistor SEL, and its drain region is electrically connected to the power supply line Vdd and the drain region of the reset transistor RST.
  • The gate electrode of the amplification transistor AMP is electrically connected to the charge storage region FD and the source region of the reset transistor RST.
  • The source region of the selection transistor SEL is electrically connected to the vertical signal line 11 (VSL), and its drain region is electrically connected to the source region of the amplification transistor AMP.
  • The gate electrode of the selection transistor SEL is electrically connected to the selection-transistor drive line of the pixel drive lines 10 (see FIG. 2).
  • The source region of the reset transistor RST is electrically connected to the charge storage region FD and the gate electrode of the amplification transistor AMP, and its drain region is electrically connected to the power supply line Vdd and the drain region of the amplification transistor AMP.
  • The gate electrode of the reset transistor RST is electrically connected to the reset-transistor drive line of the pixel drive lines 10 (see FIG. 2).
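The readout sequence implied by the circuit above (reset FD via RST, transfer the PD charge via TR, buffer FD via AMP, and put the result on the vertical signal line via SEL) can be illustrated with a toy charge-bookkeeping model. This is a sketch for intuition only; the class and its values are hypothetical and not taken from the patent.

```python
# Minimal toy model of the 4-transistor pixel readout sequence: RST clears
# the floating diffusion FD, the reset level is sampled, TR transfers the
# photodiode charge to FD, and the signal level is sampled via AMP/SEL.
# Charge units are arbitrary.

class Pixel:
    def __init__(self) -> None:
        self.pd = 0.0  # charge in the photoelectric conversion element PD
        self.fd = 0.0  # charge in the charge accumulation region FD

    def expose(self, photons: float) -> None:
        self.pd += photons          # PD accumulates signal charge

    def reset(self) -> None:
        self.fd = 0.0               # RST drains FD toward Vdd

    def transfer(self) -> None:
        self.fd += self.pd          # TR moves the PD charge to FD
        self.pd = 0.0

    def read(self) -> float:
        return self.fd              # AMP buffers FD; SEL drives VSL

px = Pixel()
px.expose(100.0)
px.reset()
reset_level = px.read()             # 0.0
px.transfer()
signal_level = px.read()            # 100.0
print(signal_level - reset_level)   # CDS-style difference: 100.0
```

Reading the reset level before the transfer is what makes the later CDS subtraction in the column circuit possible.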
  • The photodetector 1 (semiconductor chip 2) has a laminated structure in which, for example, a light-incident-surface-side laminate 80, a first semiconductor layer 20, a first wiring layer 30, a second semiconductor layer 40, a second wiring layer 50, and a third semiconductor layer 60 are laminated in this order.
  • The photodetector 1 (semiconductor chip 2) also has a through conductor 70 extending along its thickness direction.
  • The first semiconductor layer 20 will be described first.
  • The first semiconductor layer 20 is made of a semiconductor substrate.
  • The first semiconductor layer 20 is made of, but is not limited to, a single-crystal silicon substrate; one surface is a first surface S1 and the other surface is a second surface S2.
  • The second surface S2 may be called the light incident surface or the back surface, and the first surface S1 may be called the element formation surface or the main surface.
  • A plurality of photoelectric conversion regions 20a arranged in the row direction and the column direction are provided in the portion of the first semiconductor layer 20 corresponding to the pixel region 2A.
  • A photoelectric conversion region 20a is provided for each pixel 3. For example, as shown in FIG. 4A, an island-shaped photoelectric conversion region 20a partitioned by an isolation region 20b is provided for each pixel 3.
  • The isolation region 20b has, but is not limited to, a trench structure in which a groove is formed in the thickness direction of the first semiconductor layer 20 and an insulating film is embedded in the formed groove.
  • A trench structure in which both a conductor, such as a metal material or polysilicon, and an insulating material are embedded in the groove may also be used.
  • The photoelectric conversion region 20a includes a semiconductor region of a first conductivity type (e.g., p-type) and a semiconductor region of a second conductivity type (e.g., n-type).
  • The photoelectric conversion element PD shown in FIG. 3 is configured in the photoelectric conversion region 20a. At least a part of the photoelectric conversion region 20a photoelectrically converts incident light to generate a signal charge.
  • A transistor T1 is provided in the photoelectric conversion region 20a.
  • The transistor T1 is, for example, the transfer transistor TR shown in FIG. 3.
  • The photoelectric conversion region 20a is also provided with the charge storage region FD shown in FIG. 3.
  • The number of pixels 3 is not limited to that shown in FIG. 4A.
  • The first wiring layer 30 is a wiring layer provided between the first semiconductor layer 20 and the second semiconductor layer 40, with one surface in contact with the first semiconductor layer 20 and the other surface in contact with the second semiconductor layer 40.
  • The first wiring layer 30 includes an insulating film 31.
  • The insulating film 31 is made of a known insulating material and includes, but is not limited to, for example, a silicon oxide layer.
  • A gate electrode G of the transistor T1 is provided in the insulating film 31 in the portion of the first wiring layer 30 corresponding to the pixel region 2A.
  • The second semiconductor layer 40 is made of a semiconductor substrate.
  • The second semiconductor layer 40 is made of, but is not limited to, for example, a single-crystal silicon substrate; one surface is a third surface S3 and the other surface is a fourth surface S4.
  • The third surface S3 is the surface on the second wiring layer 50 side.
  • The fourth surface S4 is the surface on the first wiring layer 30 side.
  • A transistor T2 is provided in the portion of the second semiconductor layer 40 corresponding to the pixel region 2A.
  • The transistor T2 is, for example, a transistor included in the readout circuit 15 shown in FIG. 3.
  • A transistor included in a readout circuit 15 may be provided for each pixel 3, or a transistor included in a readout circuit 15 shared by a plurality of pixels 3 may be provided.
  • The transistor T2 is provided at a position close to the third surface S3 in the thickness direction of the second semiconductor layer 40.
  • The second wiring layer 50 is a wiring layer provided between the second semiconductor layer 40 and the third semiconductor layer 60, with one surface in contact with the second semiconductor layer 40 and the other surface in contact with the third semiconductor layer 60.
  • The second wiring layer 50 includes an insulating film 51, wiring 52, connection pads 53A and 53B, vias (contacts) 54 (FIG. 5D), and the like.
  • The wiring 52 and the connection pads 53A and 53B are stacked with the insulating film 51 interposed between them, as shown in FIG. 4A.
  • The second wiring layer 50 has a second wiring layer 50A and a second wiring layer 50B along the thickness direction. More specifically, the second wiring layer 50 is obtained by bonding the second wiring layer 50A, laminated on the second semiconductor layer 40, to the second wiring layer 50B, laminated on the third semiconductor layer 60. In this bonding, a connection pad 53A provided in the second wiring layer 50A and facing its surface on the second wiring layer 50B side is bonded to a connection pad 53B provided in the second wiring layer 50B and facing its surface on the second wiring layer 50A side. By bonding the connection pad 53A and the connection pad 53B, the wiring 52 provided in the second wiring layer 50A and the wiring 52 provided in the second wiring layer 50B are electrically connected.
  • A gate electrode G of the transistor T2 is provided in the insulating film 51 in the portion of the second wiring layer 50A corresponding to the pixel region 2A.
  • The insulating film 51 in the portion of the second wiring layer 50B corresponding to the pixel region 2A is provided with, for example, a gate electrode G of the transistor T3 described below.
  • The insulating film 51 is made of a known insulating material and includes, but is not limited to, a silicon oxide layer.
  • The wiring 52 and the connection pads 53A and 53B are made of a metal material. Materials for the wiring 52 and the connection pads 53A and 53B include, but are not limited to, copper (Cu) and aluminum (Al).
  • The third semiconductor layer 60 is made of a semiconductor substrate.
  • The third semiconductor layer 60 is made of, for example, but is not limited to, a single-crystal silicon substrate, and its surface on the second wiring layer 50 side is a fifth surface S5.
  • The fifth surface S5 may also be called the element formation surface or the main surface.
  • A transistor T3 is provided in the portion of the third semiconductor layer 60 corresponding to the pixel region 2A.
  • The transistor T3 is, for example, a transistor included in the logic circuit 13.
  • The light-incident-surface-side laminate 80 has, but is not limited to, a laminated structure in which, for example, a planarization film 81, a color filter 82, and a microlens (on-chip lens) 83 are laminated in this order from the second surface S2 side.
  • The planarization film 81 is made of a known insulating material or a known resin material, and may be made of, for example, but not limited to, silicon oxide.
  • A color filter 82 is provided for each pixel 3 and separates the light incident on the photoelectric conversion region 20a into colors.
  • The color filter 82 is made of, for example, a resin material.
  • A microlens 83 is provided for each pixel 3 and is made of, for example, a resin material.
  • Of the pixel region 2A and the peripheral region 2B shown in FIG. 1, the through conductor 70 is provided in the pixel region 2A. More specifically, the through conductor 70 is provided, for example, for each pixel 3. As shown in FIGS. 4A and 4B, the through conductor 70 is a vertical wiring extending along the thickness direction of the photodetector 1 and penetrates the second semiconductor layer 40 along the thickness direction.
  • The through conductor 70 has a first portion 71 and a second portion 72 that overlap each other in a plan view. That is, the through conductor 70 has a multi-stage configuration (a two-stage configuration in this embodiment) consisting of the first portion 71 and the second portion 72. Note that when a barrier metal BM is provided as shown in FIG. 4B, the through conductor 70 also includes the barrier metal BM.
  • The first portion 71 penetrates the second semiconductor layer 40 along the thickness direction, and a first end 71a, which is one end in the extending direction, protrudes into the first wiring layer 30.
  • The second portion 72 is provided in the first wiring layer 30, and a second end 72a, which is one end in the extending direction, is directly connected to the first end 71a.
  • The through conductor 70 is a through-silicon via (TSV). Note that the through conductor 70 and the first semiconductor layer 20 are insulated from each other by a known insulating material.
  • A third end 71b, which is the other end in the extension direction of the first portion 71, protrudes into the second wiring layer 50.
  • A fourth end 72b, which is the other end in the extension direction of the second portion 72, is in the vicinity of the surface (first surface S1) of the first semiconductor layer 20 on the first wiring layer 30 side.
  • The fourth end 72b being in the vicinity of the first surface S1 includes the fourth end 72b being connected to the first surface S1, being buried in the first semiconductor layer 20 from the first surface S1, being connected to an electrode member provided on the first surface S1, and the like.
  • The fourth end 72b may be connected to a diffusion region, such as that of the transistor T1, or to the charge storage region FD.
  • The electrode member is, for example, a gate electrode G of a transistor, an electrode E (described later) electrically connected to multiple charge storage regions FD, or the like.
  • The third end 71b is connected to the wiring 52, which is a horizontal wiring extending along the horizontal direction, and the fourth end 72b is embedded in the first semiconductor layer 20 from the first surface S1.
  • The first end 71a and the second end 72a are joined within the first wiring layer 30.
  • The joining position between the first end 71a and the second end 72a is at a position 5 nm to 100 nm in the thickness direction from the surface (fourth surface S4) of the second semiconductor layer 40 on the first wiring layer 30 side.
  • The distance in the thickness direction between this joining position and the fourth surface S4 is called the distance d1.
  • The first portion 71 is tapered in the width direction. More specifically, the first portion 71 tapers in the width direction toward the first semiconductor layer 20.
  • The dimension w71b along the width direction of the third end 71b is greater than the dimension w71a along the width direction of the first end 71a.
  • The second portion 72 is also tapered in the width direction. More specifically, the second portion 72 tapers in the width direction toward the first semiconductor layer 20.
  • The dimension w72a along the width direction of the second end 72a is greater than the dimension w72b along the width direction of the fourth end 72b.
  • The difference in the dimensions along the width direction between the first end 71a and the second end 72a, which are directly connected to each other, is set to 20 nm or more to ensure an overlap margin.
  • This difference in dimensions may be, for example, the difference between the diameter of the end face of the first end 71a and the diameter of the end face of the second end 72a.
  • The dimension w72a along the width direction of the second end 72a is set to be 20 nm or more larger than the dimension w71a along the width direction of the first end 71a (w72a − w71a ≥ 20 nm).
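The two numeric constraints stated above (the 20 nm overlap margin between the joined ends 71a and 72a, and the 5 nm to 100 nm depth d1 of the joint below the fourth surface S4) can be expressed as simple checks. This is an illustrative sketch with invented example dimensions; the patent does not give these concrete values.

```python
# Sketch of the dimensional rules stated above, with made-up example values:
# the second end 72a must be at least 20 nm wider than the first end 71a
# (overlap margin for alignment), and the 71a/72a joint must sit 5-100 nm
# below the fourth surface S4 of the second semiconductor layer 40.

def overlap_margin_ok(w71a_nm: float, w72a_nm: float) -> bool:
    """Check w72a - w71a >= 20 nm."""
    return (w72a_nm - w71a_nm) >= 20.0

def joint_depth_ok(d1_nm: float) -> bool:
    """Check 5 nm <= d1 <= 100 nm."""
    return 5.0 <= d1_nm <= 100.0

# Hypothetical geometry:
print(overlap_margin_ok(w71a_nm=80.0, w72a_nm=110.0))  # True (30 nm margin)
print(joint_depth_ok(d1_nm=50.0))                      # True
print(overlap_margin_ok(w71a_nm=80.0, w72a_nm=90.0))   # False (only 10 nm)
```

The margin check captures why the wider second end is needed: even if the two lithography steps misalign slightly, the narrower first end still lands entirely on the second end.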
  • the dimension h71 along the extension direction from the third end 71b to the first end 71a of the first portion 71 and the dimension h72 along the extension direction from the second end 72a to the fourth end 72b of the second portion 72 may be the same dimension or different dimensions. Since the first portion 71 has the role of penetrating the second semiconductor layer 40, the dimension h71 along the extension direction is set to be larger than the thickness of the second semiconductor layer 40. The thicker the second semiconductor layer 40, the larger the dimension h71 must be. In this embodiment, there is a demand for a larger dimension h71, so the dimension h71 is set to be larger than the dimension h72.
  • the first aspect ratio, which is the aspect ratio of the first portion 71, can be obtained by dividing the dimension h71 along the extension direction of the first portion 71 by the dimension along the width direction of the first portion 71. More specifically, the first aspect ratio can be obtained by dividing the dimension h71 by the dimension w71b along the width direction of the third end 71b.
  • the third end 71b is the end of the first portion 71 closest to the third semiconductor layer 60, and is the end on the side where the formation of a hole 71h, described below, into which the first portion 71 is embedded, begins.
  • the third end 71b also has the largest dimension among the dimensions along the width direction of the first portion 71.
  • the second aspect ratio, which is the aspect ratio of the second portion 72, can be obtained by dividing the dimension h72 along the extension direction of the second portion 72 by the dimension along the width direction of the second portion 72. More specifically, the second aspect ratio can be obtained by dividing the dimension h72 by the dimension w72a along the width direction of the second end 72a, which is the largest dimension among the dimensions along the width direction of the second portion 72.
  • the third aspect ratio, which is the aspect ratio of the through conductor 70, can be obtained by dividing the dimension h70 along the extension direction of the through conductor 70 by the dimension along the width direction of the first portion 71. More specifically, the third aspect ratio can be obtained by dividing the dimension h70 by the dimension w71b along the width direction of the third end 71b. That is, the third aspect ratio is obtained using the dimension w71b used when obtaining the above-mentioned first aspect ratio.
  • the reason for using the dimension along the width direction of the first portion 71, rather than that of the second portion 72, when obtaining the third aspect ratio is that, for example, since the first portion 71 is a portion that penetrates the second semiconductor layer 40, there is a high demand for controlling its dimension in the width direction.
  • the dimension h70 is the dimension along the extension direction from the third end 71b to the fourth end 72b.
  • the third aspect ratio of the through conductor 70 may be, for example, 10 or more, 15 or more, 20 or more, 25 or more, or 30 or more.
  • the third aspect ratio may be 15 or less, 20 or less, 25 or less, 30 or less, or 35 or less.
  • the first aspect ratio of the first portion 71 and the second aspect ratio of the second portion 72 are both different from the third aspect ratio of the through conductor 70. More specifically, both the first aspect ratio and the second aspect ratio are set lower than the third aspect ratio. In general, lowering the aspect ratio of a hole improves the coverage of the barrier metal BM at the bottom of the hole.
  • the first portion 71 and the second portion 72 having such aspect ratios realize a through conductor 70 with a high aspect ratio.
  • Each of the first aspect ratio and the second aspect ratio may be, for example, 25 or less, or, for example, 16 or less. In addition, each of the first aspect ratio and the second aspect ratio may be, for example, 10 or more.
  • the first aspect ratio and the second aspect ratio may be the same or different.
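The three aspect-ratio definitions above reduce to simple ratios of dimensions. The following sketch is illustrative only and not from the patent: the dimension values are hypothetical, while the formulas (h71/w71b, h72/w72a, h70/w71b) follow the definitions in the text.

```python
def aspect_ratio(height_nm: float, width_nm: float) -> float:
    """Dimension along the extension direction divided by the width dimension."""
    return height_nm / width_nm

# Hypothetical dimensions (nm) chosen only to satisfy the stated ranges.
h71, w71b = 3000.0, 150.0   # first portion: h71 over widest end (third end 71b)
h72, w72a = 1500.0, 120.0   # second portion: h72 over widest end (second end 72a)
h70 = h71 + h72             # through conductor spans third end 71b to fourth end 72b

first  = aspect_ratio(h71, w71b)   # 20.0
second = aspect_ratio(h72, w72a)   # 12.5
third  = aspect_ratio(h70, w71b)   # 30.0 (uses w71b, as for the first aspect ratio)

# Both stage ratios are lower than the through conductor's ratio, which is
# what allows a high-aspect-ratio conductor to be built in two stages.
assert first < third and second < third
```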
  • the first portion 71 and the second portion 72 are preferably made of a conductive material with good embeddability.
  • conductive materials include tungsten (W), copper (Cu), aluminum (Al), aluminum-copper alloy, cobalt (Co), gold (Au), nickel (Ni), titanium (Ti), titanium nitride, molybdenum (Mo), tantalum (Ta), tantalum nitride, platinum (Pt), and ruthenium (Ru).
  • Next, a method for manufacturing the photodetector 1 will be described with reference to FIGS. 5A to 5E. Note that in FIGS. 5A to 5E, illustration of the gate insulating film of the transistor and the like is omitted.
  • a semiconductor substrate including a first semiconductor layer 20 in which elements such as a transistor T1, a photoelectric conversion element PD (not shown) and a diffusion region such as a charge storage region FD are formed is prepared. Electrode members such as a gate electrode G and an electrode E of the transistor T1 are provided on the first surface S1 side of the first semiconductor layer 20.
  • an insulating film m1 is laminated on the exposed surface on the first surface S1 side.
  • a hole 72h for embedding the second portion 72 is formed by using known photolithography and etching techniques at positions where the insulating film m1 overlaps, in a plan view, the electrode member and the portion of the first semiconductor layer 20 to which the second portion 72 is to be connected.
  • a material constituting the second portion 72 is embedded in the hole 72h by using a known film formation technique.
  • polishing is performed by a chemical mechanical polishing (CMP) method to remove excess portions of the material constituting the second portion 72 and flatten the exposed surface, thereby forming the second portion 72.
  • the insulating film m1 may be a known insulating film, and may include, for example, a layer made of silicon oxide or a layer made of silicon nitride, although this is not limited thereto.
  • a planarizing film m2 and an insulating film m3 are laminated in that order on the exposed surface of the planarized insulating film m1.
  • the planarizing film m2 is laminated to planarize the exposed surface polished by the CMP method.
  • the planarizing film m2 is a known insulating film, such as a silicon oxide film.
  • the insulating film m3 is a layer laminated to bond wafers together, such as a silicon nitride film.
  • a semiconductor substrate 40w is prepared with an insulating film m4 laminated on the fourth surface S4 side.
  • the insulating film m4 is a known insulating film, for example, a silicon oxide film. Then, the exposed surface of the insulating film m4 is bonded to the exposed surface of the insulating film m3, thereby bonding the semiconductor substrate 40w side to the first semiconductor layer 20 side.
  • the interface between the insulating films m3 and m4 after bonding becomes the bonding surface S shown in FIG. 4B.
  • the exposed surface of the semiconductor substrate 40w is ground to leave a portion that will become the second semiconductor layer 40.
  • a hole 40h is formed in the second semiconductor layer 40, and the inside of the hole 40h is filled with an insulating film m5.
  • the hole 40h is formed at a position that overlaps the second portion 72 in a plan view. The etching of the hole 40h is performed until it penetrates the second semiconductor layer 40 and the second end 72a of the second portion 72 is exposed.
  • the insulating film m5 is a known insulating material, for example, but not limited to, a silicon oxide film.
  • the insulating film m6 is a known insulating material, including, for example, but not limited to, a silicon oxide film and a silicon nitride film. Note that the order described in this embodiment, in which the hole 40h is formed, the insulating film m5 is filled in, and the elements and other components are formed, is merely an example, and they may be formed in a different order.
  • holes 71h and holes 54h are formed using known lithography and etching techniques.
  • Hole 71h is provided at a position overlapping with second portion 72 in plan view, and is formed by etching insulating films m5, m6, etc.
  • hole 71h is provided to a depth reaching second end 72a of second portion 72.
  • Hole 54h is provided at a position overlapping with gate electrode G and diffusion region of transistor T2 in plan view, and is formed by etching insulating film m6, etc.
  • hole 54h is provided to a depth reaching gate electrode G and diffusion region of transistor T2.
  • materials constituting the first portion 71 and the via 54 are embedded in the holes 71h and 54h.
  • polishing is performed by chemical mechanical polishing to remove excess portions of materials constituting first portion 71 and via 54, and the exposed surfaces are flattened. In this way, first portion 71 and via 54 are formed.
  • the second wiring layer 50A is completed.
  • a substrate including a third semiconductor layer 60 on which the second wiring layer 50B is laminated is prepared, and the second wiring layer 50B is bonded to the second wiring layer 50A.
  • the light incident surface side laminate 80 is formed, and the photodetector 1 is almost completed.
  • the photodetector 1 is then singulated to obtain the semiconductor chip 2.
  • the through conductor 70A does not have a multi-stage structure, but is composed of a single conductor 73.
  • depending on the aspect ratio, a countermeasure is required to prevent an adverse effect on the electrical conduction between the through conductor 70A and its connection target; more specifically, to prevent an increase in the electrical resistance between the through conductor 70A and its connection target.
  • the photodetector 1 includes a laminated structure in which a first semiconductor layer 20 having a photoelectric conversion region 20a, a first wiring layer 30, a second semiconductor layer 40, a second wiring layer 50, and a third semiconductor layer 60 are laminated in this order, and includes a through conductor 70 extending along the thickness direction, and the through conductor 70 has a first portion 71 and a second portion 72 that overlap in a plan view, the first portion 71 penetrates the second semiconductor layer 40 along the thickness direction, and the first end 71a, which is one end in the extension direction, protrudes into the first wiring layer 30, and the second portion 72 is provided in the first wiring layer 30, and the second end 72a, which is one end in the extension direction, is directly connected to the first end 71a.
  • the through conductor 70 includes a multi-stage structure of the first portion 71 and the second portion 72
  • the role of the through conductor 70 can be divided and shared between the first portion 71 and the second portion 72. More specifically, the first portion 71 can be made to play the role of penetrating the second semiconductor layer 40, and the second portion 72 can be made to play the role of connecting to a connection target. Therefore, the first portion 71 and the second portion 72 can each be configured to be suitable for their respective roles. This can prevent the width dimension of the first portion 71 from increasing, and can prevent the degree of freedom of layout of wiring and elements, including the through conductor 70, from decreasing.
  • since the width dimension of the first portion 71 can be prevented from increasing, the parasitic capacitance between the through conductor 70 and other wiring can be prevented from increasing, and the wiring delay can be prevented from increasing. Also, the distance between the through conductor 70 and the transistor can be prevented from becoming too small, so the influence of the bias of the through conductor 70, and the resulting characteristic fluctuation of the transistor, can be prevented from increasing. Furthermore, since the connection between the second portion 72 and its connection target can be prevented from deteriorating, the electrical resistance between the second portion 72 and its connection target can be prevented from increasing.
  • since the through conductor 70 is formed in two stages, as the first portion 71 and the second portion 72, the yield and reliability can be prevented from decreasing. More specifically, since the etching for forming the holes for embedding the through conductor 70 is performed in two stages, the dimension in the extension direction of each hole is less likely to be insufficient. This can prevent the yield and reliability of the through conductor 70 from decreasing. In addition, since the film formation of the barrier metal BM in the formed holes is performed in two stages, the coverage of the barrier metal BM on the bottom surface of each hole can be prevented from deteriorating.
  • the embedding of the conductive material is likewise performed in two stages, so voids are less likely to occur in the through conductor 70. This can prevent the reliability from decreasing.
  • the value obtained by dividing the dimension of the first portion 71 along the extension direction by the dimension of the first portion 71 along the width direction is defined as the first aspect ratio
  • the value obtained by dividing the dimension of the second portion 72 along the extension direction by the dimension of the second portion 72 along the width direction is defined as the second aspect ratio
  • the value obtained by dividing the dimension of the through conductor 70 along the extension direction by the dimension of the first portion 71 along the width direction is defined as the third aspect ratio
  • both the first aspect ratio and the second aspect ratio are lower than the third aspect ratio. Therefore, the third aspect ratio of the through conductor 70 can be increased while suppressing a decrease in the yield and reliability of the through conductor 70 and suppressing an increase in the electrical resistance between the second portion 72 and its connection target.
  • the second aspect ratio is configured to be lower than the first aspect ratio. Therefore, the barrier metal BM is less likely to provide insufficient coverage of the bottom surface of the hole in which the second portion 72 is provided. This makes it possible to prevent the electrical resistance between the second portion 72 and its connection target from increasing.
  • the third aspect ratio is 10 or more, 15 or more, 20 or more, 25 or more, or 30 or more.
  • the first aspect ratio and the second aspect ratio are 25 or less or 16 or less, and 10 or more, respectively.
  • the difference in dimension along the width direction between the first end 71a and the second end 72a is 20 nm or more. Therefore, it is possible to prevent a reduction in the overlap margin between the first end 71a and the second end 72a, and to prevent a deterioration in the electrical connectivity between the first portion 71 and the second portion 72.
  • the junction position between the first end 71a and the second end 72a is at a position 5 nm to 100 nm in the thickness direction from the surface (fourth surface S4) of the second semiconductor layer 40 on the first wiring layer 30 side. This allows the second end 72a to be provided at a position different from the fourth surface S4 in the thickness direction, ensuring insulation between the second end 72a and the second semiconductor layer 40.
  • the planarizing film m2 and the insulating film m3 functioning as a bonding film with the second semiconductor layer 40 side are laminated above the second end 72a, so that deterioration of the bonding between the first wiring layer 30 side and the second semiconductor layer 40 side can be suppressed.
  • the distance in the thickness direction between the joining position of the first end 71a and the second end 72a and the fourth surface S4 is set to distance d1, but the distance in the thickness direction between the joining position and the bonding surface S may also be set to distance d1.
  • Distance d1 is 5 nm or more and 100 nm or less.
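The placement condition on the joining position can be expressed as a range check. This sketch is illustrative and not from the patent; the function name and example values are hypothetical, while the 5 nm to 100 nm range for d1 follows the text.

```python
D1_MIN_NM, D1_MAX_NM = 5.0, 100.0  # range stated for distance d1

def d1_in_range(d1_nm: float) -> bool:
    """True when the joining position lies 5 nm to 100 nm from the fourth surface S4."""
    return D1_MIN_NM <= d1_nm <= D1_MAX_NM

print(d1_in_range(50.0))   # True: joining position is safely offset from S4
print(d1_in_range(2.0))    # False: too close to S4 to ensure insulation
```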
  • the dimension w72a along the width direction of the second end 72a is set to be larger than the dimension w71a along the width direction of the first end 71a, but the present technology is not limited to this.
  • the dimension w71a along the width direction of the first end 71a may be set to be larger than the dimension w72a along the width direction of the second end 72a.
  • the difference in the dimension along the width direction between the first end 71a and the second end 72a directly connected to each other is set to 20 nm or more in order to ensure an overlap margin. More specifically, the dimension w71a along the width direction of the first end 71a is set to be 20 nm or more larger than the dimension w72a along the width direction of the second end 72a (w71a − w72a ≥ 20 nm).
  • the first portion 71 and the second portion 72 are provided in a tapered shape, but the present technology is not limited to this.
  • the first portion 71 and the second portion 72 are provided with approximately the same widthwise dimension along the thickness direction. More specifically, the widthwise dimension of the first portion 71 is dimension w71a, and the widthwise dimension of the second portion 72 is dimension w72a.
  • This modified example 3 is a combination of the modified examples 1 and 2.
  • the dimension w71a along the width direction of the first end 71a is set to be larger than the dimension w72a along the width direction of the second end 72a.
  • the first portion 71 and the second portion 72 are each set to have approximately the same width dimension along the thickness direction. More specifically, the width direction dimension of the first portion 71 is dimension w71a, and the width direction dimension of the second portion 72 is dimension w72a.
  • the fourth end 72b of the second portion 72 is located near the surface (first surface S1) of the first semiconductor layer 20 on the first wiring layer 30 side, but the present technology is not limited to this.
  • the fourth end 72b which is the other end in the extension direction of the second portion 72, is connected to the wiring 32, which is a horizontal wiring provided in the first wiring layer 30 and extends along the horizontal direction.
  • the first wiring layer 30 has the wiring 32 and a via (not shown) which is a vertical wiring.
  • the wiring 32 is stacked via an insulating film 31.
  • the electronic device 100 includes a solid-state imaging device 101, an optical lens 102, a shutter device 103, a drive circuit 104, and a signal processing circuit 105.
  • the electronic device 100 is, for example, an electronic device such as a camera, but is not limited thereto.
  • the electronic device 100 also includes the above-mentioned photodetector 1 as the solid-state imaging device 101.
  • the optical lens (optical system) 102 focuses image light (incident light 106) from the subject onto the imaging surface of the solid-state imaging device 101. This causes signal charges to accumulate in the solid-state imaging device 101 for a certain period of time.
  • the shutter device 103 controls the light irradiation period and light blocking period for the solid-state imaging device 101.
  • the drive circuit 104 supplies a drive signal that controls the transfer operation of the solid-state imaging device 101 and the shutter operation of the shutter device 103.
  • signal transfer in the solid-state imaging device 101 is performed in accordance with the drive signal (timing signal) supplied from the drive circuit 104.
  • the signal processing circuit 105 performs various signal processing on signals (pixel signals) output from the solid-state imaging device 101.
  • the video signals that have undergone signal processing are stored in a storage medium such as a memory, or output to a monitor.
  • the through conductors 70 have a multi-stage configuration in the solid-state imaging device 101, so that the width dimension of the portion of the through conductors 70 that penetrates the second semiconductor layer 40 can be prevented from increasing, and the degree of freedom in the layout of the wiring and elements including the through conductors 70 can be prevented from decreasing. Also, the electrical resistance between the through conductors 70 and their connection targets can be prevented from increasing.
  • the electronic device 100 is not limited to a camera, but may be other electronic devices.
  • it may be an imaging device such as a camera module for a mobile device such as a mobile phone.
  • the electronic device 100 may also include, as the solid-state imaging device 101, a photodetector 1 according to either the first embodiment or its modified examples, or a photodetector 1 according to a combination of at least two of the first embodiment and its modified examples.
  • the technology according to the present disclosure can be applied to various products.
  • the technology according to the present disclosure may be realized as a device mounted on any type of moving body such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
  • FIG. 12 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile object control system to which the technology disclosed herein can be applied.
  • the vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001.
  • the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, an outside vehicle information detection unit 12030, an inside vehicle information detection unit 12040, and an integrated control unit 12050.
  • Also shown as functional components of the integrated control unit 12050 are a microcomputer 12051, an audio/video output unit 12052, and an in-vehicle network I/F (interface) 12053.
  • the drive system control unit 12010 controls the operation of devices related to the drive system of the vehicle according to various programs.
  • the drive system control unit 12010 functions as a control device for a drive force generating device for generating the drive force of the vehicle, such as an internal combustion engine or a drive motor, a drive force transmission mechanism for transmitting the drive force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, and a braking device for generating a braking force for the vehicle.
  • the body system control unit 12020 controls the operation of various devices installed in the vehicle body according to various programs.
  • the body system control unit 12020 functions as a control device for a keyless entry system, a smart key system, a power window device, or various lamps such as headlamps, tail lamps, brake lamps, turn signals, and fog lamps.
  • radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, can be input to the body system control unit 12020.
  • the body system control unit 12020 accepts the input of these radio waves or signals and controls the vehicle's door lock device, power window device, lamps, etc.
  • the outside-vehicle information detection unit 12030 detects information outside the vehicle equipped with the vehicle control system 12000.
  • the imaging unit 12031 is connected to the outside-vehicle information detection unit 12030.
  • the outside-vehicle information detection unit 12030 causes the imaging unit 12031 to capture images outside the vehicle and receives the captured images.
  • the outside-vehicle information detection unit 12030 may perform object detection processing or distance detection processing for people, cars, obstacles, signs, or characters on the road surface based on the received images.
  • the imaging unit 12031 is an optical sensor that receives light and outputs an electrical signal according to the amount of light received.
  • the imaging unit 12031 can output the electrical signal as an image, or as distance measurement information.
  • the light received by the imaging unit 12031 may be visible light, or may be invisible light such as infrared light.
  • the in-vehicle information detection unit 12040 detects information inside the vehicle.
  • a driver state detection unit 12041 that detects the state of the driver is connected to the in-vehicle information detection unit 12040.
  • the driver state detection unit 12041 includes, for example, a camera that captures an image of the driver, and the in-vehicle information detection unit 12040 may calculate the driver's degree of fatigue or concentration based on the detection information input from the driver state detection unit 12041, or may determine whether the driver is dozing off.
  • the microcomputer 12051 can calculate the control target values of the driving force generating device, steering mechanism, or braking device based on the information inside and outside the vehicle acquired by the outside vehicle information detection unit 12030 or the inside vehicle information detection unit 12040, and output a control command to the drive system control unit 12010.
  • the microcomputer 12051 can perform cooperative control aimed at realizing the functions of an ADAS (Advanced Driver Assistance System), including avoiding or mitigating vehicle collisions, following based on the distance between vehicles, maintaining vehicle speed, vehicle collision warning, or vehicle lane departure warning.
  • the microcomputer 12051 can also control the driving force generating device, steering mechanism, braking device, etc. based on information about the surroundings of the vehicle acquired by the outside vehicle information detection unit 12030 or the inside vehicle information detection unit 12040, thereby performing cooperative control aimed at automatic driving, which allows the vehicle to travel autonomously without relying on the driver's operation.
  • the microcomputer 12051 can also output control commands to the body system control unit 12020 based on information outside the vehicle acquired by the outside-vehicle information detection unit 12030. For example, the microcomputer 12051 can control the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the outside-vehicle information detection unit 12030, and perform cooperative control aimed at preventing glare, such as switching high beams to low beams.
  • the audio/image output unit 12052 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying the occupants of the vehicle or people outside the vehicle of information.
  • an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are exemplified as output devices.
  • the display unit 12062 may include, for example, at least one of an on-board display and a head-up display.
  • FIG. 13 shows an example of the installation position of the imaging unit 12031.
  • the vehicle 12100 has imaging units 12101, 12102, 12103, 12104, and 12105 as the imaging unit 12031.
  • the imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, at the front nose, side mirrors, rear bumper, back door, and the top of the windshield inside the vehicle cabin of the vehicle 12100.
  • the imaging unit 12101 provided at the front nose and the imaging unit 12105 provided at the top of the windshield inside the vehicle cabin mainly acquire images of the front of the vehicle 12100.
  • the imaging units 12102 and 12103 provided at the side mirrors mainly acquire images of the sides of the vehicle 12100.
  • the imaging unit 12104 provided at the rear bumper or back door mainly acquires images of the rear of the vehicle 12100.
  • the images of the front acquired by the imaging units 12101 and 12105 are mainly used to detect preceding vehicles, pedestrians, obstacles, traffic lights, traffic signs, lanes, etc.
  • FIG. 13 shows an example of the imaging ranges of the imaging units 12101 to 12104.
  • Imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose
  • imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively
  • imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or back door.
  • an overhead image of the vehicle 12100 viewed from above is obtained by superimposing the image data captured by the imaging units 12101 to 12104.
  • At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
  • at least one of the imaging units 12101 to 12104 may be a stereo camera consisting of multiple imaging elements, or an imaging element having pixels for detecting phase differences.
  • the microcomputer 12051 can obtain the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the change in this distance over time (relative speed with respect to the vehicle 12100) based on the distance information obtained from the imaging units 12101 to 12104, and can extract as a preceding vehicle, in particular, the closest three-dimensional object on the path of the vehicle 12100 that is traveling in approximately the same direction as the vehicle 12100 at a predetermined speed (e.g., 0 km/h or faster). Furthermore, the microcomputer 12051 can set in advance the inter-vehicle distance to be maintained from the preceding vehicle, and perform automatic braking control (including follow-up stop control) and automatic acceleration control (including follow-up start control). In this way, cooperative control can be performed for the purpose of automatic driving, in which the vehicle travels autonomously without relying on the driver's operation.
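The preceding-vehicle extraction described above can be sketched as a simple selection over detected objects. This is an illustrative sketch, not the patent's or any vendor's actual implementation; the class, field names, and example values are all hypothetical, while the selection criteria (on-path, same direction at a predetermined speed or faster, closest) follow the text.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class DetectedObject:
    distance_m: float      # distance to the object, from the distance information
    rel_speed_kmh: float   # change in distance over time (relative speed)
    on_path: bool          # whether the object lies on the vehicle's path

def extract_preceding_vehicle(objs: List[DetectedObject],
                              min_speed_kmh: float = 0.0) -> Optional[DetectedObject]:
    """Closest on-path object traveling in roughly the same direction
    at the predetermined speed or faster (e.g. 0 km/h or faster)."""
    candidates = [o for o in objs
                  if o.on_path and o.rel_speed_kmh >= min_speed_kmh]
    return min(candidates, key=lambda o: o.distance_m, default=None)

objs = [DetectedObject(40.0, 5.0, True),
        DetectedObject(25.0, 2.0, True),
        DetectedObject(10.0, 0.0, False)]   # off-path, so ignored
lead = extract_preceding_vehicle(objs)
print(lead.distance_m)   # 25.0
```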
  • the microcomputer 12051 classifies and extracts three-dimensional object data on three-dimensional objects such as two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, and utility poles based on the distance information obtained from the imaging units 12101 to 12104, and can use the data to automatically avoid obstacles.
  • the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult to see.
  • the microcomputer 12051 determines the collision risk, which indicates the risk of collision with each obstacle, and when the collision risk is equal to or exceeds a set value and there is a possibility of a collision, it can provide driving assistance for collision avoidance by outputting an alarm to the driver via the audio speaker 12061 or the display unit 12062, or by forcibly decelerating or steering to avoid a collision via the drive system control unit 12010.
  • At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays.
  • the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian is present in the captured image of the imaging units 12101 to 12104. The recognition of such a pedestrian is performed, for example, by a procedure of extracting feature points in the captured image of the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on a series of feature points that indicate the contour of an object to determine whether or not it is a pedestrian.
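The two-step recognition above (feature-point extraction, then pattern matching against a pedestrian contour) can be reduced to a toy sketch. This is illustrative only, not the patent's actual procedure; the feature extraction, scoring, names, and thresholds are all hypothetical simplifications of the described steps.

```python
from math import hypot
from typing import List, Tuple

Point = Tuple[float, float]

def extract_feature_points(outline: List[Point]) -> List[Point]:
    # Toy "feature extraction": keep every other contour point.
    return outline[::2]

def match_score(feats: List[Point], template: List[Point], tol: float = 2.0) -> float:
    """Fraction of template contour points with a feature point within tol."""
    if not template:
        return 0.0
    hits = sum(1 for t in template
               if any(hypot(t[0] - f[0], t[1] - f[1]) <= tol for f in feats))
    return hits / len(template)

def is_pedestrian(outline: List[Point], template: List[Point],
                  threshold: float = 0.8) -> bool:
    # Step 1: extract feature points; Step 2: pattern-match against the template.
    return match_score(extract_feature_points(outline), template) >= threshold

# A stored pedestrian contour template and a detected object outline.
template = [(0, 0), (0, 10), (4, 10), (4, 0)]
outline  = [(0, 0), (0, 5), (0, 10), (2, 10), (4, 10), (4, 5), (4, 0), (2, 0)]
print(is_pedestrian(outline, template))   # True
```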
  • the audio/image output unit 12052 controls the display unit 12062 to superimpose a rectangular contour line for emphasis on the recognized pedestrian.
  • the audio/image output unit 12052 may also control the display unit 12062 to display an icon or the like indicating a pedestrian at a desired position.
  • the technology of the present disclosure can be applied to, for example, the imaging unit 12031.
  • a light detection device 1 having a through conductor 70 can be applied to the imaging unit 12031.
  • the technology according to the present disclosure (the present technology) can be applied to various products.
  • the technology according to the present disclosure may be applied to an endoscopic surgery system.
  • FIG. 14 is a diagram showing an example of the general configuration of an endoscopic surgery system to which the technology disclosed herein (the present technology) can be applied.
  • an operator (doctor) 11131 is shown using an endoscopic surgery system 11000 to perform surgery on a patient 11132 on a patient bed 11133.
  • the endoscopic surgery system 11000 is composed of an endoscope 11100, other surgical tools 11110 such as an insufflation tube 11111 and an energy treatment tool 11112, a support arm device 11120 that supports the endoscope 11100, and a cart 11200 on which various devices for endoscopic surgery are mounted.
  • the endoscope 11100 is composed of a lens barrel 11101, the tip of which is inserted into the body cavity of the patient 11132 at a predetermined length, and a camera head 11102 connected to the base end of the lens barrel 11101.
  • the endoscope 11100 is configured as a so-called rigid scope having a rigid lens barrel 11101, but the endoscope 11100 may also be configured as a so-called flexible scope having a flexible lens barrel.
  • the tip of the lens barrel 11101 has an opening into which an objective lens is fitted.
  • a light source device 11203 is connected to the endoscope 11100, and light generated by the light source device 11203 is guided to the tip of the lens barrel by a light guide extending inside the lens barrel 11101, and is irradiated via the objective lens toward an object to be observed inside the body cavity of the patient 11132.
  • the endoscope 11100 may be a direct-viewing endoscope, an oblique-viewing endoscope, or a side-viewing endoscope.
  • An optical system and an image sensor are provided inside the camera head 11102, and the reflected light (observation light) from the object of observation is focused on the image sensor by the optical system.
  • the observation light is photoelectrically converted by the image sensor to generate an electrical signal corresponding to the observation light, i.e., an image signal corresponding to the observed image.
  • the image signal is sent to the camera control unit (CCU: Camera Control Unit) 11201 as RAW data.
  • the CCU 11201 is composed of a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), etc., and controls the overall operation of the endoscope 11100 and the display device 11202. Furthermore, the CCU 11201 receives an image signal from the camera head 11102, and performs various image processing on the image signal, such as development processing (demosaic processing), in order to display an image based on the image signal.
  • the display device 11202, under the control of the CCU 11201, displays an image based on the image signal that has been subjected to image processing by the CCU 11201.
  • the light source device 11203 is composed of a light source such as an LED (Light Emitting Diode) and supplies irradiation light to the endoscope 11100 when photographing the surgical site, etc.
  • the input device 11204 is an input interface for the endoscopic surgery system 11000.
  • a user can input various information and instructions to the endoscopic surgery system 11000 via the input device 11204.
  • the user inputs an instruction to change the imaging conditions (type of irradiation light, magnification, focal length, etc.) of the endoscope 11100.
  • the treatment tool control device 11205 controls the operation of the energy treatment tool 11112 for cauterizing tissue, incising, sealing blood vessels, etc.
  • the insufflation device 11206 sends gas into the body cavity of the patient 11132 via the insufflation tube 11111 to inflate the body cavity in order to ensure a clear field of view for the endoscope 11100 and to ensure a working space for the surgeon.
  • the recorder 11207 is a device capable of recording various types of information related to the surgery.
  • the printer 11208 is a device capable of printing various types of information related to the surgery in various formats such as text, images, or graphs.
  • the light source device 11203 that supplies illumination light to the endoscope 11100 when photographing the surgical site can be composed of a white light source composed of, for example, an LED, a laser light source, or a combination of these.
  • when the white light source is composed of a combination of RGB laser light sources, the output intensity and output timing of each color (each wavelength) can be controlled with high precision, so the white balance of the captured image can be adjusted in the light source device 11203.
  • the light source device 11203 may be controlled to change the intensity of the light it outputs at predetermined time intervals.
  • the image sensor of the camera head 11102 may be controlled to acquire images in a time-division manner in synchronization with the timing of the change in the light intensity, and the images may be synthesized to generate an image with a high dynamic range that is free of so-called blackout and whiteout.
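The time-division synthesis described in the item above can be sketched as follows. This is an illustrative toy example, not taken from the publication: it assumes two frames of the same scene, one captured while the light source output was reduced by a known factor `gain` (a hypothetical parameter), and merges them so that saturated (whiteout) pixels in the high-intensity frame are replaced by brightness-matched samples from the low-intensity frame.

```python
def merge_time_division(low, high, gain=4.0, sat=250):
    """Merge two time-division frames into a higher-dynamic-range frame.

    'low' was captured while the light source output was 1/gain of the
    output used for 'high'. Where 'high' is saturated (whiteout), the
    brightness-matched low-intensity sample (low * gain) is substituted;
    elsewhere 'high' is kept, preserving detail in dark areas (avoiding
    blackout). Frames are flat lists of 8-bit pixel samples.
    """
    return [l * gain if h >= sat else float(h)
            for l, h in zip(low, high)]

# A dark pixel keeps its high-intensity sample; a saturated pixel is
# reconstructed from the low-intensity frame.
print(merge_time_division([10, 80], [40, 255]))  # → [40.0, 320.0]
```

A real implementation would of course operate on two-dimensional sensor data and blend near the saturation threshold rather than switching abruptly; the sketch only shows the principle of combining frames captured under different illumination intensities.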
  • the light source device 11203 may be configured to supply light of a predetermined wavelength band corresponding to special light observation.
  • in special light observation, for example, so-called narrow band imaging is performed: by utilizing the wavelength dependence of light absorption in body tissue, light of a narrower band than the illumination light used during normal observation (i.e., white light) is irradiated, and specific tissue such as blood vessels on the mucosal surface is photographed with high contrast.
  • fluorescence observation may be performed in which an image is obtained by fluorescence generated by irradiating excitation light.
  • for example, excitation light is irradiated onto body tissue and fluorescence from the body tissue is observed (autofluorescence observation), or a reagent such as indocyanine green (ICG) is locally injected into the body tissue and excitation light corresponding to the fluorescence wavelength of the reagent is irradiated to obtain a fluorescence image.
  • the light source device 11203 may be configured to supply narrow band light and/or excitation light corresponding to such special light observation.
  • FIG. 15 is a block diagram showing an example of the functional configuration of the camera head 11102 and CCU 11201 shown in FIG. 14.
  • the camera head 11102 has a lens unit 11401, an imaging unit 11402, a drive unit 11403, a communication unit 11404, and a camera head control unit 11405.
  • the CCU 11201 has a communication unit 11411, an image processing unit 11412, and a control unit 11413.
  • the camera head 11102 and the CCU 11201 are connected to each other via a transmission cable 11400 so that they can communicate with each other.
  • the lens unit 11401 is an optical system provided at the connection with the lens barrel 11101. Observation light taken in from the tip of the lens barrel 11101 is guided to the camera head 11102 and enters the lens unit 11401.
  • the lens unit 11401 is composed of a combination of multiple lenses including a zoom lens and a focus lens.
  • the imaging unit 11402 is composed of an imaging element.
  • the imaging element constituting the imaging unit 11402 may be one (so-called single-plate type) or multiple (so-called multi-plate type).
  • each imaging element may generate an image signal corresponding to each of RGB, and a color image may be obtained by combining these.
  • the imaging unit 11402 may be configured to have a pair of imaging elements for acquiring image signals for the right eye and the left eye corresponding to 3D (dimensional) display. By performing 3D display, the surgeon 11131 can more accurately grasp the depth of the biological tissue in the surgical site.
  • the imaging unit 11402 does not necessarily have to be provided in the camera head 11102.
  • the imaging unit 11402 may be provided inside the lens barrel 11101, immediately after the objective lens.
  • the driving unit 11403 is composed of an actuator, and moves the zoom lens and focus lens of the lens unit 11401 a predetermined distance along the optical axis under the control of the camera head control unit 11405. This allows the magnification and focus of the image captured by the imaging unit 11402 to be adjusted appropriately.
  • the communication unit 11404 is configured with a communication device for transmitting and receiving various information to and from the CCU 11201.
  • the communication unit 11404 transmits the image signal obtained from the imaging unit 11402 as RAW data to the CCU 11201 via the transmission cable 11400.
  • the communication unit 11404 also receives control signals for controlling the operation of the camera head 11102 from the CCU 11201, and supplies them to the camera head control unit 11405.
  • the control signals include information on the imaging conditions, such as information specifying the frame rate of the captured image, information specifying the exposure value during imaging, and/or information specifying the magnification and focus of the captured image.
  • the above-mentioned frame rate, exposure value, magnification, focus, and other imaging conditions may be appropriately specified by the user, or may be automatically set by the control unit 11413 of the CCU 11201 based on the acquired image signal.
  • the endoscope 11100 is equipped with so-called AE (Auto Exposure), AF (Auto Focus), and AWB (Auto White Balance) functions.
  • the camera head control unit 11405 controls the operation of the camera head 11102 based on a control signal from the CCU 11201 received via the communication unit 11404.
  • the communication unit 11411 is configured with a communication device for transmitting and receiving various information to and from the camera head 11102.
  • the communication unit 11411 receives an image signal transmitted from the camera head 11102 via the transmission cable 11400.
  • the communication unit 11411 also transmits to the camera head 11102 a control signal for controlling the operation of the camera head 11102.
  • the image signal and the control signal can be transmitted by electrical communication, optical communication, etc.
  • the image processing unit 11412 performs various image processing operations on the image signal, which is the RAW data transmitted from the camera head 11102.
  • the control unit 11413 performs various controls related to the imaging of the surgical site, etc. by the endoscope 11100, and the display of the captured images obtained by imaging the surgical site, etc. For example, the control unit 11413 generates a control signal for controlling the driving of the camera head 11102.
  • the control unit 11413 also causes the display device 11202 to display the captured image showing the surgical site, etc., based on the image signal that has been image-processed by the image processing unit 11412. At this time, the control unit 11413 may recognize various objects in the captured image using various image recognition techniques. For example, the control unit 11413 can recognize surgical tools such as forceps, specific body parts, bleeding, mist generated when the energy treatment tool 11112 is used, etc., by detecting the shape and color of the edges of objects included in the captured image. When the control unit 11413 causes the display device 11202 to display the captured image, it may use the recognition result to superimpose various types of surgical support information on the image of the surgical site. By superimposing the surgical support information and presenting it to the surgeon 11131, the burden on the surgeon 11131 can be reduced and the surgeon 11131 can proceed with the surgery reliably.
  • the transmission cable 11400 that connects the camera head 11102 and the CCU 11201 is an electrical signal cable that supports electrical signal communication, an optical fiber that supports optical communication, or a composite cable of these.
  • communication is performed wired using a transmission cable 11400, but communication between the camera head 11102 and the CCU 11201 may also be performed wirelessly.
  • the technology of the present disclosure can be applied to the imaging section 11402 of the camera head 11102.
  • a light detection device 1 having a through conductor 70 can be applied to the imaging section 11402.
  • this technology can be applied to light detection devices in general, including not only the solid-state imaging devices serving as image sensors described above, but also distance measurement sensors, also known as ToF (Time of Flight) sensors.
  • Distance measurement sensors emit light toward an object, detect the reflected light that is reflected back from the surface of the object, and calculate the distance to the object based on the flight time from when the light is emitted to when the reflected light is received.
  • the structure of the through conductor 70 described above can be used as the structure of this distance measurement sensor.
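The ranging principle described in the items above reduces to d = c·t/2, where t is the measured round-trip flight time. A minimal sketch of that conversion (illustrative only, not part of the publication):

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0  # c, in meters per second

def tof_distance_m(round_trip_time_s):
    """Distance to the object from the measured round-trip flight time:
    the emitted light travels to the object and back, so the one-way
    distance is half of c * t."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

# A 10 ns round trip corresponds to roughly 1.5 m.
print(tof_distance_m(10e-9))  # → 1.49896229
```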
  • the materials cited as constituting the above-mentioned components may contain additives, impurities, etc.
  • the isolation region 20b is an insulating film, but the present technology is not limited to this. Isolation region 20b may have any known configuration, and may, for example, be configured such that polysilicon is embedded via an insulating film on the second surface S2 side of the trench, and an insulating film is embedded on the first surface S1 side of the trench.
  • the present technology may be configured as follows. (1) A light detection device including: a laminated structure in which a first semiconductor layer having a photoelectric conversion region, a first wiring layer, a second semiconductor layer, a second wiring layer, and a third semiconductor layer are laminated in this order; and a through conductor extending along a thickness direction, wherein the through conductor has a first portion and a second portion that overlap in a plan view, the first portion penetrates the second semiconductor layer along the thickness direction and a first end portion, which is one end portion in an extending direction, protrudes into the first wiring layer, and the second portion is provided in the first wiring layer and a second end portion, which is one end portion in an extending direction, is directly connected to the first end portion.
  • a value obtained by dividing a dimension of the first portion along an extension direction by a dimension of the first portion along a width direction is defined as a first aspect ratio, which is an aspect ratio of the first portion;
  • a value obtained by dividing a dimension of the second portion along an extension direction by a dimension of the second portion along a width direction is defined as a second aspect ratio, which is an aspect ratio of the second portion;
  • a value obtained by dividing a dimension of the through conductor along an extension direction by a dimension of the first portion along a width direction is defined as a third aspect ratio, which is an aspect ratio of the through conductor;
  • the light detection device according to any one of (2) to (4), wherein the third aspect ratio is 15 or more.
  • a value obtained by dividing a dimension of the first portion along an extension direction by a dimension of the first portion along a width direction is defined as a first aspect ratio, which is an aspect ratio of the first portion;
  • a value obtained by dividing a dimension of the second portion along an extension direction by a dimension of the second portion along a width direction is defined as a second aspect ratio which is an aspect ratio of the second portion.
  • the light detection device according to (10), wherein the first aspect ratio and the second aspect ratio are each 16 or less.
  • the light detection device according to any one of (1) to (11), wherein a difference in dimension between the first end and the second end along the width direction is 20 nm or more.
  • the light detection device according to any one of (1) to (12), wherein a junction position between the first end and the second end is located at a position 5 nm to 100 nm in the thickness direction from a surface of the second semiconductor layer facing the first wiring layer.
  • the light detection device according to any one of (1) to (13), wherein, in a plan view, of a pixel region in which the photoelectric conversion region is provided and a peripheral region surrounding the pixel region, the through conductor is located in the pixel region.
  • the light detection device according to any one of (1) to (14), wherein a horizontal wiring extending along a horizontal direction is provided in the first wiring layer, and a fourth end portion, which is the other end portion in the extending direction of the second portion, is connected to the horizontal wiring.
  • each of the first portion and the second portion is made of tungsten, copper, aluminum, an aluminum-copper alloy, cobalt, gold, nickel, titanium, titanium nitride, molybdenum, tantalum, tantalum nitride, platinum, or ruthenium.
  • An electronic apparatus including a light detection device, wherein the light detection device includes: a laminated structure in which a first semiconductor layer having a photoelectric conversion region, a first wiring layer, a second semiconductor layer, a second wiring layer, and a third semiconductor layer are laminated in this order; and a through conductor extending along a thickness direction, the through conductor has a first portion and a second portion that overlap in a plan view, the first portion penetrates the second semiconductor layer along the thickness direction and a first end portion, which is one end portion in an extending direction, protrudes into the first wiring layer, and the second portion is provided in the first wiring layer and a second end portion, which is one end portion in an extending direction, is directly connected to the first end portion.
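The three aspect ratios defined in the items above are simple quotients of the through conductor's dimensions. The sketch below (hypothetical dimensions; it also assumes the through conductor's total extension dimension is the sum of the two portions' lengths, with any junction overlap neglected) computes them and checks the numeric bounds stated in the claims, i.e. a third aspect ratio of 15 or more and first and second aspect ratios of 16 or less:

```python
def aspect_ratios(first_len, first_width, second_len, second_width):
    """First/second/third aspect ratios as defined in the claims:
    each portion's dimension along the extension direction divided by
    its dimension along the width direction, and the whole through
    conductor's extension dimension divided by the first portion's
    width dimension."""
    first_ar = first_len / first_width
    second_ar = second_len / second_width
    # Assumption: total extension dimension = sum of both portions.
    third_ar = (first_len + second_len) / first_width
    return first_ar, second_ar, third_ar

# Hypothetical dimensions in micrometers.
first_ar, second_ar, third_ar = aspect_ratios(4.0, 0.25, 0.5, 0.125)
print(first_ar, second_ar, third_ar)  # → 16.0 4.0 18.0
assert third_ar >= 15 and first_ar <= 16 and second_ar <= 16
```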

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Power Engineering (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Manufacturing & Machinery (AREA)
  • Electromagnetism (AREA)
  • Solid State Image Pick-Up Elements (AREA)

Abstract

Provided is a light-detecting device which comprises a through conductor and in which degradation of electrical characteristics is suppressed even at an aspect ratio tailored for miniaturization or the like. The light-detecting device includes: a layered structure in which the following are layered in the order given: a first semiconductor layer having a photoelectric conversion region, a first wiring layer, a second semiconductor layer, a second wiring layer, and a third semiconductor layer; and a through conductor extending along the thickness direction. The through conductor has a first section and a second section which overlap in a plan view. The first section goes through the second semiconductor layer along the thickness direction. A first end of the first section, which is one end in the extension direction, projects into the first wiring layer. The second section is provided in the first wiring layer. A second end of the second section, which is one end in the extension direction, is connected directly to the first end.

Description

Photodetection device and electronic device
This technology (the technology according to the present disclosure) relates to photodetection devices and electronic devices, and in particular to stacked photodetection devices and electronic devices.
Conventionally, stacked semiconductor devices are known in which the element density is increased in the vertical direction by stacking multiple substrates on which elements such as transistors are formed. For example, Patent Document 1 describes a semiconductor device (e.g., a solid-state imaging device) that has three substrate layers. The semiconductor device has connection wiring that is a through conductor penetrating the substrate located at the center of the three substrate layers in the thickness direction.
JP 2021-5656 A
As pixel dimensions become smaller, there is a demand to narrow the width of the connection wiring without significantly changing its dimension in the penetration direction. In other words, there is a demand to increase the aspect ratio of the connection wiring.
The aim of this technology is to provide a photodetection device and an electronic device equipped with a through conductor in which degradation of electrical characteristics is suppressed even at an aspect ratio adapted to miniaturization or the like.
A photodetection device according to one aspect of the present technology includes a layered structure in which a first semiconductor layer having a photoelectric conversion region, a first wiring layer, a second semiconductor layer, a second wiring layer, and a third semiconductor layer are layered in this order, and includes a through conductor extending along the thickness direction; the through conductor has a first portion and a second portion that overlap in a plan view; the first portion penetrates the second semiconductor layer along the thickness direction, and a first end, which is one end in the extension direction, protrudes into the first wiring layer; and the second portion is provided in the first wiring layer, and a second end, which is one end in the extension direction, is directly connected to the first end.
An electronic device according to one aspect of the present technology includes the above photodetection device and an optical system that forms an image of image light from a subject on the photodetection device.
A chip layout diagram showing a configuration example of a photodetection device according to a first embodiment of the present technology.
A block diagram showing a configuration example of the photodetection device according to the first embodiment of the present technology.
An equivalent circuit diagram of a pixel of the photodetection device according to the first embodiment of the present technology.
A longitudinal sectional view of a pixel included in the photodetection device according to the first embodiment of the present technology.
A partially enlarged view showing a main part of FIG. 4A.
Process cross-sectional views illustrating a manufacturing method of the photodetection device according to the first embodiment of the present technology.
A process cross-sectional view subsequent to FIG. 5A.
A process cross-sectional view subsequent to FIG. 5B.
A process cross-sectional view subsequent to FIG. 5C.
A process cross-sectional view subsequent to FIG. 5D.
A longitudinal sectional view of a pixel showing an example of a through conductor that does not have a multi-stage structure.
A longitudinal sectional view of a through conductor included in a photodetection device according to Modification 1 of the first embodiment of the present technology.
A longitudinal sectional view of a through conductor included in a photodetection device according to Modification 2 of the first embodiment of the present technology.
A longitudinal sectional view of a through conductor included in a photodetection device according to Modification 3 of the first embodiment of the present technology.
A longitudinal sectional view of a pixel included in a photodetection device according to Modification 4 of the first embodiment of the present technology.
A block diagram showing an example of a schematic configuration of an electronic device.
A block diagram showing an example of a schematic configuration of a vehicle control system.
An explanatory diagram showing an example of the installation positions of an outside-vehicle information detection unit and an imaging unit.
A diagram showing an example of a schematic configuration of an endoscopic surgery system.
A block diagram showing an example of the functional configuration of a camera head and a CCU.
Below, preferred embodiments for implementing the present technology will be described with reference to the drawings. Note that the embodiments described below are examples of representative embodiments of the present technology, and should not be construed as narrowing the scope of the present technology.
In the following drawings, the same or similar parts are given the same or similar reference signs. However, it should be noted that the drawings are schematic, and the relationships between thickness and planar dimensions, the thickness ratios of the layers, and the like differ from the actual ones. Therefore, specific thicknesses and dimensions should be determined in consideration of the following description. The drawings also naturally include parts whose dimensional relationships and ratios differ between drawings. Furthermore, because drawings suitable for explaining this technology have been adopted, there may be differences in configuration between the drawings.
The embodiments shown below are merely examples of devices and methods for embodying the technical idea of the present technology, and the technical idea of the present technology does not limit the materials, shapes, structures, arrangements, and the like of the components to those described below. Various modifications can be made to the technical idea of the present technology within the technical scope defined by the claims.
The definitions of directions such as up and down in the following description are merely for convenience of explanation and do not limit the technical idea of the present disclosure. For example, if an object is observed rotated by 90°, up and down are read as left and right; if it is observed rotated by 180°, up and down are of course read inverted.
The description will be given in the following order.
1. First embodiment
2. Second embodiment
   Application example to electronic devices
   Application example to mobile bodies
   Application example to an endoscopic surgery system
[First embodiment]
In this embodiment, an example in which the present technology is applied to a photodetection device that is a back-illuminated CMOS (Complementary Metal Oxide Semiconductor) image sensor will be described.
≪Overall configuration of the photodetection device≫
First, the overall configuration of the photodetection device 1 will be described. As shown in FIG. 1, the photodetection device 1 according to the first embodiment of the present technology is mainly composed of a semiconductor chip 2 having a rectangular two-dimensional planar shape when viewed in plan. That is, the photodetection device 1 is mounted on the semiconductor chip 2. As shown in FIG. 11, the photodetection device 1 takes in image light (incident light 106) from a subject via an optical system (optical lens) 102, converts the amount of incident light 106 imaged on the imaging surface into an electrical signal on a pixel-by-pixel basis, and outputs it as a pixel signal.
As shown in FIG. 1, the semiconductor chip 2 on which the photodetection device 1 is mounted has, in a two-dimensional plane including mutually intersecting X and Y directions, a rectangular pixel region 2A provided in the center and a peripheral region 2B provided outside the pixel region 2A so as to surround it.
The pixel region 2A is a light receiving surface that receives light collected by, for example, the optical system 102 shown in FIG. 11. In the pixel region 2A, a plurality of pixels 3 are arranged in a matrix on a two-dimensional plane including the X direction and the Y direction. In other words, the pixels 3 are repeatedly arranged in each of the X direction and the Y direction that intersect with each other on the two-dimensional plane. In this embodiment, the X direction and the Y direction are orthogonal to each other, as an example. The direction orthogonal to both the X direction and the Y direction is the Z direction (thickness direction, stacking direction). The direction perpendicular to the Z direction is the horizontal direction.
As shown in FIG. 1, a plurality of bonding pads 14 are arranged in the peripheral region 2B. The bonding pads 14 are arranged, for example, along each of the four sides of the semiconductor chip 2 in the two-dimensional plane. Each of the bonding pads 14 is an input/output terminal used when electrically connecting the semiconductor chip 2 to an external device.
<Logic circuit>
As shown in FIG. 2, the semiconductor chip 2 includes a logic circuit 13. The logic circuit 13 includes a vertical drive circuit 4, a column signal processing circuit 5, a horizontal drive circuit 6, an output circuit 7, and a control circuit 8. The logic circuit 13 is configured as a CMOS (Complementary MOS) circuit having, as field-effect transistors, for example, an n-channel MOSFET (Metal Oxide Semiconductor Field Effect Transistor) and a p-channel MOSFET.
The vertical drive circuit 4 is composed of, for example, a shift register. The vertical drive circuit 4 sequentially selects desired pixel drive lines 10, supplies the selected pixel drive lines 10 with pulses for driving the pixels 3, and drives the pixels 3 row by row. That is, the vertical drive circuit 4 sequentially selects and scans the pixels 3 in the pixel region 2A in the vertical direction row by row, and supplies the column signal processing circuits 5, through the vertical signal lines 11, with pixel signals based on the signal charges generated by the photoelectric conversion element of each pixel 3 according to the amount of light received.
The column signal processing circuit 5 is arranged, for example, for each column of the pixels 3, and performs signal processing such as noise removal, for each pixel column, on the signals output from the pixels 3 of one row. For example, the column signal processing circuit 5 performs signal processing such as CDS (Correlated Double Sampling) for removing pixel-specific fixed-pattern noise, and AD (Analog-Digital) conversion. A horizontal selection switch (not shown) is provided at the output stage of the column signal processing circuit 5 and is connected between it and the horizontal signal line 12.
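The CDS operation mentioned above can be illustrated with a short numerical sketch. This is not part of the specification: the voltage values and the `cds` helper below are hypothetical, and the sketch only shows why subtracting the two samples cancels a pixel-specific fixed offset.

```python
# Minimal sketch of CDS (correlated double sampling): the column circuit
# samples each pixel output twice -- once at the reset level and once at
# the signal level -- and takes the difference, so that any fixed offset
# unique to the pixel cancels out. All values are illustrative.

def cds(reset_level, signal_level):
    """Return the offset-free signal as the difference of the two samples."""
    return reset_level - signal_level

fixed_offset = 0.12                  # hypothetical pixel-specific offset (V)
reset_level = 1.80 + fixed_offset    # sample 1: after reset
signal_level = 1.35 + fixed_offset   # sample 2: after charge transfer

print(round(cds(reset_level, signal_level), 6))  # 0.45, regardless of the offset
```

Because the same offset appears in both samples, it vanishes in the difference, which is exactly how fixed-pattern noise is removed.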
The horizontal drive circuit 6 is composed of, for example, a shift register. The horizontal drive circuit 6 sequentially outputs horizontal scanning pulses to the column signal processing circuits 5, thereby selecting each of the column signal processing circuits 5 in turn and causing each of the column signal processing circuits 5 to output its processed pixel signal to the horizontal signal line 12.
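Taken together, the three circuits above implement a row-sequential scan: the vertical drive circuit selects one row at a time, the column signal processing circuits work on that row in parallel, and the horizontal drive circuit shifts the processed values out column by column. The following sketch mimics that flow in software; the array, the stand-in "processing" function, and the name `scan_frame` are hypothetical illustrations, not anything defined in the specification.

```python
# Row-sequential readout sketch: rows are selected in turn (vertical
# drive circuit), each column value of the selected row is processed
# (column signal processing circuits), then the processed values are
# shifted out one column at a time (horizontal drive circuit).

def scan_frame(pixel_array, process):
    out = []
    for row in pixel_array:                   # vertical drive: row selection
        processed = [process(v) for v in row]  # per-column processing
        out.extend(processed)                  # horizontal drive: serial output
    return out

frame = [[10, 11, 12],
         [20, 21, 22]]
print(scan_frame(frame, lambda v: v - 5))  # [5, 6, 7, 15, 16, 17]
```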
The output circuit 7 performs signal processing on the pixel signals sequentially supplied from each of the column signal processing circuits 5 through the horizontal signal line 12, and outputs the result. The signal processing may include, for example, buffering, black level adjustment, column variation correction, and various kinds of digital signal processing.
The control circuit 8 generates, based on a vertical synchronization signal, a horizontal synchronization signal, and a master clock signal, clock signals and control signals that serve as the reference for the operation of the vertical drive circuit 4, the column signal processing circuit 5, the horizontal drive circuit 6, and the like. The control circuit 8 then outputs the generated clock signals and control signals to the vertical drive circuit 4, the column signal processing circuit 5, the horizontal drive circuit 6, and the like.
<Pixel>
FIG. 3 is an equivalent circuit diagram showing a configuration example of the pixel 3. The pixel 3 includes a photoelectric conversion element PD, a charge accumulation region (floating diffusion) FD that accumulates (holds) the signal charge photoelectrically converted by the photoelectric conversion element PD, and a transfer transistor TR that transfers the signal charge photoelectrically converted by the photoelectric conversion element PD to the charge accumulation region FD. The pixel 3 also includes a readout circuit 15 electrically connected to the charge accumulation region FD.
The photoelectric conversion element PD generates a signal charge corresponding to the amount of light received. The photoelectric conversion element PD also temporarily accumulates (holds) the generated signal charge. The cathode side of the photoelectric conversion element PD is electrically connected to the source region of the transfer transistor TR, and the anode side is electrically connected to a reference potential line (for example, ground). A photodiode, for example, is used as the photoelectric conversion element PD.
The drain region of the transfer transistor TR is electrically connected to the charge accumulation region FD. The gate electrode of the transfer transistor TR is electrically connected to the transfer transistor drive line of the pixel drive lines 10 (see FIG. 2).
The charge accumulation region FD temporarily accumulates and holds the signal charge transferred from the photoelectric conversion element PD via the transfer transistor TR.
The readout circuit 15 reads out the signal charge accumulated in the charge accumulation region FD and outputs a pixel signal based on the signal charge. The readout circuit 15 includes, but is not limited to, as pixel transistors, for example, an amplification transistor AMP, a selection transistor SEL, and a reset transistor RST. These transistors (AMP, SEL, RST) are composed of MOSFETs each having, for example, a gate insulating film made of a silicon oxide film (SiO2 film), a gate electrode, and a pair of main electrode regions functioning as a source region and a drain region. These transistors may also be MISFETs (Metal Insulator Semiconductor FETs) whose gate insulating film is a silicon nitride film (Si3N4 film), or a laminated film of a silicon nitride film and a silicon oxide film, or the like.
The source region of the amplification transistor AMP is electrically connected to the drain region of the selection transistor SEL, and its drain region is electrically connected to the power supply line Vdd and the drain region of the reset transistor RST. The gate electrode of the amplification transistor AMP is electrically connected to the charge accumulation region FD and the source region of the reset transistor RST.
The source region of the selection transistor SEL is electrically connected to the vertical signal line 11 (VSL), and its drain region is electrically connected to the source region of the amplification transistor AMP. The gate electrode of the selection transistor SEL is electrically connected to the selection transistor drive line of the pixel drive lines 10 (see FIG. 2).
The source region of the reset transistor RST is electrically connected to the charge accumulation region FD and the gate electrode of the amplification transistor AMP, and its drain region is electrically connected to the power supply line Vdd and the drain region of the amplification transistor AMP. The gate electrode of the reset transistor RST is electrically connected to the reset transistor drive line of the pixel drive lines 10 (see FIG. 2).
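A behavioral sketch of one read cycle using the transistors just described (RST, then TR, then readout through AMP/SEL) may make the sequence easier to follow. The supply voltage, the conversion gain, and the function name `read_pixel` are hypothetical placeholders rather than values from the specification, and the real analog behavior is of course far more involved.

```python
# One pixel-read cycle, behaviorally: RST resets the charge accumulation
# region FD, TR transfers the photogenerated charge from PD to FD, and
# the source follower formed by AMP (enabled by SEL) drives the vertical
# signal line, whose two levels are then differenced (CDS).

VDD = 2.8               # supply voltage (V), illustrative
CONVERSION_GAIN = 1e-4  # V per electron at the FD node, illustrative

def read_pixel(photo_electrons):
    fd = VDD                                 # RST on: FD reset to VDD
    reset_sample = fd                        # sample the reset level
    fd -= photo_electrons * CONVERSION_GAIN  # TR on: charge moves PD -> FD
    signal_sample = fd                       # sample the signal level
    return reset_sample - signal_sample      # SEL/AMP output, CDS difference

print(read_pixel(5000))  # about 0.5 V worth of signal
```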
≪Specific configuration of the photodetection device≫
Next, a specific configuration of the photodetection device 1 will be described with reference to FIGS. 4A and 4B. Note that the barrier metal layer is omitted in some drawings.
<Layered structure of the photodetection device>
As shown in FIG. 4A, the photodetection device 1 (semiconductor chip 2) has a layered structure in which, for example, a light-incident-surface-side laminate 80, a first semiconductor layer 20, a first wiring layer 30, a second semiconductor layer 40, a second wiring layer 50, and a third semiconductor layer 60 are stacked in this order. The photodetection device 1 (semiconductor chip 2) also has a through conductor 70 extending along the thickness direction of the photodetection device 1. The first semiconductor layer 20 will be described first.
<First semiconductor layer>
The first semiconductor layer 20 is composed of a semiconductor substrate. The first semiconductor layer 20 is composed of, but is not limited to, for example, a single-crystal silicon substrate; one surface is a first surface S1 and the other surface is a second surface S2. The second surface S2 may be called the light incident surface or the back surface, and the first surface S1 may be called the element formation surface or the main surface. In the portion of the first semiconductor layer 20 corresponding to the pixel region 2A, a plurality of photoelectric conversion regions 20a arranged in the row and column directions are provided. A photoelectric conversion region 20a is provided for each pixel 3. For example, as shown in FIG. 4A, in the portion of the first semiconductor layer 20 corresponding to the pixel region 2A, an island-shaped photoelectric conversion region 20a partitioned by an isolation region 20b is provided for each pixel 3. The isolation region 20b is, but is not limited to, for example, a trench structure in which a groove is formed in the first semiconductor layer 20 along the thickness direction and an insulating film is embedded in the formed groove. A trench structure in which both a conductor, such as a metal material or polysilicon, and an insulating material are embedded in the groove may also be used. Each photoelectric conversion region 20a includes a semiconductor region of a first conductivity type (for example, p-type) and a semiconductor region of a second conductivity type (for example, n-type). The photoelectric conversion element PD shown in FIG. 3 is formed in the photoelectric conversion region 20a. At least a part of the photoelectric conversion region 20a photoelectrically converts incident light to generate a signal charge. A transistor T1, for example, is also provided in the photoelectric conversion region 20a. The transistor T1 is, for example, the transfer transistor TR shown in FIG. 3. Although not shown, the photoelectric conversion region 20a is also provided with the charge accumulation region FD shown in FIG. 3. The number of pixels 3 is not limited to that shown in FIG. 4A.
<First wiring layer>
The first wiring layer 30 is a wiring layer provided between the first semiconductor layer 20 and the second semiconductor layer 40; one surface is in contact with the first semiconductor layer 20 and the other surface is in contact with the second semiconductor layer 40. The first wiring layer 30 includes an insulating film 31. The insulating film 31 is composed of a known insulating material and includes, but is not limited to, for example, a silicon oxide layer. In the insulating film 31 in the portion of the first wiring layer 30 corresponding to the pixel region 2A, for example, the gate electrode G of the transistor T1 is provided.
<Second semiconductor layer>
The second semiconductor layer 40 is composed of a semiconductor substrate. The second semiconductor layer 40 is composed of, but is not limited to, for example, a single-crystal silicon substrate; one surface is a third surface S3 and the other surface is a fourth surface S4. In this embodiment, the description assumes that the third surface S3 is the surface on the second wiring layer 50 side and the fourth surface S4 is the surface on the first wiring layer 30 side. A transistor T2 is provided in the portion of the second semiconductor layer 40 corresponding to the pixel region 2A. The transistor T2 is, for example, a transistor of the readout circuit 15 shown in FIG. 3. The second semiconductor layer 40 may be provided with the transistors of a readout circuit 15 for each pixel 3, or with the transistors of a readout circuit 15 shared by a plurality of pixels 3. In this embodiment, the transistor T2 is provided at a position closer to the third surface S3 in the thickness direction of the second semiconductor layer 40.
<Second wiring layer>
The second wiring layer 50 is a wiring layer provided between the second semiconductor layer 40 and the third semiconductor layer 60; one surface is in contact with the second semiconductor layer 40 and the other surface is in contact with the third semiconductor layer 60. The second wiring layer 50 includes an insulating film 51, wiring 52, connection pads 53A and 53B, vias (contacts) 54 (FIG. 5D), and the like. The wiring 52 and the connection pads 53A and 53B are stacked with the insulating film 51 interposed therebetween, as shown in FIG. 4A.
The second wiring layer 50 has a second wiring layer 50A and a second wiring layer 50B along the thickness direction. More specifically, the second wiring layer 50 is obtained by bonding the second wiring layer 50A stacked on the second semiconductor layer 40 and the second wiring layer 50B stacked on the third semiconductor layer 60. In this bonding, a connection pad 53A provided in the second wiring layer 50A and facing the surface of the second wiring layer 50A on the second wiring layer 50B side is bonded to a connection pad 53B provided in the second wiring layer 50B and facing the surface of the second wiring layer 50B on the second wiring layer 50A side. By bonding the connection pad 53A and the connection pad 53B, the wiring 52 provided in the second wiring layer 50A and the wiring 52 provided in the second wiring layer 50B are electrically connected. In the insulating film 51 in the portion of the second wiring layer 50A corresponding to the pixel region 2A, for example, the gate electrode G of the transistor T2 is provided. In the insulating film 51 in the portion of the second wiring layer 50B corresponding to the pixel region 2A, for example, the gate electrode G of a transistor T3 described later is provided.
The insulating film 51 is composed of a known insulating material and includes, but is not limited to, for example, a silicon oxide layer. The wiring 52 and the connection pads 53A and 53B are made of a metal material. Materials constituting the wiring 52 and the connection pads 53A and 53B include, but are not limited to, for example, copper (Cu) and aluminum (Al).
<Third semiconductor layer>
The third semiconductor layer 60 is composed of a semiconductor substrate. The third semiconductor layer 60 is composed of, but is not limited to, for example, a single-crystal silicon substrate, and its surface on the second wiring layer 50 side is a fifth surface S5. The fifth surface S5 may also be called the element formation surface or the main surface. A transistor T3 is provided in the portion of the third semiconductor layer 60 corresponding to the pixel region 2A. The transistor T3 is, for example, a transistor of the logic circuit 13.
<Light-incident-surface-side laminate>
The light-incident-surface-side laminate 80 has a layered structure in which, from the second surface S2 side, for example, a planarization film 81, a color filter 82, and a microlens (on-chip lens) 83 are stacked in this order, although this is not a limitation. The planarization film 81 is composed of a known insulating material or a known resin material and may be composed of, but is not limited to, for example, silicon oxide.
The color filter 82 is provided for each pixel 3 and color-separates the light incident on the photoelectric conversion region 20a. The color filter 82 is composed of, for example, a resin material. The microlens 83 is provided for each pixel 3 and is composed of, for example, a resin material.
<Through conductor>
Of the pixel region 2A and the peripheral region 2B shown in FIG. 1, the through conductor 70 is provided in the pixel region 2A. More specifically, the through conductor 70 is provided, for example, for each pixel 3. As shown in FIGS. 4A and 4B, the through conductor 70 is a vertical wiring extending along the thickness direction of the photodetection device 1, and penetrates the second semiconductor layer 40 along the thickness direction. The through conductor 70 has a first portion 71 and a second portion 72 that overlap each other in a plan view. That is, the through conductor 70 has a multi-stage configuration (a two-stage configuration in this embodiment) consisting of the first portion 71 and the second portion 72. When a barrier metal BM is provided as shown in FIG. 4B, the through conductor 70 also includes the barrier metal BM. The first portion 71 penetrates the second semiconductor layer 40 along the thickness direction, and a first end 71a, which is one end in the extending direction, protrudes into the first wiring layer 30. The second portion 72 is provided in the first wiring layer 30, and a second end 72a, which is one end in the extending direction, is directly connected to the first end 71a. In this embodiment, since the second semiconductor layer 40 is made of silicon, the through conductor 70 is a through-silicon via (TSV). The through conductor 70 and the first semiconductor layer 20 are insulated from each other with a known insulating material.
The third end 71b, which is the other end of the first portion 71 in the extending direction, protrudes into the second wiring layer 50. The fourth end 72b, which is the other end of the second portion 72 in the extending direction, is in the vicinity of the surface (the first surface S1) of the first semiconductor layer 20 on the first wiring layer 30 side. The fourth end 72b being in the vicinity of the first surface S1 includes the fourth end 72b being connected to the first surface S1, being buried in the first semiconductor layer 20 from the first surface S1, being connected to an electrode member provided on the first surface S1, and the like. The fourth end 72b may be connected to a diffusion region, such as that of the transistor T1 or the charge accumulation region FD. The electrode member is, for example, the gate electrode G of a transistor, an electrode E described later that is electrically connected to a plurality of charge accumulation regions FD, or the like. In the example shown in FIG. 4A, the third end 71b is connected to the wiring 52, which is a horizontal wiring extending along the horizontal direction, and the fourth end 72b is buried in the first semiconductor layer 20 from the first surface S1.
The first end 71a and the second end 72a are joined within the first wiring layer 30. The joining position between the first end 71a and the second end 72a is located at a position 5 nm or more and 100 nm or less, in the thickness direction, from the surface (the fourth surface S4) of the second semiconductor layer 40 on the first wiring layer 30 side. This distance along the thickness direction between the joining position and the fourth surface S4 is referred to as a distance d1.
The first portion 71 has a tapered shape whose width decreases along the thickness direction. More specifically, the first portion 71 has a tapered shape whose width decreases toward the first semiconductor layer 20. The dimension w71b of the third end 71b along the width direction is larger than the dimension w71a of the first end 71a along the width direction. The second portion 72 likewise has a tapered shape whose width decreases along the thickness direction. More specifically, the second portion 72 has a tapered shape whose width decreases toward the first semiconductor layer 20. The dimension w72a of the second end 72a along the width direction is larger than the dimension w72b of the fourth end 72b along the width direction. In addition, the difference between the dimensions, along the width direction, of the first end 71a and the second end 72a that are directly connected to each other is set to 20 nm or more in order to secure an overlay margin. This difference in dimension may be, for example, the difference between the diameter of the end face of the first end 71a and the diameter of the end face of the second end 72a. In this embodiment, the dimension w72a of the second end 72a along the width direction is larger than the dimension w71a of the first end 71a along the width direction by 20 nm or more (w72a - w71a ≥ 20 nm).
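The dimensional relations stated above (the 5 nm to 100 nm joining depth d1, the tapers of both portions, and the overlay margin of 20 nm or more) can be summarized as simple checks. The helper names and the sample dimensions below are hypothetical; only the inequalities themselves come from the text.

```python
# Checks on the through-conductor geometry described above (all in nm):
# the joint lies 5-100 nm below the fourth surface S4, each portion
# narrows toward the first semiconductor layer 20, and the directly
# joined ends 71a/72a keep an overlay margin of at least 20 nm.

def joining_depth_ok(d1):
    """Distance d1 from surface S4 to the 71a/72a joint, in nm."""
    return 5 <= d1 <= 100

def geometry_ok(w71b, w71a, w72a, w72b):
    return (w71b > w71a            # first portion 71 tapers toward layer 20
            and w72a > w72b        # second portion 72 tapers toward layer 20
            and w72a - w71a >= 20) # overlay margin at the joint

print(joining_depth_ok(30))           # True
print(geometry_ok(120, 80, 110, 70))  # True: the margin here is 30 nm
```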
The dimension h71 of the first portion 71 along the extending direction from the third end 71b to the first end 71a and the dimension h72 of the second portion 72 along the extending direction from the second end 72a to the fourth end 72b may be the same or different. Since the first portion 71 has the role of penetrating the second semiconductor layer 40, its dimension h71 along the extending direction is larger than the thickness of the second semiconductor layer 40. The thicker the second semiconductor layer 40 is, the larger the dimension h71 needs to be. In this embodiment, because there is a demand to make the dimension h71 large, the dimension h71 is set larger than the dimension h72.
The first aspect ratio, which is the aspect ratio of the first portion 71, can be obtained by dividing the dimension h71 of the first portion 71 along the extending direction by a dimension of the first portion 71 along the width direction. More specifically, the first aspect ratio can be obtained by dividing the dimension h71 by the dimension w71b of the third end 71b along the width direction. The third end 71b is the end of the first portion 71 closest to the third semiconductor layer 60, and is the end on the side from which a hole 71h, described later, in which the first portion 71 is embedded, starts to be formed. The third end 71b also has the largest of the dimensions of the first portion 71 along the width direction.
Similarly, the second aspect ratio, which is the aspect ratio of the second portion 72, can be obtained by dividing the dimension h72 of the second portion 72 along the extending direction by a dimension of the second portion 72 along the width direction. More specifically, the second aspect ratio can be obtained by dividing the dimension h72 by the dimension w72a of the second end 72a along the width direction, which is the largest of the dimensions of the second portion 72 along the width direction.
The third aspect ratio, which is the aspect ratio of the through conductor 70, can be obtained by dividing the dimension h70 of the through conductor 70 along the extending direction by a dimension of the first portion 71 along the width direction. More specifically, the third aspect ratio can be obtained by dividing the dimension h70 by the dimension w71b of the third end 71b along the width direction. That is, the third aspect ratio is obtained using the dimension w71b that was used in obtaining the first aspect ratio described above. The reason the width-direction dimension of the first portion 71, of the first portion 71 and the second portion 72, is used in obtaining the third aspect ratio is that, since the first portion 71 is the portion that penetrates the second semiconductor layer 40, there is, for example, a strong demand for controlling its dimension in the width direction. The dimension h70 is the dimension along the extending direction from the third end 71b to the fourth end 72b.
The third aspect ratio of the through conductor 70 may be, for example, 10 or more, 15 or more, 20 or more, 25 or more, or 30 or more. The third aspect ratio may also be 15 or less, 20 or less, 25 or less, 30 or less, or 35 or less. Both the first aspect ratio of the first portion 71 and the second aspect ratio of the second portion 72 are made different from the aspect ratio of the through conductor 70. More specifically, both the first aspect ratio and the second aspect ratio are set lower than the aspect ratio of the through conductor 70. In general, lowering the aspect ratio of a hole improves the coverage of the barrier metal BM on the bottom surface of the hole. The first portion 71 and the second portion 72 having such aspect ratios realize a through conductor 70 having a high aspect ratio.
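The three aspect-ratio definitions above reduce to simple divisions, which the following sketch works through for hypothetical dimensions (the text gives the formulas but not these numbers, and the overall length h70 is approximated here as h71 + h72). It also checks the stated relation that both the first and second aspect ratios are lower than the third.

```python
# Aspect ratios of the two-stage through conductor (dimensions in nm,
# hypothetical). First: h71 / w71b; second: h72 / w72a; third: h70 / w71b,
# where h70 is the overall length from third end 71b to fourth end 72b.

h71, w71b = 2000, 100   # first portion 71: length, widest width (third end 71b)
h72, w72a = 1000, 125   # second portion 72: length, widest width (second end 72a)
h70 = h71 + h72         # overall length (approximation for this sketch)

first_ar = h71 / w71b    # aspect ratio of the first portion
second_ar = h72 / w72a   # aspect ratio of the second portion
third_ar = h70 / w71b    # aspect ratio of the whole through conductor

# Each portion individually stays easier to fill than one deep hole:
assert first_ar < third_ar and second_ar < third_ar
print(first_ar, second_ar, third_ar)  # 20.0 8.0 30.0
```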
 Each of the first aspect ratio and the second aspect ratio may be, for example, 25 or less, or, for example, 16 or less. Each of them may also be, for example, 10 or more.
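As a purely illustrative aid (not part of the disclosed embodiment), the relationship between the three aspect ratios defined above can be checked with a short numerical sketch. All dimension values below are hypothetical assumptions, chosen only to fall within the ranges stated in this description.

```python
# Hypothetical dimensions in nanometres; these values are illustrative
# assumptions, not dimensions taken from the embodiment.
h71 = 3000.0   # extension-direction dimension of the first portion 71
h72 = 1500.0   # extension-direction dimension of the second portion 72
w71b = 150.0   # width-direction dimension of the third end 71b
w72b = 150.0   # assumed width-direction dimension of the second portion 72

h70 = h71 + h72  # the through conductor 70 spans both portions

first_aspect_ratio = h71 / w71b    # 20.0
second_aspect_ratio = h72 / w72b   # 10.0
third_aspect_ratio = h70 / w71b    # 30.0 (also divided by w71b, as above)

# Both individual aspect ratios stay below the overall (third) aspect
# ratio, which is what lets a high-aspect-ratio through conductor be
# built from two holes that each allow good barrier-metal coverage.
assert first_aspect_ratio < third_aspect_ratio
assert second_aspect_ratio < third_aspect_ratio
assert 10 <= first_aspect_ratio <= 25 and 10 <= second_aspect_ratio <= 25
```

With these assumed values the third aspect ratio is 30, while each portion is formed from a hole of aspect ratio only 20 or 10, matching the ranges given in the text.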
 The first aspect ratio and the second aspect ratio may be the same or different. When they are made different, it is desirable to set the second aspect ratio lower than the first aspect ratio. This is because the second portion 72 is not the portion that penetrates the second semiconductor layer 40, so the demand for reducing its width-direction dimension is not as high as for the first portion 71, and because there is a demand for reliability of the connection between the second portion 72 and its connection targets, such as the first semiconductor layer 20 and electrodes.
 The first portion 71 and the second portion 72 are preferably made of a conductive material with good embedding characteristics. Examples of such conductive materials include tungsten (W), copper (Cu), aluminum (Al), aluminum-copper alloy, cobalt (Co), gold (Au), nickel (Ni), titanium (Ti), titanium nitride, molybdenum (Mo), tantalum (Ta), tantalum nitride, platinum (Pt), and ruthenium (Ru). In this embodiment, an example in which the first portion 71 and the second portion 72 are formed using tungsten will be described. The outsides of the first portion 71 and the second portion 72 are covered with a barrier metal BM made of a known material such as titanium.
<Method for manufacturing photodetector>
 Hereinafter, a method for manufacturing the photodetector 1 will be described with reference to FIGS. 5A to 5E. Note that in FIGS. 5A to 5E, illustration of the gate insulating films of the transistors and the like is omitted. First, as shown in FIG. 5A, a semiconductor substrate is prepared that includes a first semiconductor layer 20 in which elements such as a transistor T1, a photoelectric conversion element PD (not shown), and diffusion regions such as a charge storage region FD are formed. Electrode members such as a gate electrode G of the transistor T1 and an electrode E are provided on the first surface S1 side of the first semiconductor layer 20. Then, an insulating film m1 is laminated on the exposed surface on the first surface S1 side. Next, holes 72h for embedding the second portions 72 are formed in the insulating film m1, using known photolithography and etching techniques, at positions overlapping the electrode members in a plan view and at positions overlapping the portions of the first semiconductor layer 20 to which the second portions 72 are to be connected. Then, the material constituting the second portion 72 is embedded in the holes 72h using a known film formation technique. Thereafter, polishing by a chemical mechanical polishing (CMP) method is performed to remove excess portions of the material constituting the second portion 72 and to flatten the exposed surface, thereby forming the second portion 72. Note that the insulating film m1 may be a known insulating film and may include, for example but without limitation, a layer made of silicon oxide or a layer made of silicon nitride.
 Next, as shown in FIG. 5B, a planarizing film m2 and an insulating film m3 are laminated in that order on the exposed surface of the planarized insulating film m1. The planarizing film m2 is laminated to planarize the exposed surface polished by the CMP method. The planarizing film m2 is a known insulating film, such as a silicon oxide film. The insulating film m3 is a layer laminated to bond wafers together, such as a silicon nitride film.
 Then, as shown in FIG. 5C, a semiconductor substrate 40w is prepared with an insulating film m4 laminated on the fourth surface S4 side. The insulating film m4 is a known insulating film, for example, a silicon oxide film. Then, the exposed surface of the insulating film m4 is bonded to the exposed surface of the insulating film m3, thereby bonding the semiconductor substrate 40w side to the first semiconductor layer 20 side. The interface between the insulating films m3 and m4 after bonding becomes the bonding surface S shown in FIG. 4B.
 Then, as shown in FIG. 5D, the exposed surface of the semiconductor substrate 40w is ground, leaving the portion that will become the second semiconductor layer 40. Next, using known photolithography and etching techniques, a hole 40h is formed in the second semiconductor layer 40, and the inside of the hole 40h is filled with an insulating film m5. The hole 40h is formed at a position overlapping the second portion 72 in a plan view. The etching of the hole 40h is performed until it penetrates the second semiconductor layer 40 and the second end 72a of the second portion 72 is exposed. The insulating film m5 is a known insulating material, for example but without limitation, a silicon oxide film. Thereafter, elements such as a transistor T2 and diffusion regions are formed on the third surface S3 side of the second semiconductor layer 40, and an insulating film m6 is laminated so as to cover the exposed surface on the third surface S3 side. The insulating film m6 is a known insulating material and includes, for example but without limitation, a silicon oxide film or a silicon nitride film. Note that the order described in this embodiment for forming the hole 40h, embedding the insulating film m5, and forming the elements and the like is merely an example, and they may be formed in a different order.
 Next, as shown in FIG. 5E, a hole 71h and a hole 54h are formed using known lithography and etching techniques. The hole 71h is provided at a position overlapping the second portion 72 in a plan view and is formed by etching the insulating films m5, m6, and the like. The hole 71h is provided to a depth reaching the second end 72a of the second portion 72. The hole 54h is provided at a position overlapping the gate electrode G, a diffusion region, or the like of the transistor T2 in a plan view and is formed by etching the insulating film m6 and the like. The hole 54h is provided to a depth reaching the gate electrode G, the diffusion region, or the like of the transistor T2. Then, using a known film formation technique, the materials constituting the first portion 71 and the via 54 are embedded in the holes 71h and 54h. Thereafter, polishing by a chemical mechanical polishing method is performed to remove excess portions of the materials constituting the first portion 71 and the via 54, and the exposed surface is flattened. The first portion 71 and the via 54 are thereby formed.
 After that, although illustrations and detailed explanations are omitted, the second wiring layer 50A is completed. Then, a substrate including a third semiconductor layer 60 on which the second wiring layer 50B is laminated is prepared, and the second wiring layer 50B is bonded to the second wiring layer 50A. After that, the light incident surface side laminate 80 is formed, and the photodetector 1 is almost completed. The photodetector 1 is then singulated to obtain the semiconductor chip 2.
<<Main Effects of the First Embodiment>>
 Below, the main effects of the first embodiment will be described, but before that, a photodetector having a through conductor 70A instead of the through conductor 70 will be described with reference to FIG. 6. The through conductor 70A does not have a multi-stage structure but consists of a single conductor 73. With the miniaturization of the pixel 3, there are demands on such a through conductor 70A: for example, to narrow its width-direction dimension without changing its extension-direction dimension in order to suppress the influence on transistor characteristics, or to narrow its width-direction dimension without greatly changing its extension-direction dimension from the viewpoint of layout freedom. However, as the pixel 3 is further miniaturized or the second semiconductor layer is made thicker, depending on how high the aspect ratio becomes, a countermeasure is required against the impact on, among other things, the electrical conduction between the through conductor 70A and its connection target. More specifically, a countermeasure is required against the impact on the electrical resistance between the through conductor 70A and its connection target.
 In contrast, the photodetector 1 according to the first embodiment of the present technology includes a laminated structure in which a first semiconductor layer 20 having a photoelectric conversion region 20a, a first wiring layer 30, a second semiconductor layer 40, a second wiring layer 50, and a third semiconductor layer 60 are laminated in this order, and includes a through conductor 70 extending along the thickness direction, and the through conductor 70 has a first portion 71 and a second portion 72 that overlap in a plan view, the first portion 71 penetrates the second semiconductor layer 40 along the thickness direction, and the first end 71a, which is one end in the extension direction, protrudes into the first wiring layer 30, and the second portion 72 is provided in the first wiring layer 30, and the second end 72a, which is one end in the extension direction, is directly connected to the first end 71a. Since the through conductor 70 includes a multi-stage structure of the first portion 71 and the second portion 72, the role of the through conductor 70 can be divided and shared between the first portion 71 and the second portion 72.
More specifically, the first portion 71 can be made to play a role of penetrating the second semiconductor layer 40, and the second portion 72 can be made to play a role of connecting to a connection target. Therefore, the first portion 71 and the second portion 72 can be configured to be suitable for their respective roles. This can prevent the width dimension of the first portion 71 from increasing, and can prevent the degree of freedom of layout of wiring and elements including the through conductor 70 from decreasing. Since the width dimension of the first portion 71 can be prevented from increasing, the parasitic capacitance between the through conductor 70 and other wiring, etc. can be prevented from increasing, and the wiring delay can be prevented from increasing. Since the width dimension of the first portion 71 can be prevented from increasing, the distance between the through conductor 70 and the transistor can be prevented from becoming too small, and the influence of the bias of the through conductor 70 can be prevented from increasing, and the characteristic fluctuation of the transistor can be prevented from increasing. Furthermore, since the connection between the second portion 72 and its connection target can be prevented from deteriorating, the electrical resistance between the second portion 72 and its connection target can be prevented from increasing.
 In addition, since the through conductor 70 is formed in two stages, the first portion 71 and the second portion 72, a decrease in yield and reliability can be suppressed. More specifically, since the etching that forms the holes for embedding the through conductor 70 is performed separately for the first portion 71 and the second portion 72, the extension-direction dimension of each hole is less likely to be insufficient. This suppresses a decrease in the yield and reliability of the through conductor 70. In the step of embedding the barrier metal BM in the formed holes, the barrier metal BM is likewise deposited separately for the first portion 71 and the second portion 72, so deterioration of the coverage of the barrier metal BM on the bottom of each hole can be suppressed. This suppresses an increase in the electrical resistance between the through conductor 70 and its connection target and a resulting decrease in reliability. Furthermore, in the step of embedding the material constituting the through conductor 70 in the formed holes, the material is embedded separately for the first portion 71 and the second portion 72, so voids are less likely to occur in the through conductor 70. This also suppresses a decrease in reliability.
 Furthermore, in the photodetector 1 according to the first embodiment of the present technology, let the first aspect ratio of the first portion 71 be the value obtained by dividing its extension-direction dimension by its width-direction dimension; let the second aspect ratio of the second portion 72 be the value obtained by dividing its extension-direction dimension by its width-direction dimension; and let the third aspect ratio of the through conductor 70 be the value obtained by dividing its extension-direction dimension by the width-direction dimension of the first portion 71. Then both the first aspect ratio and the second aspect ratio are lower than the third aspect ratio. Therefore, the third aspect ratio of the through conductor 70 can be increased while suppressing a decrease in the yield and reliability of the through conductor 70 and suppressing an increase in the electrical resistance between the second portion 72 and its connection target.
 Furthermore, in the photodetector 1 according to the first embodiment of the present technology, the second aspect ratio is configured to be lower than the first aspect ratio. Therefore, the barrier metal BM is less likely to provide insufficient coverage of the bottom surface of the hole in which the second portion 72 is provided. This makes it possible to prevent the electrical resistance between the second portion 72 and its connection target from increasing.
 Furthermore, in the photodetector 1 according to the first embodiment of the present technology, the third aspect ratio is 10 or more, 15 or more, 20 or more, 25 or more, or 30 or more. By configuring the through conductor 70 having such an aspect ratio with a first portion 71 and a second portion 72 having a lower aspect ratio, it is possible to prevent the width dimension of the first portion 71 from increasing, and to prevent a decrease in the degree of freedom in the layout of the wiring and elements including the through conductor 70. It is also possible to prevent an increase in the electrical resistance between the second portion 72 and its connection target, and to prevent a decrease in the yield and reliability of the through conductor 70.
 Furthermore, in the photodetector 1 according to the first embodiment of the present technology, each of the first aspect ratio and the second aspect ratio is, for example, 25 or less or 16 or less, and is 10 or more. By setting the first aspect ratio and the second aspect ratio to such values, the through conductor 70 can be obtained even when the thicknesses of the second semiconductor layer 40 and the first wiring layer 30 are large.
 Furthermore, in the photodetector 1 according to the first embodiment of the present technology, the difference in dimension along the width direction between the first end 71a and the second end 72a is 20 nm or more. Therefore, it is possible to prevent a reduction in the overlap margin between the first end 71a and the second end 72a, and to prevent a deterioration in the electrical connectivity between the first portion 71 and the second portion 72.
 In addition, in the photodetector 1 according to the first embodiment of the present technology, the junction position between the first end 71a and the second end 72a is located 5 nm to 100 nm in the thickness direction from the surface (fourth surface S4) of the second semiconductor layer 40 on the first wiring layer 30 side. This allows the second end 72a to be provided at a position different from the fourth surface S4 in the thickness direction, ensuring insulation between the second end 72a and the second semiconductor layer 40. In addition, the planarizing film m2 and the insulating film m3, which functions as a bonding film to the second semiconductor layer 40 side, are laminated over the second end 72a, so deterioration of the bonding between the first wiring layer 30 side and the second semiconductor layer 40 side can be suppressed.
 In FIG. 4B, the distance in the thickness direction between the joining position of the first end 71a and the second end 72a and the fourth surface S4 is set to distance d1, but the distance in the thickness direction between the joining position and the bonding surface S may also be set to distance d1. Distance d1 is 5 nm or more and 100 nm or less.
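The dimensional conditions described above (a width difference of at least 20 nm between the first end 71a and the second end 72a, and a junction position located 5 nm to 100 nm from the fourth surface S4 or the bonding surface S) can be summarized as a simple validity check. The following is an illustrative sketch only; the function name and argument names are assumptions introduced here, not terms from the embodiment.

```python
def junction_is_valid(w71a_nm: float, w72a_nm: float, d1_nm: float) -> bool:
    """Illustrative check of the junction between the first portion 71
    and the second portion 72 (hypothetical helper, not from the patent).

    w71a_nm, w72a_nm: width-direction dimensions of the first end 71a and
                      the second end 72a (either one may be the wider end).
    d1_nm:            thickness-direction distance d1 from the fourth
                      surface S4 (or the bonding surface S) to the junction.
    """
    margin_ok = abs(w71a_nm - w72a_nm) >= 20.0  # overlap margin of 20 nm or more
    depth_ok = 5.0 <= d1_nm <= 100.0            # 5 nm <= d1 <= 100 nm
    return margin_ok and depth_ok

# A 30 nm width difference and a 50 nm junction depth satisfy both conditions.
assert junction_is_valid(w71a_nm=150.0, w72a_nm=180.0, d1_nm=50.0)
# A 10 nm width difference leaves too small an overlap margin.
assert not junction_is_valid(w71a_nm=150.0, w72a_nm=160.0, d1_nm=50.0)
```

Because the absolute value of the width difference is used, the same check covers both the case where the second end 72a is wider and the case, described in Modification 1, where the first end 71a is wider.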
<Modification of the First Embodiment>
Modifications of the first embodiment will now be described.
<Modification 1>
In the photodetector 1 according to the first embodiment, the dimension w72a along the width direction of the second end 72a is set to be larger than the dimension w71a along the width direction of the first end 71a, but the present technology is not limited to this. In the photodetector 1 according to the first modification of the first embodiment, as shown in FIG. 7, the dimension w71a along the width direction of the first end 71a may be set to be larger than the dimension w72a along the width direction of the second end 72a. In addition, the difference in the dimension along the width direction between the first end 71a and the second end 72a directly connected to each other is set to 20 nm or more in order to ensure an overlap margin. More specifically, the dimension w71a along the width direction of the first end 71a is set to be 20 nm or more larger than the dimension w72a along the width direction of the second end 72a (w71a-w72a≧20 nm).
 Even with the light detection device 1 according to this modified example 1 of the first embodiment, the same effects as those of the light detection device 1 according to the first embodiment described above can be obtained.
<Modification 2>
In the photodetector 1 according to the first embodiment, the first portion 71 and the second portion 72 are provided in a tapered shape, but the present technology is not limited to this. In the photodetector 1 according to the second modification of the first embodiment, as shown in Fig. 8, the first portion 71 and the second portion 72 are provided with approximately the same widthwise dimension along the thickness direction. More specifically, the widthwise dimension of the first portion 71 is dimension w71a, and the widthwise dimension of the second portion 72 is dimension w72a.
 Even with the light detection device 1 according to the second modification of the first embodiment, the same effects as those of the light detection device 1 according to the first embodiment described above can be obtained.
<Modification 3>
This modified example 3 is a combination of the modified examples 1 and 2. In the light detection device 1 according to the modified example 3 of the first embodiment, as shown in Fig. 9, the dimension w71a along the width direction of the first end 71a is set to be larger than the dimension w72a along the width direction of the second end 72a. The first portion 71 and the second portion 72 are each set to have approximately the same width dimension along the thickness direction. More specifically, the width direction dimension of the first portion 71 is dimension w71a, and the width direction dimension of the second portion 72 is dimension w72a.
 Even with the light detection device 1 according to the third modification of the first embodiment, the same effects as those of the light detection device 1 according to the first embodiment described above can be obtained.
<Modification 4>
In the photodetector 1 according to the first embodiment, the fourth end 72b of the second portion 72 is located near the surface (first surface S1) of the first semiconductor layer 20 on the first wiring layer 30 side, but the present technology is not limited to this. In the photodetector 1 according to the fourth modification of the first embodiment, as shown in FIG. 10, the fourth end 72b, which is the other end in the extension direction of the second portion 72, is connected to the wiring 32, which is a horizontal wiring provided in the first wiring layer 30 and extends along the horizontal direction. The first wiring layer 30 has the wiring 32 and a via (not shown) which is a vertical wiring. The wiring 32 is stacked via an insulating film 31.
 Even with the light detection device 1 according to this modification 4 of the first embodiment, the same effects as those of the light detection device 1 according to the first embodiment described above can be obtained. Note that this modification 4 may be combined with any of modifications 1 to 3.
[Second embodiment]
<1. Application examples to electronic devices>
Next, an electronic device 100 shown in Fig. 11 will be described. The electronic device 100 includes a solid-state imaging device 101, an optical lens 102, a shutter device 103, a drive circuit 104, and a signal processing circuit 105. The electronic device 100 is, for example, an electronic device such as a camera, but is not limited thereto. The electronic device 100 also includes the above-mentioned photodetector 1 as the solid-state imaging device 101.
 The optical lens (optical system) 102 focuses image light (incident light 106) from the subject onto the imaging surface of the solid-state imaging device 101. This causes signal charges to accumulate in the solid-state imaging device 101 for a certain period of time. The shutter device 103 controls the light irradiation period and light blocking period for the solid-state imaging device 101. The drive circuit 104 supplies a drive signal that controls the transfer operation of the solid-state imaging device 101 and the shutter operation of the shutter device 103. The drive signal (timing signal) supplied from the drive circuit 104 transfers signals from the solid-state imaging device 101. The signal processing circuit 105 performs various signal processing on signals (pixel signals) output from the solid-state imaging device 101. The video signals that have undergone signal processing are stored in a storage medium such as a memory, or output to a monitor.
 With this configuration, in the electronic device 100, the through conductors 70 have a multi-stage configuration in the solid-state imaging device 101, so that the width dimension of the portion of the through conductors 70 that penetrates the second semiconductor layer 40 can be prevented from increasing, and the degree of freedom in the layout of the wiring and elements including the through conductors 70 can be prevented from decreasing. Also, the electrical resistance between the through conductors 70 and their connection targets can be prevented from increasing.
 The electronic device 100 is not limited to a camera, but may be other electronic devices. For example, it may be an imaging device such as a camera module for a mobile device such as a mobile phone.
 The electronic device 100 may also include, as the solid-state imaging device 101, a photodetector 1 according to any one of the first embodiment and its modifications, or a photodetector 1 according to a combination of at least two of the first embodiment and its modifications.
<2. Examples of applications to moving objects>
The technology according to the present disclosure (the present technology) can be applied to various products. For example, the technology according to the present disclosure may be realized as a device mounted on any type of moving body such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
 FIG. 12 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile object control system to which the technology disclosed herein can be applied.
 車両制御システム12000は、通信ネットワーク12001を介して接続された複数の電子制御ユニットを備える。図12に示した例では、車両制御システム12000は、駆動系制御ユニット12010、ボディ系制御ユニット12020、車外情報検出ユニット12030、車内情報検出ユニット12040、及び統合制御ユニット12050を備える。また、統合制御ユニット12050の機能構成として、マイクロコンピュータ12051、音声画像出力部12052、及び車載ネットワークI/F(interface)12053が図示されている。 The vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001. In the example shown in FIG. 12, the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, an outside vehicle information detection unit 12030, an inside vehicle information detection unit 12040, and an integrated control unit 12050. Also shown as functional components of the integrated control unit 12050 are a microcomputer 12051, an audio/video output unit 12052, and an in-vehicle network I/F (interface) 12053.
 駆動系制御ユニット12010は、各種プログラムにしたがって車両の駆動系に関連する装置の動作を制御する。例えば、駆動系制御ユニット12010は、内燃機関又は駆動用モータ等の車両の駆動力を発生させるための駆動力発生装置、駆動力を車輪に伝達するための駆動力伝達機構、車両の舵角を調節するステアリング機構、及び、車両の制動力を発生させる制動装置等の制御装置として機能する。 The drive system control unit 12010 controls the operation of devices related to the drive system of the vehicle according to various programs. For example, the drive system control unit 12010 functions as a control device for a drive force generating device for generating the drive force of the vehicle, such as an internal combustion engine or a drive motor, a drive force transmission mechanism for transmitting the drive force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, and a braking device for generating a braking force for the vehicle.
 ボディ系制御ユニット12020は、各種プログラムにしたがって車体に装備された各種装置の動作を制御する。例えば、ボディ系制御ユニット12020は、キーレスエントリシステム、スマートキーシステム、パワーウィンドウ装置、あるいは、ヘッドランプ、バックランプ、ブレーキランプ、ウィンカー又はフォグランプ等の各種ランプの制御装置として機能する。この場合、ボディ系制御ユニット12020には、鍵を代替する携帯機から発信される電波又は各種スイッチの信号が入力され得る。ボディ系制御ユニット12020は、これらの電波又は信号の入力を受け付け、車両のドアロック装置、パワーウィンドウ装置、ランプ等を制御する。 The body system control unit 12020 controls the operation of various devices installed in the vehicle body according to various programs. For example, the body system control unit 12020 functions as a control device for a keyless entry system, a smart key system, a power window device, or various lamps such as headlamps, tail lamps, brake lamps, turn signals, and fog lamps. In this case, radio waves or signals from various switches transmitted from a portable device that replaces a key can be input to the body system control unit 12020. The body system control unit 12020 accepts the input of these radio waves or signals and controls the vehicle's door lock device, power window device, lamps, etc.
 車外情報検出ユニット12030は、車両制御システム12000を搭載した車両の外部の情報を検出する。例えば、車外情報検出ユニット12030には、撮像部12031が接続される。車外情報検出ユニット12030は、撮像部12031に車外の画像を撮像させるとともに、撮像された画像を受信する。車外情報検出ユニット12030は、受信した画像に基づいて、人、車、障害物、標識又は路面上の文字等の物体検出処理又は距離検出処理を行ってもよい。 The outside-vehicle information detection unit 12030 detects information outside the vehicle equipped with the vehicle control system 12000. For example, the image capturing unit 12031 is connected to the outside-vehicle information detection unit 12030. The outside-vehicle information detection unit 12030 causes the image capturing unit 12031 to capture images outside the vehicle and receives the captured images. The outside-vehicle information detection unit 12030 may perform object detection processing or distance detection processing for people, cars, obstacles, signs, or characters on the road surface based on the received images.
 撮像部12031は、光を受光し、その光の受光量に応じた電気信号を出力する光センサである。撮像部12031は、電気信号を画像として出力することもできるし、測距の情報として出力することもできる。また、撮像部12031が受光する光は、可視光であっても良いし、赤外線等の非可視光であっても良い。 The imaging unit 12031 is an optical sensor that receives light and outputs an electrical signal according to the amount of light received. The imaging unit 12031 can output the electrical signal as an image, or as distance measurement information. The light received by the imaging unit 12031 may be visible light, or may be invisible light such as infrared light.
 車内情報検出ユニット12040は、車内の情報を検出する。車内情報検出ユニット12040には、例えば、運転者の状態を検出する運転者状態検出部12041が接続される。運転者状態検出部12041は、例えば運転者を撮像するカメラを含み、車内情報検出ユニット12040は、運転者状態検出部12041から入力される検出情報に基づいて、運転者の疲労度合い又は集中度合いを算出してもよいし、運転者が居眠りをしていないかを判別してもよい。 The in-vehicle information detection unit 12040 detects information inside the vehicle. To the in-vehicle information detection unit 12040, for example, a driver state detection unit 12041 that detects the state of the driver is connected. The driver state detection unit 12041 includes, for example, a camera that captures an image of the driver, and the in-vehicle information detection unit 12040 may calculate the driver's degree of fatigue or concentration based on the detection information input from the driver state detection unit 12041, or may determine whether the driver is dozing off.
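The dozing determination mentioned above can be illustrated with a simple PERCLOS-style measure: the fraction of recent driver-camera frames in which the eyes are detected as closed. The metric, the field names, and the threshold are illustrative assumptions, not the method of the disclosure.

```python
def is_dozing(eye_closed_flags, threshold=0.7):
    """eye_closed_flags: per-frame booleans from the driver-monitoring camera.

    Returns True when the closed-eye ratio over the window meets the
    (assumed) threshold, suggesting the driver may be dozing off.
    """
    if not eye_closed_flags:
        return False
    closed_ratio = sum(eye_closed_flags) / len(eye_closed_flags)
    return closed_ratio >= threshold

# Example: eyes closed in 8 of the last 10 frames -> flagged as dozing.
# is_dozing([True] * 8 + [False] * 2) -> True
```

A production system would derive the per-frame flags from face and eye detection on the images of the driver state detection unit 12041; only the final aggregation step is sketched here.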
 マイクロコンピュータ12051は、車外情報検出ユニット12030又は車内情報検出ユニット12040で取得される車内外の情報に基づいて、駆動力発生装置、ステアリング機構又は制動装置の制御目標値を演算し、駆動系制御ユニット12010に対して制御指令を出力することができる。例えば、マイクロコンピュータ12051は、車両の衝突回避あるいは衝撃緩和、車間距離に基づく追従走行、車速維持走行、車両の衝突警告、又は車両のレーン逸脱警告等を含むADAS(Advanced Driver Assistance System)の機能実現を目的とした協調制御を行うことができる。 The microcomputer 12051 can calculate the control target values of the driving force generating device, steering mechanism, or braking device based on the information inside and outside the vehicle acquired by the outside vehicle information detection unit 12030 or the inside vehicle information detection unit 12040, and output a control command to the drive system control unit 12010. For example, the microcomputer 12051 can perform cooperative control aimed at realizing the functions of an ADAS (Advanced Driver Assistance System), including avoiding or mitigating vehicle collisions, following based on the distance between vehicles, maintaining vehicle speed, vehicle collision warning, or vehicle lane departure warning.
 また、マイクロコンピュータ12051は、車外情報検出ユニット12030又は車内情報検出ユニット12040で取得される車両の周囲の情報に基づいて駆動力発生装置、ステアリング機構又は制動装置等を制御することにより、運転者の操作に拠らずに自律的に走行する自動運転等を目的とした協調制御を行うことができる。 The microcomputer 12051 can also control the driving force generating device, steering mechanism, braking device, etc. based on information about the surroundings of the vehicle acquired by the outside vehicle information detection unit 12030 or the inside vehicle information detection unit 12040, thereby performing cooperative control aimed at automatic driving, which allows the vehicle to travel autonomously without relying on the driver's operation.
 また、マイクロコンピュータ12051は、車外情報検出ユニット12030で取得される車外の情報に基づいて、ボディ系制御ユニット12020に対して制御指令を出力することができる。例えば、マイクロコンピュータ12051は、車外情報検出ユニット12030で検知した先行車又は対向車の位置に応じてヘッドランプを制御し、ハイビームをロービームに切り替える等の防眩を図ることを目的とした協調制御を行うことができる。 The microcomputer 12051 can also output control commands to the body system control unit 12020 based on information outside the vehicle acquired by the outside-vehicle information detection unit 12030. For example, the microcomputer 12051 can control the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the outside-vehicle information detection unit 12030, and perform cooperative control aimed at preventing glare, such as switching high beams to low beams.
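The anti-glare headlamp control described above reduces to a small decision rule: switch from high beam to low beam whenever a preceding or oncoming vehicle is detected within some range. The function name, input format, and distance threshold below are assumptions for illustration only.

```python
def select_beam(detected_vehicles, low_beam_range_m=300.0):
    """detected_vehicles: list of (kind, distance_m) tuples, where kind is
    'preceding' or 'oncoming' as reported by the outside-information unit."""
    for kind, distance_m in detected_vehicles:
        if kind in ("preceding", "oncoming") and distance_m <= low_beam_range_m:
            return "low"   # avoid dazzling the detected vehicle
    return "high"          # no vehicle nearby: keep the high beam

# Example: an oncoming car 120 m ahead forces low beam.
# select_beam([("oncoming", 120.0)]) -> "low"
```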
 音声画像出力部12052は、車両の搭乗者又は車外に対して、視覚的又は聴覚的に情報を通知することが可能な出力装置へ音声及び画像のうちの少なくとも一方の出力信号を送信する。図12の例では、出力装置として、オーディオスピーカ12061、表示部12062及びインストルメントパネル12063が例示されている。表示部12062は、例えば、オンボードディスプレイ及びヘッドアップディスプレイの少なくとも一つを含んでいてもよい。 The audio/image output unit 12052 transmits at least one output signal of audio and image to an output device capable of visually or audibly notifying the occupants of the vehicle or the outside of the vehicle of information. In the example of FIG. 12, an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are exemplified as output devices. The display unit 12062 may include, for example, at least one of an on-board display and a head-up display.
 図13は、撮像部12031の設置位置の例を示す図である。 FIG. 13 shows an example of the installation position of the imaging unit 12031.
 図13では、車両12100は、撮像部12031として、撮像部12101,12102,12103,12104,12105を有する。 In FIG. 13, the vehicle 12100 has imaging units 12101, 12102, 12103, 12104, and 12105 as the imaging unit 12031.
 撮像部12101,12102,12103,12104,12105は、例えば、車両12100のフロントノーズ、サイドミラー、リアバンパ、バックドア及び車室内のフロントガラスの上部等の位置に設けられる。フロントノーズに備えられる撮像部12101及び車室内のフロントガラスの上部に備えられる撮像部12105は、主として車両12100の前方の画像を取得する。サイドミラーに備えられる撮像部12102,12103は、主として車両12100の側方の画像を取得する。リアバンパ又はバックドアに備えられる撮像部12104は、主として車両12100の後方の画像を取得する。撮像部12101及び12105で取得される前方の画像は、主として先行車両又は、歩行者、障害物、信号機、交通標識又は車線等の検出に用いられる。 The imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, at the front nose, side mirrors, rear bumper, back door, and the top of the windshield inside the vehicle cabin of the vehicle 12100. The imaging unit 12101 provided at the front nose and the imaging unit 12105 provided at the top of the windshield inside the vehicle cabin mainly acquire images of the front of the vehicle 12100. The imaging units 12102 and 12103 provided at the side mirrors mainly acquire images of the sides of the vehicle 12100. The imaging unit 12104 provided at the rear bumper or back door mainly acquires images of the rear of the vehicle 12100. The images of the front acquired by the imaging units 12101 and 12105 are mainly used to detect preceding vehicles, pedestrians, obstacles, traffic lights, traffic signs, lanes, etc.
 なお、図13には、撮像部12101ないし12104の撮影範囲の一例が示されている。撮像範囲12111は、フロントノーズに設けられた撮像部12101の撮像範囲を示し、撮像範囲12112,12113は、それぞれサイドミラーに設けられた撮像部12102,12103の撮像範囲を示し、撮像範囲12114は、リアバンパ又はバックドアに設けられた撮像部12104の撮像範囲を示す。例えば、撮像部12101ないし12104で撮像された画像データが重ね合わせられることにより、車両12100を上方から見た俯瞰画像が得られる。 Note that FIG. 13 shows an example of the imaging ranges of the imaging units 12101 to 12104. Imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or back door. For example, an overhead image of the vehicle 12100 viewed from above is obtained by superimposing the image data captured by the imaging units 12101 to 12104.
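The overhead-image composition described above can be sketched as pasting together the four camera views after each has been projected onto the ground plane. A real implementation warps each image with a calibrated homography first; the sketch below assumes the inputs are already-warped top-view tiles, and the canvas layout (front on top, left/right in the middle, rear at the bottom) is an illustrative assumption.

```python
def compose_overhead(front, rear, left, right):
    """front/rear: lists of full-width pixel rows (already ground-plane warped);
    left/right: lists of half-width pixel rows covering the middle band.

    Returns the combined bird's-eye-view image as a list of rows.
    """
    # Middle band: left-camera tile and right-camera tile side by side.
    middle = [lrow + rrow for lrow, rrow in zip(left, right)]
    # Stack the three bands top to bottom.
    return front + middle + rear

# Example with 1-pixel tiles:
# compose_overhead([[1, 1]], [[3, 3]], [[2]], [[9]]) -> [[1, 1], [2, 9], [3, 3]]
```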
 撮像部12101ないし12104の少なくとも1つは、距離情報を取得する機能を有していてもよい。例えば、撮像部12101ないし12104の少なくとも1つは、複数の撮像素子からなるステレオカメラであってもよいし、位相差検出用の画素を有する撮像素子であってもよい。 At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information. For example, at least one of the imaging units 12101 to 12104 may be a stereo camera consisting of multiple imaging elements, or an imaging element having pixels for detecting phase differences.
 例えば、マイクロコンピュータ12051は、撮像部12101ないし12104から得られた距離情報を基に、撮像範囲12111ないし12114内における各立体物までの距離と、この距離の時間的変化(車両12100に対する相対速度)を求めることにより、特に車両12100の進行路上にある最も近い立体物で、車両12100と略同じ方向に所定の速度(例えば、0km/h以上)で走行する立体物を先行車として抽出することができる。さらに、マイクロコンピュータ12051は、先行車の手前に予め確保すべき車間距離を設定し、自動ブレーキ制御(追従停止制御も含む)や自動加速制御(追従発進制御も含む)等を行うことができる。このように運転者の操作に拠らずに自律的に走行する自動運転等を目的とした協調制御を行うことができる。 For example, the microcomputer 12051 can obtain the distance to each solid object within the imaging ranges 12111 to 12114 and the change in this distance over time (relative speed with respect to the vehicle 12100) based on the distance information obtained from the imaging units 12101 to 12104, and can extract as a preceding vehicle, in particular, the closest solid object on the path of the vehicle 12100 that is traveling in approximately the same direction as the vehicle 12100 at a predetermined speed (e.g., 0 km/h or faster). Furthermore, the microcomputer 12051 can set the inter-vehicle distance that should be maintained in advance in front of the preceding vehicle, and perform automatic braking control (including follow-up stop control) and automatic acceleration control (including follow-up start control). In this way, cooperative control can be performed for the purpose of automatic driving, which runs autonomously without relying on the driver's operation.
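The preceding-vehicle extraction above — among the solid objects on the vehicle's own path, take the nearest one traveling in roughly the same direction at or above a given speed — can be sketched directly. The dictionary keys and the default threshold are illustrative assumptions.

```python
def extract_preceding_vehicle(objects, min_speed_kmh=0.0):
    """objects: list of dicts with
       'distance_m'  - current distance from the own vehicle,
       'speed_kmh'   - own-direction speed, derived from the change of the
                       distance over time (relative speed plus own speed),
       'on_path'     - True if the object lies on the own vehicle's course.

    Returns the nearest qualifying object, or None if there is none.
    """
    candidates = [o for o in objects
                  if o["on_path"] and o["speed_kmh"] >= min_speed_kmh]
    return min(candidates, key=lambda o: o["distance_m"], default=None)
```

With the preceding vehicle identified, the microcomputer can then maintain the inter-vehicle distance via the automatic brake/acceleration control described in the paragraph above.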
 例えば、マイクロコンピュータ12051は、撮像部12101ないし12104から得られた距離情報を元に、立体物に関する立体物データを、2輪車、普通車両、大型車両、歩行者、電柱等その他の立体物に分類して抽出し、障害物の自動回避に用いることができる。例えば、マイクロコンピュータ12051は、車両12100の周辺の障害物を、車両12100のドライバが視認可能な障害物と視認困難な障害物とに識別する。そして、マイクロコンピュータ12051は、各障害物との衝突の危険度を示す衝突リスクを判断し、衝突リスクが設定値以上で衝突可能性がある状況であるときには、オーディオスピーカ12061や表示部12062を介してドライバに警報を出力することや、駆動系制御ユニット12010を介して強制減速や回避操舵を行うことで、衝突回避のための運転支援を行うことができる。 For example, the microcomputer 12051 classifies and extracts three-dimensional object data on three-dimensional objects, such as two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects, based on the distance information obtained from the imaging units 12101 to 12104, and can use the data to automatically avoid obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult to see. The microcomputer 12051 then determines the collision risk, which indicates the risk of collision with each obstacle, and when the collision risk is equal to or exceeds a set value and there is a possibility of a collision, it can provide driving assistance for collision avoidance by outputting an alarm to the driver via the audio speaker 12061 or the display unit 12062, or by forcibly decelerating or steering to avoid a collision via the drive system control unit 12010.
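The escalation from warning to forced deceleration described above can be illustrated with a time-to-collision (TTC) rule. The risk measure and the thresholds are assumptions for illustration; the disclosure only specifies that a warning is issued when the collision risk exceeds a set value.

```python
def collision_action(distance_m, closing_speed_mps, risk_threshold_s=2.0):
    """Decide the driving-assistance response for one obstacle.

    distance_m:        current distance to the obstacle
    closing_speed_mps: rate at which that distance is shrinking (<= 0 means
                       the obstacle is not getting closer)
    """
    if closing_speed_mps <= 0:
        return "none"                       # not closing in: no risk
    ttc = distance_m / closing_speed_mps    # time to collision, seconds
    if ttc < risk_threshold_s / 2:
        return "brake"                      # forced deceleration / evasive steering
    if ttc < risk_threshold_s:
        return "warn"                       # alarm via speaker 12061 / display 12062
    return "none"
```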
 撮像部12101ないし12104の少なくとも1つは、赤外線を検出する赤外線カメラであってもよい。例えば、マイクロコンピュータ12051は、撮像部12101ないし12104の撮像画像中に歩行者が存在するか否かを判定することで歩行者を認識することができる。かかる歩行者の認識は、例えば赤外線カメラとしての撮像部12101ないし12104の撮像画像における特徴点を抽出する手順と、物体の輪郭を示す一連の特徴点にパターンマッチング処理を行って歩行者か否かを判別する手順によって行われる。マイクロコンピュータ12051が、撮像部12101ないし12104の撮像画像中に歩行者が存在すると判定し、歩行者を認識すると、音声画像出力部12052は、当該認識された歩行者に強調のための方形輪郭線を重畳表示するように、表示部12062を制御する。また、音声画像出力部12052は、歩行者を示すアイコン等を所望の位置に表示するように表示部12062を制御してもよい。 At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays. For example, the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian is present in the captured image of the imaging units 12101 to 12104. The recognition of such a pedestrian is performed, for example, by a procedure of extracting feature points in the captured image of the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on a series of feature points that indicate the contour of an object to determine whether or not it is a pedestrian. When the microcomputer 12051 determines that a pedestrian is present in the captured image of the imaging units 12101 to 12104 and recognizes a pedestrian, the audio/image output unit 12052 controls the display unit 12062 to superimpose a rectangular contour line for emphasis on the recognized pedestrian. The audio/image output unit 12052 may also control the display unit 12062 to display an icon or the like indicating a pedestrian at a desired position.
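The two-step pedestrian recognition described above (extract feature points, then pattern-match the contour) can be sketched as a toy example: collect the contour pixels of a binary silhouette, then score the overlap with a stored pedestrian template. The overlap measure and threshold are assumptions; a real system uses far richer features and classifiers.

```python
def contour_points(binary):
    """Feature step: foreground pixels with at least one background or
    out-of-image 4-neighbor (i.e. pixels on the object outline)."""
    pts = set()
    h = len(binary)
    for r, row in enumerate(binary):
        w = len(row)
        for c, v in enumerate(row):
            if v and any(
                not (0 <= r + dr < h and 0 <= c + dc < w and binary[r + dr][c + dc])
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            ):
                pts.add((r, c))
    return pts

def is_pedestrian(binary, template_pts, min_overlap=0.8):
    """Matching step: fraction of template contour points found in the image."""
    if not template_pts:
        return False
    pts = contour_points(binary)
    return len(pts & template_pts) / len(template_pts) >= min_overlap
```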
 以上、本開示に係る技術が適用され得る車両制御システムの一例について説明した。本開示に係る技術は、以上説明した構成のうち、例えば、撮像部12031に適用され得る。具体的には、貫通導体70を有する光検出装置1を、撮像部12031に適用することができる。撮像部12031に本開示に係る技術を適用することにより、より見やすい撮影画像を得ることができるため、ドライバの疲労を軽減することが可能になる。 Above, an example of a vehicle control system to which the technology of the present disclosure can be applied has been described. Of the configurations described above, the technology of the present disclosure can be applied to, for example, the imaging unit 12031. Specifically, a light detection device 1 having a through conductor 70 can be applied to the imaging unit 12031. By applying the technology of the present disclosure to the imaging unit 12031, a captured image that is easier to see can be obtained, thereby reducing driver fatigue.
 <3.内視鏡手術システムへの応用例>
 本開示に係る技術(本技術)は、様々な製品へ応用することができる。例えば、本開示に係る技術は、内視鏡手術システムに適用されてもよい。
<3. Application example to endoscopic surgery system>
The technology according to the present disclosure (the present technology) can be applied to various products. For example, the technology according to the present disclosure may be applied to an endoscopic surgery system.
 図14は、本開示に係る技術(本技術)が適用され得る内視鏡手術システムの概略的な構成の一例を示す図である。 FIG. 14 is a diagram showing an example of the general configuration of an endoscopic surgery system to which the technology disclosed herein (the present technology) can be applied.
 図14では、術者(医師)11131が、内視鏡手術システム11000を用いて、患者ベッド11133上の患者11132に手術を行っている様子が図示されている。図示するように、内視鏡手術システム11000は、内視鏡11100と、気腹チューブ11111やエネルギー処置具11112等の、その他の術具11110と、内視鏡11100を支持する支持アーム装置11120と、内視鏡下手術のための各種の装置が搭載されたカート11200と、から構成される。 In FIG. 14, an operator (doctor) 11131 is shown using an endoscopic surgery system 11000 to perform surgery on a patient 11132 on a patient bed 11133. As shown in the figure, the endoscopic surgery system 11000 is composed of an endoscope 11100, other surgical tools 11110 such as an insufflation tube 11111 and an energy treatment tool 11112, a support arm device 11120 that supports the endoscope 11100, and a cart 11200 on which various devices for endoscopic surgery are mounted.
 内視鏡11100は、先端から所定の長さの領域が患者11132の体腔内に挿入される鏡筒11101と、鏡筒11101の基端に接続されるカメラヘッド11102と、から構成される。図示する例では、硬性の鏡筒11101を有するいわゆる硬性鏡として構成される内視鏡11100を図示しているが、内視鏡11100は、軟性の鏡筒を有するいわゆる軟性鏡として構成されてもよい。 The endoscope 11100 is composed of a lens barrel 11101, a region of which extending a predetermined length from the distal end is inserted into the body cavity of the patient 11132, and a camera head 11102 connected to the proximal end of the lens barrel 11101. In the illustrated example, the endoscope 11100 is configured as a so-called rigid scope having a rigid lens barrel 11101, but the endoscope 11100 may also be configured as a so-called flexible scope having a flexible lens barrel.

 鏡筒11101の先端には、対物レンズが嵌め込まれた開口部が設けられている。内視鏡11100には光源装置11203が接続されており、当該光源装置11203によって生成された光が、鏡筒11101の内部に延設されるライトガイドによって当該鏡筒の先端まで導光され、対物レンズを介して患者11132の体腔内の観察対象に向かって照射される。なお、内視鏡11100は、直視鏡であってもよいし、斜視鏡又は側視鏡であってもよい。 The tip of the tube 11101 has an opening into which an objective lens is fitted. A light source device 11203 is connected to the endoscope 11100, and light generated by the light source device 11203 is guided to the tip of the tube by a light guide extending inside the tube 11101, and is irradiated via the objective lens towards an object to be observed inside the body cavity of the patient 11132. The endoscope 11100 may be a direct-viewing endoscope, an oblique-viewing endoscope, or a side-viewing endoscope.
 カメラヘッド11102の内部には光学系及び撮像素子が設けられており、観察対象からの反射光(観察光)は当該光学系によって当該撮像素子に集光される。当該撮像素子によって観察光が光電変換され、観察光に対応する電気信号、すなわち観察像に対応する画像信号が生成される。当該画像信号は、RAWデータとしてカメラコントロールユニット(CCU: Camera Control Unit)11201に送信される。 An optical system and an image sensor are provided inside the camera head 11102, and the reflected light (observation light) from the object of observation is focused on the image sensor by the optical system. The observation light is photoelectrically converted by the image sensor to generate an electrical signal corresponding to the observation light, i.e., an image signal corresponding to the observed image. The image signal is sent to the camera control unit (CCU: Camera Control Unit) 11201 as RAW data.
 CCU11201は、CPU(Central Processing Unit)やGPU(Graphics Processing Unit)等によって構成され、内視鏡11100及び表示装置11202の動作を統括的に制御する。さらに、CCU11201は、カメラヘッド11102から画像信号を受け取り、その画像信号に対して、例えば現像処理(デモザイク処理)等の、当該画像信号に基づく画像を表示するための各種の画像処理を施す。 The CCU 11201 is composed of a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), etc., and controls the overall operation of the endoscope 11100 and the display device 11202. Furthermore, the CCU 11201 receives an image signal from the camera head 11102, and performs various image processing on the image signal, such as development processing (demosaic processing), in order to display an image based on the image signal.
 表示装置11202は、CCU11201からの制御により、当該CCU11201によって画像処理が施された画像信号に基づく画像を表示する。 The display device 11202, under the control of the CCU 11201, displays an image based on the image signal that has been subjected to image processing by the CCU 11201.
 光源装置11203は、例えばLED(Light Emitting Diode)等の光源から構成され、術部等を撮影する際の照射光を内視鏡11100に供給する。 The light source device 11203 is composed of a light source such as an LED (Light Emitting Diode) and supplies irradiation light to the endoscope 11100 when photographing the surgical site, etc.
 入力装置11204は、内視鏡手術システム11000に対する入力インタフェースである。ユーザは、入力装置11204を介して、内視鏡手術システム11000に対して各種の情報の入力や指示入力を行うことができる。例えば、ユーザは、内視鏡11100による撮像条件(照射光の種類、倍率及び焦点距離等)を変更する旨の指示等を入力する。 The input device 11204 is an input interface for the endoscopic surgery system 11000. A user can input various information and instructions to the endoscopic surgery system 11000 via the input device 11204. For example, the user inputs an instruction to change the imaging conditions (type of irradiation light, magnification, focal length, etc.) of the endoscope 11100.
 処置具制御装置11205は、組織の焼灼、切開又は血管の封止等のためのエネルギー処置具11112の駆動を制御する。気腹装置11206は、内視鏡11100による視野の確保及び術者の作業空間の確保の目的で、患者11132の体腔を膨らめるために、気腹チューブ11111を介して当該体腔内にガスを送り込む。レコーダ11207は、手術に関する各種の情報を記録可能な装置である。プリンタ11208は、手術に関する各種の情報を、テキスト、画像又はグラフ等各種の形式で印刷可能な装置である。 The treatment tool control device 11205 controls the operation of the energy treatment tool 11112 for cauterizing tissue, incising, sealing blood vessels, etc. The insufflation device 11206 sends gas into the body cavity of the patient 11132 via the insufflation tube 11111 to inflate the body cavity in order to ensure a clear field of view for the endoscope 11100 and to ensure a working space for the surgeon. The recorder 11207 is a device capable of recording various types of information related to the surgery. The printer 11208 is a device capable of printing various types of information related to the surgery in various formats such as text, images, or graphs.
 なお、内視鏡11100に術部を撮影する際の照射光を供給する光源装置11203は、例えばLED、レーザ光源又はこれらの組み合わせによって構成される白色光源から構成することができる。RGBレーザ光源の組み合わせにより白色光源が構成される場合には、各色(各波長)の出力強度及び出力タイミングを高精度に制御することができるため、光源装置11203において撮像画像のホワイトバランスの調整を行うことができる。また、この場合には、RGBレーザ光源それぞれからのレーザ光を時分割で観察対象に照射し、その照射タイミングに同期してカメラヘッド11102の撮像素子の駆動を制御することにより、RGBそれぞれに対応した画像を時分割で撮像することも可能である。当該方法によれば、当該撮像素子にカラーフィルタを設けなくても、カラー画像を得ることができる。 The light source device 11203 that supplies illumination light to the endoscope 11100 when photographing the surgical site can be composed of a white light source composed of, for example, an LED, a laser light source, or a combination of these. When the white light source is composed of a combination of RGB laser light sources, the output intensity and output timing of each color (each wavelength) can be controlled with high precision, so that the white balance of the captured image can be adjusted in the light source device 11203. In this case, it is also possible to capture images corresponding to each of the RGB colors in a time-division manner by irradiating the object of observation with laser light from each of the RGB laser light sources in a time-division manner and controlling the drive of the image sensor of the camera head 11102 in synchronization with the irradiation timing. According to this method, a color image can be obtained without providing a color filter to the image sensor.
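The time-division color capture described above (sequential R, G, and B laser illumination with a sensor that has no color filter) amounts to stacking three monochrome frames into one color image, which can be sketched as follows:

```python
def merge_rgb_fields(frame_r, frame_g, frame_b):
    """Each frame: H x W list of intensities captured while the corresponding
    laser (R, G, or B) was illuminating the observation target.

    Returns an H x W image of (r, g, b) tuples."""
    return [[(r, g, b) for r, g, b in zip(row_r, row_g, row_b)]
            for row_r, row_g, row_b in zip(frame_r, frame_g, frame_b)]

# Example with a single pixel:
# merge_rgb_fields([[10]], [[20]], [[30]]) -> [[(10, 20, 30)]]
```

In practice the three frames must be captured in quick succession (synchronized with the illumination timing, as the paragraph above notes) so that scene motion between the fields does not cause color fringing.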
 また、光源装置11203は、出力する光の強度を所定の時間ごとに変更するようにその駆動が制御されてもよい。その光の強度の変更のタイミングに同期してカメラヘッド11102の撮像素子の駆動を制御して時分割で画像を取得し、その画像を合成することにより、いわゆる黒つぶれ及び白とびのない高ダイナミックレンジの画像を生成することができる。 The light source device 11203 may be controlled to change the intensity of the light it outputs at predetermined time intervals. The image sensor of the camera head 11102 may be controlled to acquire images in a time-division manner in synchronization with the timing of the change in the light intensity, and the images may be synthesized to generate an image with a high dynamic range that is free of so-called blackout and whiteout.
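The high-dynamic-range synthesis described above (alternate the illumination intensity, then combine the time-divided frames) can be sketched with one simple merge rule: use the strongly illuminated frame where it is not saturated, and fall back to the weakly illuminated frame where it is. This particular rule is an illustrative assumption, not the disclosed algorithm.

```python
def merge_hdr(bright_frame, dark_frame, gain, saturation=255):
    """bright_frame was exposed `gain` times more strongly than dark_frame.

    Dividing unsaturated bright-frame pixels by `gain` puts both frames on
    the dark frame's scale; saturated (blown-out) pixels are replaced by the
    dark-frame value, avoiding both blackout and whiteout."""
    out = []
    for row_b, row_d in zip(bright_frame, dark_frame):
        out.append([b / gain if b < saturation else float(d)
                    for b, d in zip(row_b, row_d)])
    return out

# Example: pixel 100 is kept (scaled to 25.0); pixel 255 is saturated,
# so the dark frame's 80 is used instead.
# merge_hdr([[100, 255]], [[12, 80]], gain=4) -> [[25.0, 80.0]]
```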
 また、光源装置11203は、特殊光観察に対応した所定の波長帯域の光を供給可能に構成されてもよい。特殊光観察では、例えば、体組織における光の吸収の波長依存性を利用して、通常の観察時における照射光(すなわち、白色光)に比べて狭帯域の光を照射することにより、粘膜表層の血管等の所定の組織を高コントラストで撮影する、いわゆる狭帯域光観察(Narrow Band Imaging)が行われる。あるいは、特殊光観察では、励起光を照射することにより発生する蛍光により画像を得る蛍光観察が行われてもよい。蛍光観察では、体組織に励起光を照射し当該体組織からの蛍光を観察すること(自家蛍光観察)、又はインドシアニングリーン(ICG)等の試薬を体組織に局注するとともに当該体組織にその試薬の蛍光波長に対応した励起光を照射し蛍光像を得ること等を行うことができる。光源装置11203は、このような特殊光観察に対応した狭帯域光及び/又は励起光を供給可能に構成され得る。 The light source device 11203 may be configured to supply light of a predetermined wavelength band corresponding to special light observation. In special light observation, for example, by utilizing the wavelength dependency of light absorption in body tissue, a narrow band of light is irradiated compared to the light irradiated during normal observation (i.e., white light), and a specific tissue such as blood vessels on the surface of the mucosa is photographed with high contrast, so-called narrow band imaging is performed. Alternatively, in special light observation, fluorescence observation may be performed in which an image is obtained by fluorescence generated by irradiating excitation light. In fluorescence observation, excitation light is irradiated to body tissue and fluorescence from the body tissue is observed (autofluorescence observation), or a reagent such as indocyanine green (ICG) is locally injected into the body tissue and excitation light corresponding to the fluorescence wavelength of the reagent is irradiated to the body tissue to obtain a fluorescent image. The light source device 11203 may be configured to supply narrow band light and/or excitation light corresponding to such special light observation.
 図15は、図14に示すカメラヘッド11102及びCCU11201の機能構成の一例を示すブロック図である。 FIG. 15 is a block diagram showing an example of the functional configuration of the camera head 11102 and CCU 11201 shown in FIG. 14.
 カメラヘッド11102は、レンズユニット11401と、撮像部11402と、駆動部11403と、通信部11404と、カメラヘッド制御部11405と、を有する。CCU11201は、通信部11411と、画像処理部11412と、制御部11413と、を有する。カメラヘッド11102とCCU11201とは、伝送ケーブル11400によって互いに通信可能に接続されている。 The camera head 11102 has a lens unit 11401, an imaging unit 11402, a drive unit 11403, a communication unit 11404, and a camera head control unit 11405. The CCU 11201 has a communication unit 11411, an image processing unit 11412, and a control unit 11413. The camera head 11102 and the CCU 11201 are connected to each other via a transmission cable 11400 so that they can communicate with each other.
 レンズユニット11401は、鏡筒11101との接続部に設けられる光学系である。鏡筒11101の先端から取り込まれた観察光は、カメラヘッド11102まで導光され、当該レンズユニット11401に入射する。レンズユニット11401は、ズームレンズ及びフォーカスレンズを含む複数のレンズが組み合わされて構成される。 The lens unit 11401 is an optical system provided at the connection with the lens barrel 11101. Observation light taken in from the tip of the lens barrel 11101 is guided to the camera head 11102 and enters the lens unit 11401. The lens unit 11401 is composed of a combination of multiple lenses including a zoom lens and a focus lens.
 撮像部11402は、撮像素子で構成される。撮像部11402を構成する撮像素子は、1つ(いわゆる単板式)であってもよいし、複数(いわゆる多板式)であってもよい。撮像部11402が多板式で構成される場合には、例えば各撮像素子によってRGBそれぞれに対応する画像信号が生成され、それらが合成されることによりカラー画像が得られてもよい。あるいは、撮像部11402は、3D(Dimensional)表示に対応する右目用及び左目用の画像信号をそれぞれ取得するための1対の撮像素子を有するように構成されてもよい。3D表示が行われることにより、術者11131は術部における生体組織の奥行きをより正確に把握することが可能になる。なお、撮像部11402が多板式で構成される場合には、各撮像素子に対応して、レンズユニット11401も複数系統設けられ得る。 The imaging unit 11402 is composed of an imaging element. The imaging element constituting the imaging unit 11402 may be one (so-called single-plate type) or multiple (so-called multi-plate type). When the imaging unit 11402 is composed of a multi-plate type, for example, each imaging element may generate an image signal corresponding to each of RGB, and a color image may be obtained by combining these. Alternatively, the imaging unit 11402 may be configured to have a pair of imaging elements for acquiring image signals for the right eye and the left eye corresponding to 3D (dimensional) display. By performing 3D display, the surgeon 11131 can more accurately grasp the depth of the biological tissue in the surgical site. Note that when the imaging unit 11402 is composed of a multi-plate type, multiple lens units 11401 may be provided corresponding to each imaging element.
 また、撮像部11402は、必ずしもカメラヘッド11102に設けられなくてもよい。例えば、撮像部11402は、鏡筒11101の内部に、対物レンズの直後に設けられてもよい。 Furthermore, the imaging unit 11402 does not necessarily have to be provided in the camera head 11102. For example, the imaging unit 11402 may be provided inside the lens barrel 11101, immediately after the objective lens.
 駆動部11403は、アクチュエータによって構成され、カメラヘッド制御部11405からの制御により、レンズユニット11401のズームレンズ及びフォーカスレンズを光軸に沿って所定の距離だけ移動させる。これにより、撮像部11402による撮像画像の倍率及び焦点が適宜調整され得る。 The driving unit 11403 is composed of an actuator, and moves the zoom lens and focus lens of the lens unit 11401 a predetermined distance along the optical axis under the control of the camera head control unit 11405. This allows the magnification and focus of the image captured by the imaging unit 11402 to be adjusted appropriately.
 通信部11404は、CCU11201との間で各種の情報を送受信するための通信装置によって構成される。通信部11404は、撮像部11402から得た画像信号をRAWデータとして伝送ケーブル11400を介してCCU11201に送信する。 The communication unit 11404 is configured with a communication device for transmitting and receiving various information to and from the CCU 11201. The communication unit 11404 transmits the image signal obtained from the imaging unit 11402 as RAW data to the CCU 11201 via the transmission cable 11400.
 また、通信部11404は、CCU11201から、カメラヘッド11102の駆動を制御するための制御信号を受信し、カメラヘッド制御部11405に供給する。当該制御信号には、例えば、撮像画像のフレームレートを指定する旨の情報、撮像時の露出値を指定する旨の情報、並びに/又は撮像画像の倍率及び焦点を指定する旨の情報等、撮像条件に関する情報が含まれる。 The communication unit 11404 also receives control signals for controlling the operation of the camera head 11102 from the CCU 11201, and supplies them to the camera head control unit 11405. The control signals include information on the imaging conditions, such as information specifying the frame rate of the captured image, information specifying the exposure value during imaging, and/or information specifying the magnification and focus of the captured image.
 なお、上記のフレームレートや露出値、倍率、焦点等の撮像条件は、ユーザによって適宜指定されてもよいし、取得された画像信号に基づいてCCU11201の制御部11413によって自動的に設定されてもよい。後者の場合には、いわゆるAE(Auto Exposure)機能、AF(Auto Focus)機能及びAWB(Auto White Balance)機能が内視鏡11100に搭載されていることになる。 The above-mentioned frame rate, exposure value, magnification, focus, and other imaging conditions may be appropriately specified by the user, or may be automatically set by the control unit 11413 of the CCU 11201 based on the acquired image signal. In the latter case, the endoscope 11100 is equipped with so-called AE (Auto Exposure) function, AF (Auto Focus) function, and AWB (Auto White Balance) function.
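Of the AE/AF/AWB functions mentioned above, auto exposure is the simplest to sketch: nudge the exposure value so that the mean brightness of the acquired image signal moves toward a target level. The proportional update rule and its constants are assumptions for illustration.

```python
def auto_exposure_step(pixels, current_ev, target=128.0, k=0.01):
    """One AE iteration: pixels is an H x W list of 0-255 luminance values;
    returns the updated exposure value (larger = brighter image)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # Proportional control: brighten when the frame is darker than target,
    # darken when it is brighter.
    return current_ev + k * (target - mean)
```

Run once per received frame, this converges toward a frame whose mean brightness equals the target; AF and AWB would be driven by analogous feedback loops on sharpness and color balance.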
 カメラヘッド制御部11405は、通信部11404を介して受信したCCU11201からの制御信号に基づいて、カメラヘッド11102の駆動を制御する。 The camera head control unit 11405 controls the operation of the camera head 11102 based on a control signal from the CCU 11201 received via the communication unit 11404.
 通信部11411は、カメラヘッド11102との間で各種の情報を送受信するための通信装置によって構成される。通信部11411は、カメラヘッド11102から、伝送ケーブル11400を介して送信される画像信号を受信する。 The communication unit 11411 is configured with a communication device for transmitting and receiving various information to and from the camera head 11102. The communication unit 11411 receives an image signal transmitted from the camera head 11102 via the transmission cable 11400.
 また、通信部11411は、カメラヘッド11102に対して、カメラヘッド11102の駆動を制御するための制御信号を送信する。画像信号や制御信号は、電気通信や光通信等によって送信することができる。 The communication unit 11411 also transmits to the camera head 11102 a control signal for controlling the operation of the camera head 11102. The image signal and the control signal can be transmitted by electrical communication, optical communication, etc.
 画像処理部11412は、カメラヘッド11102から送信されたRAWデータである画像信号に対して各種の画像処理を施す。 The image processing unit 11412 performs various image processing operations on the image signal, which is the RAW data transmitted from the camera head 11102.
 制御部11413は、内視鏡11100による術部等の撮像、及び、術部等の撮像により得られる撮像画像の表示に関する各種の制御を行う。例えば、制御部11413は、カメラヘッド11102の駆動を制御するための制御信号を生成する。 The control unit 11413 performs various controls related to the imaging of the surgical site, etc. by the endoscope 11100, and the display of the captured images obtained by imaging the surgical site, etc. For example, the control unit 11413 generates a control signal for controlling the driving of the camera head 11102.
 また、制御部11413は、画像処理部11412によって画像処理が施された画像信号に基づいて、術部等が映った撮像画像を表示装置11202に表示させる。この際、制御部11413は、各種の画像認識技術を用いて撮像画像内における各種の物体を認識してもよい。例えば、制御部11413は、撮像画像に含まれる物体のエッジの形状や色等を検出することにより、鉗子等の術具、特定の生体部位、出血、エネルギー処置具11112の使用時のミスト等を認識することができる。制御部11413は、表示装置11202に撮像画像を表示させる際に、その認識結果を用いて、各種の手術支援情報を当該術部の画像に重畳表示させてもよい。手術支援情報が重畳表示され、術者11131に提示されることにより、術者11131の負担を軽減することや、術者11131が確実に手術を進めることが可能になる。 The control unit 11413 also causes the display device 11202 to display the captured image showing the surgical site, etc., based on the image signal that has been image-processed by the image processing unit 11412. At this time, the control unit 11413 may recognize various objects in the captured image using various image recognition techniques. For example, the control unit 11413 can recognize surgical tools such as forceps, specific body parts, bleeding, mist generated when the energy treatment tool 11112 is used, etc., by detecting the shape and color of the edges of objects included in the captured image. When the control unit 11413 causes the display device 11202 to display the captured image, it may use the recognition result to superimpose various types of surgical support information on the image of the surgical site. By superimposing the surgical support information and presenting it to the surgeon 11131, the burden on the surgeon 11131 can be reduced and the surgeon 11131 can proceed with the surgery reliably.
 カメラヘッド11102及びCCU11201を接続する伝送ケーブル11400は、電気信号の通信に対応した電気信号ケーブル、光通信に対応した光ファイバ、又はこれらの複合ケーブルである。 The transmission cable 11400 that connects the camera head 11102 and the CCU 11201 is an electrical signal cable that supports electrical signal communication, an optical fiber that supports optical communication, or a composite cable of these.
 ここで、図示する例では、伝送ケーブル11400を用いて有線で通信が行われていたが、カメラヘッド11102とCCU11201との間の通信は無線で行われてもよい。 In the illustrated example, communication is performed by wire using the transmission cable 11400, but communication between the camera head 11102 and the CCU 11201 may also be performed wirelessly.
 以上、本開示に係る技術が適用され得る内視鏡手術システムの一例について説明した。本開示に係る技術は、以上説明した構成のうち、カメラヘッド11102の撮像部11402に適用され得る。具体的には、貫通導体70を有する光検出装置1を、撮像部11402に適用することができる。撮像部11402に本開示に係る技術を適用することにより、より鮮明な術部画像を得ることができるため、術者が術部を確実に確認することが可能になる。 An example of an endoscopic surgery system to which the technology according to the present disclosure can be applied has been described above. Among the configurations described above, the technology according to the present disclosure can be applied to the imaging section 11402 of the camera head 11102. Specifically, the light detection device 1 having the through conductor 70 can be applied to the imaging section 11402. By applying the technology according to the present disclosure to the imaging section 11402, a clearer image of the surgical site can be obtained, allowing the surgeon to reliably check the surgical site.
 なお、ここでは、一例として内視鏡手術システムについて説明したが、本開示に係る技術は、その他、例えば、顕微鏡手術システム等に適用されてもよい。 Note that although an endoscopic surgery system has been described here as an example, the technology disclosed herein may also be applied to other systems, such as a microsurgery system.
 [その他の実施形態] [Other embodiments]
 上記のように、本技術は第1実施形態から第2実施形態までによって記載したが、この開示の一部をなす論述及び図面は本技術を限定するものであると理解すべきではない。この開示から当業者には様々な代替の実施形態、実施例及び運用技術が明らかとなろう。 As described above, the present technology has been described through the first and second embodiments, but the statements and drawings forming part of this disclosure should not be understood as limiting the present technology. Various alternative embodiments, examples, and operational techniques will become apparent to those skilled in the art from this disclosure.
 例えば、第1実施形態から第2実施形態までにおいて説明したそれぞれの技術的思想を互いに組み合わせることも可能である。それぞれの技術的思想に沿った種々の組み合わせが可能である。 For example, it is possible to combine the technical ideas described in the first and second embodiments. Various combinations are possible in accordance with the respective technical ideas.
 また、本技術は、上述したイメージセンサとしての固体撮像装置の他、ToF(Time of Flight)センサともよばれる距離を測定する測距センサなども含む光検出装置全般に適用することができる。測距センサは、物体に向かって照射光を発光し、その照射光が物体の表面で反射され返ってくる反射光を検出し、照射光が発光されてから反射光が受光されるまでの飛行時間に基づいて物体までの距離を算出するセンサである。この測距センサの構造として、上述した貫通導体70の構造を採用することができる。 Furthermore, the present technology can be applied to light detection devices in general, including not only the solid-state imaging device serving as an image sensor described above but also distance measurement sensors, also called ToF (Time of Flight) sensors, which measure distance. A distance measurement sensor emits irradiation light toward an object, detects the reflected light returning from the object's surface, and calculates the distance to the object based on the time of flight from when the irradiation light is emitted until the reflected light is received. The structure of the through conductor 70 described above can be adopted as the structure of such a distance measurement sensor.
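The time-of-flight distance calculation described above reduces to simple arithmetic. The following is a minimal sketch; the function name and numeric values are illustrative and do not appear in this publication:

```python
# Speed of light in vacuum, in meters per second.
SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

def distance_from_tof(round_trip_time_s: float) -> float:
    """Distance to the object from the measured time of flight.

    The measured flight time covers the out-and-back path
    (emission to reception), so the one-way distance is half.
    """
    return SPEED_OF_LIGHT_M_PER_S * round_trip_time_s / 2.0

# A round-trip flight time of 10 ns corresponds to roughly 1.5 m.
distance_m = distance_from_tof(10e-9)
```

In practice a ToF sensor measures the flight time indirectly (e.g. by phase shift of modulated light), but the distance conversion is this halved product in either case.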
 また、例えば、上述の構成要素を構成するとして挙げられた材料は、添加物や不純物等を含んでいても良い。また、例えば、図5Aから図5Eにおいて、分離領域20bは絶縁膜であったが、本技術はこれには限定されない。分離領域20bは、公知の構成を有していれば良く、例えば、トレンチの第2の面S2側には絶縁膜を介してポリシリコンを埋め込み、トレンチの第1の面S1側には絶縁膜を埋め込んだ構成であっても良い。 Also, for example, the materials mentioned as constituting the above-described components may contain additives, impurities, and the like. Also, for example, in FIGS. 5A to 5E, the isolation region 20b is an insulating film, but the present technology is not limited to this. The isolation region 20b may have any known configuration; for example, polysilicon may be embedded on the second surface S2 side of the trench via an insulating film, and an insulating film may be embedded on the first surface S1 side of the trench.
 このように、本技術はここでは記載していない様々な実施形態等を含むことは勿論である。したがって、本技術の技術的範囲は上記の説明から妥当な特許請求の範囲に記載された発明特定事項によってのみ定められるものである。 As described above, the present technology naturally includes various embodiments and the like not described here. Therefore, the technical scope of the present technology is defined solely by the matters specifying the invention recited in the claims that are reasonable in light of the above description.
 また、本明細書に記載された効果はあくまでも例示であって限定されるものでは無く、また他の効果があっても良い。 Furthermore, the effects described in this specification are merely examples and are not limiting, and other effects may also exist.
 なお、本技術は、以下のような構成としてもよい。
(1)
 光電変換領域を有する第1半導体層と、第1配線層と、第2半導体層と、第2配線層と、第3半導体層とを、この順で積層した積層構造を含み、
 厚み方向に沿って延在する貫通導体を備え、
 前記貫通導体は、平面視で重なる第1部分と第2部分とを有し、
 前記第1部分は、前記第2半導体層を厚み方向に沿って貫通し、且つ延在方向の一方の端部である第1端部は前記第1配線層内に突出し、
 前記第2部分は、前記第1配線層内に設けられ、且つ延在方向の一方の端部である第2端部が、前記第1端部に直に接続されている、
 光検出装置。
(2)
 前記第1部分の延在方向に沿った寸法を前記第1部分の幅方向に沿った寸法で除して得られた値を、前記第1部分のアスペクト比である第1アスペクト比とし、
 前記第2部分の延在方向に沿った寸法を前記第2部分の幅方向に沿った寸法で除して得られた値を、前記第2部分のアスペクト比である第2アスペクト比とし、
 前記貫通導体の延在方向に沿った寸法を前記第1部分の幅方向に沿った寸法で除して得られた値を、前記貫通導体のアスペクト比である第3アスペクト比とした場合に、
 前記第1アスペクト比及び前記第2アスペクト比の両方とも、前記第3アスペクト比より低い構成である、(1)に記載の光検出装置。
(3)
 前記第2アスペクト比は、前記第1アスペクト比より低い構成である、(2)に記載の光検出装置。
(4)
 前記第1部分の幅方向に沿った寸法は、前記第1部分の延在方向の他方の端部である第3端部の幅方向に沿った寸法である、(2)又は(3)に記載の光検出装置。
(5)
 前記第3アスペクト比は、10以上である、(2)から(4)のいずれかに記載の光検出装置。
(6)
 前記第3アスペクト比は、15以上である、(2)から(4)のいずれかに記載の光検出装置。
(7)
 前記第3アスペクト比は、20以上である、(2)から(4)のいずれかに記載の光検出装置。
(8)
 前記第3アスペクト比は、25以上である、(2)から(4)のいずれかに記載の光検出装置。
(9)
 前記第3アスペクト比は、30以上である、(2)から(4)のいずれかに記載の光検出装置。
(10)
 前記第1部分の延在方向に沿った寸法を前記第1部分の幅方向に沿った寸法で除して得られた値を、前記第1部分のアスペクト比である第1アスペクト比とし、
 前記第2部分の延在方向に沿った寸法を前記第2部分の幅方向に沿った寸法で除して得られた値を、前記第2部分のアスペクト比である第2アスペクト比とした場合に、
 前記第1アスペクト比及び前記第2アスペクト比は、それぞれ10以上25以下である、(1)から(4)のいずれかに記載の光検出装置。
(11)
 前記第1アスペクト比及び前記第2アスペクト比は、それぞれ16以下である、(10)に記載の光検出装置。
(12)
 前記第1端部と前記第2端部との幅方向に沿った寸法の差は、20nm以上である、(1)から(11)のいずれかに記載の光検出装置。
(13)
 前記第1端部と前記第2端部との接合位置は、前記第2半導体層の前記第1配線層側の面から厚み方向に5nm以上100nm以下の位置にある、(1)から(12)のいずれかに記載の光検出装置。
(14)
 前記貫通導体は、平面視で、前記光電変換領域が設けられた画素領域と前記画素領域を囲むようにして設けられた周辺領域とのうちの前記画素領域に位置している、(1)から(13)のいずれかに記載の光検出装置。
(15)
 前記第2部分の延在方向の他方の端部である第4端部は、前記第1半導体層の前記第1配線層側の面の近傍にある、(1)から(14)のいずれかに記載の光検出装置。
(16)
 前記第1配線層に設けられ且つ水平方向に沿って延在する横配線を有し、
 前記第2部分の延在方向の他方の端部である第4端部は、前記横配線に接続されている、(1)から(14)のいずれかに記載の光検出装置。
(17)
 前記第1部分及び前記第2部分のそれぞれは、タングステン、銅、アルミニウム、アルミニウム-銅合金、コバルト、金、ニッケル、チタン、窒化チタン、モリブデン、タンタル、窒化タンタル、プラチナ、又はルテニウム製である、(1)から(16)のいずれかに記載の光検出装置。
(18)
 光検出装置と、前記光検出装置に被写体からの像光を結像させる光学系と、を備え、
 前記光検出装置は、
 光電変換領域を有する第1半導体層と、第1配線層と、第2半導体層と、第2配線層と、第3半導体層とを、この順で積層した積層構造を含み、
 厚み方向に沿って延在する貫通導体を備え、
 前記貫通導体は、平面視で重なる第1部分と第2部分とを有し、
 前記第1部分は、前記第2半導体層を厚み方向に沿って貫通し、且つ延在方向の一方の端部である第1端部は前記第1配線層内に突出し、
 前記第2部分は、前記第1配線層内に設けられ、且つ延在方向の一方の端部である第2端部が、前記第1端部に直に接続されている、
 電子機器。
The present technology may be configured as follows.
(1)
a laminated structure in which a first semiconductor layer having a photoelectric conversion region, a first wiring layer, a second semiconductor layer, a second wiring layer, and a third semiconductor layer are laminated in this order;
A through conductor extending along a thickness direction is provided,
the through conductor has a first portion and a second portion that overlap in a plan view,
the first portion penetrates the second semiconductor layer along a thickness direction, and a first end portion which is one end portion in an extending direction protrudes into the first wiring layer;
the second portion is provided in the first wiring layer, and a second end portion which is one end portion in an extending direction is directly connected to the first end portion;
Light detection device.
(2)
a value obtained by dividing a dimension of the first portion along an extension direction by a dimension of the first portion along a width direction is defined as a first aspect ratio, which is an aspect ratio of the first portion;
a value obtained by dividing a dimension of the second portion along an extension direction by a dimension of the second portion along a width direction is defined as a second aspect ratio that is an aspect ratio of the second portion;
When a value obtained by dividing a dimension of the through conductor along an extension direction by a dimension of the first portion along a width direction is defined as a third aspect ratio which is an aspect ratio of the through conductor,
The optical detection device according to (1), wherein both of the first aspect ratio and the second aspect ratio are lower than the third aspect ratio.
(3)
The optical detection device according to (2), wherein the second aspect ratio is lower than the first aspect ratio.
(4)
The optical detection device according to (2) or (3), wherein the dimension along the width direction of the first portion is the dimension along the width direction of a third end portion, which is the other end portion of the first portion in the extension direction.
(5)
The optical detection device according to any one of (2) to (4), wherein the third aspect ratio is 10 or more.
(6)
The optical detection device according to any one of (2) to (4), wherein the third aspect ratio is 15 or more.
(7)
The optical detection device according to any one of (2) to (4), wherein the third aspect ratio is 20 or more.
(8)
The optical detection device according to any one of (2) to (4), wherein the third aspect ratio is 25 or more.
(9)
The optical detection device according to any one of (2) to (4), wherein the third aspect ratio is 30 or more.
(10)
a value obtained by dividing a dimension of the first portion along an extension direction by a dimension of the first portion along a width direction is defined as a first aspect ratio, which is an aspect ratio of the first portion;
When a value obtained by dividing a dimension of the second portion along an extension direction by a dimension of the second portion along a width direction is defined as a second aspect ratio which is an aspect ratio of the second portion,
The optical detection device according to any one of (1) to (4), wherein the first aspect ratio and the second aspect ratio are each equal to or greater than 10 and equal to or less than 25.
(11)
The optical detection device according to (10), wherein the first aspect ratio and the second aspect ratio are each 16 or less.
(12)
The light detection device according to any one of (1) to (11), wherein a difference in dimension between the first end and the second end along the width direction is 20 nm or more.
(13)
A photodetector device according to any one of (1) to (12), wherein a junction position between the first end and the second end is located at a position that is 5 nm to 100 nm in a thickness direction from a surface of the second semiconductor layer facing the first wiring layer.
(14)
 A photodetection device according to any one of (1) to (13), wherein, in a plan view, the through conductor is located in the pixel region, out of the pixel region in which the photoelectric conversion region is provided and the peripheral region provided so as to surround the pixel region.
(15)
A photodetector device described in any one of (1) to (14), wherein a fourth end, which is the other end in the extension direction of the second portion, is located near the surface of the first semiconductor layer on the first wiring layer side.
(16)
A horizontal wiring is provided in the first wiring layer and extends along a horizontal direction,
The photodetector according to any one of (1) to (14), wherein a fourth end portion, which is the other end portion in the extending direction of the second portion, is connected to the horizontal wiring.
(17)
The optical detection device according to any one of (1) to (16), wherein each of the first portion and the second portion is made of tungsten, copper, aluminum, an aluminum-copper alloy, cobalt, gold, nickel, titanium, titanium nitride, molybdenum, tantalum, tantalum nitride, platinum, or ruthenium.
(18)
a light detection device and an optical system that forms an image of light from a subject on the light detection device;
The light detection device includes:
a laminated structure in which a first semiconductor layer having a photoelectric conversion region, a first wiring layer, a second semiconductor layer, a second wiring layer, and a third semiconductor layer are laminated in this order;
A through conductor extending along a thickness direction is provided,
the through conductor has a first portion and a second portion that overlap in a plan view,
the first portion penetrates the second semiconductor layer along a thickness direction, and a first end portion which is one end portion in an extending direction protrudes into the first wiring layer;
the second portion is provided in the first wiring layer, and a second end portion which is one end portion in an extending direction is directly connected to the first end portion;
 Electronic apparatus.
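The aspect-ratio relationships stated in configurations (2), (3), and (10) above are simple ratios of dimensions. The following sketch checks them with hypothetical dimensions chosen purely for illustration (none of these numbers appear in this publication):

```python
def aspect_ratio(extension_dim: float, width_dim: float) -> float:
    # Dimension along the extension direction divided by the
    # dimension along the width direction, per configuration (2).
    return extension_dim / width_dim

# Hypothetical dimensions in nm: the first portion and second portion
# stack to form the through conductor, so their lengths sum.
first_ar = aspect_ratio(3000.0, 200.0)   # first portion
second_ar = aspect_ratio(2400.0, 200.0)  # second portion
third_ar = aspect_ratio(5400.0, 200.0)   # whole through conductor; note
                                         # that configuration (2) divides
                                         # by the first portion's width

# Configuration (2): both partial ratios are lower than the whole.
assert first_ar < third_ar and second_ar < third_ar
# Configuration (3): the second aspect ratio is lower than the first.
assert second_ar < first_ar
# Configuration (10): each partial ratio lies between 10 and 25.
assert 10 <= first_ar <= 25 and 10 <= second_ar <= 25
```

Splitting one deep via into two lower-aspect-ratio portions in this way is what allows the whole conductor to reach a third aspect ratio (10 or more, up to 30 or more per configurations (5) to (9)) that neither portion has to be etched or filled at on its own.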
 本技術の範囲は、図示され記載された例示的な実施形態に限定されるものではなく、本技術が目的とするものと均等な効果をもたらす全ての実施形態をも含む。さらに、本技術の範囲は、請求項により画される発明の特徴の組み合わせに限定されるものではなく、全ての開示されたそれぞれの特徴のうち特定の特徴のあらゆる所望する組み合わせによって画されうる。 The scope of the present technology is not limited to the exemplary embodiments shown and described, but includes all embodiments that achieve the same effect as the intended purpose of the present technology. Furthermore, the scope of the present technology is not limited to the combination of the features of the invention defined by the claims, but may be defined by any desired combination of specific features among all the respective features disclosed.
 1 光検出装置
 2 半導体チップ
 2A 画素領域
 2B 周辺領域
 3 画素
 4 垂直駆動回路
 5 カラム信号処理回路
 6 水平駆動回路
 7 出力回路
 8 制御回路
 10 画素駆動線
 11 垂直信号線
 12 水平信号線
 13 ロジック回路
 14 ボンディングパッド
 15 読出し回路
 20 第1半導体層
 20a 光電変換領域
 30 第1配線層
 32 配線
 40 第2半導体層
 50 第2配線層
 60 第3半導体層
 70 貫通導体
 71 第1部分
 71a 第1端部
 71b 第3端部
 72 第2部分
 72a 第2端部
 72b 第4端部
 80 光入射面側積層体
 100 電子機器
 101 固体撮像装置
 102 光学系(光学レンズ)
 103 シャッタ装置
 104 駆動回路
 105 信号処理回路
 106 入射光
LIST OF SYMBOLS
1 Photodetector
2 Semiconductor chip
2A Pixel region
2B Peripheral region
3 Pixel
4 Vertical drive circuit
5 Column signal processing circuit
6 Horizontal drive circuit
7 Output circuit
8 Control circuit
10 Pixel drive line
11 Vertical signal line
12 Horizontal signal line
13 Logic circuit
14 Bonding pad
15 Readout circuit
20 First semiconductor layer
20a Photoelectric conversion region
30 First wiring layer
32 Wiring
40 Second semiconductor layer
50 Second wiring layer
60 Third semiconductor layer
70 Through conductor
71 First portion
71a First end
71b Third end
72 Second portion
72a Second end
72b Fourth end
80 Light-incident surface side laminate
100 Electronic device
101 Solid-state imaging device
102 Optical system (optical lens)
103 Shutter device
104 Drive circuit
105 Signal processing circuit
106 Incident light

Claims (18)

  1.  光電変換領域を有する第1半導体層と、第1配線層と、第2半導体層と、第2配線層と、第3半導体層とを、この順で積層した積層構造を含み、
     厚み方向に沿って延在する貫通導体を備え、
     前記貫通導体は、平面視で重なる第1部分と第2部分とを有し、
     前記第1部分は、前記第2半導体層を厚み方向に沿って貫通し、且つ延在方向の一方の端部である第1端部は前記第1配線層内に突出し、
     前記第2部分は、前記第1配線層内に設けられ、且つ延在方向の一方の端部である第2端部が、前記第1端部に直に接続されている、
     光検出装置。
    a laminated structure in which a first semiconductor layer having a photoelectric conversion region, a first wiring layer, a second semiconductor layer, a second wiring layer, and a third semiconductor layer are laminated in this order;
    A through conductor extending along a thickness direction is provided,
    the through conductor has a first portion and a second portion that overlap in a plan view,
    the first portion penetrates the second semiconductor layer along a thickness direction, and a first end portion which is one end portion in an extending direction protrudes into the first wiring layer;
    the second portion is provided in the first wiring layer, and a second end portion which is one end portion in an extending direction is directly connected to the first end portion;
    Light detection device.
  2.  前記第1部分の延在方向に沿った寸法を前記第1部分の幅方向に沿った寸法で除して得られた値を、前記第1部分のアスペクト比である第1アスペクト比とし、
     前記第2部分の延在方向に沿った寸法を前記第2部分の幅方向に沿った寸法で除して得られた値を、前記第2部分のアスペクト比である第2アスペクト比とし、
     前記貫通導体の延在方向に沿った寸法を前記第1部分の幅方向に沿った寸法で除して得られた値を、前記貫通導体のアスペクト比である第3アスペクト比とした場合に、
     前記第1アスペクト比及び前記第2アスペクト比の両方とも、前記第3アスペクト比より低い構成である、請求項1に記載の光検出装置。
    a value obtained by dividing a dimension of the first portion along an extension direction by a dimension of the first portion along a width direction is defined as a first aspect ratio, which is an aspect ratio of the first portion;
    a value obtained by dividing a dimension of the second portion along an extension direction by a dimension of the second portion along a width direction is defined as a second aspect ratio that is an aspect ratio of the second portion;
    When a value obtained by dividing a dimension of the through conductor along an extension direction by a dimension of the first portion along a width direction is defined as a third aspect ratio which is an aspect ratio of the through conductor,
    The optical detection device of claim 1 , wherein both the first aspect ratio and the second aspect ratio are lower than the third aspect ratio.
  3.  前記第2アスペクト比は、前記第1アスペクト比より低い構成である、請求項2に記載の光検出装置。 The optical detection device of claim 2, wherein the second aspect ratio is lower than the first aspect ratio.
  4.  前記第1部分の幅方向に沿った寸法は、前記第1部分の延在方向の他方の端部である第3端部の幅方向に沿った寸法である、請求項2に記載の光検出装置。 The optical detection device according to claim 2, wherein the dimension along the width direction of the first portion is the dimension along the width direction of a third end portion, which is the other end portion in the extension direction of the first portion.
  5.  前記第3アスペクト比は、10以上である、請求項2に記載の光検出装置。 The optical detection device of claim 2, wherein the third aspect ratio is 10 or greater.
  6.  前記第3アスペクト比は、15以上である、請求項2に記載の光検出装置。 The optical detection device of claim 2, wherein the third aspect ratio is 15 or greater.
  7.  前記第3アスペクト比は、20以上である、請求項2に記載の光検出装置。 The optical detection device of claim 2, wherein the third aspect ratio is 20 or greater.
  8.  前記第3アスペクト比は、25以上である、請求項2に記載の光検出装置。 The optical detection device of claim 2, wherein the third aspect ratio is 25 or greater.
  9.  前記第3アスペクト比は、30以上である、請求項2に記載の光検出装置。 The optical detection device of claim 2, wherein the third aspect ratio is 30 or greater.
  10.  前記第1部分の延在方向に沿った寸法を前記第1部分の幅方向に沿った寸法で除して得られた値を、前記第1部分のアスペクト比である第1アスペクト比とし、
     前記第2部分の延在方向に沿った寸法を前記第2部分の幅方向に沿った寸法で除して得られた値を、前記第2部分のアスペクト比である第2アスペクト比とした場合に、
     前記第1アスペクト比及び前記第2アスペクト比は、それぞれ10以上25以下である、請求項1に記載の光検出装置。
    a value obtained by dividing a dimension of the first portion along an extension direction by a dimension of the first portion along a width direction is defined as a first aspect ratio, which is an aspect ratio of the first portion;
    When a value obtained by dividing a dimension of the second portion along an extension direction by a dimension of the second portion along a width direction is defined as a second aspect ratio which is an aspect ratio of the second portion,
    The photodetector according to claim 1 , wherein the first aspect ratio and the second aspect ratio are each equal to or greater than 10 and equal to or less than 25.
  11.  前記第1アスペクト比及び前記第2アスペクト比は、それぞれ16以下である、請求項10に記載の光検出装置。 The optical detection device of claim 10, wherein the first aspect ratio and the second aspect ratio are each 16 or less.
  12.  前記第1端部と前記第2端部との幅方向に沿った寸法の差は、20nm以上である、請求項1に記載の光検出装置。 The optical detection device of claim 1, wherein the difference in dimension between the first end and the second end along the width direction is 20 nm or more.
  13.  前記第1端部と前記第2端部との接合位置は、前記第2半導体層の前記第1配線層側の面から厚み方向に5nm以上100nm以下の位置にある、請求項1に記載の光検出装置。 The photodetector according to claim 1, wherein the junction position between the first end and the second end is at a position 5 nm to 100 nm in the thickness direction from the surface of the second semiconductor layer facing the first wiring layer.
  14.  前記貫通導体は、平面視で、前記光電変換領域が設けられた画素領域と前記画素領域を囲むようにして設けられた周辺領域とのうちの前記画素領域に位置している、請求項1に記載の光検出装置。 The photodetector according to claim 1, wherein, in a plan view, the through conductor is located in the pixel region, out of the pixel region in which the photoelectric conversion region is provided and the peripheral region provided so as to surround the pixel region.
  15.  前記第2部分の延在方向の他方の端部である第4端部は、前記第1半導体層の前記第1配線層側の面の近傍にある、請求項1に記載の光検出装置。 The photodetector according to claim 1, wherein the fourth end, which is the other end of the second portion in the extension direction, is located near the surface of the first semiconductor layer facing the first wiring layer.
  16.  前記第1配線層に設けられ且つ水平方向に沿って延在する横配線を有し、
     前記第2部分の延在方向の他方の端部である第4端部は、前記横配線に接続されている、請求項1に記載の光検出装置。
    A horizontal wiring is provided in the first wiring layer and extends along a horizontal direction,
    The photodetector according to claim 1 , wherein a fourth end portion which is the other end portion in the extending direction of the second portion is connected to the horizontal wiring.
  17.  前記第1部分及び前記第2部分のそれぞれは、タングステン、銅、アルミニウム、アルミニウム-銅合金、コバルト、金、ニッケル、チタン、窒化チタン、モリブデン、タンタル、窒化タンタル、プラチナ、又はルテニウム製である、請求項1に記載の光検出装置。 The optical detection device of claim 1, wherein each of the first and second parts is made of tungsten, copper, aluminum, an aluminum-copper alloy, cobalt, gold, nickel, titanium, titanium nitride, molybdenum, tantalum, tantalum nitride, platinum, or ruthenium.
  18.  光検出装置と、前記光検出装置に被写体からの像光を結像させる光学系と、を備え、
     前記光検出装置は、
     光電変換領域を有する第1半導体層と、第1配線層と、第2半導体層と、第2配線層と、第3半導体層とを、この順で積層した積層構造を含み、
     厚み方向に沿って延在する貫通導体を備え、
     前記貫通導体は、平面視で重なる第1部分と第2部分とを有し、
     前記第1部分は、前記第2半導体層を厚み方向に沿って貫通し、且つ延在方向の一方の端部である第1端部は前記第1配線層内に突出し、
     前記第2部分は、前記第1配線層内に設けられ、且つ延在方向の一方の端部である第2端部が、前記第1端部に直に接続されている、
     電子機器。
    a light detection device and an optical system that forms an image of light from a subject on the light detection device;
    The light detection device includes:
    a laminated structure in which a first semiconductor layer having a photoelectric conversion region, a first wiring layer, a second semiconductor layer, a second wiring layer, and a third semiconductor layer are laminated in this order;
    A through conductor extending along a thickness direction is provided,
    the through conductor has a first portion and a second portion that overlap in a plan view,
    the first portion penetrates the second semiconductor layer along a thickness direction, and a first end portion which is one end portion in an extending direction protrudes into the first wiring layer;
    the second portion is provided in the first wiring layer, and a second end portion which is one end portion in an extending direction is directly connected to the first end portion;
     Electronic apparatus.
PCT/JP2023/037431 2022-11-01 2023-10-16 Light-detecting device and electronic apparatus WO2024095751A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-175713 2022-11-01
JP2022175713 2022-11-01

Publications (1)

Publication Number Publication Date
WO2024095751A1

Family

ID=90930296

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2023/037431 WO2024095751A1 (en) 2022-11-01 2023-10-16 Light-detecting device and electronic apparatus

Country Status (1)

Country Link
WO (1) WO2024095751A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2014051061A1 (en) * 2012-09-28 2014-04-03 田中貴金属工業株式会社 Substrate processing method for supporting a catalyst particle for plating process
WO2020262582A1 (en) * 2019-06-26 2020-12-30 ソニーセミコンダクタソリューションズ株式会社 Semiconductor apparatus and method for manufacturing same
JP2022095515A (en) * 2020-12-16 2022-06-28 サムソン エレクトロ-メカニックス カンパニーリミテッド. Substrate with built-in connecting structure
