WO2022209365A1 - Imaging device, method for manufacturing imaging device, and electronic apparatus

Imaging device, method for manufacturing imaging device, and electronic apparatus

Info

Publication number
WO2022209365A1
WO2022209365A1 (application PCT/JP2022/005777)
Authority
WO
WIPO (PCT)
Prior art keywords
semiconductor substrate
imaging device
pixel
pixels
imaging
Prior art date
Application number
PCT/JP2022/005777
Other languages
French (fr)
Japanese (ja)
Inventor
孝好 本多
正真 塩山
Original Assignee
Sony Semiconductor Solutions Corporation (ソニーセミコンダクタソリューションズ株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corporation
Publication of WO2022209365A1 publication Critical patent/WO2022209365A1/en

Classifications

    • HELECTRICITY
    • H01ELECTRIC ELEMENTS
    • H01LSEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L21/00Processes or apparatus adapted for the manufacture or treatment of semiconductor or solid state devices or of parts thereof
    • H01L21/70Manufacture or treatment of devices consisting of a plurality of solid state components formed in or on a common substrate or of parts thereof; Manufacture of integrated circuit devices or of parts thereof
    • H01L21/71Manufacture of specific parts of devices defined in group H01L21/70
    • H01L21/76Making of isolation regions between components
    • H01L21/762Dielectric regions, e.g. EPIC dielectric isolation, LOCOS; Trench refilling techniques, SOI technology, use of channel stoppers
    • HELECTRICITY
    • H01ELECTRIC ELEMENTS
    • H01LSEMICONDUCTOR DEVICES NOT COVERED BY CLASS H10
    • H01L27/00Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate
    • H01L27/14Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components sensitive to infrared radiation, light, electromagnetic radiation of shorter wavelength or corpuscular radiation and specially adapted either for the conversion of the energy of such radiation into electrical energy or for the control of electrical energy by such radiation
    • H01L27/144Devices controlled by radiation
    • H01L27/146Imager structures
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/70SSIS architectures; Circuits associated therewith

Definitions

  • The present disclosure relates to an imaging device, a method for manufacturing an imaging device, and an electronic apparatus.
  • The present disclosure proposes an imaging device and an electronic apparatus capable of avoiding deterioration of captured images while improving the accuracy of phase difference detection.
  • According to the present disclosure, there is provided an imaging device comprising: a semiconductor substrate; and a plurality of imaging elements arranged in a matrix on the semiconductor substrate along the row direction and the column direction, each performing photoelectric conversion on incident light. Each of the imaging elements includes: a plurality of pixels provided adjacent to each other in the semiconductor substrate; an element isolation wall provided so as to surround the plurality of pixels and penetrate the semiconductor substrate; an on-chip lens provided above a first surface of the semiconductor substrate so as to be shared by the plurality of pixels; and a first pixel isolation portion that separates the plurality of pixels. When viewed from above a second surface of the semiconductor substrate facing the first surface, the first pixel isolation portion has a narrow portion whose width is narrower than the width of the element isolation wall, and the narrow portion penetrates partway through the semiconductor substrate from the second surface along the film thickness direction of the semiconductor substrate.
  • Further, according to the present disclosure, there is provided an imaging device in which each of the imaging elements includes: a plurality of pixels provided adjacent to each other in the semiconductor substrate; an on-chip lens provided above a first surface of the semiconductor substrate so as to be shared by the plurality of pixels; and one or a plurality of second pixel isolation portions that separate the plurality of pixels. Viewed from above a second surface of the semiconductor substrate facing the first surface, each second pixel isolation portion has a substantially circular cross section.
  • Further, according to the present disclosure, there is provided a method for manufacturing an imaging device, the method including: etching a semiconductor substrate using a mask having rectangular openings with varying widths; and embedding an insulating film in grooves formed by the etching.
  • Further, according to the present disclosure, there is provided an electronic apparatus including an imaging device that comprises a semiconductor substrate and a plurality of imaging elements arranged in a matrix along the row direction and the column direction on the semiconductor substrate, each performing photoelectric conversion on incident light. Each of the imaging elements includes: a plurality of pixels provided adjacent to each other in the semiconductor substrate; an element isolation wall surrounding the plurality of pixels and penetrating the semiconductor substrate; an on-chip lens provided above a first surface of the semiconductor substrate so as to be shared by the plurality of pixels; and a first pixel isolation portion that separates the plurality of pixels. When viewed from above a second surface of the semiconductor substrate facing the first surface, the first pixel isolation portion has a narrow portion whose width is narrower than the width of the element isolation wall, and the narrow portion penetrates partway through the semiconductor substrate from the second surface along the film thickness direction of the semiconductor substrate.
  • FIG. 1 is an explanatory diagram showing a planar configuration example of an imaging device 1 according to an embodiment of the present disclosure.
  • FIG. 2 is an explanatory diagram showing a cross section of an imaging element 100 according to a first embodiment of the present disclosure and a comparative example.
  • FIG. 3 is an explanatory diagram (part 1) showing a plane of the imaging element 100 according to the comparative example.
  • FIG. 4 is an explanatory diagram showing a plane of the imaging element 100 according to the first embodiment of the present disclosure.
  • FIG. 5 is an explanatory diagram showing a cross section of the imaging element 100 according to the first embodiment of the present disclosure.
  • FIG. 6 is an explanatory diagram (part 1) showing a method for manufacturing the imaging element 100 according to the first embodiment of the present disclosure.
  • FIG. 7 is an explanatory diagram (part 2) showing the method for manufacturing the imaging element 100 according to the first embodiment of the present disclosure.
  • FIG. 8 is an explanatory diagram (part 3) showing the method for manufacturing the imaging element 100 according to the first embodiment of the present disclosure.
  • FIG. 9 is an explanatory diagram showing a plane of the imaging element 100 according to Modification 1 of the first embodiment of the present disclosure.
  • FIG. 10 is an explanatory diagram (part 1) showing a cross section of the imaging element 100 according to Modification 1 of the first embodiment of the present disclosure.
  • FIG. 11 is an explanatory diagram (part 2) showing a cross section of the imaging element 100 according to Modification 1 of the first embodiment of the present disclosure.
  • FIG. 12 is an explanatory diagram (part 1) showing a plane of the imaging element 100 according to Modification 2 of the first embodiment of the present disclosure.
  • FIG. 13 is an explanatory diagram (part 1) showing a cross section of the imaging element 100 according to Modification 2 of the first embodiment of the present disclosure.
  • FIG. 14 is an explanatory diagram (part 2) showing a plane of the imaging element 100 according to Modification 2 of the first embodiment of the present disclosure.
  • FIG. 15 is an explanatory diagram (part 2) showing a cross section of the imaging element 100 according to Modification 2 of the first embodiment of the present disclosure.
  • FIG. 16 is an explanatory diagram (part 3) showing a plane of the imaging element 100 according to Modification 2 of the first embodiment of the present disclosure.
  • FIG. 17 is an explanatory diagram showing a part of a cross section of the imaging element 100 according to the comparative example.
  • FIG. 18 is an explanatory diagram (part 2) showing a plane of the imaging element 100 according to the comparative example.
  • FIG. 19 is an explanatory diagram showing a plane of an imaging element 100 according to a second embodiment of the present disclosure.
  • FIG. 20 is an explanatory diagram (part 1) showing a method for manufacturing the imaging element 100 according to the second embodiment of the present disclosure.
  • FIG. 21 is an explanatory diagram (part 2) showing the method for manufacturing the imaging element 100 according to the second embodiment of the present disclosure.
  • FIG. 22 is an explanatory diagram (part 3) showing the method for manufacturing the imaging element 100 according to the second embodiment of the present disclosure.
  • FIG. 23 is an explanatory diagram (part 1) showing a plane of the imaging element 100 according to a modification of the second embodiment of the present disclosure.
  • FIG. 24 is an explanatory diagram (part 2) showing a plane of the imaging element 100 according to the modification of the second embodiment of the present disclosure.
  • FIG. 25 is an explanatory diagram showing an example of a schematic functional configuration of a camera.
  • FIG. 26 is a block diagram showing an example of a schematic functional configuration of a smartphone.
  • FIG. 27 is a diagram showing an example of a schematic configuration of an endoscopic surgery system.
  • FIG. 28 is a block diagram showing an example of the functional configurations of a camera head and a CCU.
  • FIG. 29 is a block diagram showing an example of a schematic configuration of a vehicle control system.
  • FIG. 30 is an explanatory diagram showing an example of installation positions of an outside-vehicle information detection unit and an imaging unit.
  • Note that the drawings referred to in the following description are for describing an embodiment of the present disclosure and for facilitating understanding thereof; for the sake of clarity, the shapes, dimensions, and ratios shown in the drawings may differ from the actual ones. Furthermore, the imaging device shown in the drawings can be appropriately modified in design in consideration of the following description and known techniques. In the description using cross-sectional views of the imaging device, the vertical direction of the layered structure of the imaging device corresponds to the relative direction when the light receiving surface on which light is incident on the imaging device faces up, and may differ from the vertical direction according to the actual gravitational acceleration.
  • In addition, the dimensions and shapes referred to in the following description mean not only dimensions and shapes defined mathematically or geometrically, but also dimensions and shapes including a degree of difference (error or distortion) allowed in the operation of the imaging device and in its manufacturing process, as well as shapes similar thereto. Furthermore, "substantially the same" used for specific dimensions in the following description does not mean only a perfect mathematical or geometrical match, but also includes cases where there is an allowable degree of difference (error or distortion) in the operation of the imaging device and in its manufacturing process.
  • In the following description, "electrically connecting" means connecting a plurality of elements either directly or indirectly via another element.
  • In the following description, "sharing" means that mutually different elements (for example, pixels) jointly use one other element (for example, an on-chip lens).
  • FIG. 1 is an explanatory diagram showing a planar configuration example of an imaging device 1 according to an embodiment of the present disclosure.
  • As shown in FIG. 1, an imaging device 1 according to the embodiment of the present disclosure includes a pixel array section 30, in which a plurality of imaging elements 100 are arranged in a matrix on a semiconductor substrate 10 made of, for example, silicon, and a peripheral circuit section provided so as to surround the pixel array section 30.
  • the imaging apparatus 1 includes a vertical drive circuit section 32, a column signal processing circuit section 34, a horizontal drive circuit section 36, an output circuit section 38, a control circuit section 40, and the like as peripheral circuit sections. Details of each block of the imaging device 1 will be described below.
  • The pixel array section 30 has a plurality of imaging elements 100 arranged two-dimensionally in a matrix on the semiconductor substrate 10 along the row direction (X direction in FIG. 1) and the column direction (Y direction in FIG. 1).
  • Each imaging element 100 is an element that performs photoelectric conversion on incident light, and includes a photoelectric conversion unit (not shown) and a plurality of pixel transistors (for example, MOS (Metal-Oxide-Semiconductor) transistors; not shown).
  • the pixel transistor includes four MOS transistors, for example, a transfer transistor, a selection transistor, a reset transistor, and an amplification transistor.
  • a plurality of imaging elements 100 are arranged two-dimensionally according to, for example, the Bayer arrangement.
  • Here, the Bayer arrangement is an array pattern in which the imaging elements 100 that absorb light having a green wavelength (for example, 495 nm to 570 nm) and generate charge are arranged in a checkered pattern, and in the remaining portions the imaging elements 100 that absorb light having a red wavelength (for example, 620 nm to 750 nm) and generate charge and the imaging elements 100 that absorb light having a blue wavelength (for example, 450 nm to 495 nm) and generate charge are arranged alternately row by row. A detailed structure of the imaging element 100 will be described later.
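As a side note, the tiling just described (green on a checkerboard, the remaining sites filled by red and blue on alternating rows) can be sketched in a few lines. This is illustrative only: the function name `bayer_pattern` and the particular RGGB phase chosen here are assumptions, not part of the disclosure, which equally admits the other equivalent phases.

```python
import numpy as np

def bayer_pattern(rows, cols):
    """Build a Bayer color-filter map: green on a checkerboard, with the
    remaining sites filled by red and blue on alternating rows (RGGB phase)."""
    cfa = np.empty((rows, cols), dtype="<U1")
    for y in range(rows):
        for x in range(cols):
            if (x + y) % 2 == 1:
                cfa[y, x] = "G"   # green occupies the checkerboard sites
            elif y % 2 == 0:
                cfa[y, x] = "R"   # red fills the even rows
            else:
                cfa[y, x] = "B"   # blue fills the odd rows
    return cfa

print(bayer_pattern(4, 4))
```

In any such tiling, green sites make up half of the array, matching the checkered pattern described above.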
  • The vertical drive circuit section 32 is formed of, for example, a shift register; it selects a pixel drive wiring 42, supplies the selected pixel drive wiring 42 with a pulse for driving the imaging elements 100, and drives the imaging elements 100 in units of rows. That is, the vertical drive circuit section 32 sequentially and selectively scans the imaging elements 100 of the pixel array section 30 in the vertical direction (the Y direction in FIG. 1) row by row, and supplies pixel signals, based on the signal charges generated by the photoelectric conversion units (not shown) of the imaging elements 100 according to the amount of received light, to the column signal processing circuit section 34 described later through vertical signal lines 44.
  • The column signal processing circuit section 34 is arranged for each column of imaging elements 100, and performs signal processing such as noise removal, for each pixel column, on the pixel signals output from one row of imaging elements 100.
  • the column signal processing circuit section 34 performs signal processing such as CDS (Correlated Double Sampling) and AD (Analog-Digital) conversion to remove pixel-specific fixed pattern noise.
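The noise-cancelling idea behind CDS can be illustrated with a toy numerical model (the numbers and variable names here are assumptions for illustration, not the actual circuit): the same pixel-specific offset appears in both the reset sample and the signal sample, so subtracting the two leaves only the light-dependent component.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Pixel-specific fixed-pattern offsets (identical in both samples).
fixed_pattern = rng.normal(0.0, 5.0, size=8)
true_signal = np.linspace(10.0, 80.0, 8)       # light-dependent component

reset_sample = fixed_pattern                   # sampled right after pixel reset
signal_sample = true_signal + fixed_pattern    # sampled after charge transfer

cds_output = signal_sample - reset_sample      # offsets cancel term by term
print(np.allclose(cds_output, true_signal))    # fixed-pattern noise removed
```

The subtraction is what removes the pixel-specific fixed-pattern noise mentioned above; the AD conversion then digitizes the cleaned difference.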
  • The horizontal drive circuit section 36 is formed of, for example, a shift register, and sequentially outputs horizontal scanning pulses to select each of the column signal processing circuit sections 34 described above in turn, causing each of the column signal processing circuit sections 34 to output its pixel signal onto the horizontal signal line 46.
  • the output circuit section 38 performs signal processing on the pixel signals sequentially supplied from each of the column signal processing circuit sections 34 through the horizontal signal line 46 and outputs the processed signal.
  • the output circuit section 38 may function, for example, as a functional section that performs buffering, or may perform processing such as black level adjustment, column variation correction, and various digital signal processing. Note that buffering refers to temporarily storing pixel signals in order to compensate for differences in processing speed and transfer speed when exchanging pixel signals.
  • the input/output terminal 48 is a terminal for exchanging signals with an external device.
  • The control circuit section 40 receives an input clock and data for instructing an operation mode and the like, and outputs data such as internal information of the imaging device 1. That is, based on the vertical synchronization signal, the horizontal synchronization signal, and the master clock, the control circuit section 40 generates clock signals and control signals that serve as a reference for the operation of the vertical drive circuit section 32, the column signal processing circuit section 34, the horizontal drive circuit section 36, and the like. The control circuit section 40 then outputs the generated clock signals and control signals to the vertical drive circuit section 32, the column signal processing circuit section 34, the horizontal drive circuit section 36, and the like.
  • By the way, the inventors of the present invention have earnestly studied providing phase difference detection pixels over the entire surface of the pixel array section 30 of the imaging device 1. Under such circumstances, it has been considered to provide, over the entire surface of the pixel array section 30, an imaging element that functions as one imaging element during imaging and as a pair of phase difference detection pixels during phase difference detection (a dual photodiode structure).
  • In such all-pixel phase difference detection, since phase difference detection pixels are provided over the entire surface, the accuracy of phase difference detection can be improved; furthermore, since imaging can be performed with all the imaging elements, deterioration of the captured image can be avoided.
  • Then, in order to improve the accuracy of phase difference detection in all-pixel phase difference detection, the inventors of the present invention conceived of providing an element that physically and electrically separates a pair of phase difference detection pixels, so as to avoid mixing of the outputs of the pair of phase difference detection pixels during phase difference detection.
  • In addition, the present inventors conceived of providing an overflow path between a pair of phase difference detection pixels in order to avoid degradation of captured images in all-pixel phase difference detection. Specifically, during normal imaging, when the charge of one of the pair of phase difference detection pixels is about to saturate, the charge is transferred to the other pixel via the overflow path, so that saturation of the one pixel can be avoided. By providing such an overflow path, the linearity of the pixel signals output from the imaging element can be ensured, and deterioration of the captured image can be prevented.
  • In view of the above, the present inventors have created embodiments according to the present disclosure that make it possible to avoid deterioration of a captured image while improving the accuracy of phase difference detection. Details of the embodiments according to the present disclosure created by the present inventors will be described in sequence below.
  • FIG. 2 is an explanatory diagram showing a cross section of the imaging element 100 according to the first embodiment of the present disclosure and a comparative example, and FIG. 3 is an explanatory diagram showing a plane of the imaging element 100 according to the comparative example.
  • In detail, FIG. 3 corresponds to a cross section of the imaging element 100 taken along line A-A′ shown in FIG. 2.
  • Note that the comparative example here means the imaging element 100 that the inventors of the present invention studied repeatedly before creating the embodiments of the present disclosure.
  • the imaging device 100 has an on-chip lens 200, a color filter 202, a light shielding section 204, a semiconductor substrate 10, and transfer gates 400a and 400b.
  • the semiconductor substrate 10 has a pair of adjacent pixels 300a and 300b each having a photoelectric conversion unit 302 .
  • The semiconductor substrate 10 also includes a pixel isolation portion 304 that isolates the pair of pixels 300a and 300b, an element isolation wall 310 that surrounds the pixels 300a and 300b, and diffusion regions 306 and 308 provided around the pixel isolation portion 304 and the element isolation wall 310, respectively.
  • the layered structure of the imaging element 100 according to the present embodiment will be described below.
  • In the following, the layers are described in order from the upper side (light receiving surface 10a side) to the lower side in FIG. 2. Note that the region 500 in FIG. 2 will be described in the second embodiment of the present disclosure.
  • First, the imaging element 100 has one on-chip lens 200 that is provided above the light receiving surface (first surface) 10a of the semiconductor substrate 10 and condenses incident light onto the photoelectric conversion units 302.
  • the imaging device 100 has a structure in which a pair of pixels 300 a and 300 b are provided for one on-chip lens 200 . That is, the on-chip lens 200 is shared by the two pixels 300a and 300b.
  • The on-chip lens 200 can be made of, for example, a silicon nitride film (SiN), or a resin-based material such as a styrene resin, an acrylic resin, a styrene-acrylic copolymer resin, or a siloxane resin.
  • the incident light condensed by the on-chip lens 200 passes through the color filter 202 provided below the on-chip lens 200 and irradiates each of the photoelectric conversion units 302 of the pair of pixels 300a and 300b.
  • the color filter 202 is either a color filter that transmits a red wavelength component, a color filter that transmits a green wavelength component, or a color filter that transmits a blue wavelength component.
  • color filter 202 may be formed from a material having pigments or dyes dispersed in a transparent binder such as silicone.
  • a light shielding portion 204 is provided on the light receiving surface 10a of the semiconductor substrate 10 so as to surround the color filters 202 .
  • The light shielding portion 204 is provided between adjacent imaging elements 100 and can shield light between them, so as to suppress crosstalk between the adjacent imaging elements 100 and improve the accuracy of phase difference detection. The light shielding portion 204 can be made of, for example, a metal material containing tungsten (W), aluminum (Al), copper (Cu), titanium (Ti), molybdenum (Mo), nickel (Ni), or the like.
  • Within the semiconductor substrate 10 of a second conductivity type (for example, p-type), photoelectric conversion units 302 having an impurity of a first conductivity type (for example, n-type) are provided, one for each of the adjacent pixels 300a and 300b.
  • The photoelectric conversion unit 302 absorbs light having a red, green, or blue wavelength component incident through the color filter 202, and generates charge.
  • the photoelectric conversion unit 302 of the pixel 300a and the photoelectric conversion unit 302 of the pixel 300b can function as a pair of phase difference detection pixels during phase difference detection. That is, in the image sensor 100, the phase difference can be detected by detecting the difference between the pixel signals based on the charges generated by the photoelectric conversion units 302 of the pixels 300a and 300b.
  • Specifically, the photoelectric conversion unit 302 changes the amount of charge it generates, that is, its sensitivity, depending on the incident angle of light with respect to its own optical axis (the axis perpendicular to the light receiving surface). For example, the photoelectric conversion unit 302 has the highest sensitivity when the incident angle is 0 degrees, and the angle dependencies of the sensitivity of the photoelectric conversion unit 302 of the pixel 300a and of the photoelectric conversion unit 302 of the pixel 300b have a line-symmetrical relationship with each other. Therefore, light from the same point is incident on the photoelectric conversion unit 302 of the pixel 300a and on that of the pixel 300b at mutually different incident angles, and each generates an amount of charge corresponding to its incident angle, so that a shift (phase difference) occurs between the images formed on them.
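One plausible toy model of this line-symmetrical angle dependence (the Gaussian shape, the peak angles, and all numbers here are assumptions for illustration, not values from the disclosure) treats the two sub-pixels as mirror-imaged responses about normal incidence:

```python
import math

def sensitivity(angle_deg, peak_deg, width=10.0):
    """Toy angular response: maximal at peak_deg, falling off smoothly."""
    return math.exp(-((angle_deg - peak_deg) / width) ** 2)

# Pixels 300a and 300b view the exit pupil from opposite sides, so their
# responses are modeled as mirror images about normal incidence (0 deg).
def resp_a(angle_deg):
    return sensitivity(angle_deg, peak_deg=-8.0)

def resp_b(angle_deg):
    return sensitivity(angle_deg, peak_deg=+8.0)

print(abs(resp_a(5.0) - resp_b(-5.0)) < 1e-12)  # line symmetry holds
```

Because the two responses are mirror images, light arriving at a given angle excites the two pixels unequally, which is exactly what makes the image shift detectable.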
  • The phase difference can thus be detected by detecting the difference between pixel signals based on the amounts of charge generated by the photoelectric conversion unit 302 of the pixel 300a and the photoelectric conversion unit 302 of the pixel 300b. Such a difference (phase difference) between pixel signals is detected as a difference signal by, for example, a detection unit (not shown) of the output circuit section 38; the defocus amount is calculated based on the detected phase difference, and autofocus can be achieved by adjusting (moving) the imaging lens (not shown).
  • In the above description, the phase difference is detected as the difference between the pixel signals of the photoelectric conversion unit 302 of the pixel 300a and the photoelectric conversion unit 302 of the pixel 300b, but the present embodiment is not limited to this; for example, the phase difference may be detected as a ratio between the pixel signals of the photoelectric conversion unit 302 of the pixel 300a and the photoelectric conversion unit 302 of the pixel 300b.
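As an illustration of how such a phase difference could be turned into a shift estimate (a simplified 1-D sketch with assumed signals; the disclosure itself only specifies using the difference or ratio of the pixel signals), one can scan candidate shifts and pick the one that minimizes the mismatch between the left- and right-pixel signal rows:

```python
import numpy as np

def detect_phase_difference(left, right, max_shift=5):
    """Estimate the lateral shift between left- and right-pixel signal rows
    by minimizing the mean absolute difference over candidate shifts."""
    left, right = np.asarray(left), np.asarray(right)
    n = len(left)
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, s), min(n, n + s)   # overlap comparing left[i] to right[i - s]
        cost = np.abs(left[lo:hi] - right[lo - s:hi - s]).mean()
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

scene = np.exp(-((np.arange(32) - 16.0) ** 2) / 8.0)      # a blurred spot
print(detect_phase_difference(np.roll(scene, 2), scene))  # → 2
```

The recovered shift plays the role of the detected phase difference from which a defocus amount would be computed.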
  • The pixel isolation portion 304 is configured as a DTI (Deep Trench Isolation): it consists of a trench (not shown) provided so as to penetrate partway through the semiconductor substrate 10 from the surface (second surface) 10b opposite to the light receiving surface 10a, along the thickness direction of the semiconductor substrate 10, and of a material embedded in the trench, such as an oxide film (for example, a silicon oxide film (SiO) or a titanium oxide film (TiO)), a silicon nitride film, amorphous silicon, polycrystalline silicon, or a metal film of aluminum, tungsten, or the like.
  • Providing the pixel isolation portion 304 makes it possible to physically and effectively separate the pair of pixels 300a and 300b, so that the accuracy of phase difference detection can be improved.
  • In addition, a second conductivity type (for example, p-type) impurity is diffused around the pixel isolation portion 304 to form the diffusion region 306. The diffusion region 306 electrically separates the pair of pixels 300a and 300b, so that the accuracy of phase difference detection can be further improved and the occurrence of color mixing can be suppressed.
  • Similarly, a second conductivity type (for example, p-type) impurity is diffused around the element isolation wall 310 to form the diffusion region 308.
  • a diffusion region 320 containing impurities of a first conductivity type (for example, n-type) is provided above the pixel separation section 304 (on the side of the light receiving surface 10a).
  • The diffusion region 320 serves as an overflow path through which charge can be exchanged between the pixels 300a and 300b. Specifically, when the charge of one of the pixels 300a and 300b is about to saturate during normal imaging, the charge is transferred to the other pixel via the overflow path, so that saturation of the one pixel can be avoided. By providing such an overflow path, the linearity of the pixel signals output from the imaging element 100 can be ensured, and degradation of captured images can be prevented.
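A toy charge-budget model (the full-well value and function name are assumed for illustration; the real mechanism is a potential barrier formed in the substrate, not arithmetic) shows why such an overflow path preserves the linearity of the pair's combined signal:

```python
FULL_WELL = 1000  # assumed per-pixel saturation charge Qs, arbitrary units

def integrate_pair(charge_a, charge_b):
    """Pixel pair with an overflow path: charge beyond one pixel's full well
    spills into the neighbour instead of being lost, so the pair's summed
    signal stays proportional to the light until BOTH wells fill."""
    spill_a = max(0, charge_a - FULL_WELL)
    spill_b = max(0, charge_b - FULL_WELL)
    a = min(FULL_WELL, charge_a + spill_b)
    b = min(FULL_WELL, charge_b + spill_a)
    return a, b

# One pixel is overexposed: without the path its excess 200 would be lost;
# with it, the pair still reports all 1500 generated charges.
print(integrate_pair(1200, 300))  # → (1000, 500)
```

Without the spill terms, the overexposed pixel would clip at 1000 and the pair would report 1300 instead of 1500, breaking the linearity described above.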
  • an element isolation wall 310 is provided in the semiconductor substrate 10 to surround the pixels 300 a and 300 b and physically separate the adjacent imaging devices 100 .
  • The element isolation wall 310 consists of a trench (not shown) provided so as to penetrate the semiconductor substrate 10 along its thickness direction, and of a material embedded in the trench, such as an oxide film (for example, a silicon oxide film or a titanium oxide film), a silicon nitride film, amorphous silicon, polycrystalline silicon, or a metal film of aluminum, tungsten, or the like.
  • Charges generated by the photoelectric conversion unit 302 of the pixel 300a and by the photoelectric conversion unit 302 of the pixel 300b are transferred through the transfer gates 400a and 400b of the transfer transistors (one type of the pixel transistors described above), which are provided on the surface 10b of the semiconductor substrate 10 opposite to the light receiving surface 10a.
  • the transfer gates 400a and 400b can be formed, for example, from metal films.
  • the charges may be accumulated, for example, in a floating diffusion portion (not shown) provided in a semiconductor region having a first conductivity type (for example, n-type) provided in the semiconductor substrate 10 .
  • Note that the floating diffusion portion is not limited to being provided in the semiconductor substrate 10; for example, the floating diffusion portion may be provided in another substrate (not shown) laminated on the semiconductor substrate 10.
  • In addition, a plurality of pixel transistors (not shown) other than the transfer transistors described above, used for reading out charges as pixel signals and the like, may be provided. Furthermore, in the imaging element 100, these pixel transistors may be provided on the semiconductor substrate 10, or may be provided on another substrate (not shown) stacked on the semiconductor substrate 10.
  • As described above, in the imaging element 100 according to the comparative example, the pixel isolation portion 304 that physically separates the pair of pixels 300a and 300b and the diffusion region 306 that electrically separates them are provided, so that phase difference detection using the pixels 300a and 300b is possible. Furthermore, since the imaging element 100 according to the comparative example is provided with an overflow path, when the charge of one of the pixels 300a and 300b is about to saturate during normal imaging, charge moves through the overflow path to the other pixel, so that saturation of the one pixel can be avoided. That is, in the imaging element 100 according to the comparative example, providing such an overflow path ensures the linearity of the pixel signals output from the imaging element 100 and prevents deterioration of the captured image.
  • Meanwhile, the inventors of the present invention wanted to further increase the maximum amount of charge that can be generated by photoelectric conversion in the imaging element 100 without saturating even under strong light (the saturation charge amount Qs), even as the imaging element 100 is further miniaturized as described above. However, as a result of studies by the present inventors, it was found that there is a limit to increasing the saturation charge amount Qs in the configuration of the imaging element 100 according to the comparative example described above.
  • in the imaging device 100 according to the comparative example, the diffusion region 320 functioning as an overflow path is provided above the pixel separation section 304 (on the light receiving surface 10a side), as described above. That is, the diffusion region 320 functions as an overflow path and does not function as a photoelectric conversion section. Therefore, in the imaging device 100, the presence of the overflow path limits the volume of the photoelectric conversion unit 302 that absorbs light and generates electric charge, and thus there was a limit to increasing the saturated charge amount Qs.
  • the step of forming the pixel isolation portion 304 penetrating halfway through the film thickness of the semiconductor substrate 10 is performed in a separate step from the formation of the element isolation wall 310 penetrating the semiconductor substrate 10 .
  • some margin is required in the layout of the pixel isolation portion 304, the diffusion region 306 around the pixel isolation portion 304, and the diffusion region 320 serving as an overflow path.
  • the volume of the photoelectric conversion unit 302 is reduced by the amount of margin in the layout, and similarly to the above, there is a limit to increasing the saturated charge amount Qs.
  • the present inventors diligently studied the imaging device 100 according to the comparative example and conceived of increasing the volume of the photoelectric conversion section 302 by narrowing the width of a part of the pixel separation section 304, thereby reducing the volume of the region functioning as an overflow path. Further, the present inventors independently found that, by narrowing the width of a part of the pixel isolation portion 304, it is possible to utilize the microloading effect and the self-stop phenomenon (the microloading effect and the self-stop phenomenon will be described later). Details of the first embodiment of the present disclosure created by the present inventors will be sequentially described below.
  • FIG. 4 is an explanatory diagram showing a plan view of the imaging element 100 according to the present embodiment, and in detail corresponds to a cross section of the imaging element 100 taken along line AA' shown in FIG.
  • the description of the configuration common to the image sensor 100 according to the comparative example described with reference to FIGS. 2 and 3 will be omitted.
  • pixels 300a and 300b adjacent to each other are separated by a pixel isolation portion (first pixel isolation portion) 304 integrally formed with an element isolation wall 310.
  • the pixel separation section 304 has, at the center of the image pickup element 100, a narrow portion 304a whose width c is narrower than the width a of the element separation wall 310.
  • the pixel isolation portion 304 further has a wide portion 304b sandwiching the narrow portion 304a, and the width b of the wide portion 304b is substantially the same as the width a of the element isolation wall 310, for example.
  • a diffusion region 306 containing impurities of the second conductivity type (for example, p-type) is formed around the pixel isolation portion 304 .
  • an overflow path (indicated by the arrows in FIG. 4) consisting of a diffusion region 320 containing impurities of the first conductivity type (for example, n-type) is provided.
  • the width c of the narrow portion 304a is smaller than the width a of the element isolation wall 310.
  • the length e of the narrow portion 304a preferably satisfies e ≦ d × 0.8 with respect to the length d of the imaging device 100.
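As a quick sanity check, the plan-view dimension relationships described above (the narrow portion 304a narrower than the element isolation wall 310, the wide portion 304b roughly as wide as the wall, and the narrow-portion length bounded by e ≦ d × 0.8) can be encoded as a small validation helper. This is an illustrative sketch only: the function name and the 10% tolerance used for "substantially the same" are assumptions, not values from the specification.

```python
def check_plan_layout(a, b, c, d, e, tol=0.1):
    """Check the plan-view dimensions of the pixel separation section.

    a: width of the element isolation wall 310
    b: width of the wide portion 304b
    c: width of the narrow portion 304a
    d: length of the imaging device 100
    e: length of the narrow portion 304a
    tol: assumed tolerance for "substantially the same" (illustrative)
    """
    return {
        "narrow portion narrower than wall (c < a)": c < a,
        "wide portion ~ wall width (b ~= a)": abs(b - a) <= tol * a,
        "narrow portion length (e <= 0.8 * d)": e <= 0.8 * d,
    }

# Example with hypothetical dimensions (micrometers):
result = check_plan_layout(a=0.2, b=0.2, c=0.1, d=1.0, e=0.7)
```

All three checks pass for the hypothetical dimensions above; each entry of the returned dictionary flags one of the stated layout conditions.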
  • the narrow portion 304a having a narrow width is provided as part of the pixel separation portion 304, as shown in FIG. 4. Therefore, according to the present embodiment, since the width of the narrow portion 304a is narrow, the area of the overflow path becomes small, so even though the overflow path exists, the volume of the photoelectric conversion section 302 can be increased, and accordingly the saturated charge amount Qs can be increased.
  • in the imaging element 100 according to the comparative example, light was reflected or scattered by the upper end face (light receiving surface 10a side) of the pixel separation section 304 and entered the adjacent image sensor 100, so that color mixture and the like occurred (see FIGS. 2 and 3).
  • by providing the narrow width portion 304a as a part of the pixel separation portion 304, the upper end surface of the pixel separation portion 304 becomes narrower, so that reflection and scattering of light at the end surface can be suppressed. Therefore, according to this embodiment, the occurrence of color mixture can be suppressed.
  • FIG. 5 is an explanatory diagram showing a cross section of the image sensor 100 according to the present embodiment, and in detail corresponds to a cross section of the image sensor 100 taken along line BB' shown in FIG.
  • when the imaging element 100 according to the present embodiment is cut along a line perpendicular to line BB' in FIG. 4, the cross section shown in FIG. 2 is obtained, so its illustration is omitted. That is, in the following description, the description of the configuration common to the image sensor 100 according to the comparative example described with reference to FIGS. 2 and 3 will be omitted.
  • the narrow portion 304a located at the center of the image pickup device 100 extends along the thickness direction of the semiconductor substrate 10 from the surface (second surface) 10b opposite to the light-receiving surface of the semiconductor substrate 10, partway through the semiconductor substrate 10.
  • the narrow portion 304a is composed of a trench (not shown) provided so as to penetrate partway through the semiconductor substrate 10 along the thickness direction of the semiconductor substrate 10 from the surface 10b opposite to the light receiving surface, and a material such as an oxide film or a metal film embedded in the trench.
  • the wide portion 304 b is provided so as to penetrate the semiconductor substrate 10 along the thickness direction of the semiconductor substrate 10 .
  • an overflow path is provided between the narrow portion 304a and the light receiving surface 10a of the semiconductor substrate 10, and is composed of a diffusion region 320 containing impurities of the first conductivity type (for example, n-type).
  • a gate electrode 402 of a transistor and the like are provided on the surface 10b of the semiconductor substrate 10 opposite to the light receiving surface 10a.
  • the length (depth) g of the narrow portion 304a in the film thickness direction of the semiconductor substrate 10 preferably satisfies 0.5 × h ≦ g ≦ 0.95 × h with respect to the length (depth) h of the wide portion 304b.
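The depth relationship above can likewise be expressed as a one-line predicate. The function below is an illustrative sketch (names and example values are assumptions, not part of the specification):

```python
def narrow_depth_ok(g, h):
    """Return True if the narrow-portion depth g satisfies
    0.5 * h <= g <= 0.95 * h, where h is the wide-portion depth."""
    return 0.5 * h <= g <= 0.95 * h

# Example: for a wide portion 3.0 um deep, a narrow portion 2.0 um deep
# falls inside the preferred range, while 1.0 um or 2.95 um does not.
```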
  • the narrow portion 304a having a narrow width is provided as part of the pixel separation portion 304, as shown in FIG. 5. Therefore, according to the present embodiment, since the width of the narrow portion 304a is narrow, the area of the overflow path becomes small, so even though the overflow path exists, the volume of the photoelectric conversion section 302 can be increased, and accordingly the saturated charge amount Qs can be increased.
  • FIGS. 6A to 6C are explanatory diagrams showing the manufacturing method of the imaging device 100 according to the present embodiment.
  • the upper part of each drawing corresponds to a plan view obtained by rotating the plane shown in FIG. 4 to the right by 90 degrees.
  • the lower part of each drawing corresponds to a cross-sectional view in which the cross-section shown in FIG. 5 is turned upside down.
  • a resist is patterned using lithography to form a mask 600 having rectangular openings with different widths at positions to be the pixel isolation section 304 and the element isolation wall 310 .
  • dry etching is used to process the semiconductor substrate 10 according to the pattern of the mask 600 to form a trench (groove) 610 .
  • during the dry etching, etching in the narrow opening corresponding to the narrow portion 304a proceeds more slowly than etching in the wide opening corresponding to the wide portion 304b (microloading effect). Furthermore, the etching in the narrow opening corresponding to the narrow width portion 304a eventually stops (self-stop phenomenon).
  • the trenches 610 can be dug to depths corresponding to the widths of the corresponding pattern openings. In other words, in this embodiment, if the width of a trench 610 is narrow, it can only be dug shallowly, while if the width is wide, it can be dug deeply. Accordingly, formation of the trenches 610 corresponding to the narrow portion 304a, the wide portion 304b, and the element isolation walls 310 can be performed at the same time.
  • the trench 610 is shallow in the region corresponding to the narrow portion 304a and deep in the region corresponding to the wide portion 304b. That is, in this embodiment, by utilizing the microloading effect and the self-stop phenomenon, the narrow portion 304a penetrating partway through the film thickness of the semiconductor substrate 10, the wide portion 304b penetrating the semiconductor substrate 10, and the element isolation walls 310 can be formed simultaneously (in the same process).
  • the etching conditions and the like are preferably selected appropriately according to the material and film thickness of the semiconductor substrate 10, the width c of the narrow portion 304a, the width b of the wide portion 304b, the width a of the element isolation wall 310, and the like.
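The width-dependent etch behavior exploited here can be illustrated with a toy numerical model: the effective etch rate falls off in narrow openings (microloading), and etching halts once the trench reaches a critical aspect ratio (self-stop). All numbers and the functional form below are illustrative assumptions, not process parameters from this disclosure.

```python
def trench_depth(opening_width, etch_time, base_rate=1.0,
                 ref_width=200.0, critical_aspect_ratio=10.0):
    """Toy model of trench depth vs. mask-opening width.

    opening_width in nm, etch_time in s, base_rate in nm/s.
    - Microloading: the effective etch rate scales down for openings
      narrower than ref_width.
    - Self-stop: etching halts once depth/width reaches
      critical_aspect_ratio.
    """
    rate = base_rate * min(1.0, opening_width / ref_width)
    time_limited_depth = rate * etch_time
    self_stop_limit = critical_aspect_ratio * opening_width
    return min(time_limited_depth, self_stop_limit)

# A wide opening (like the wide portion 304b or the element isolation
# wall 310) ends up deeper than a narrow opening (like the narrow
# portion 304a) etched for the same time:
deep = trench_depth(opening_width=250.0, etch_time=3000.0)
shallow = trench_depth(opening_width=80.0, etch_time=3000.0)
```

In this model, once the narrow trench self-stops, additional etch time no longer deepens it, which is why one etch step can produce both a through-substrate wall and a partway-through narrow portion.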
  • the pixel isolation portion 304 (narrow portion 304a, wide portion 304b) and the element isolation wall 310 can be formed by filling the trench 610 with a buried film such as an insulating film.
  • the region of the semiconductor substrate 10 located below the narrow portion 304a serves as an overflow path.
  • a diffusion process, a transistor and wiring formation process, a rear surface processing process, and the like are performed.
  • the region of the semiconductor substrate 10 remaining at the bottom in the drawing shown in the lower part of FIG. 6C is removed, leaving only the overflow path region.
  • as described above, in this embodiment, the pixel isolation portion 304 (the narrow portion 304a and the wide portion 304b) and the element isolation wall 310 can be formed simultaneously (in the same process). Therefore, since these are formed in the same process, there is no possibility that the pixel isolation portion 304 (more specifically, the narrow portion 304a and the wide portion 304b) will be misaligned with respect to the element isolation wall 310, so it is not necessary to provide margin in the layout of the pixel separation section 304, the diffusion region 306, and the diffusion region 320. As a result, according to this embodiment, the volume of the photoelectric conversion unit 302 can be increased, and the saturated charge amount Qs can be increased.
  • in this embodiment, the narrow width portion 304a, the wide width portion 304b, and the element isolation wall 310 can be formed at the same time, so an increase in the number of manufacturing steps can be suppressed. Further, according to the present embodiment, the pixel isolation portion 304 (more specifically, the narrow portion 304a and the wide portion 304b) and the element isolation wall 310 can be formed so as to be connected to each other; there is therefore no joint between the pixel isolation portion 304 and the element isolation wall 310, and problems due to the presence of such a joint are less likely to occur. More specifically, if such a joint existed, misalignment between the pixel isolation portion 304 and the element isolation wall 310 could cause the overlapped portion to become deeper (the length in the thickness direction of the semiconductor substrate 10 to become longer).
  • ordinarily, since the narrow portion 304a and the wide portion 304b have different lengths (depths) in the thickness direction of the semiconductor substrate 10, they would have to be formed in separate processes.
  • in this embodiment, the pixel separation unit 304 that physically separates the pair of pixels 300a and 300b, the diffusion region 306 that electrically separates the pair of pixels 300a and 300b, and the like are provided, so the accuracy of phase difference detection is improved. Furthermore, in this embodiment, since an overflow path is provided, when the charge of one of the pixels 300a and 300b is about to saturate during normal imaging, the charge can be transferred to the other pixel, so that saturation of one pixel can be avoided. Therefore, according to the present embodiment, by providing such an overflow path, it is possible to ensure the linearity of the pixel signals output from the image sensor 100 and prevent deterioration of the captured image.
  • in this embodiment, by providing the narrow width portion 304a as a part of the pixel separation portion 304, the area of the overflow path is reduced, so the volume of the photoelectric conversion unit 302 can be increased, and the saturation charge amount Qs of the image sensor 100 can be increased.
  • in this embodiment, the upper end surface of the narrow portion 304a becomes narrower, so that reflection and scattering of light at the end surface can be suppressed. As a result, the occurrence of color mixture can be suppressed.
  • in this embodiment, by utilizing the microloading effect and the self-stop phenomenon, the narrow portion 304a penetrating partway through the film thickness of the semiconductor substrate 10, the wide portion 304b penetrating the semiconductor substrate 10, and the element isolation walls 310 can be formed simultaneously (in the same process). Therefore, since these are formed in the same process, there is no possibility that the pixel isolation portion 304 will be misaligned with respect to the element isolation wall 310, so it is no longer necessary to provide margin in the layout of the pixel isolation portion 304, the diffusion region 306, and the diffusion region 320. As a result, according to this embodiment, the volume of the photoelectric conversion unit 302 can be increased, and the saturated charge amount Qs can be increased.
  • in this embodiment, the narrow width portion 304a, the wide width portion 304b, and the element isolation wall 310 can be formed at the same time, so an increase in the number of manufacturing steps can be suppressed. Further, according to the present embodiment, the pixel isolation portion 304 (more specifically, the narrow portion 304a and the wide portion 304b) and the element isolation wall 310 can be formed so as to be connected to each other, so problems due to joints between the pixel isolation portion 304 (specifically, the narrow portion 304a and the wide portion 304b) and the element isolation wall 310 are less likely to occur.
  • FIG. 7 is an explanatory diagram showing a plan view of the imaging device 100 according to Modification 1 of the present embodiment, and FIGS. 8 and 9 are explanatory diagrams showing cross sections of the imaging device 100 according to Modification 1 of the present embodiment; specifically, they correspond to cross sections of the imaging device 100 taken along line CC' shown in FIG. 7.
  • the pixel separation section 304 can have a plurality of narrow width sections 304a and a plurality of wide width sections 304b that are alternately arranged.
  • an overflow path is provided between each narrow portion 304a and the light-receiving surface 10a of the semiconductor substrate 10.
  • the overflow path is composed of a diffusion region 320 containing impurities of a first conductivity type (for example, n-type).
  • each narrow portion 304a may have substantially the same width and depth.
  • in this embodiment, by utilizing the microloading effect and the self-stop phenomenon, the length (depth) g of each narrow portion 304a in the film thickness direction of the semiconductor substrate 10 is easily controlled according to its width c, so the configuration shown in FIG. 9 can be easily formed.
  • the pixel separating portion 304 is not limited to having a rectangular shape in the plane of the semiconductor substrate 10, and may have various shapes such as a trapezoidal shape, a wavy shape, and a rectangular wavy shape. Therefore, Modification 2 as described below is also possible. Details of Modification 2 of the present embodiment will be described below.
  • FIG. 10 is an explanatory diagram showing a plan view of the imaging element 100 according to Modification 2 of the present embodiment, and FIG. 11 is an explanatory diagram showing a cross section of the imaging element 100 according to Modification 2 of the present embodiment.
  • FIG. 12 is an explanatory diagram showing a plan view of the imaging element 100 according to Modification 2 of the present embodiment, and FIG. 13 is an explanatory diagram showing a cross section of the imaging element 100 according to Modification 2 of the present embodiment; it corresponds to a cross section of the imaging element 100 taken along line EE' shown in FIG. 12.
  • FIG. 14 is an explanatory diagram showing a plane of an imaging device 100 according to Modification 2 of the present embodiment.
  • in Modification 2, for example, as shown in FIG. 10, the pixel separation section 304 may have a width c that gradually narrows toward the center of the imaging element 100.
  • in this case, the cross section of the imaging element 100 is formed as shown in FIG. 11. Specifically, as shown in FIG. 11, the length (depth) g of the pixel isolation portion 304 in the film thickness direction of the semiconductor substrate 10 increases as the width c of the pixel isolation portion 304 increases.
  • a diffusion region 320 serving as an overflow path is provided above the portion where the length g of the pixel separation portion 304 is the shortest.
  • in Modification 2, for example, as shown in FIG. 12, the pixel separation section 304 may have a width c that gradually narrows along the column direction (the Y direction in FIG. 12).
  • in Modification 2, in order to separate the pixels 300a and 300b positioned above and below in FIG. 12, the pixel separation section 304 may have a gradually narrowing width c.
  • in this case, the cross section of the imaging element 100 is formed as shown in FIG. 13. Specifically, as shown in FIG. 13, the length (depth) g of the pixel isolation portion 304 in the film thickness direction of the semiconductor substrate 10 increases as the width c of the pixel isolation portion 304 increases.
  • a diffusion region 320 serving as an overflow path is provided above the portion where the length g of the pixel separation portion 304 is the shortest.
  • the maximum value cmax and the minimum value cmin of the width of the pixel isolation portion 304 preferably satisfy cmin < cmax ≦ a and cmin × 1 ≦ cmax − cmin ≦ cmax × 0.8 with respect to the width a of the element isolation wall 310.
  • the maximum value gmax and the minimum value gmin of the length (depth) of the pixel isolation portion 304 in the film thickness direction of the semiconductor substrate 10 preferably satisfy gmin < gmax ≦ n and gmin × 0.05 ≦ gmax − gmin ≦ n × 0.7 with respect to the length (depth) n of the element isolation wall 310.
  • Modification 2 may be combined with Modification 1.
  • the pixel separation section 304 may be formed in a chevron shape in which the width c repeatedly narrows and widens gradually along the column direction (the Y direction in FIG. 14) of the image sensor 100.
  • FIG. 15 is an explanatory view showing a part of the cross section of the imaging device 100 according to the comparative example, and more specifically, an enlarged view of the region 500 shown in FIG.
  • FIG. 16 is an explanatory diagram showing a plane of the imaging device 100 according to the comparative example, and corresponds to a cross section when the imaging device 100 is cut along line AA' shown in FIG.
  • the comparative example means the imaging element 100 that the inventors of the present invention repeatedly studied before making the embodiment of the present disclosure.
  • in the imaging device 100 according to the comparative example, the pixel separation section 304 includes a central portion 330 made of polysilicon and an outer peripheral portion 332 made of a silicon oxide film covering the outer periphery of the central portion 330.
  • since the outer peripheral portion 332 made of the silicon oxide film and the semiconductor substrate 10 made of silicon have different refractive indices, incident light is reflected at the interface between the outer peripheral portion 332 and the semiconductor substrate 10. The reflected light then enters another imaging element 100 adjacent to the imaging element 100 that should have absorbed it.
  • if the other image pickup device 100 is an image pickup device 100 for detecting light of a color different from that of the one image pickup device 100, color mixture occurs in the other image pickup device 100, resulting in color noise, blurring (deterioration of MTF (Modulation Transfer Function)), and other phenomena.
  • in the imaging device 100 according to the comparative example, the central portion 330 of the pixel separating portion 304 is formed of polysilicon having substantially the same refractive index as that of the semiconductor substrate 10, and the central portion 330 is also designed to absorb light, thereby suppressing reflection at the interface as described above.
  • the inventors of the present invention studied the configuration of the imaging device 100 as described above in more detail and found that there is a limit to improving the photoelectric conversion efficiency (Qe). Specifically, the light absorbed by the central portion 330 of the pixel separating portion 304 is not photoelectrically converted and therefore cannot be used as a pixel signal. Because part of the light incident on the image pickup device 100 is absorbed by the central portion 330, the amount of light incident on the photoelectric conversion unit 302 is reduced, and the amount of charge generated in the photoelectric conversion unit 302 is reduced. Therefore, there is a limit to improvement in photoelectric conversion efficiency (Qe).
  • FIG. 17 is an explanatory diagram showing a plan view of the imaging device 100 according to the present embodiment, and in detail corresponds to a cross section of the imaging device 100 taken along line AA' shown in FIG.
  • the description of the configuration common to the image sensor 100 according to the comparative example described with reference to FIGS. 2 and 3 will be omitted.
  • a plurality of pixel separators (second pixel separators) 350 arranged along the column direction (the Y direction in FIG. 17) in the center of the image sensor 100 separate the pixel 300a and the pixel 300b.
  • the pixel separating section 350 has a substantially circular central portion 352 and an outer peripheral portion 354 covering the outer periphery of the central portion 352.
  • the pixel separating portion 350 is formed as a substantially circular column penetrating partway through the semiconductor substrate 10 from the surface (second surface) 10b opposite to the light receiving surface of the semiconductor substrate 10, along the thickness direction of the semiconductor substrate 10. Between the pixel separation section 350 and the light-receiving surface 10a of the semiconductor substrate 10 located above it, an overflow path composed of a diffusion region 320 containing impurities of the first conductivity type (for example, n-type) is provided. Furthermore, as shown in FIG. 17, a diffusion region 306 containing impurities of the second conductivity type (for example, p-type) is formed around the pixel isolation portion 350.
  • the central portion 352 can be formed of, for example, polysilicon, a silicon oxide film, a silicon nitride film, a titanium nitride film, or the like.
  • providing the outer peripheral portion 354 is not essential, but if provided, it can be formed of a silicon oxide film, a silicon nitride film, a titanium nitride film, or the like.
  • the diameter R of the central portion 352 is preferably small in order to suppress light absorption.
  • it is preferable that the diameter R of the central portion 352, the center-to-center distance (pitch) P between adjacent pixel separation sections 350, and the length (diffusion length) D of the diffusion region 306 shown in FIG. 17 satisfy the relationship P ≦ R + D × 2.
  • it is preferable to appropriately select the number and pitch P of the pixel separating portions 350 and the diameter R of the central portion 352 so as to satisfy such conditions; the materials and the like are also preferably selected as appropriate.
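The pitch condition P ≦ R + D × 2 expresses that the diffusion regions 306 (length D) around two adjacent columnar separators of diameter R must meet or overlap, leaving no unseparated gap between the pixels. A minimal sketch of that check follows; the function name and the example dimensions are illustrative assumptions, not values from the specification.

```python
def pitch_ok(P, R, D):
    """True if the center-to-center pitch P of adjacent pixel
    separation sections 350 satisfies P <= R + 2 * D.

    Geometrically: the gap between adjacent separator surfaces is
    P - R, and each separator contributes a diffusion region of
    length D, so the gap closes when P - R <= 2 * D.
    """
    return P <= R + 2 * D

# Example with hypothetical dimensions in nanometers: separators
# 100 nm in diameter with 60 nm diffusion regions may be pitched
# at most 220 nm apart.
```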
  • the plurality of pixel separation units 350 are not limited to being arranged along the column direction (the Y direction in FIG. 17), and may be arranged along the row direction (the X direction in FIG. 17). Furthermore, in the present embodiment, the number of pixel separating sections 350 is not limited to three, and may be two or more.
  • the pixels 300a and 300b are separated by the pixel separating section 304 having a linear configuration.
  • the pixels 300a and the pixels 300b are separated by arranging a plurality of substantially circular pixel separating portions 350.
  • in this embodiment, since the outer peripheral portion 354 formed of the silicon oxide film can be reduced compared with the comparative example, it is possible to suppress the reflection of light at the interface between the outer peripheral portion 354 and the semiconductor substrate 10 and, in turn, to suppress the occurrence of color mixture and the like.
  • in this embodiment, since the central portion 352 made of silicon can be reduced compared to the comparative example, absorption of light by the central portion 352 can be suppressed. Therefore, according to this embodiment, the photoelectric conversion efficiency (Qe) of the image sensor 100 can be further improved.
  • FIGS. 18A to 18C are explanatory diagrams showing the manufacturing method of the imaging device 100 according to this embodiment. Specifically, the upper part of each drawing corresponds to a plan view, and the lower part corresponds to a cross section of the imaging device 100 taken along line FF' shown in FIG. 17.
  • lithography and dry etching are used to process the semiconductor substrate 10 to form a trench (groove) 610 .
  • the trenches 610 corresponding to the element isolation walls 310 are deep and rectangular, and the trenches 610 corresponding to the pixel isolation portions 350 are shallow and substantially circular.
  • a film of BSG (borosilicate glass) 362 is formed in the trench 610, and heat is applied to diffuse boron from the BSG 362 into the semiconductor substrate 10.
  • the trenches 610 are filled with polysilicon 352, and a diffusion process is further performed. After that, a transistor and wiring forming process, a back surface processing process, a wafer bonding process, and the like are performed.
  • in this embodiment, the pixel separation unit 304 that physically separates the pair of pixels 300a and 300b, the diffusion region 306 that electrically separates the pair of pixels 300a and 300b, and the like are provided, so the accuracy of phase difference detection is improved. Furthermore, in this embodiment, since an overflow path is provided, when the charge of one of the pixels 300a and 300b is about to saturate during normal imaging, the charge can be transferred to the other pixel, so that saturation of one pixel can be avoided. Therefore, according to the present embodiment, by providing such an overflow path, it is possible to ensure the linearity of the pixel signals output from the image sensor 100 and prevent deterioration of the captured image.
  • in this embodiment, since the outer peripheral portion 354 formed of the silicon oxide film can be reduced, reflection of light at the interface between the outer peripheral portion 354 and the semiconductor substrate 10 can be suppressed, and the occurrence of color mixture and the like can be suppressed. Furthermore, according to the present embodiment, since the central portion 352 made of silicon can be reduced, absorption of light by the central portion 352 can be suppressed. Therefore, according to this embodiment, the photoelectric conversion efficiency (Qe) of the image sensor 100 can be further improved.
  • FIGS. 19 and 20 are explanatory diagrams showing the plane of the imaging device 100 according to the modified example of the present embodiment.
  • the pixel separation section (third pixel separation section) 350 may have a substantially rectangular cross section as well as a substantially circular cross section. Furthermore, in this modification, the plurality of pixel separating sections 350 may be arranged along the row direction (the X direction in FIG. 19).
  • the plurality of pixel separation sections 350 may be arranged along the row direction (the X direction in FIG. 20), and the line along which they are arranged need not pass through the center of the imaging device 100.
  • in the embodiments of the present disclosure, an element that separates the pair of pixels 300a and 300b during phase difference detection and an element that functions as an overflow path during normal imaging are provided, so degradation of the captured image can be avoided while improving the accuracy of phase difference detection.
  • in the above description, the imaging device 100 in which the first conductivity type is n-type, the second conductivity type is p-type, and electrons are used as signal charges has been described.
  • Embodiments are not limited to such examples.
  • this embodiment can be applied to the imaging device 100 in which the first conductivity type is p-type, the second conductivity type is n-type, and holes are used as signal charges.
  • the semiconductor substrate 10 does not necessarily have to be a silicon substrate, and may be another substrate (for example, an SOI (Silicon On Insulator) substrate, a SiGe substrate, etc.). Also, the semiconductor substrate 10 may be one in which a semiconductor structure or the like is formed on such various substrates.
  • the imaging device 1 is not limited to an imaging device that detects the distribution of the amount of incident visible light and captures an image.
  • the present embodiment can be applied to an imaging device that captures an image of the distribution of incident amounts of infrared rays, X-rays, particles, or the like, and to an imaging device (physical quantity distribution detection device) such as a fingerprint detection sensor that detects the distribution of other physical quantities such as pressure and capacitance and captures it as an image.
  • the imaging device 1 according to the embodiment of the present disclosure can be manufactured by using methods, devices, and conditions that are used for manufacturing general semiconductor devices. That is, the imaging device 1 according to the present embodiment can be manufactured using the existing manufacturing process of semiconductor devices.
  • film formation methods that can be used include PVD (Physical Vapor Deposition), CVD (Chemical Vapor Deposition), and ALD (Atomic Layer Deposition).
  • PVD methods include vacuum deposition, EB (electron beam) deposition, and various sputtering methods (magnetron sputtering, RF (Radio Frequency)-DC (Direct Current) combined bias sputtering, and ECR (Electron Cyclotron Resonance) sputtering).
  • CVD methods include plasma CVD, thermal CVD, metal-organic (MO) CVD, and optical CVD.
  • other methods include the electrolytic plating method, electroless plating method, spin coating method, immersion method, casting method, microcontact printing method, drop casting method, various printing methods such as the screen printing method, inkjet printing method, offset printing method, gravure printing method, and flexographic printing method, the stamp method, the spray method, and various coating methods such as the air doctor coater method, blade coater method, rod coater method, knife coater method, squeeze coater method, reverse roll coater method, transfer roll coater method, gravure coater method, kiss coater method, cast coater method, spray coater method, slit orifice coater method, and calender coater method.
  • patterning methods include shadow masking, laser transfer, chemical etching such as photolithography, and physical etching using ultraviolet rays, lasers, and the like.
  • planarization techniques include a CMP (Chemical Mechanical Polishing) method, a laser planarization method, a reflow method, and the like.
  • FIG. 21 is an explanatory diagram showing an example of a schematic functional configuration of a camera 700 to which the technology according to the present disclosure (the present technology) can be applied.
  • the camera 700 has an imaging device 702, an optical lens 710, a shutter mechanism 712, a drive circuit unit 714, and a signal processing circuit unit 716.
  • the optical lens 710 forms an image of image light (incident light) from a subject on the imaging surface of the imaging device 702 .
  • signal charges are accumulated in the imaging element 100 of the imaging device 702 for a certain period of time.
  • the shutter mechanism 712 controls a light irradiation period and a light shielding period for the imaging device 702 by opening and closing.
  • the drive circuit unit 714 supplies drive signals for controlling the signal transfer operation of the imaging device 702, the shutter operation of the shutter mechanism 712, and the like.
  • the imaging device 702 performs signal transfer based on the drive signal (timing signal) supplied from the drive circuit unit 714 .
  • the signal processing circuit unit 716 performs various signal processing. For example, the signal processing circuit unit 716 outputs the signal-processed video signal to a storage medium (not shown) such as a memory or to a display unit (not shown).
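  • The capture flow described above (the lens forms the image, charge accumulates during the shutter's open period, the drive circuit times the transfer, and the signal processing circuit produces the video signal) can be sketched as a toy model. This is purely illustrative and not part of the disclosure; all class and function names are hypothetical.

```python
# Illustrative toy model of the camera 700 signal flow: optical lens ->
# imaging device (charge accumulation while the shutter is open) ->
# drive-signal-timed transfer -> signal processing. Names are hypothetical.

class ImagingDevice:
    def __init__(self):
        self.accumulated = 0.0

    def expose(self, light_intensity, exposure_time):
        # Signal charge is accumulated for a certain period, which the
        # shutter mechanism controls by opening and closing.
        self.accumulated += light_intensity * exposure_time

    def transfer(self):
        # Signal transfer is performed based on the drive (timing) signal.
        signal, self.accumulated = self.accumulated, 0.0
        return signal

def capture(light_intensity, exposure_time, gain=2.0):
    sensor = ImagingDevice()
    sensor.expose(light_intensity, exposure_time)  # light irradiation period
    raw = sensor.transfer()                        # drive circuit unit
    return raw * gain                              # signal processing unit

print(capture(10.0, 0.01))  # -> 0.2
```

After the transfer, the accumulated charge is reset, mirroring the per-frame readout cycle described above.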
  • FIG. 22 is a block diagram showing an example of a schematic functional configuration of a smartphone 900 to which the technology (the present technology) according to the present disclosure can be applied.
  • the smartphone 900 includes a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 902, and a RAM (Random Access Memory) 903.
  • Smartphone 900 also includes storage device 904 , communication module 905 , and sensor module 907 .
  • Smartphone 900 further includes imaging device 909 , display device 910 , speaker 911 , microphone 912 , input device 913 and bus 914 .
  • the smartphone 900 may have a processing circuit such as a DSP (Digital Signal Processor) in place of the CPU 901 or together with it.
  • the CPU 901 functions as an arithmetic processing device and a control device, and controls all or part of the operations within the smartphone 900 according to various programs recorded in the ROM 902, RAM 903, storage device 904, or the like.
  • a ROM 902 stores programs and calculation parameters used by the CPU 901 .
  • a RAM 903 temporarily stores programs used in the execution of the CPU 901, parameters that change as appropriate during the execution, and the like.
  • the CPU 901 , ROM 902 and RAM 903 are interconnected by a bus 914 .
  • the storage device 904 is a data storage device configured as an example of a storage unit of the smartphone 900 .
  • the storage device 904 is composed of, for example, a magnetic storage device such as an HDD (Hard Disk Drive), a semiconductor storage device, an optical storage device, or the like.
  • the storage device 904 stores programs executed by the CPU 901, various data, and various data acquired from the outside.
  • the communication module 905 is, for example, a communication interface configured with a communication device for connecting to the communication network 906.
  • the communication module 905 can be, for example, a communication card for wired or wireless LAN (Local Area Network), Bluetooth (registered trademark), or WUSB (Wireless USB).
  • the communication module 905 may be a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), a modem for various types of communication, or the like.
  • a communication network 906 connected to the communication module 905 is a wired or wireless network, such as the Internet, home LAN, infrared communication, or satellite communication.
  • the sensor module 907 includes various sensors such as a motion sensor (e.g., an acceleration sensor, a gyro sensor, or a geomagnetic sensor), a biological information sensor (e.g., a pulse sensor, a blood pressure sensor, or a fingerprint sensor), and a position sensor (e.g., a GNSS (Global Navigation Satellite System) receiver).
  • the imaging device 909 is provided on the surface of the smartphone 900 and can image an object or the like positioned on the back side or the front side of the smartphone 900 .
  • the imaging device 909 includes an imaging element (not shown) such as a CMOS (Complementary MOS) image sensor to which the technology according to the present disclosure (the present technology) can be applied, and a signal processing circuit (not shown) that performs imaging signal processing on the signal photoelectrically converted by the imaging element.
  • the imaging device 909 can further include an optical system mechanism (not shown) composed of an imaging lens, a zoom lens, a focus lens, and the like, and a drive system mechanism (not shown) that controls the operation of the optical system mechanism.
  • the image sensor condenses incident light from a subject to form an optical image, photoelectrically converts the formed optical image pixel by pixel, and reads the signal of each pixel as an imaging signal; the signal processing circuit then acquires a captured image by performing image processing on the read signals.
  • the display device 910 is provided on the surface of the smartphone 900 and can be, for example, a display device such as an LCD (Liquid Crystal Display) or an organic EL (Electro Luminescence) display.
  • the display device 910 can display an operation screen, captured images acquired by the imaging device 909 described above, and the like.
  • the speaker 911 can output, for example, the voice of a call, the voice accompanying the video content displayed by the display device 910 described above, and the like to the user.
  • the microphone 912 can collect, for example, the user's call voice, voice including commands for activating functions of the smartphone 900 , and ambient environment voice of the smartphone 900 .
  • the input device 913 is, for example, a device operated by a user, such as a button, keyboard, touch panel, or mouse.
  • the input device 913 includes an input control circuit that generates an input signal based on information input by the user and outputs the signal to the CPU 901 .
  • the user can input various data to the smartphone 900 and instruct processing operations.
  • a configuration example of the smartphone 900 has been shown above.
  • Each component described above may be configured using general-purpose members, or may be configured by hardware specialized for the function of each component. Such a configuration can be changed as appropriate according to the technical level of implementation.
  • Example of application to an endoscopic surgery system: The technology (the present technology) according to the present disclosure can be applied to various products.
  • the technology according to the present disclosure may be applied to an endoscopic surgery system.
  • FIG. 23 is a diagram showing an example of a schematic configuration of an endoscopic surgery system to which the technology (this technology) according to the present disclosure can be applied.
  • FIG. 23 shows how an operator (physician) 11131 is performing surgery on a patient 11132 on a patient bed 11133 using an endoscopic surgery system 11000 .
  • an endoscopic surgery system 11000 includes an endoscope 11100, other surgical instruments 11110 such as a pneumoperitoneum tube 11111 and an energy treatment instrument 11112, a support arm device 11120 that supports the endoscope 11100, and a cart 11200 on which various devices for endoscopic surgery are mounted.
  • An endoscope 11100 is composed of a lens barrel 11101 whose distal end is inserted into the body cavity of a patient 11132 and a camera head 11102 connected to the proximal end of the lens barrel 11101 .
  • in the illustrated example, the endoscope 11100 is configured as a so-called rigid scope having a rigid lens barrel 11101, but the endoscope 11100 may instead be configured as a so-called flexible scope having a flexible lens barrel.
  • the tip of the lens barrel 11101 is provided with an opening into which the objective lens is fitted.
  • a light source device 11203 is connected to the endoscope 11100; light generated by the light source device 11203 is guided to the tip of the lens barrel 11101 by a light guide extending inside the lens barrel, and is irradiated through the objective lens toward the observation target in the body cavity of the patient 11132.
  • the endoscope 11100 may be a straight scope, a perspective scope, or a side scope.
  • An optical system and an imaging element are provided inside the camera head 11102, and the reflected light (observation light) from the observation target is focused on the imaging element by the optical system.
  • the imaging element photoelectrically converts the observation light to generate an electric signal corresponding to the observation light, that is, an image signal corresponding to the observation image.
  • the image signal is transmitted to a camera control unit (CCU: Camera Control Unit) 11201 as RAW data.
  • the CCU 11201 is composed of a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), etc., and controls the operations of the endoscope 11100 and the display device 11202 in an integrated manner. Further, the CCU 11201 receives an image signal from the camera head 11102 and performs various image processing such as development processing (demosaicing) for displaying an image based on the image signal.
  • the display device 11202 displays an image based on an image signal subjected to image processing by the CCU 11201 under the control of the CCU 11201 .
  • the light source device 11203 is composed of a light source such as an LED (Light Emitting Diode), for example, and supplies the endoscope 11100 with irradiation light for photographing a surgical site or the like.
  • the input device 11204 is an input interface for the endoscopic surgery system 11000.
  • the user can input various information and instructions to the endoscopic surgery system 11000 via the input device 11204 .
  • for example, the user inputs an instruction to change the imaging conditions of the endoscope 11100 (type of irradiation light, magnification, focal length, etc.) via the input device 11204.
  • the treatment instrument control device 11205 controls driving of the energy treatment instrument 11112 for tissue cauterization, incision, blood vessel sealing, or the like.
  • the pneumoperitoneum device 11206 sends gas into the body cavity of the patient 11132 through the pneumoperitoneum tube 11111 to inflate the body cavity, in order to secure the field of view of the endoscope 11100 and the operator's working space.
  • the recorder 11207 is a device capable of recording various types of information regarding surgery.
  • the printer 11208 is a device capable of printing various types of information regarding surgery in various formats such as text, images, and graphs.
  • the light source device 11203 that supplies the endoscope 11100 with irradiation light for imaging the operative site can be composed of, for example, a white light source composed of an LED, a laser light source, or a combination thereof.
  • when the white light source is configured by a combination of RGB laser light sources, the output intensity and output timing of each color (each wavelength) can be controlled with high accuracy, so the white balance of the captured image can be adjusted in the light source device 11203.
  • further, the laser light from each of the RGB laser light sources may be irradiated onto the observation target in a time-division manner, and the driving of the imaging element of the camera head 11102 may be controlled in synchronization with the irradiation timing, so that images corresponding to each of R, G, and B are captured in a time-division manner. According to this method, a color image can be obtained without providing a color filter in the imaging element.
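  • As a toy illustration of the time-division RGB scheme just described, the following sketch shows how three monochrome readouts, each synchronized with one laser's illumination slot, merge into a color image with no color filter on the sensor. The scene/frame representation and all function names are assumptions for illustration only.

```python
# Toy sketch of field-sequential (time-division) color capture: with a
# single laser color illuminating the scene, a monochrome sensor reading
# equals that channel's reflectance, so three synchronized readouts give
# a full color image without a color filter. Hypothetical representation.

def read_sensor(scene, channel):
    # One readout of the monochrome imaging element, synchronized with
    # the illumination timing of one of the R, G, B lasers.
    return [[pixel[channel] for pixel in row] for row in scene]

def capture_color(scene):
    # Capture R, G and B frames in a time-division manner, then merge.
    r, g, b = (read_sensor(scene, c) for c in range(3))
    return [[(r[y][x], g[y][x], b[y][x]) for x in range(len(row))]
            for y, row in enumerate(r)]

scene = [[(0.9, 0.2, 0.1), (0.1, 0.8, 0.3)]]  # 1x2 toy scene (RGB reflectance)
print(capture_color(scene) == scene)  # -> True: color recovered filter-free
```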
  • the driving of the light source device 11203 may be controlled so as to change the intensity of the output light every predetermined time.
  • by controlling the driving of the imaging element of the camera head 11102 in synchronization with the timing of the changes in light intensity, acquiring images in a time-division manner, and synthesizing those images, an image with a high dynamic range can be generated.
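  • The time-division HDR synthesis described above can be sketched as follows: frames captured under alternating light-source intensities are normalized by their relative intensity and merged, so highlights clipped in the bright frame are recovered from the dim one. The full-well model and merge rule here are illustrative assumptions, not the patent's method.

```python
# Toy sketch of time-division HDR: the light source alternates between a
# bright and a dim output; each pixel takes the unsaturated reading from
# the highest intensity available, rescaled back to scene radiance.

FULL_WELL = 1.0  # the sensor clips (saturates) at this level

def expose(radiance, intensity):
    return [min(r * intensity, FULL_WELL) for r in radiance]

def merge_hdr(frames):
    # frames: list of (pixel_values, relative_intensity), brightest first.
    merged = []
    for p in range(len(frames[0][0])):
        for values, intensity in frames:
            if values[p] < FULL_WELL:           # unsaturated reading
                merged.append(values[p] / intensity)
                break
        else:                                   # all clipped: use dimmest
            merged.append(frames[-1][0][p] / frames[-1][1])
    return merged

radiance = [0.1, 0.6, 2.0]                      # highlight exceeds full well
bright = expose(radiance, 1.0)                  # highlight clips at 1.0
dim = expose(radiance, 0.25)                    # highlight preserved at 0.5
print(merge_hdr([(bright, 1.0), (dim, 0.25)]))  # -> [0.1, 0.6, 2.0]
```

The merged result recovers the highlight value (2.0) that the single bright exposure could not represent, which is the dynamic-range gain described above.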
  • the light source device 11203 may be configured to be able to supply light in a predetermined wavelength band corresponding to special light observation.
  • in special light observation, for example, so-called narrow band imaging is performed: the wavelength dependence of light absorption in body tissue is exploited by irradiating light in a narrower band than the irradiation light used during normal observation (i.e., white light), so that a predetermined tissue such as a blood vessel in the mucosal surface layer is imaged with high contrast.
  • fluorescence observation may be performed in which an image is obtained from fluorescence generated by irradiation with excitation light.
  • in fluorescence observation, the body tissue may be irradiated with excitation light and the fluorescence from the body tissue observed (autofluorescence observation), or a reagent such as indocyanine green (ICG) may be locally injected into the body tissue and the tissue irradiated with excitation light corresponding to the fluorescence wavelength of the reagent to obtain a fluorescence image.
  • the light source device 11203 can be configured to supply narrowband light and/or excitation light corresponding to such special light observation.
  • FIG. 24 is a block diagram showing an example of functional configurations of the camera head 11102 and CCU 11201 shown in FIG.
  • the camera head 11102 has a lens unit 11401, an imaging section 11402, a drive section 11403, a communication section 11404, and a camera head control section 11405.
  • the CCU 11201 has a communication section 11411 , an image processing section 11412 and a control section 11413 .
  • the camera head 11102 and the CCU 11201 are communicably connected to each other via a transmission cable 11400 .
  • a lens unit 11401 is an optical system provided at a connection with the lens barrel 11101 . Observation light captured from the tip of the lens barrel 11101 is guided to the camera head 11102 and enters the lens unit 11401 .
  • a lens unit 11401 is configured by combining a plurality of lenses including a zoom lens and a focus lens.
  • the imaging unit 11402 is composed of an imaging device.
  • the imaging device constituting the imaging unit 11402 may be one (so-called single-plate type) or plural (so-called multi-plate type).
  • in the case of the multi-plate type, for example, image signals corresponding to each of R, G, and B may be generated by the respective imaging elements, and a color image may be obtained by synthesizing them.
  • the imaging unit 11402 may be configured to have a pair of imaging elements for respectively acquiring right-eye and left-eye image signals corresponding to 3D (Dimensional) display.
  • the 3D display enables the operator 11131 to more accurately grasp the depth of the living tissue in the surgical site.
  • a plurality of systems of lens units 11401 may be provided corresponding to each imaging element.
  • the imaging unit 11402 does not necessarily have to be provided in the camera head 11102 .
  • the imaging unit 11402 may be provided inside the lens barrel 11101 immediately after the objective lens.
  • the drive unit 11403 is configured by an actuator, and moves the zoom lens and focus lens of the lens unit 11401 by a predetermined distance along the optical axis under control from the camera head control unit 11405 . Thereby, the magnification and focus of the image captured by the imaging unit 11402 can be appropriately adjusted.
  • the communication unit 11404 is composed of a communication device for transmitting and receiving various information to and from the CCU 11201.
  • the communication unit 11404 transmits the image signal obtained from the imaging unit 11402 as RAW data to the CCU 11201 via the transmission cable 11400 .
  • the communication unit 11404 receives a control signal for controlling driving of the camera head 11102 from the CCU 11201 and supplies it to the camera head control unit 11405 .
  • the control signal includes information about imaging conditions, such as information specifying the frame rate of the captured image, information specifying the exposure value at the time of imaging, and/or information specifying the magnification and focus of the captured image.
  • the imaging conditions such as the frame rate, exposure value, magnification, and focus may be appropriately designated by the user, or may be automatically set by the control unit 11413 of the CCU 11201 based on the acquired image signal.
  • in the latter case, the endoscope 11100 is equipped with so-called AE (Auto Exposure), AF (Auto Focus), and AWB (Auto White Balance) functions.
  • the camera head control unit 11405 controls driving of the camera head 11102 based on the control signal from the CCU 11201 received via the communication unit 11404.
  • the communication unit 11411 is composed of a communication device for transmitting and receiving various information to and from the camera head 11102 .
  • the communication unit 11411 receives image signals transmitted from the camera head 11102 via the transmission cable 11400 .
  • the communication unit 11411 transmits a control signal for controlling driving of the camera head 11102 to the camera head 11102 .
  • Image signals and control signals can be transmitted by electric communication, optical communication, or the like.
  • the image processing unit 11412 performs various types of image processing on the image signal, which is RAW data transmitted from the camera head 11102 .
  • the control unit 11413 performs various controls related to imaging of the surgical site and the like by the endoscope 11100 and display of the captured image obtained by imaging the surgical site and the like. For example, the control unit 11413 generates control signals for controlling driving of the camera head 11102 .
  • control unit 11413 causes the display device 11202 to display a captured image showing the surgical site and the like based on the image signal that has undergone image processing by the image processing unit 11412 .
  • the control unit 11413 may recognize various objects in the captured image using various image recognition techniques. For example, by detecting the shape, color, and the like of the edges of objects included in the captured image, the control unit 11413 can recognize surgical instruments such as forceps, specific body parts, bleeding, mist during use of the energy treatment instrument 11112, and the like.
  • the control unit 11413 may use the recognition result to display various types of surgical assistance information superimposed on the image of the surgical site. By superimposing and presenting the surgery support information to the operator 11131, the burden on the operator 11131 can be reduced and the operator 11131 can proceed with the surgery reliably.
  • a transmission cable 11400 connecting the camera head 11102 and the CCU 11201 is an electrical signal cable compatible with electrical signal communication, an optical fiber compatible with optical communication, or a composite cable of these.
  • wired communication is performed using the transmission cable 11400, but communication between the camera head 11102 and the CCU 11201 may be performed wirelessly.
  • among the configurations described above, the technology according to the present disclosure can be applied to, for example, the endoscope 11100, the camera head 11102 (its imaging unit 11402), the CCU 11201 (its image processing unit 11412), and the like.
  • the technology according to the present disclosure may also be applied to, for example, a microsurgery system.
  • the technology (the present technology) according to the present disclosure can be applied to various products.
  • the technology according to the present disclosure may be realized as a device mounted on any type of moving body such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
  • FIG. 25 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile control system to which the technology according to the present disclosure can be applied.
  • a vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001.
  • the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, an outside information detection unit 12030, an inside information detection unit 12040, and an integrated control unit 12050.
  • as the functional configuration of the integrated control unit 12050, a microcomputer 12051, an audio/image output unit 12052, and an in-vehicle network I/F (interface) 12053 are illustrated.
  • the drive system control unit 12010 controls the operation of devices related to the drive system of the vehicle according to various programs.
  • for example, the drive system control unit 12010 functions as a control device for a driving force generating device for generating the driving force of the vehicle, such as an internal combustion engine or a driving motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like.
  • the body system control unit 12020 controls the operation of various devices equipped on the vehicle body according to various programs.
  • the body system control unit 12020 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as headlamps, back lamps, brake lamps, winkers or fog lamps.
  • the body system control unit 12020 can receive radio waves transmitted from a portable device that substitutes for a key or signals from various switches.
  • Body system control unit 12020 receives the input of these radio waves or signals and controls the door lock device, power window device, lamps, etc. of the vehicle.
  • the vehicle exterior information detection unit 12030 detects information outside the vehicle in which the vehicle control system 12000 is installed.
  • the vehicle exterior information detection unit 12030 is connected with an imaging section 12031 .
  • the vehicle exterior information detection unit 12030 causes the imaging unit 12031 to capture an image of the exterior of the vehicle, and receives the captured image.
  • based on the received image, the vehicle exterior information detection unit 12030 may perform detection processing for objects such as people, vehicles, obstacles, signs, or characters on the road surface, or distance detection processing.
  • the imaging unit 12031 is an optical sensor that receives light and outputs an electrical signal according to the amount of received light.
  • the imaging unit 12031 can output the electric signal as an image, and can also output it as distance measurement information.
  • the light received by the imaging unit 12031 may be visible light or non-visible light such as infrared rays.
  • the in-vehicle information detection unit 12040 detects in-vehicle information.
  • the in-vehicle information detection unit 12040 is connected to, for example, a driver state detection section 12041 that detects the state of the driver.
  • the driver state detection unit 12041 includes, for example, a camera that images the driver; based on the detection information input from the driver state detection unit 12041, the in-vehicle information detection unit 12040 may calculate the degree of fatigue or concentration of the driver, or may determine whether the driver is dozing off.
  • the microcomputer 12051 can calculate control target values for the driving force generating device, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040, and output a control command to the drive system control unit 12010.
  • for example, the microcomputer 12051 can perform cooperative control aimed at realizing the functions of an ADAS (Advanced Driver Assistance System), including vehicle collision avoidance or shock mitigation, following driving based on inter-vehicle distance, vehicle-speed-maintaining driving, vehicle collision warning, and vehicle lane departure warning.
  • the microcomputer 12051 can also perform cooperative control for the purpose of automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generating device, the steering mechanism, the braking device, and the like based on information about the surroundings of the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040.
  • the microcomputer 12051 can output a control command to the body system control unit 12020 based on the information outside the vehicle acquired by the vehicle exterior information detection unit 12030.
  • for example, the microcomputer 12051 can perform cooperative control aimed at anti-glare, such as controlling the headlamps according to the position of a preceding vehicle or oncoming vehicle detected by the vehicle exterior information detection unit 12030 and switching from high beam to low beam.
  • the audio/image output unit 12052 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying information to the occupants of the vehicle or to the outside of the vehicle.
  • an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are illustrated as output devices.
  • the display unit 12062 may include at least one of an on-board display and a head-up display, for example.
  • FIG. 26 is a diagram showing an example of the installation position of the imaging unit 12031.
  • the vehicle 12100 has imaging units 12101, 12102, 12103, 12104, and 12105 as the imaging unit 12031.
  • the imaging units 12101, 12102, 12103, 12104, and 12105 are provided at positions such as the front nose of the vehicle 12100, the side mirrors, the rear bumper, the back door, and the upper part of the windshield in the vehicle interior, for example.
  • An image pickup unit 12101 provided in the front nose and an image pickup unit 12105 provided above the windshield in the passenger compartment mainly acquire images in front of the vehicle 12100 .
  • Imaging units 12102 and 12103 provided in the side mirrors mainly acquire side images of the vehicle 12100 .
  • An imaging unit 12104 provided in the rear bumper or back door mainly acquires an image behind the vehicle 12100 .
  • Forward images acquired by the imaging units 12101 and 12105 are mainly used for detecting preceding vehicles, pedestrians, obstacles, traffic lights, traffic signs, lanes, and the like.
  • FIG. 26 shows an example of the imaging range of the imaging units 12101 to 12104.
  • the imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 viewed from above can be obtained.
  • At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
  • at least one of the imaging units 12101 to 12104 may be a stereo camera composed of a plurality of imaging elements, or may be an imaging element having pixels for phase difference detection.
  • for example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 determines the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (relative velocity with respect to the vehicle 12100), and can thereby extract, as a preceding vehicle, the closest three-dimensional object that is on the course of the vehicle 12100 and traveling at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100. Furthermore, the microcomputer 12051 can set an inter-vehicle distance to be secured in advance in front of the preceding vehicle, and perform automatic brake control (including following stop control), automatic acceleration control (including following start control), and the like. In this way, cooperative control can be performed for the purpose of automated driving in which the vehicle travels autonomously without relying on the driver's operation.
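  • The preceding-vehicle extraction described above can be sketched as a simple filter-and-select: among detected three-dimensional objects, keep those on the host vehicle's course moving in substantially the same direction at or above a minimum speed, then take the closest. The object fields and the heading threshold below are hypothetical, not values from the disclosure.

```python
# Toy sketch of preceding-vehicle extraction: keep three-dimensional
# objects that are on the host vehicle's course and travel in roughly
# the same direction at or above a minimum speed, then pick the closest.

def find_preceding_vehicle(objects, min_speed_kmh=0.0, max_heading_deg=20.0):
    candidates = [
        o for o in objects
        if o["on_course"]
        and o["speed_kmh"] >= min_speed_kmh
        and abs(o["heading_deg"]) <= max_heading_deg  # ~same direction
    ]
    # The closest qualifying object is treated as the preceding vehicle.
    return min(candidates, key=lambda o: o["distance_m"], default=None)

objects = [
    {"distance_m": 45.0, "on_course": True,  "speed_kmh": 60.0, "heading_deg": 2.0},
    {"distance_m": 30.0, "on_course": False, "speed_kmh": 55.0, "heading_deg": 1.0},
    {"distance_m": 80.0, "on_course": True,  "speed_kmh": 62.0, "heading_deg": 0.0},
]
print(find_preceding_vehicle(objects)["distance_m"])  # -> 45.0
```

Note that the closer object at 30 m is rejected because it is not on the host vehicle's course, matching the selection rule above.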
  • for example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects, extract them, and use them for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into those visible to the driver of the vehicle 12100 and those difficult to see. The microcomputer 12051 then determines a collision risk indicating the degree of danger of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of collision, it can output an alarm to the driver via the audio speaker 12061 and the display unit 12062, or perform forced deceleration and avoidance steering via the drive system control unit 12010, thereby providing driving assistance for collision avoidance.
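  • The collision-risk decision above can be sketched as follows; the time-to-collision scoring and the threshold value are illustrative assumptions, not values from the disclosure.

```python
# Toy sketch of the collision-risk decision: compute a risk score per
# obstacle and trigger a warning / forced deceleration when the score
# meets or exceeds a set value. TTC-based scoring is an assumed choice.

RISK_THRESHOLD = 0.5  # illustrative "set value"

def collision_risk(distance_m, closing_speed_ms):
    if closing_speed_ms <= 0:            # not closing: no collision risk
        return 0.0
    ttc = distance_m / closing_speed_ms  # time to collision, seconds
    return min(1.0, 2.0 / ttc)           # shorter TTC -> higher risk

def driving_assist(obstacles):
    actions = []
    for distance, closing in obstacles:
        if collision_risk(distance, closing) >= RISK_THRESHOLD:
            actions.append("warn+brake")  # alarm via speaker/display and
        else:                             # forced deceleration if needed
            actions.append("monitor")
    return actions

print(driving_assist([(40.0, 20.0), (60.0, 5.0)]))  # -> ['warn+brake', 'monitor']
```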
  • At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays.
  • for example, the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian exists in the images captured by the imaging units 12101 to 12104.
  • such pedestrian recognition is performed by, for example, a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on a series of feature points indicating the outline of an object to determine whether or not it is a pedestrian.
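  • The two-step procedure just described (feature-point extraction, then pattern matching on the object outline) can be sketched in toy form. The brightness-threshold "feature points", the template, and the overlap criterion are all illustrative stand-ins for real infrared feature extraction and matching.

```python
# Toy sketch of the two-step pedestrian recognition described above:
# (1) extract feature points (here: bright pixels in an infrared frame),
# (2) pattern-match the series of contour points against a template.

PEDESTRIAN_TEMPLATE = [(0, 1), (1, 0), (1, 2), (2, 1)]  # illustrative outline

def extract_feature_points(image, threshold=0.5):
    # Step 1: feature points as (row, col) positions of bright pixels.
    return sorted((y, x)
                  for y, row in enumerate(image)
                  for x, v in enumerate(row) if v > threshold)

def is_pedestrian(image, min_overlap=0.75):
    # Step 2: pattern matching by template overlap on the feature points.
    points = set(extract_feature_points(image))
    overlap = sum(p in points for p in PEDESTRIAN_TEMPLATE)
    return overlap / len(PEDESTRIAN_TEMPLATE) >= min_overlap

frame = [[0.0, 0.9, 0.0],
         [0.8, 0.1, 0.9],
         [0.0, 0.7, 0.0]]
print(is_pedestrian(frame))  # -> True
```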
  • when the microcomputer 12051 determines that a pedestrian exists in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the audio/image output unit 12052 controls the display unit 12062 so as to superimpose a rectangular contour line for emphasis on the recognized pedestrian. The audio/image output unit 12052 may also control the display unit 12062 to display an icon or the like indicating the pedestrian at a desired position.
  • Note that the present technology can also take the following configurations.
  • (1) An imaging device including: a semiconductor substrate; and a plurality of imaging elements arranged in a matrix along a row direction and a column direction on the semiconductor substrate, the imaging elements performing photoelectric conversion on incident light, wherein each of the imaging elements includes: a plurality of pixels provided adjacent to each other in the semiconductor substrate; an element isolation wall surrounding the plurality of pixels and penetrating the semiconductor substrate; an on-chip lens provided above a first surface of the semiconductor substrate so as to be shared by the plurality of pixels; and a first pixel separation section that separates the plurality of pixels, and wherein, when viewed from above a second surface of the semiconductor substrate facing the first surface, the first pixel separation section has a narrow portion whose width is narrower than the width of the element isolation wall.
  • (2) The imaging device according to (1) above, wherein an overflow path for exchanging saturated charges between the plurality of pixels is provided between the narrow portion and the first surface.
  • (3) The imaging device according to (1) or (2) above, wherein a diffusion region containing an impurity is provided around the first pixel separation section.
  • (4) The imaging device according to any one of (1) to (3) above, wherein the first pixel separation section has a plurality of narrow portions.
  • (5) The imaging device according to any one of (1) to (4) above, wherein, when viewed from above the second surface, the first pixel separation section has a wide portion whose width is substantially equal to the width of the element isolation wall, and the wide portion penetrates the semiconductor substrate.
  • (10) An imaging device including: a semiconductor substrate; and a plurality of imaging elements arranged in a matrix along a row direction and a column direction on the semiconductor substrate, the imaging elements performing photoelectric conversion on incident light, wherein each of the imaging elements includes: a plurality of pixels provided adjacent to each other in the semiconductor substrate; an on-chip lens provided above a first surface of the semiconductor substrate so as to be shared by the plurality of pixels; and one or more second pixel separation sections that separate the plurality of pixels, and wherein, when viewed from above a second surface of the semiconductor substrate facing the first surface, the second pixel separation section has a substantially circular cross section.
  • (11) The imaging device according to (10) above, wherein the second pixel separation section penetrates partway through the semiconductor substrate from the second surface along the film thickness direction of the semiconductor substrate, and an overflow path for exchanging saturated charges between the plurality of pixels is provided between the second pixel separation section and the first surface.
  • (12) The imaging device according to (10) or (11) above, wherein a diffusion region containing an impurity is provided around the second pixel separation section.
  • (13) The imaging device according to any one of (10) to (12) above, wherein the second pixel separation section is made of one material selected from silicon, a silicon oxide film, a silicon nitride film, and a titanium nitride film.
  • (14) The imaging device according to any one of (10) to (13) above, wherein the second pixel separation section has an outer peripheral portion that covers an outer periphery of the second pixel separation section.
  • (15) The imaging device according to (14) above, wherein the outer peripheral portion is made of one material selected from a silicon oxide film, a silicon nitride film, and a titanium nitride film.
  • (16) The imaging device according to any one of the above, wherein a plurality of the second pixel separation sections are provided so as to line up along the row direction and the column direction when viewed from above the second surface.
  • (17) The imaging device according to any one of (10) to (16) above, further including one or more third pixel separation sections each having a substantially rectangular cross section when viewed from above the second surface, wherein the third pixel separation section penetrates partway through the semiconductor substrate from the second surface along the film thickness direction of the semiconductor substrate.
  • (18) A method of manufacturing an imaging device, the method including: etching a semiconductor substrate using a mask having a rectangular opening with a varying width; and embedding an insulating film in a groove formed by the etching.
  • (19) An electronic apparatus including an imaging device that has a semiconductor substrate, and a plurality of imaging elements arranged in a matrix along a row direction and a column direction on the semiconductor substrate, the imaging elements performing photoelectric conversion on incident light, wherein each of the imaging elements includes: a plurality of pixels provided adjacent to each other in the semiconductor substrate; an element isolation wall surrounding the plurality of pixels and penetrating the semiconductor substrate; an on-chip lens provided above a first surface of the semiconductor substrate so as to be shared by the plurality of pixels; and a first pixel separation section that separates the plurality of pixels, wherein, when viewed from above a second surface of the semiconductor substrate facing the first surface, the first pixel separation section has a narrow portion whose width is narrower than the width of the element isolation wall, and the narrow portion penetrates partway through the semiconductor substrate from the second surface along the film thickness direction of the semiconductor substrate.


Abstract

Provided is an imaging device (1) comprising a semiconductor substrate (10) and a plurality of imaging elements (100) which are arranged in a matrix along a row direction and a column direction on the semiconductor substrate, and which perform photoelectric conversion of incident light, wherein the imaging elements each have: a plurality of pixels (300) which are provided adjacently to each other in the semiconductor substrate; an element partitioning wall (310) which is provided so as to surround the plurality of pixels and pass through the semiconductor substrate; an on-chip lens (200) which is provided above a first surface of the semiconductor substrate so as to be shared by the plurality of pixels; and a first pixel partitioning part (304) which partitions the plurality of pixels.

Description

IMAGING DEVICE, METHOD FOR MANUFACTURING IMAGING DEVICE, AND ELECTRONIC APPARATUS
 The present disclosure relates to an imaging device, a method for manufacturing an imaging device, and an electronic apparatus.
 In recent years, imaging devices have adopted, as an autofocus function, a method of detecting a phase difference by using a pair of phase difference detection pixels. An example of such a technique is the imaging element disclosed in Patent Document 1 below. In the technique disclosed in Patent Document 1, both effective pixels for capturing an image of a subject and phase difference detection pixels for detecting the phase difference described above are provided separately on a light receiving surface.
Patent Document 1: Japanese Patent Application Laid-Open No. 2000-292685
 However, in the technique disclosed in Patent Document 1, when a captured image of a subject is acquired, it is difficult to use the information obtained by the phase difference detection pixels in the same way as the information from the imaging pixels. Therefore, the image at the pixels corresponding to the phase difference detection pixels is interpolated using information from the effective pixels around the phase difference detection pixels to generate the captured image. That is, because the technique disclosed in Patent Document 1 provides dedicated phase difference detection pixels in order to perform phase difference detection, it is difficult to avoid deterioration of the captured image caused by the loss of captured-image information at the positions corresponding to the phase difference detection pixels.
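The interpolation step described above can be sketched as follows. This is a minimal illustration only, not the method of Patent Document 1: the function name and the simple 4-neighbor averaging are assumptions, and a real sensor would interpolate within the same color plane of the color filter array.

```python
def interpolate_pdaf_pixels(image, pdaf_mask):
    # image: 2D list of pixel values; pdaf_mask: 2D list of booleans
    # marking the positions of phase difference detection pixels.
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if not pdaf_mask[y][x]:
                continue
            # Average the valid, non-PDAF 4-neighbors (simplified).
            vals = [image[ny][nx]
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= ny < h and 0 <= nx < w and not pdaf_mask[ny][nx]]
            if vals:
                out[y][x] = sum(vals) / len(vals)
    return out

img = [[10, 10, 10], [10, 99, 10], [10, 10, 10]]
mask = [[False] * 3, [False, True, False], [False] * 3]
print(interpolate_pdaf_pixels(img, mask)[1][1])  # 10.0
```

The central PDAF position is replaced by the mean of its surroundings, which is exactly the information loss the present disclosure seeks to avoid.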
 Therefore, the present disclosure proposes an imaging device and an electronic apparatus that can avoid deterioration of the captured image while improving the accuracy of phase difference detection.
 According to the present disclosure, there is provided an imaging device including: a semiconductor substrate; and a plurality of imaging elements arranged in a matrix along a row direction and a column direction on the semiconductor substrate, the imaging elements performing photoelectric conversion on incident light. Each of the imaging elements includes: a plurality of pixels provided adjacent to each other in the semiconductor substrate; an element isolation wall surrounding the plurality of pixels and penetrating the semiconductor substrate; an on-chip lens provided above a first surface of the semiconductor substrate so as to be shared by the plurality of pixels; and a first pixel separation section that separates the plurality of pixels. When viewed from above a second surface of the semiconductor substrate facing the first surface, the first pixel separation section has a narrow portion whose width is narrower than the width of the element isolation wall, and the narrow portion penetrates partway through the semiconductor substrate from the second surface along the film thickness direction of the semiconductor substrate.
 According to the present disclosure, there is also provided an imaging device including: a semiconductor substrate; and a plurality of imaging elements arranged in a matrix along the row direction and the column direction on the semiconductor substrate, the imaging elements performing photoelectric conversion on incident light. Each of the imaging elements includes: a plurality of pixels provided adjacent to each other in the semiconductor substrate; an on-chip lens provided above a first surface of the semiconductor substrate so as to be shared by the plurality of pixels; and one or more second pixel separation sections that separate the plurality of pixels. When viewed from above a second surface of the semiconductor substrate facing the first surface, the second pixel separation section has a substantially circular cross section.
 According to the present disclosure, there is also provided a method of manufacturing an imaging device, the method including: etching a semiconductor substrate using a mask having a rectangular opening with a varying width; and embedding an insulating film in a groove formed by the etching.
 Furthermore, according to the present disclosure, there is provided an electronic apparatus including an imaging device that has a semiconductor substrate, and a plurality of imaging elements arranged in a matrix along a row direction and a column direction on the semiconductor substrate, the imaging elements performing photoelectric conversion on incident light. Each of the imaging elements includes: a plurality of pixels provided adjacent to each other in the semiconductor substrate; an element isolation wall surrounding the plurality of pixels and penetrating the semiconductor substrate; an on-chip lens provided above a first surface of the semiconductor substrate so as to be shared by the plurality of pixels; and a first pixel separation section that separates the plurality of pixels. When viewed from above a second surface of the semiconductor substrate facing the first surface, the first pixel separation section has a narrow portion whose width is narrower than the width of the element isolation wall, and the narrow portion penetrates partway through the semiconductor substrate from the second surface along the film thickness direction of the semiconductor substrate.
FIG. 1 is an explanatory diagram showing a planar configuration example of an imaging device 1 according to an embodiment of the present disclosure.
FIG. 2 is an explanatory diagram showing a cross section of an imaging element 100 according to a first embodiment of the present disclosure and a comparative example.
FIG. 3 is an explanatory diagram (Part 1) showing a plane of an imaging element 100 according to a comparative example.
FIG. 4 is an explanatory diagram showing a plane of the imaging element 100 according to the first embodiment of the present disclosure.
FIG. 5 is an explanatory diagram showing a cross section of the imaging element 100 according to the first embodiment of the present disclosure.
FIG. 6 is an explanatory diagram (Part 1) showing a method for manufacturing the imaging element 100 according to the first embodiment of the present disclosure.
FIG. 7 is an explanatory diagram (Part 2) showing the method for manufacturing the imaging element 100 according to the first embodiment of the present disclosure.
FIG. 8 is an explanatory diagram (Part 3) showing the method for manufacturing the imaging element 100 according to the first embodiment of the present disclosure.
FIG. 9 is an explanatory diagram showing a plane of the imaging element 100 according to Modification 1 of the first embodiment of the present disclosure.
FIG. 10 is an explanatory diagram (Part 1) showing a cross section of the imaging element 100 according to Modification 1 of the first embodiment of the present disclosure.
FIG. 11 is an explanatory diagram (Part 2) showing a cross section of the imaging element 100 according to Modification 1 of the first embodiment of the present disclosure.
FIG. 12 is an explanatory diagram (Part 1) showing a plane of the imaging element 100 according to Modification 2 of the first embodiment of the present disclosure.
FIG. 13 is an explanatory diagram (Part 1) showing a cross section of the imaging element 100 according to Modification 2 of the first embodiment of the present disclosure.
FIG. 14 is an explanatory diagram (Part 2) showing a plane of the imaging element 100 according to Modification 2 of the first embodiment of the present disclosure.
FIG. 15 is an explanatory diagram (Part 2) showing a cross section of the imaging element 100 according to Modification 2 of the first embodiment of the present disclosure.
FIG. 16 is an explanatory diagram (Part 3) showing a plane of the imaging element 100 according to Modification 2 of the first embodiment of the present disclosure.
FIG. 17 is an explanatory diagram showing a part of a cross section of the imaging element 100 according to the comparative example.
FIG. 18 is an explanatory diagram (Part 2) showing a plane of the imaging element 100 according to the comparative example.
FIG. 19 is an explanatory diagram showing a plane of the imaging element 100 according to a second embodiment of the present disclosure.
FIG. 20 is an explanatory diagram (Part 1) showing a method for manufacturing the imaging element 100 according to the second embodiment of the present disclosure.
FIG. 21 is an explanatory diagram (Part 2) showing the method for manufacturing the imaging element 100 according to the second embodiment of the present disclosure.
FIG. 22 is an explanatory diagram (Part 3) showing the method for manufacturing the imaging element 100 according to the second embodiment of the present disclosure.
FIG. 23 is an explanatory diagram (Part 1) showing a plane of the imaging element 100 according to a modification of the second embodiment of the present disclosure.
FIG. 24 is an explanatory diagram (Part 2) showing a plane of the imaging element 100 according to the modification of the second embodiment of the present disclosure.
FIG. 25 is an explanatory diagram showing an example of a schematic functional configuration of a camera.
FIG. 26 is a block diagram showing an example of a schematic functional configuration of a smartphone.
FIG. 27 is a diagram showing an example of a schematic configuration of an endoscopic surgery system.
FIG. 28 is a block diagram showing an example of functional configurations of a camera head and a CCU.
FIG. 29 is a block diagram showing an example of a schematic configuration of a vehicle control system.
FIG. 30 is an explanatory diagram showing an example of installation positions of a vehicle exterior information detection unit and imaging units.
 Preferred embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. In this specification and the drawings, components having substantially the same functional configuration are denoted by the same reference numerals, and redundant description is omitted. In addition, in this specification and the drawings, a plurality of components having substantially the same or similar functional configurations may be distinguished by appending different letters after the same reference numeral. However, when there is no particular need to distinguish each of such components, only the same reference numeral is used.
 The drawings referred to in the following description are intended to explain one embodiment of the present disclosure and to facilitate its understanding; for the sake of clarity, the shapes, dimensions, ratios, and the like shown in the drawings may differ from the actual ones. Furthermore, the design of the imaging device shown in the drawings can be modified as appropriate in consideration of the following description and known techniques. In descriptions using cross-sectional views of the imaging device, the vertical direction of the layered structure of the imaging device corresponds to the relative direction when the light receiving surface on which light is incident faces upward, and may differ from the vertical direction according to the actual gravitational acceleration.
 The dimensions and shapes expressed in the following description mean not only dimensions and shapes defined mathematically or geometrically, but also dimensions and similar shapes including differences (errors and distortions) to a degree allowable in the operation of the imaging device and in its manufacturing process. Furthermore, "substantially the same" used for specific dimensions in the following description does not mean only a perfect mathematical or geometrical match, but also includes cases with differences (errors and distortions) to a degree allowable in the operation of the imaging device and in its manufacturing process.
 Furthermore, in the following description, "electrically connected" means that a plurality of elements are connected directly, or indirectly via another element.
 In the following description, "sharing" means that one element (for example, an on-chip lens) is used jointly by mutually different elements (for example, pixels).
 Note that the description will be given in the following order.
  1. Schematic configuration of the imaging device
  2. Background leading the present inventors to create the embodiments of the present disclosure
  3. First embodiment
     3.1 Background of the first embodiment
     3.2 Planar configuration
     3.3 Cross-sectional configuration
     3.4 Manufacturing method
     3.5 Modification 1
     3.6 Modification 2
  4. Second embodiment
     4.1 Background of the second embodiment
     4.2 Configuration
     4.3 Manufacturing method
     4.4 Modification
  5. Summary
  6. Application example to a camera
  7. Application example to a smartphone
  8. Application example to an endoscopic surgery system
  9. Application example to a mobile body
  10. Supplement
 <<1. Schematic Configuration of the Imaging Device>>
 First, a schematic configuration of an imaging device 1 according to an embodiment of the present disclosure will be described with reference to FIG. 1. FIG. 1 is an explanatory diagram showing a planar configuration example of the imaging device 1 according to the embodiment of the present disclosure. As shown in FIG. 1, the imaging device 1 according to the embodiment of the present disclosure includes a pixel array section 30, in which a plurality of imaging elements 100 are arranged in a matrix on a semiconductor substrate 10 made of, for example, silicon, and a peripheral circuit section provided so as to surround the pixel array section 30. The peripheral circuit section includes a vertical drive circuit section 32, a column signal processing circuit section 34, a horizontal drive circuit section 36, an output circuit section 38, a control circuit section 40, and the like. Details of each block of the imaging device 1 will be described below.
 (Pixel array section 30)
 The pixel array section 30 has a plurality of imaging elements 100 arranged two-dimensionally in a matrix on the semiconductor substrate 10 along the row direction (the X direction in FIG. 1) and the column direction (the Y direction in FIG. 1). Each imaging element 100 performs photoelectric conversion on incident light and has a photoelectric conversion section (not shown) and a plurality of pixel transistors (for example, MOS (Metal-Oxide-Semiconductor) transistors) (not shown). The pixel transistors include, for example, four MOS transistors: a transfer transistor, a selection transistor, a reset transistor, and an amplification transistor. Furthermore, in the pixel array section 30, the plurality of imaging elements 100 are arranged two-dimensionally according to, for example, the Bayer arrangement. Here, the Bayer arrangement is an array pattern in which imaging elements 100 that absorb light of green wavelengths (for example, 495 nm to 570 nm) and generate charge are arranged in a checkered pattern, and the remaining positions are filled, alternately row by row, with imaging elements 100 that absorb light of red wavelengths (for example, 620 nm to 750 nm) and generate charge and imaging elements 100 that absorb light of blue wavelengths (for example, 450 nm to 495 nm) and generate charge. The detailed structure of the imaging element 100 will be described later.
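The Bayer arrangement described above can be sketched as follows. This is a minimal illustration of the color layout only; the phase of the pattern (which corner starts with green, and whether the first non-green row is red) is an assumption, since the text fixes only the checkered green placement and the row-by-row alternation of red and blue.

```python
def bayer_color(row, col):
    # Green elements sit on a checkerboard (even row+col sum); the
    # remaining positions alternate between red rows and blue rows.
    if (row + col) % 2 == 0:
        return "G"
    return "R" if row % 2 == 0 else "B"

# Print a small patch of the pattern.
for r in range(4):
    print(" ".join(bayer_color(r, c) for c in range(4)))
# G R G R
# B G B G
# G R G R
# B G B G
```

Half of the positions are green, matching the checkered placement described in the text.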
 (Vertical drive circuit section 32)
 The vertical drive circuit section 32 is formed of, for example, a shift register; it selects a pixel drive wiring 42, supplies the selected pixel drive wiring 42 with a pulse for driving the imaging elements 100, and drives the imaging elements 100 row by row. That is, the vertical drive circuit section 32 sequentially selects and scans the imaging elements 100 of the pixel array section 30 in the vertical direction (the Y direction in FIG. 1) row by row, and supplies pixel signals, based on the signal charges generated according to the amount of light received by the photoelectric conversion section (not shown) of each imaging element 100, through vertical signal lines 44 to the column signal processing circuit sections 34 described later.
 (Column signal processing circuit section 34)
 The column signal processing circuit sections 34 are arranged for each column of the imaging elements 100, and perform signal processing, such as noise removal, for each pixel column on the pixel signals output from the imaging elements 100 of one row. For example, the column signal processing circuit section 34 performs signal processing such as CDS (Correlated Double Sampling), which removes fixed pattern noise specific to each pixel, and AD (Analog-Digital) conversion.
 (Horizontal drive circuit section 36)
 The horizontal drive circuit section 36 is formed of, for example, a shift register. By sequentially outputting horizontal scanning pulses, it selects each of the column signal processing circuit sections 34 described above in order, and causes each of the column signal processing circuit sections 34 to output a pixel signal to a horizontal signal line 46.
 (Output circuit section 38)
 The output circuit section 38 performs signal processing on the pixel signals sequentially supplied from each of the column signal processing circuit sections 34 through the horizontal signal line 46, and outputs the processed signals. The output circuit section 38 may function, for example, as a functional section that performs buffering, or may perform processing such as black level adjustment, column variation correction, and various kinds of digital signal processing. Here, buffering refers to temporarily storing pixel signals in order to compensate for differences in processing speed and transfer speed when pixel signals are exchanged. Furthermore, an input/output terminal 48 is a terminal for exchanging signals with an external device.
 (Control circuit section 40)
 The control circuit section 40 receives an input clock and data instructing an operation mode and the like, and outputs data such as internal information of the imaging device 1. That is, based on a vertical synchronization signal, a horizontal synchronization signal, and a master clock, the control circuit section 40 generates clock signals and control signals that serve as references for the operation of the vertical drive circuit section 32, the column signal processing circuit sections 34, the horizontal drive circuit section 36, and the like. The control circuit section 40 then outputs the generated clock signals and control signals to the vertical drive circuit section 32, the column signal processing circuit sections 34, the horizontal drive circuit section 36, and the like.
 <<2. Background Leading the Present Inventors to Create the Embodiments of the Present Disclosure>>
 Next, before describing the details of the embodiments according to the present disclosure, the background that led the present inventors to create the embodiments according to the present disclosure will be described.
 In order to further improve the autofocus function while avoiding deterioration of the captured image, that is, to improve the accuracy of phase difference detection, the present inventors have been earnestly studying the provision of phase difference detection pixels over the entire surface of the pixel array section 30 of the imaging device 1 (all-pixel phase difference detection). Under such circumstances, it was considered to provide, over the entire surface of the pixel array section 30, imaging elements each of which functions as one imaging element at the time of imaging and as a pair of phase difference detection pixels at the time of phase difference detection (a dual photodiode structure). In such all-pixel phase difference detection, since phase difference detection pixels are provided over the entire surface, the accuracy of phase difference detection can be improved; furthermore, since imaging can be performed with all the imaging elements, deterioration of the captured image can be avoided.
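The dual photodiode structure described above can be sketched as follows. This is a toy model with invented charge values; it only illustrates how one imaging element yields a single imaging value at capture time and a left/right pair at phase difference detection time.

```python
def read_dual_photodiode(left_charge, right_charge):
    # Two photodiodes share one on-chip lens. At imaging time their
    # charges are summed into a single pixel value; at phase difference
    # detection time they are read out as a left/right pair whose
    # mismatch encodes defocus.
    imaging_value = left_charge + right_charge
    phase_pair = (left_charge, right_charge)
    return imaging_value, phase_pair

value, pair = read_dual_photodiode(left_charge=40, right_charge=55)
print(value)  # 95 -> used as one imaging pixel
print(pair)   # (40, 55) -> compared across elements for autofocus
```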
Furthermore, in order to improve the accuracy of phase difference detection in all-pixel phase difference detection, the present inventors conceived of providing an element that physically and electrically separates the phase difference detection pixels so that the outputs of a pair of phase difference detection pixels do not mix during phase difference detection. In addition, the present inventors conceived of providing an overflow path between the pair of phase difference detection pixels in order to avoid degradation of the captured image in all-pixel phase difference detection. Specifically, during normal imaging, when the charge of one of the pair of phase difference detection pixels is about to saturate, the charge is transferred to the other pixel via the overflow path, so that saturation of the one pixel can be avoided. By providing such an overflow path, the linearity of the pixel signal output from the imaging element can be ensured, and degradation of the captured image can be prevented.
That is, based on the above points of focus, the present inventors created the embodiments according to the present disclosure, which make it possible to improve the accuracy of phase difference detection while avoiding degradation of the captured image. Details of the embodiments according to the present disclosure created by the present inventors will be described in order below.
<<3. First Embodiment>>
<3.1 Background of the first embodiment>
First, the background of the first embodiment of the present disclosure created by the present inventors will be described with reference to FIGS. 2 and 3. FIG. 2 is an explanatory diagram showing a cross section of the imaging element 100 according to the first embodiment of the present disclosure and a comparative example, and FIG. 3 is an explanatory diagram showing a plan view of the imaging element 100 according to the comparative example. Specifically, FIG. 3 corresponds to a cross section of the imaging element 100 taken along line A-A' shown in FIG. 2. Here, the comparative example means the imaging element 100 that the present inventors had repeatedly studied before arriving at the embodiments of the present disclosure.
As shown in FIGS. 2 and 3, the imaging element 100 according to the comparative example has an on-chip lens 200, a color filter 202, a light shielding section 204, a semiconductor substrate 10, and transfer gates 400a and 400b. Furthermore, the semiconductor substrate 10 has a pair of mutually adjacent pixels 300a and 300b, each having a photoelectric conversion section 302. The semiconductor substrate 10 also includes a pixel separation section 304 that separates the pair of pixels 300a and 300b, an element isolation wall 310 that surrounds the pixels 300a and 300b, and diffusion regions 306 and 308 provided around the pixel separation section 304 and the element isolation wall 310. The layered structure of the imaging element 100 will be described below, in order from the upper side (light receiving surface 10a side) to the lower side in FIG. 2. The region 500 in FIG. 2 will be described in the second embodiment of the present disclosure.
As shown in FIG. 2, the imaging element 100 has one on-chip lens 200 that is provided above the light receiving surface (first surface) 10a of the semiconductor substrate 10 and condenses incident light onto the photoelectric conversion sections 302. The imaging element 100 has a structure in which a pair of pixels 300a and 300b is provided for one on-chip lens 200. That is, the on-chip lens 200 is shared by the two pixels 300a and 300b. The on-chip lens 200 can be formed of, for example, a silicon nitride film (SiN), or a resin material such as a styrene resin, an acrylic resin, a styrene-acrylic copolymer resin, or a siloxane resin.
The incident light condensed by the on-chip lens 200 passes through the color filter 202 provided below the on-chip lens 200 and is incident on each of the photoelectric conversion sections 302 of the pair of pixels 300a and 300b. The color filter 202 is one that transmits a red wavelength component, a green wavelength component, or a blue wavelength component. The color filter 202 can be formed of, for example, a material in which a pigment or a dye is dispersed in a transparent binder such as silicone.
Further, a light shielding section 204 is provided on the light receiving surface 10a of the semiconductor substrate 10 so as to surround the color filter 202. By being provided between adjacent imaging elements 100, the light shielding section 204 blocks light between the imaging elements 100, thereby suppressing crosstalk between adjacent imaging elements 100 and further improving the accuracy of phase difference detection. The light shielding section 204 can be formed of, for example, a metal material containing tungsten (W), aluminum (Al), copper (Cu), titanium (Ti), molybdenum (Mo), nickel (Ni), or the like.
Furthermore, as shown in FIGS. 2 and 3, a photoelectric conversion section 302 having an impurity of a first conductivity type (for example, n-type) is provided for each of the mutually adjacent pixels 300a and 300b in the semiconductor substrate 10 of a second conductivity type (for example, p-type). As described above, each photoelectric conversion section 302 absorbs light having a red, green, or blue wavelength component incident through the color filter 202 and generates charge. In the imaging element 100, the photoelectric conversion section 302 of the pixel 300a and the photoelectric conversion section 302 of the pixel 300b can function as a pair of phase difference detection pixels during phase difference detection. That is, in the imaging element 100, the phase difference can be detected by detecting the difference between the pixel signals based on the charges generated by the photoelectric conversion section 302 of the pixel 300a and the photoelectric conversion section 302 of the pixel 300b.
Specifically, the amount of charge generated by the photoelectric conversion section 302, that is, its sensitivity, varies depending on the incident angle of light with respect to its own optical axis (the axis perpendicular to the light receiving surface). For example, the photoelectric conversion section 302 has the highest sensitivity when the incident angle is 0 degrees, and its sensitivity as a function of the incident angle is line-symmetric about the axis where the incident angle is 0 degrees. Therefore, light from the same point is incident on the photoelectric conversion section 302 of the pixel 300a and the photoelectric conversion section 302 of the pixel 300b at different incident angles, and each section generates an amount of charge corresponding to its incident angle, so that a shift (phase difference) occurs between the detected images. That is, the phase difference can be detected by detecting the difference between the pixel signals based on the amounts of charge generated by the photoelectric conversion section 302 of the pixel 300a and the photoelectric conversion section 302 of the pixel 300b. This difference between the pixel signals (phase difference) is detected as a difference signal, for example, in a detection unit (not shown) of the output circuit section 38; a defocus amount is calculated based on the detected phase difference, and autofocus can be achieved by adjusting (moving) an imaging lens (not shown).
In the above description, the phase difference is detected as the difference between the pixel signals of the photoelectric conversion section 302 of the pixel 300a and the photoelectric conversion section 302 of the pixel 300b; however, the present embodiment is not limited to this, and the phase difference may be detected as, for example, the ratio of the pixel signals of the photoelectric conversion section 302 of the pixel 300a and the photoelectric conversion section 302 of the pixel 300b.
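As a sketch of how a detection unit might use the paired outputs, the following illustrative Python code forms a per-pixel difference signal and estimates the image shift (phase difference) between the left and right sub-pixel signals of a pixel row. All names, and the sum-of-absolute-differences matching over candidate shifts, are assumptions made for illustration; the patent only specifies that a difference (or ratio) of the pixel signals is detected and that a defocus amount is derived from the detected phase difference.

```python
def phase_difference_signal(left, right):
    """Per-pixel-pair difference between the paired outputs,
    corresponding to the difference signal mentioned above."""
    return [l - r for l, r in zip(left, right)]


def estimate_shift(left, right, max_shift=3):
    """Estimate the image shift between the two sub-pixel signals by
    minimising the mean absolute difference over candidate shifts.
    A defocus amount would then be derived from this shift via the
    optics' conversion factor (not modelled here)."""
    best_shift, best_cost = 0, float("inf")
    n = len(left)
    for s in range(-max_shift, max_shift + 1):
        cost, count = 0.0, 0
        for i in range(n):
            j = i + s
            if 0 <= j < n:  # only compare overlapping samples
                cost += abs(left[i] - right[j])
                count += 1
        cost /= count
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift


# Example: the right signal is the left signal displaced by 2 samples.
left = [0, 0, 1, 5, 9, 5, 1, 0, 0]
right = [1, 5, 9, 5, 1, 0, 0, 0, 0]
shift = estimate_shift(left, right)
```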
Furthermore, as shown in FIGS. 2 and 3, in the imaging element 100, the two photoelectric conversion sections 302 are physically separated by the pixel separation section 304. The pixel separation section 304 is a DTI (Deep Trench Isolation) consisting of a trench (not shown) provided so as to penetrate the semiconductor substrate 10 along its thickness direction from the surface (second surface) 10b opposite to the light receiving surface partway into the semiconductor substrate 10, and a material embedded in the trench, such as an oxide film or a metal film of silicon oxide (SiO), silicon nitride, amorphous silicon, polycrystalline silicon, titanium oxide (TiO), aluminum, tungsten, or the like. In the imaging element 100, if the pixel signals output from the pair of pixels 300a and 300b mix with each other during phase difference detection and color mixture occurs, the accuracy of phase difference detection is degraded. In the imaging element 100, the pixel separation section 304 can effectively physically separate the pair of pixels 300a and 300b; as a result, the occurrence of color mixture can be suppressed, and the accuracy of phase difference detection can be further improved.
Furthermore, as shown in FIGS. 2 and 3, an impurity of the second conductivity type (for example, p-type) is diffused around the pixel separation section 304 to form a diffusion region 306. The diffusion region 306 electrically separates the pair of pixels 300a and 300b and suppresses the occurrence of color mixture, thereby further improving the accuracy of phase difference detection.
Furthermore, in the imaging element 100, as shown in FIG. 2, a diffusion region 320 containing an impurity of the first conductivity type (for example, n-type) is provided above the pixel separation section 304 (on the light receiving surface 10a side), and the diffusion region 320 functions as an overflow path through which charge generated in the pixels 300a and 300b can be exchanged. Specifically, during normal imaging, when the charge of one of the pixels 300a and 300b is about to saturate, the charge is transferred to the other pixel via the overflow path, so that saturation of the one pixel can be avoided. By providing such an overflow path, the linearity of the pixel signals output from the imaging element 100 can be ensured, and degradation of the captured image can be prevented.
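The role of the overflow path can be sketched with a toy charge model (illustrative only; the function name and the per-sub-pixel capacity value are assumptions, not values from the patent): when one sub-pixel exceeds its capacity, the surplus charge spills into its partner instead of being lost, so the summed imaging signal remains linear until both sub-pixels are full.

```python
def accumulate(q_a: float, q_b: float, capacity: float) -> tuple:
    """Toy model of the overflow path between a pixel pair.
    Any charge above `capacity` in one sub-pixel spills into the
    other sub-pixel rather than being discarded."""
    spill_a = max(0.0, q_a - capacity)
    spill_b = max(0.0, q_b - capacity)
    q_a = min(q_a, capacity) + spill_b
    q_b = min(q_b, capacity) + spill_a
    # Each sub-pixel can still hold at most `capacity`.
    return min(q_a, capacity), min(q_b, capacity)


# One sub-pixel over capacity: with the overflow path the summed
# (imaging) signal 100 + 70 = 170 equals the generated 130 + 40,
# whereas simple clipping would have lost 30 units of charge.
a, b = accumulate(130.0, 40.0, capacity=100.0)
```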
In the imaging element 100, an element isolation wall 310, which surrounds the pixels 300a and 300b and physically separates adjacent imaging elements 100, is provided in the semiconductor substrate 10. The element isolation wall 310 consists of a trench (not shown) provided so as to penetrate the semiconductor substrate 10 along its thickness direction, and a material embedded in the trench, such as an oxide film or a metal film of silicon oxide, silicon nitride, amorphous silicon, polycrystalline silicon, titanium oxide, aluminum, tungsten, or the like.
Furthermore, in the imaging element 100, the charges generated by the photoelectric conversion section 302 of the pixel 300a and the photoelectric conversion section 302 of the pixel 300b are transferred via the transfer gates 400a and 400b of transfer transistors (one type of the pixel transistors described above) provided on the surface 10b of the semiconductor substrate 10 opposite to the light receiving surface 10a. The transfer gates 400a and 400b can be formed of, for example, a metal film. The charges may then be accumulated, for example, in a floating diffusion portion (not shown) provided in a semiconductor region of the first conductivity type (for example, n-type) in the semiconductor substrate 10. In the imaging element 100, the floating diffusion portion is not limited to being provided in the semiconductor substrate 10; for example, it may be provided in another substrate (not shown) stacked on the semiconductor substrate 10.
Furthermore, a plurality of pixel transistors (not shown) other than the transfer transistors described above, used for reading out the charges as pixel signals and the like, may be provided on the surface 10b of the semiconductor substrate 10. In the imaging element 100, these pixel transistors may be provided on the semiconductor substrate 10, or may be provided on another substrate (not shown) stacked on the semiconductor substrate 10.
As described above, in the imaging element 100 according to the comparative example, since the pixel separation section 304 that physically separates the pair of pixels 300a and 300b and the diffusion region 306 that electrically separates them are provided, phase difference detection using the pixels 300a and 300b is possible. Furthermore, since the imaging element 100 according to the comparative example is provided with an overflow path, when the charge of one of the pixels 300a and 300b is about to saturate during normal imaging, the charge is transferred to the other pixel via the overflow path, so that saturation of the one pixel can be avoided. That is, in the imaging element 100 according to the comparative example, by providing such an overflow path, the linearity of the pixel signals output from the imaging element 100 can be ensured, and degradation of the captured image can be prevented.
The present inventors wished to increase the maximum amount of charge that can be generated by photoelectric conversion in the imaging element 100 (the saturation charge amount Qs) so that strong light can be detected without saturation even when the imaging element 100 is further miniaturized. However, as a result of studies by the present inventors, it was found that in the configuration of the imaging element 100 according to the comparative example described above, there is a limit to increasing the saturation charge amount Qs.
Specifically, in the imaging element 100 according to the comparative example, the diffusion region 320 functioning as an overflow path is provided above the pixel separation section 304 (on the light receiving surface 10a side), as described above. That is, the diffusion region 320 functions as an overflow path and does not function as a photoelectric conversion section. Therefore, in the imaging element 100, the presence of the overflow path limits how much the volume of the photoelectric conversion section 302, which absorbs light and generates charge, can be increased, so there is a limit to increasing the saturation charge amount Qs.
In addition, in the imaging element 100 according to the comparative example, the step of forming the pixel separation section 304, which penetrates partway through the thickness of the semiconductor substrate 10, is performed separately from the step of forming the element isolation wall 310, which penetrates the semiconductor substrate 10. Since these are performed in different steps, the pixel separation section 304 may be misaligned with respect to the element isolation wall 310. To cope with such misalignment, some margin must be allowed in the layout of the pixel separation section 304, the diffusion region 306 around the pixel separation section 304, and the diffusion region 320 serving as the overflow path. As a result, the volume of the photoelectric conversion section 302 is reduced by the amount of this layout margin, and, as above, there is a limit to increasing the saturation charge amount Qs.
Therefore, in order to increase the saturation charge amount Qs, the present inventors intensively studied the imaging element 100 according to the comparative example and conceived of increasing the volume of the photoelectric conversion section 302 by narrowing the width of a part of the pixel separation section 304 and thereby reducing the volume of the region that functions as the overflow path. Furthermore, the present inventors independently found that, by narrowing the width of a part of the pixel separation section 304, the microloading effect and the self-stop phenomenon can be utilized, and created the first embodiment of the present disclosure, in which the pixel separation section 304 and the element isolation wall 310 can be formed simultaneously (the microloading effect and the self-stop phenomenon will be described later). Details of the first embodiment of the present disclosure created by the present inventors will be described in order below.
<3.2 Planar configuration>
First, the planar configuration of the imaging element 100 according to the present embodiment will be described with reference to FIG. 4. FIG. 4 is an explanatory diagram showing a plan view of the imaging element 100 according to the present embodiment; specifically, it corresponds to a cross section of the imaging element 100 taken along line A-A' shown in FIG. 2. In the following description, description of the configuration common to the imaging element 100 according to the comparative example described with reference to FIGS. 2 and 3 is omitted.
As shown in FIG. 4, in the present embodiment, the mutually adjacent pixels 300a and 300b are separated by a pixel separation section (first pixel separation section) 304 formed integrally with the element isolation wall 310. Specifically, when the imaging element 100 is viewed from above the surface 10b opposite to the light receiving surface 10a, the pixel separation section 304 has, at the center of the imaging element 100, a narrow portion 304a with a width c that is narrower than the width a of the element isolation wall 310. The pixel separation section 304 further has wide portions 304b sandwiching the narrow portion 304a, and the width b of each wide portion 304b is, for example, substantially the same as the width a of the element isolation wall 310. Furthermore, as shown in FIG. 4, a diffusion region 306 containing an impurity of the second conductivity type (for example, p-type) is formed around the pixel separation section 304. Also, as will be described later, an overflow path (indicated by the arrow in FIG. 4) consisting of a diffusion region 320 containing an impurity of the first conductivity type (for example, n-type) is provided between the narrow portion 304a and the light receiving surface 10a of the semiconductor substrate 10.
In the present embodiment, for example, when the width a of the element isolation wall 310 is substantially the same as the width b of the wide portions 304b, the width c of the narrow portion 304a preferably satisfies 0.5 × a < c < 0.95 × a with respect to the width a of the element isolation wall 310. In addition, in the present embodiment, for example, the length e of the narrow portion 304a preferably satisfies e < 0.8 × d with respect to the length d of the imaging element 100.
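The preferred dimensional relations above can be collected into a simple check. This helper is illustrative and not from the patent; in particular, the 5% tolerance used to interpret "substantially the same" for b and a is an assumption.

```python
def check_preferred_dimensions(a, b, c, d, e):
    """Check the preferred planar relations of the first embodiment:
    a = element-isolation-wall width, b = wide-portion width,
    c = narrow-portion width, d = imaging-element length,
    e = narrow-portion length. Returns {constraint name: bool}.
    (Helper name, return format, and the 5% tolerance are assumptions.)"""
    return {
        # b is "substantially the same" as a (5% tolerance assumed)
        "b_close_to_a": abs(b - a) <= 0.05 * a,
        # 0.5 * a < c < 0.95 * a
        "narrow_width_ok": 0.5 * a < c < 0.95 * a,
        # e < 0.8 * d
        "narrow_length_ok": e < 0.8 * d,
    }


# Example in arbitrary units: c = 0.7 * a, e = 0.6 * d.
result = check_preferred_dimensions(a=100.0, b=100.0, c=70.0,
                                    d=1000.0, e=600.0)
```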
As described above, in the present embodiment, the narrow portion 304a is provided as a part of the pixel separation section 304. Therefore, according to the present embodiment, since the narrow portion 304a is narrow, the region of the overflow path is small; even with the overflow path present, the volume of the photoelectric conversion section 302 can be increased, and consequently the saturation charge amount Qs can be increased.
Furthermore, in the imaging element 100 according to the comparative example, light could be reflected or scattered at the upper end face (on the light receiving surface 10a side) of the pixel separation section 304 and enter an adjacent imaging element 100, causing color mixture and the like (see FIGS. 2 and 3). In the present embodiment, however, by providing the narrow portion 304a as a part of the pixel separation section 304, the upper end face of the pixel separation section 304 becomes narrower, so that reflection and scattering of light at this end face can be suppressed. Therefore, according to the present embodiment, the occurrence of color mixture can be suppressed.
<3.3 Cross-sectional configuration>
Next, the cross-sectional configuration of the imaging element 100 according to the present embodiment will be described with reference to FIG. 5. FIG. 5 is an explanatory diagram showing a cross section of the imaging element 100 according to the present embodiment; specifically, it corresponds to a cross section of the imaging element 100 taken along line B-B' shown in FIG. 4. In FIG. 4, when the imaging element 100 according to the present embodiment is cut along a line perpendicular to line B-B', the cross section shown in FIG. 2 described above is obtained, so a description of that cross section is omitted. That is, in the following description, description of the configuration common to the imaging element 100 according to the comparative example described with reference to FIGS. 2 and 3 is omitted.
As shown in FIG. 5, the narrow portion 304a located at the center of the imaging element 100 penetrates the semiconductor substrate 10 along its thickness direction from the surface (second surface) 10b opposite to the light receiving surface partway into the semiconductor substrate 10. Specifically, the narrow portion 304a consists of a trench (not shown) provided so as to penetrate the semiconductor substrate 10 along its thickness direction from the surface 10b partway into the semiconductor substrate 10, and a material such as an oxide film or a metal film embedded in the trench. On the other hand, the wide portions 304b are provided so as to penetrate the semiconductor substrate 10 along its thickness direction. Furthermore, in the present embodiment, an overflow path consisting of a diffusion region 320 containing an impurity of the first conductivity type (for example, n-type) is provided between the narrow portion 304a and the light receiving surface 10a of the semiconductor substrate 10. In addition, a gate electrode 402 of a transistor and the like are provided on the surface 10b of the semiconductor substrate 10 opposite to the light receiving surface 10a.
In the present embodiment, for example, the length (depth) g of the narrow portion 304a in the thickness direction of the semiconductor substrate 10 preferably satisfies 0.5 × h < g < 0.95 × h with respect to the length (depth) h of the wide portions 304b.
As described above, in the present embodiment, the narrow portion 304a is provided as a part of the pixel separation section 304. Therefore, according to the present embodiment, since the narrow portion 304a is narrow, the region of the overflow path is small; even with the overflow path present, the volume of the photoelectric conversion section 302 can be increased, and consequently the saturation charge amount Qs can be increased.
<3.4 Manufacturing method>
Next, a method for manufacturing the imaging element 100 according to the present embodiment will be described with reference to FIGS. 6A to 6C. FIGS. 6A to 6C are explanatory diagrams showing the method for manufacturing the imaging element 100 according to the present embodiment; specifically, the upper part of each figure corresponds to the plan view of FIG. 4 rotated 90 degrees clockwise, and the lower part of each figure corresponds to the cross section of FIG. 5 turned upside down.
 First, as shown in FIG. 6A, a resist is patterned by lithography to form a mask 600 having rectangular openings of different widths at the positions that will become the pixel separation portion 304 and the element isolation walls 310.
 Next, as shown in FIG. 6B, the semiconductor substrate 10 is processed by dry etching in accordance with the pattern of the mask 600 to form trenches (grooves) 610.
 In dry etching, a microloading phenomenon, in which the etching rate decreases as the pattern dimensions become finer and the aspect ratio increases, can occur. Owing to the microloading phenomenon, etching in the narrow opening corresponding to the narrow portion 304a proceeds at a lower rate than etching in the wide opening corresponding to the wide portion 304b. Eventually, the etching in the narrow opening corresponding to the narrow portion 304a stops altogether (self-stop phenomenon).
 Therefore, even when the trenches 610 corresponding to the narrow portion 304a, the wide portion 304b, and the element isolation walls 310 are formed simultaneously by dry etching, each trench 610 can be dug to a depth corresponding to the width of the opening of the corresponding pattern. In other words, in the present embodiment, by exploiting the fact that a trench 610 can only be dug shallowly when its width is narrow, whereas it can be dug deeply when its width is wide, the trenches 610 corresponding to the narrow portion 304a, the wide portion 304b, and the element isolation walls 310 can be formed at the same time.
 Then, as shown in the lower part of FIG. 6B, the trench 610 is shallow in the region corresponding to the narrow portion 304a and deep in the region corresponding to the wide portion 304b. That is, in the present embodiment, by utilizing the microloading phenomenon and the self-stop phenomenon, the narrow portion 304a penetrating partway through the thickness of the semiconductor substrate 10 and the wide portion 304b and element isolation walls 310 penetrating the entire semiconductor substrate 10 can be formed simultaneously (in the same step).
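The width-dependent etch depth exploited here can be illustrated with a toy model. The saturating depth-versus-width law, the coefficient, and the sample dimensions below are illustrative assumptions for intuition only, not measured process behavior.

```python
def etch_depth(opening_width_nm: float,
               substrate_thickness_nm: float = 3000.0,
               rate_coefficient: float = 20.0) -> float:
    """Toy microloading model: the achievable trench depth grows with the
    opening width (narrow openings etch slowly and self-stop shallow) and
    is capped by the substrate thickness (full penetration).
    rate_coefficient is an illustrative fit parameter, not process data."""
    return min(rate_coefficient * opening_width_nm, substrate_thickness_nm)

# A narrow opening yields a shallow trench; a wide one penetrates fully.
narrow = etch_depth(50.0)    # e.g. opening for narrow portion 304a
wide = etch_depth(200.0)     # e.g. opening for wide portion 304b
assert narrow < wide
assert wide == 3000.0        # reaches the substrate thickness
```

This captures why a single etch step can produce both the partial-depth narrow portion and the through-substrate wide portion from one mask.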
 In the present embodiment, it is preferable to appropriately select the etching conditions and the like according to the material and thickness of the semiconductor substrate 10, the width c of the narrow portion 304a, the width b of the wide portion 304b, the width a of the element isolation walls 310, and so on.
 Then, as shown in FIG. 6C, the pixel separation portion 304 (narrow portion 304a, wide portion 304b) and the element isolation walls 310 can be formed by filling the trenches 610 with a buried film such as an insulating film. In the drawing shown in the lower part of FIG. 6C, the region of the semiconductor substrate 10 located below the narrow portion 304a becomes the overflow path. After that, a diffusion step, a transistor and wiring formation step, a back-surface processing step, and the like are performed. In the thinning step of the semiconductor substrate 10, the region of the semiconductor substrate 10 remaining at the bottom in the lower part of FIG. 6C is removed, leaving only the region that becomes the overflow path.
 As described above, in the present embodiment, by utilizing the microloading effect and the self-stop phenomenon, the narrow portion 304a penetrating partway through the thickness of the semiconductor substrate 10 and the wide portion 304b and element isolation walls 310 penetrating the semiconductor substrate 10 can be formed simultaneously (in the same step). Therefore, since these are formed in the same step, there is no possibility that the pixel separation portion 304 (specifically, the narrow portion 304a and the wide portion 304b) becomes misaligned with respect to the element isolation walls 310, and it is no longer necessary to provide a margin in the layout of the pixel separation portion 304, the diffusion region 306, and the diffusion region 320. As a result, according to the present embodiment, the volume of the photoelectric conversion portion 302 can be made larger, and the saturation charge amount Qs can be made larger.
 Furthermore, according to the present embodiment, since the narrow portion 304a, the wide portion 304b, and the element isolation walls 310 can be formed at the same time, the number of steps can be reduced, and an increase in the manufacturing cost of the imaging device 1 can thus be suppressed. In addition, according to the present embodiment, the pixel separation portion 304 (specifically, the narrow portion 304a and the wide portion 304b) and the element isolation walls 310 can be formed as a continuous structure, so that defects caused by a joint between the pixel separation portion 304 and the element isolation walls 310 are unlikely to occur. Specifically, if such a joint exists, misalignment between the pixel separation portion 304 and the element isolation walls 310 may cause the portion where the two overlap to become deeper (longer in the thickness direction of the semiconductor substrate 10), or, if the two do not overlap, the trench (groove) may be divided. In the conventional technique, the narrow portion 304a and the wide portion 304b have different lengths (depths) in the thickness direction of the semiconductor substrate 10 and therefore must be formed in separate steps.
 As described above, according to the present embodiment, for phase difference detection, the pixel separation portion 304 that physically separates the pair of pixels 300a and 300b, the diffusion region 306 that electrically separates them, and the diffusion region 320 that electrically separates the pair of pixels 300a and 300b are provided, so that the accuracy of phase difference detection is improved. Furthermore, since an overflow path is provided in the present embodiment, when the charge of either one of the pixels 300a and 300b is about to saturate during normal imaging, saturation of that pixel can be avoided by transferring charge to the other pixel via the overflow path. Therefore, according to the present embodiment, providing such an overflow path ensures the linearity of the pixel signals output from the imaging element 100 and prevents deterioration of the captured image.
 In addition, according to the present embodiment, by providing the narrow portion 304a having a small width as part of the pixel separation portion 304, the region occupied by the overflow path becomes small; therefore, even though the overflow path exists, the volume of the photoelectric conversion portion 302 can be made larger, and the saturation charge amount Qs of the imaging element 100 can in turn be made larger. Moreover, according to the present embodiment, narrowing the width of the narrow portion 304a makes the upper end face of the narrow portion 304a smaller, so that reflection and scattering of light by this end face can be suppressed; as a result, the occurrence of color mixing can be suppressed.
 In addition, in the present embodiment, by utilizing the microloading effect and the self-stop phenomenon, the narrow portion 304a penetrating partway through the thickness of the semiconductor substrate 10 and the wide portion 304b and element isolation walls 310 penetrating the semiconductor substrate 10 can be formed simultaneously (in the same step). Therefore, since these are formed in the same step, there is no possibility that the pixel separation portion 304 becomes misaligned with respect to the element isolation walls 310, and it is no longer necessary to provide a margin in the layout of the pixel separation portion 304, the diffusion region 306, and the diffusion region 320. As a result, according to the present embodiment, the volume of the photoelectric conversion portion 302 can be made larger, and the saturation charge amount Qs can be made larger.
 Furthermore, according to the present embodiment, since the narrow portion 304a, the wide portion 304b, and the element isolation walls 310 can be formed at the same time, the number of steps can be reduced, and an increase in the manufacturing cost of the imaging device 1 can thus be suppressed. In addition, according to the present embodiment, since the pixel separation portion 304 (specifically, the narrow portion 304a and the wide portion 304b) and the element isolation walls 310 can be formed as a continuous structure, defects caused by a joint between the pixel separation portion 304 (specifically, the narrow portion 304a and the wide portion 304b) and the element isolation walls 310 are unlikely to occur.
 In addition, in the present embodiment, since the position, number, and size of the overflow path (that is, the narrow portion 304a) can easily be changed as desired, Modification 1 described below is also possible. Details of Modification 1 of the present embodiment will be described below.
<3.5 Modification 1>
 Next, an imaging element 100 according to Modification 1 of the present embodiment will be described with reference to FIGS. 7 to 9. FIG. 7 is an explanatory diagram showing a plane of the imaging element 100 according to Modification 1 of the present embodiment, and FIGS. 8 and 9 are explanatory diagrams showing cross sections of the imaging element 100 according to Modification 1 of the present embodiment; specifically, they correspond to cross sections of the imaging element 100 taken along line C-C' shown in FIG. 7.
 For example, as shown in FIGS. 7 and 8, in Modification 1, the pixel separation portion 304 can have a plurality of narrow portions 304a and a plurality of wide portions 304b that are arranged alternately. Then, as shown in FIG. 8, an overflow path consisting of a diffusion region 320 containing an impurity of the first conductivity type (for example, n-type) is provided between each narrow portion 304a and the light-receiving surface 10a of the semiconductor substrate 10.
 In Modification 1, by making the width c of each narrow portion 304a substantially the same, the length (depth) g of each narrow portion 304a in the thickness direction of the semiconductor substrate 10 may be made substantially the same, as shown in FIG. 8.
 Alternatively, in Modification 1, by making the width c of each narrow portion 304a different, the length (depth) g of each narrow portion 304a in the thickness direction of the semiconductor substrate 10 may be made different, as shown in FIG. 9. In Modification 1, as in the present embodiment, the length (depth) g of each narrow portion 304a in the thickness direction of the semiconductor substrate 10 can easily be controlled according to its width c by utilizing the microloading effect and the self-stop phenomenon, so the configuration shown in FIG. 9 can also be formed easily.
 Furthermore, according to the present embodiment, the pixel separation portion 304 is not limited to a rectangular shape in the plane of the semiconductor substrate 10 and can have various shapes such as a trapezoidal shape, a wavy shape, or a rectangular-wave shape, so Modification 2 described below is also possible. Details of Modification 2 of the present embodiment will be described below.
<3.6 Modification 2>
 Further, an imaging element 100 according to Modification 2 of the present embodiment will be described with reference to FIGS. 10 to 14. FIG. 10 is an explanatory diagram showing a plane of the imaging element 100 according to Modification 2 of the present embodiment, and FIG. 11 is an explanatory diagram showing a cross section of the imaging element 100 according to Modification 2 of the present embodiment; specifically, it corresponds to a cross section of the imaging element 100 taken along line D-D' shown in FIG. 10. FIG. 12 is an explanatory diagram showing a plane of the imaging element 100 according to Modification 2 of the present embodiment, and FIG. 13 is an explanatory diagram showing a cross section of the imaging element 100 according to Modification 2 of the present embodiment; specifically, it corresponds to a cross section of the imaging element 100 taken along line E-E' shown in FIG. 12. Furthermore, FIG. 14 is an explanatory diagram showing a plane of the imaging element 100 according to Modification 2 of the present embodiment.
 In Modification 2, for example, as shown in FIG. 10, when the imaging element 100 is viewed from the surface 10b opposite to the light-receiving surface 10a of the semiconductor substrate 10, the pixel separation portion 304 may have a width c that gradually narrows toward the center of the imaging element 100.
 In the configuration shown in FIG. 10, the cross section of the imaging element 100 is formed as shown in FIG. 11. Specifically, as shown in FIG. 11, the length (depth) g of the pixel separation portion 304 in the thickness direction of the semiconductor substrate 10 increases as the width c of the pixel separation portion 304 increases. Then, a diffusion region 320 serving as an overflow path is provided above the portion where the length g of the pixel separation portion 304 is shortest.
 In Modification 2, for example, as shown in FIG. 12, when the imaging element 100 is viewed from the surface 10b opposite to the light-receiving surface 10a of the semiconductor substrate 10, the pixel separation portion 304 may have a width c that gradually narrows along the column direction of the imaging element 100 (the Y direction in FIG. 12). In Modification 2, in order to separate the pixels 300a and 300b positioned above and below in FIG. 12, the pixel separation portion 304 may instead have a width c that gradually narrows along the row direction of the imaging element 100 (the X direction in FIG. 12).
 In the configuration shown in FIG. 12, the cross section of the imaging element 100 is formed as shown in FIG. 13. Specifically, as shown in FIG. 13, the length (depth) g of the pixel separation portion 304 in the thickness direction of the semiconductor substrate 10 increases as the width c of the pixel separation portion 304 increases. Then, a diffusion region 320 serving as an overflow path is provided above the portion where the length g of the pixel separation portion 304 is shortest.
 In Modification 2, for example, when the width a of the element isolation walls 310 is substantially the same as the width b of the wide portion 304b, the maximum value cmax and the minimum value cmin of the width of the pixel separation portion 304 preferably satisfy cmin < cmax < a and cmin×1 < cmax−cmin < cmax×0.8 with respect to the width a of the element isolation walls 310. Furthermore, in this modification, for example, the maximum value gmax and the minimum value gmin of the length (depth) of the pixel separation portion 304 in the thickness direction of the semiconductor substrate 10 preferably satisfy gmin < gmax < n and gmin×0.05 < gmax−gmin < n×0.7 with respect to the length (depth) n of the element isolation walls 310.
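The preferred width and depth ranges above can be restated as a small sketch. The function names and the sample dimensions are illustrative assumptions, not values from the specification.

```python
def widths_preferred(cmin: float, cmax: float, a: float) -> bool:
    """Preferred width relations for the tapered pixel separation portion 304
    relative to the element isolation wall width a:
    cmin < cmax < a and cmin*1 < cmax - cmin < cmax*0.8."""
    return cmin < cmax < a and cmin * 1 < cmax - cmin < cmax * 0.8

def depths_preferred(gmin: float, gmax: float, n: float) -> bool:
    """Preferred depth relations relative to the element isolation wall
    depth n: gmin < gmax < n and gmin*0.05 < gmax - gmin < n*0.7."""
    return gmin < gmax < n and gmin * 0.05 < gmax - gmin < n * 0.7

# Illustrative values (arbitrary units, not from the specification):
assert widths_preferred(cmin=30.0, cmax=80.0, a=100.0)
assert depths_preferred(gmin=1000.0, gmax=1800.0, n=3000.0)
```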
 Modification 2 may also be combined with Modification 1. For example, as shown in FIG. 14, the pixel separation portion 304 may be formed in a chevron wave shape whose width c repeatedly narrows and widens gradually along the column direction of the imaging element 100 (the Y direction in FIG. 14).
<<4. Second Embodiment>>
<4.1 Background of Second Embodiment>
 First, the background of the second embodiment of the present disclosure, created by the present inventors, will be described with reference to FIGS. 15 and 16. FIG. 15 is an explanatory diagram showing part of a cross section of an imaging element 100 according to a comparative example; specifically, it is an enlarged view of the region 500 shown in FIG. 2. FIG. 16 is an explanatory diagram showing a plane of the imaging element 100 according to the comparative example and corresponds to a cross section of the imaging element 100 taken along line A-A' shown in FIG. 2. Here, the comparative example means the imaging element 100 that the present inventors studied repeatedly before arriving at the embodiments of the present disclosure.
 As shown in FIGS. 15 and 16, in the imaging element 100 according to the comparative example, the pixel separation portion 304 is composed of a central portion 330 made of polysilicon and an outer peripheral portion 332 made of a silicon oxide film covering the outer periphery of the central portion 330. In such a structure, since the outer peripheral portion 332 made of the silicon oxide film and the semiconductor substrate 10 made of silicon have different refractive indices, incident light is reflected at the interface between the outer peripheral portion 332 and the semiconductor substrate 10. The reflected light, which should originally have been absorbed, then enters another imaging element 100 adjacent to the imaging element 100. Furthermore, when the other imaging element 100 is one for detecting light of a color different from that of the first imaging element 100, color mixing occurs in the other imaging element 100, causing color noise, image blurring (deterioration of the MTF (Modulation Transfer Function)), and the like.
 Therefore, as shown in FIGS. 15 and 16, in the imaging element 100 according to the comparative example, the central portion 330 of the pixel separation portion 304 is formed of polysilicon, which has a refractive index equivalent to that of the semiconductor substrate 10, so that light is also absorbed in the central portion 330, thereby suppressing the reflection at the interface described above.
 Under such circumstances, when the present inventors examined the configuration of the imaging element 100 described above in more detail, they found that this configuration has a limit in improving the photoelectric conversion efficiency (Qe) of the imaging element 100, which is demanded more strongly as the imaging element 100 is further miniaturized. Specifically, the light absorbed in the central portion 330 of the pixel separation portion 304 is not photoelectrically converted and therefore cannot be used as a pixel signal. Accordingly, since part of the light incident on the imaging element 100 is absorbed in the central portion 330, the light incident on the photoelectric conversion portion 302 decreases and the amount of charge generated in the photoelectric conversion portion 302 becomes smaller, so there is a limit to the improvement of the photoelectric conversion efficiency (Qe).
 In view of this situation, the present inventors intensively studied a configuration that can suppress the absorption of light in the central portion 330 while suppressing the reflection of light at the interface described above, and thereby arrived at the second embodiment of the present disclosure described below. Details of the second embodiment of the present disclosure created by the present inventors will be described in order below.
<4.2 Configuration>
 First, the configuration of the imaging element 100 according to the present embodiment will be described with reference to FIG. 17. FIG. 17 is an explanatory diagram showing a plane of the imaging element 100 according to the present embodiment; specifically, it corresponds to a cross section of the imaging element 100 taken along line A-A' shown in FIG. 2. In the following description, the description of configurations common to the imaging element 100 according to the comparative example described with reference to FIGS. 2 and 3 will be omitted.
 As shown in FIG. 17, in the present embodiment, the pixel 300a and the pixel 300b are separated by a plurality of pixel separation portions (second pixel separation portions) 350 arranged along the column direction (the Y direction in FIG. 17) at the center of the imaging element 100. Specifically, when the imaging element 100 is viewed from above the surface 10b opposite to the light-receiving surface 10a, each pixel separation portion 350 is composed of a substantially circular central portion 352 and an outer peripheral portion 354 covering the outer periphery of the central portion 352.
 More specifically, the pixel separation portion 350 is formed as a substantially circular column penetrating, along the thickness direction of the semiconductor substrate 10, from the surface (second surface) 10b opposite to the light-receiving surface of the semiconductor substrate 10 partway through the semiconductor substrate 10. Then, between the pixel separation portion 350 and the light-receiving surface 10a of the semiconductor substrate 10, above the pixel separation portion 350, an overflow path consisting of a diffusion region 320 containing an impurity of the first conductivity type (for example, n-type) is provided. Furthermore, as shown in FIG. 17, a diffusion region 306 containing an impurity of the second conductivity type (for example, p-type) is formed around the pixel separation portion 350.
 In the present embodiment, the central portion 352 can be formed of, for example, polysilicon, a silicon oxide film, a silicon nitride film, a titanium nitride film, or the like; that is, it is preferably formed of a material whose refractive index is closer to that of the semiconductor substrate 10. Furthermore, in the present embodiment, it is not essential to provide the outer peripheral portion 354, but when it is provided, it can be formed of a silicon oxide film, a silicon nitride film, a titanium nitride film, or the like.
 Furthermore, in the present embodiment, the diameter R of the central portion 352 is preferably small in order to suppress light absorption. However, in the present embodiment, in order to separate the pixel 300a and the pixel 300b by the plurality of pixel separation portions 350 and the diffusion regions 306, the diameter R of the central portion 352, the distance (pitch) P from the center of one pixel separation portion 350 to the center of the adjacent pixel separation portion 350, and the length (diffusion length) D of the diffusion region 306 shown in FIG. 17 preferably satisfy the relationship P ≦ R + D×2. In the present embodiment, it is preferable to appropriately select the number of pixel separation portions 350, the pitch P, and the diameter R of the central portion 352 so as to satisfy this condition, and it is also preferable to appropriately select the diffusion conditions and the like that determine the diffusion length D.
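The pitch condition above can be captured in a small sketch: adjacent separation portions plus their surrounding diffusion regions must close the gap between the pixels. The function name and the sample values are illustrative assumptions, not figures from the specification.

```python
def pixels_separated(pitch_p: float, diameter_r: float, diffusion_d: float) -> bool:
    """True when adjacent pixel separation portions 350 and their surrounding
    diffusion regions 306 leave no gap between pixels 300a and 300b:
    P <= R + 2*D."""
    return pitch_p <= diameter_r + 2.0 * diffusion_d

# Illustrative values (arbitrary units): a 100-unit pitch is closed by a
# 40-unit core plus 35 units of diffusion on each side, but not by 20.
assert pixels_separated(pitch_p=100.0, diameter_r=40.0, diffusion_d=35.0)
assert not pixels_separated(pitch_p=100.0, diameter_r=40.0, diffusion_d=20.0)
```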
 In the present embodiment, the plurality of pixel separation portions 350 are not limited to being arranged along the column direction (the Y direction in FIG. 17) and may be arranged along the row direction (the X direction in FIG. 17). Furthermore, in the present embodiment, the number of pixel separation portions 350 is not limited to three and may be any number of two or more.
 In the imaging element 100 according to the comparative example shown in FIG. 16, the pixel 300a and the pixel 300b were separated by the pixel separation portion 304 having a linear configuration. In the present embodiment, on the other hand, as shown in FIG. 17, the pixel 300a and the pixel 300b are separated by arranging a plurality of substantially circular pixel separation portions 350. In the present embodiment, the configuration shown in FIG. 17 can reduce the amount of the outer peripheral portion 354 formed of the silicon oxide film compared with the comparative example, so that the reflection of light at the interface between the outer peripheral portion 354 and the semiconductor substrate 10, and hence the occurrence of color mixing and the like, can be suppressed.
 Furthermore, according to the present embodiment, the configuration shown in FIG. 17 can reduce the amount of the central portion 352 formed of silicon compared with the comparative example, so that the absorption of light by the central portion 352 can be suppressed. Therefore, according to the present embodiment, the photoelectric conversion efficiency (Qe) of the imaging element 100 can be further improved.
<4.3 Manufacturing method>
 Next, a method for manufacturing the imaging element 100 according to the present embodiment will be described with reference to FIGS. 18A to 18C. FIGS. 18A to 18C are explanatory diagrams showing the method for manufacturing the imaging element 100 according to the present embodiment; specifically, the upper part of each drawing corresponds to the plan view shown in FIG. 17, and the lower part of each drawing corresponds to a cross section of the imaging element 100 taken along line F-F' shown in FIG. 17, with the lower side being the light-receiving surface 10a.
 First, as shown in FIG. 18A, the semiconductor substrate 10 is processed by lithography and dry etching to form trenches (grooves) 610. At this time, the trenches 610 corresponding to the element isolation walls 310 are formed deep and rectangular, and the trenches 610 corresponding to the pixel separation portions 350 are formed shallow and substantially circular.
 Next, as shown in FIG. 18B, a BSG (borosilicate glass) film 362 is formed in the trenches 610, and heat is applied to diffuse boron from the BSG 362 into the semiconductor substrate 10.
Further, as shown in FIG. 18C, the trenches 610 are filled with polysilicon 352, and a further diffusion step is performed. After that, a transistor and wiring formation step, a back-surface processing step, a wafer bonding step, and the like are carried out.
As described above, according to the present embodiment, the pixel separating section 304 that physically separates the pair of pixels 300a and 300b and the diffusion region 306 that electrically separates them, together with the diffusion region 320 and the like that electrically separate the pair of pixels 300a and 300b, improve the accuracy of phase difference detection. Furthermore, since an overflow path is provided in the present embodiment, when the charge of one of the pixels 300a and 300b is about to saturate during normal imaging, the charge is transferred to the other pixel via the overflow path, so that saturation of that pixel can be avoided. Therefore, according to the present embodiment, providing such an overflow path ensures the linearity of the pixel signals output from the image sensor 100 and prevents deterioration of the captured image.
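The linearity-preserving effect of the overflow path can be sketched numerically as follows. The full-well capacity and the incident-charge values are hypothetical illustration numbers, not parameters of the disclosed device, and the spill rule is a simplification of the physical behaviour:

```python
# Illustrative sketch of an overflow path between a pair of pixels.
# FULL_WELL and the charge values are assumed numbers for illustration only.

FULL_WELL = 10000  # assumed full-well capacity per pixel (electrons)

def integrate_with_overflow(charge_a, charge_b, full_well=FULL_WELL):
    """Spill charge exceeding one pixel's capacity into its partner.

    During normal imaging the pair (300a, 300b) is read out as one pixel,
    so moving excess charge to the partner preserves the linearity of the
    summed signal as long as the partner is not itself full.
    """
    overflow_a = max(0, charge_a - full_well)
    overflow_b = max(0, charge_b - full_well)
    a = min(charge_a, full_well) + overflow_b
    b = min(charge_b, full_well) + overflow_a
    # Each pixel is still clipped at its own full well.
    return min(a, full_well), min(b, full_well)

# A defocused spot may illuminate the pair asymmetrically:
a, b = integrate_with_overflow(12000, 3000)
print(a, b)    # 10000 5000
print(a + b)   # 15000 == 12000 + 3000: the summed signal stays linear
```

Without the overflow path, the same input would yield 10000 + 3000 = 13000, i.e. the summed signal would clip early and lose linearity.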
In addition, according to the present embodiment, since the outer peripheral portion 354 formed of the silicon oxide film can be reduced, reflection of light at the interface between the outer peripheral portion 354 and the semiconductor substrate 10 can be suppressed, which in turn suppresses the occurrence of color mixture and the like. Furthermore, since the central portion 352 formed of silicon can be reduced, absorption of light by the central portion 352 can be suppressed. Therefore, according to the present embodiment, the photoelectric conversion efficiency (Qe) of the image sensor 100 can be further improved.
<4.4 Modifications>
 Next, an image sensor 100 according to a modification of the present embodiment will be described with reference to FIGS. 19 and 20. FIGS. 19 and 20 are explanatory diagrams showing plan views of the image sensor 100 according to the modification of the present embodiment.
In this modification, as shown in FIG. 19, the pixel separating sections (third pixel separating sections) 350 may have not only a substantially circular cross section but also a substantially rectangular one. Furthermore, in this modification, the plurality of pixel separating sections 350 may be arranged along the row direction (the X direction in FIG. 19).
Also, in this modification, as shown in FIG. 20, the plurality of pixel separating sections 350 may be arranged along the row direction (the X direction in FIG. 20), and the column formed by the plurality of pixel separating sections 350 need not pass through the center of the image sensor 100.
<<5. Summary>>
 As described above, according to each embodiment of the present disclosure, an element that separates the pair of pixels 300a and 300b during phase difference detection and an element that functions as an overflow path during normal imaging are provided, so that deterioration of the captured image can be avoided while the accuracy of phase difference detection is improved.
Note that although the above-described embodiments of the present disclosure have been described as applied to a back-illuminated CMOS image sensor structure, the embodiments of the present disclosure are not limited to this and may be applied to other structures.
Also, in the above-described embodiments of the present disclosure, the image sensor 100 in which the first conductivity type is n-type, the second conductivity type is p-type, and electrons are used as signal charges has been described, but the embodiments of the present disclosure are not limited to such an example. For example, the present embodiments are applicable to an image sensor 100 in which the first conductivity type is p-type, the second conductivity type is n-type, and holes are used as signal charges.
Also, in the above-described embodiments of the present disclosure, the semiconductor substrate 10 does not necessarily have to be a silicon substrate and may be another substrate (for example, an SOI (Silicon On Insulator) substrate, a SiGe substrate, or the like). The semiconductor substrate 10 may also be one in which a semiconductor structure or the like is formed on such various substrates.
Furthermore, the imaging device 1 according to the embodiments of the present disclosure is not limited to an imaging device that detects the distribution of the amount of incident visible light and captures it as an image. For example, the present embodiments are applicable to an imaging device that captures the distribution of the incident amount of infrared rays, X-rays, particles, or the like as an image, or to an imaging device (physical quantity distribution detection device) such as a fingerprint detection sensor that detects the distribution of another physical quantity, such as pressure or capacitance, and captures it as an image.
Also, the imaging device 1 according to the embodiments of the present disclosure can be manufactured using methods, apparatuses, and conditions used for manufacturing general semiconductor devices. That is, the imaging device 1 according to the present embodiments can be manufactured using existing semiconductor device manufacturing processes.
Examples of the above-described methods include the PVD (Physical Vapor Deposition) method, the CVD (Chemical Vapor Deposition) method, and the ALD (Atomic Layer Deposition) method. Examples of the PVD method include vacuum deposition, EB (electron beam) deposition, various sputtering methods (magnetron sputtering, RF (Radio Frequency)-DC (Direct Current) coupled bias sputtering, ECR (Electron Cyclotron Resonance) sputtering, facing-target sputtering, high-frequency sputtering, and the like), ion plating, laser ablation, molecular beam epitaxy (the MBE (Molecular Beam Epitaxy) method), and laser transfer. Examples of the CVD method include plasma CVD, thermal CVD, metal-organic (MO) CVD, and photo-assisted CVD. Other methods include electrolytic plating, electroless plating, and spin coating; immersion; casting; microcontact printing; drop casting; various printing methods such as screen printing, inkjet printing, offset printing, gravure printing, and flexographic printing; stamping; spraying; and various coating methods such as the air doctor coater, blade coater, rod coater, knife coater, squeeze coater, reverse roll coater, transfer roll coater, gravure coater, kiss coater, cast coater, spray coater, slit orifice coater, and calender coater methods. Furthermore, examples of patterning methods include shadow masking, laser transfer, chemical etching such as photolithography, and physical etching using ultraviolet rays, lasers, and the like. In addition, planarization techniques include the CMP (Chemical Mechanical Polishing) method, the laser planarization method, the reflow method, and the like.
<<6. Example of application to a camera>>
 The technology according to the present disclosure (the present technology) can be applied to a wide variety of products. For example, the technology according to the present disclosure may be applied to a camera or the like. A configuration example of a camera 700 as an electronic apparatus to which the present technology is applied will therefore be described with reference to FIG. 21. FIG. 21 is an explanatory diagram showing an example of a schematic functional configuration of the camera 700 to which the technology according to the present disclosure (the present technology) can be applied.
As shown in FIG. 21, the camera 700 has an imaging device 702, an optical lens 710, a shutter mechanism 712, a drive circuit unit 714, and a signal processing circuit unit 716. The optical lens 710 forms an image of image light (incident light) from a subject on the imaging surface of the imaging device 702, whereby signal charges are accumulated in the image sensor 100 of the imaging device 702 for a certain period. By opening and closing, the shutter mechanism 712 controls the light irradiation period and the light shielding period for the imaging device 702. The drive circuit unit 714 supplies drive signals that control the signal transfer operation of the imaging device 702, the shutter operation of the shutter mechanism 712, and the like; that is, the imaging device 702 performs signal transfer based on the drive signal (timing signal) supplied from the drive circuit unit 714. The signal processing circuit unit 716 performs various kinds of signal processing; for example, it outputs the processed video signal to a storage medium (not shown) such as a memory, or to a display unit (not shown).
<<7. Example of application to a smartphone>>
 The technology according to the present disclosure (the present technology) can be applied to a wide variety of products. For example, the technology according to the present disclosure may be applied to a smartphone or the like. A configuration example of a smartphone 900 as an electronic apparatus to which the present technology is applied will therefore be described with reference to FIG. 22. FIG. 22 is a block diagram showing an example of a schematic functional configuration of the smartphone 900 to which the technology according to the present disclosure (the present technology) can be applied.
As shown in FIG. 22, the smartphone 900 includes a CPU (Central Processing Unit) 901, a ROM (Read Only Memory) 902, and a RAM (Random Access Memory) 903. The smartphone 900 also includes a storage device 904, a communication module 905, and a sensor module 907. Furthermore, the smartphone 900 includes an imaging device 909, a display device 910, a speaker 911, a microphone 912, an input device 913, and a bus 914. The smartphone 900 may also have a processing circuit such as a DSP (Digital Signal Processor) in place of or in addition to the CPU 901.
The CPU 901 functions as an arithmetic processing device and a control device, and controls all or part of the operations within the smartphone 900 according to various programs recorded in the ROM 902, the RAM 903, the storage device 904, or the like. The ROM 902 stores programs, calculation parameters, and the like used by the CPU 901. The RAM 903 temporarily stores programs used in the execution of the CPU 901 and parameters that change as appropriate during that execution. The CPU 901, the ROM 902, and the RAM 903 are interconnected by the bus 914. The storage device 904 is a data storage device configured as an example of a storage unit of the smartphone 900, and is composed of, for example, a magnetic storage device such as an HDD (Hard Disk Drive), a semiconductor storage device, an optical storage device, or the like. The storage device 904 stores programs executed by the CPU 901, various data, and various data acquired from the outside.
The communication module 905 is a communication interface configured with, for example, a communication device for connecting to a communication network 906. The communication module 905 can be, for example, a communication card for a wired or wireless LAN (Local Area Network), Bluetooth (registered trademark), or WUSB (Wireless USB). The communication module 905 may also be a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), a modem for various kinds of communication, or the like. The communication module 905 transmits and receives signals and the like to and from, for example, the Internet and other communication devices using a predetermined protocol such as TCP (Transmission Control Protocol)/IP (Internet Protocol). The communication network 906 connected to the communication module 905 is a wired or wireless network, such as the Internet, a home LAN, infrared communication, or satellite communication.
The sensor module 907 includes various sensors such as, for example, a motion sensor (for example, an acceleration sensor, a gyro sensor, a geomagnetic sensor, or the like), a biological information sensor (for example, a pulse sensor, a blood pressure sensor, a fingerprint sensor, or the like), or a position sensor (for example, a GNSS (Global Navigation Satellite System) receiver or the like).
The imaging device 909 is provided on the surface of the smartphone 900 and can image an object or the like located on the back side or the front side of the smartphone 900. Specifically, the imaging device 909 can be configured to include an image sensor (not shown), such as a CMOS (Complementary MOS) image sensor, to which the technology according to the present disclosure (the present technology) can be applied, and a signal processing circuit (not shown) that performs imaging signal processing on the signal photoelectrically converted by the image sensor. Furthermore, the imaging device 909 can have an optical system mechanism (not shown) composed of an imaging lens, a zoom lens, a focus lens, and the like, and a drive system mechanism (not shown) that controls the operation of the optical system mechanism. The image sensor collects incident light from the object as an optical image, and the signal processing circuit photoelectrically converts the formed optical image pixel by pixel, reads the signal of each pixel as an imaging signal, and performs image processing to acquire a captured image.
The display device 910 is provided on the surface of the smartphone 900 and can be, for example, a display device such as an LCD (Liquid Crystal Display) or an organic EL (Electro Luminescence) display. The display device 910 can display an operation screen, captured images acquired by the above-described imaging device 909, and the like.
The speaker 911 can output to the user, for example, the voice of a call, audio accompanying the video content displayed by the above-described display device 910, and the like.
The microphone 912 can collect, for example, the user's call voice, voice including commands for activating functions of the smartphone 900, and sounds of the environment surrounding the smartphone 900.
The input device 913 is a device operated by the user, such as, for example, a button, a keyboard, a touch panel, or a mouse. The input device 913 includes an input control circuit that generates an input signal based on information input by the user and outputs it to the CPU 901. By operating the input device 913, the user can input various data to the smartphone 900 and instruct it to perform processing operations.
A configuration example of the smartphone 900 has been shown above. Each of the above components may be configured using general-purpose members, or may be configured by hardware specialized for the function of each component. Such a configuration can be changed as appropriate according to the technical level at the time of implementation.
<<8. Example of application to an endoscopic surgery system>>
 The technology according to the present disclosure (the present technology) can be applied to a wide variety of products. For example, the technology according to the present disclosure may be applied to an endoscopic surgery system.
FIG. 23 is a diagram showing an example of a schematic configuration of an endoscopic surgery system to which the technology according to the present disclosure (the present technology) can be applied.
FIG. 23 shows an operator (physician) 11131 performing surgery on a patient 11132 on a patient bed 11133 using an endoscopic surgery system 11000. As illustrated, the endoscopic surgery system 11000 is composed of an endoscope 11100, other surgical instruments 11110 such as a pneumoperitoneum tube 11111 and an energy treatment instrument 11112, a support arm device 11120 that supports the endoscope 11100, and a cart 11200 on which various devices for endoscopic surgery are mounted.
The endoscope 11100 is composed of a lens barrel 11101, a region of a predetermined length from whose distal end is inserted into the body cavity of the patient 11132, and a camera head 11102 connected to the proximal end of the lens barrel 11101. In the illustrated example, the endoscope 11100 is configured as a so-called rigid scope having a rigid lens barrel 11101, but the endoscope 11100 may instead be configured as a so-called flexible scope having a flexible lens barrel.
The distal end of the lens barrel 11101 is provided with an opening into which an objective lens is fitted. A light source device 11203 is connected to the endoscope 11100; light generated by the light source device 11203 is guided to the distal end of the lens barrel 11101 by a light guide extending inside the lens barrel, and is irradiated through the objective lens toward the observation target in the body cavity of the patient 11132. Note that the endoscope 11100 may be a forward-viewing scope, an oblique-viewing scope, or a side-viewing scope.
An optical system and an image sensor are provided inside the camera head 11102, and the reflected light (observation light) from the observation target is focused onto the image sensor by the optical system. The image sensor photoelectrically converts the observation light to generate an electric signal corresponding to the observation light, that is, an image signal corresponding to the observation image. The image signal is transmitted as RAW data to a camera control unit (CCU) 11201.
The CCU 11201 is composed of a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and the like, and comprehensively controls the operations of the endoscope 11100 and a display device 11202. Furthermore, the CCU 11201 receives an image signal from the camera head 11102 and performs on it various kinds of image processing, such as development processing (demosaicing), for displaying an image based on that image signal.
The display device 11202 displays, under the control of the CCU 11201, an image based on the image signal that has undergone image processing by the CCU 11201.
The light source device 11203 is composed of a light source such as, for example, an LED (Light Emitting Diode), and supplies the endoscope 11100 with irradiation light for imaging the surgical site or the like.
The input device 11204 is an input interface for the endoscopic surgery system 11000. Through the input device 11204, the user can input various kinds of information and instructions to the endoscopic surgery system 11000. For example, the user inputs an instruction to change the imaging conditions of the endoscope 11100 (type of irradiation light, magnification, focal length, and the like).
The treatment instrument control device 11205 controls the driving of the energy treatment instrument 11112 for cauterizing or incising tissue, sealing blood vessels, and the like. The pneumoperitoneum device 11206 sends gas into the body cavity of the patient 11132 through the pneumoperitoneum tube 11111 to inflate the body cavity, for the purpose of securing the field of view of the endoscope 11100 and the operator's working space. The recorder 11207 is a device capable of recording various kinds of information regarding the surgery. The printer 11208 is a device capable of printing various kinds of information regarding the surgery in various formats such as text, images, and graphs.
Note that the light source device 11203, which supplies the endoscope 11100 with irradiation light for imaging the surgical site, can be composed of a white light source composed of, for example, an LED, a laser light source, or a combination thereof. When the white light source is composed of a combination of RGB laser light sources, the output intensity and output timing of each color (each wavelength) can be controlled with high accuracy, so that the white balance of the captured image can be adjusted in the light source device 11203. In this case, it is also possible to capture images corresponding to each of R, G, and B in a time-division manner by irradiating the observation target with laser light from each of the RGB laser light sources in a time-division manner and controlling the driving of the image sensor of the camera head 11102 in synchronization with the irradiation timing. According to this method, a color image can be obtained without providing a color filter in the image sensor.
The driving of the light source device 11203 may also be controlled so as to change the intensity of the output light at predetermined time intervals. By controlling the driving of the image sensor of the camera head 11102 in synchronization with the timing of the change in light intensity to acquire images in a time-division manner and synthesizing those images, an image with a high dynamic range, free of so-called blocked-up shadows and blown-out highlights, can be generated.
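The alternating-intensity synthesis described here can be sketched as follows. The 8-bit pixel values, the 4:1 intensity ratio, and the choose-the-valid-exposure fusion rule are illustrative assumptions; the actual synthesis performed in the CCU is not specified in this disclosure:

```python
# Toy high-dynamic-range synthesis from two frames captured under
# strong and weak illumination. Values and fusion rule are illustrative.

SATURATION = 255  # assumed 8-bit saturation level

def fuse_hdr(bright_frame, dark_frame, gain=4):
    """Combine a frame taken under strong light with one taken under
    weak light (1/gain of the intensity). Saturated pixels in the
    bright frame are replaced by scaled-up dark-frame pixels, which
    recovers highlights without losing shadow detail."""
    out = []
    for b, d in zip(bright_frame, dark_frame):
        if b >= SATURATION:      # blown-out: use the weak-light sample
            out.append(d * gain)
        else:                    # valid: keep the strong-light sample
            out.append(b)
    return out  # wider-range linear values, no clipping applied

bright = [10, 120, 255, 255]   # highlights clip at 255
dark   = [2, 30, 80, 100]      # same scene at 1/4 illumination
print(fuse_hdr(bright, dark))  # [10, 120, 320, 400]
```

The dark regions retain the low-noise strong-light samples, while the clipped highlights are reconstructed from the weak-light frame, which corresponds to avoiding both blocked-up shadows and blown-out highlights.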
The light source device 11203 may also be configured to be able to supply light in a predetermined wavelength band corresponding to special light observation. In special light observation, for example, so-called narrow band imaging is performed: exploiting the wavelength dependence of light absorption in body tissue, light of a narrower band than the irradiation light used during normal observation (that is, white light) is irradiated, so that a predetermined tissue such as a blood vessel in the mucosal surface layer is imaged with high contrast. Alternatively, in special light observation, fluorescence observation may be performed, in which an image is obtained from fluorescence generated by irradiation with excitation light. In fluorescence observation, the body tissue can be irradiated with excitation light and the fluorescence from the body tissue observed (autofluorescence observation), or a reagent such as indocyanine green (ICG) can be locally injected into the body tissue, which is then irradiated with excitation light corresponding to the fluorescence wavelength of the reagent to obtain a fluorescence image. The light source device 11203 can be configured to be able to supply narrow band light and/or excitation light corresponding to such special light observation.
FIG. 24 is a block diagram showing an example of the functional configurations of the camera head 11102 and the CCU 11201 shown in FIG. 23.
The camera head 11102 has a lens unit 11401, an imaging unit 11402, a drive unit 11403, a communication unit 11404, and a camera head control unit 11405. The CCU 11201 has a communication unit 11411, an image processing unit 11412, and a control unit 11413. The camera head 11102 and the CCU 11201 are communicably connected to each other by a transmission cable 11400.
The lens unit 11401 is an optical system provided at the connection with the lens barrel 11101. Observation light taken in from the distal end of the lens barrel 11101 is guided to the camera head 11102 and enters the lens unit 11401. The lens unit 11401 is configured by combining a plurality of lenses, including a zoom lens and a focus lens.
The imaging unit 11402 is composed of an image sensor. The imaging unit 11402 may be composed of one image sensor (a so-called single-plate type) or of a plurality of image sensors (a so-called multi-plate type). When the imaging unit 11402 is of the multi-plate type, for example, image signals corresponding to R, G, and B may be generated by the respective image sensors and synthesized to obtain a color image. Alternatively, the imaging unit 11402 may be configured to have a pair of image sensors for acquiring right-eye and left-eye image signals corresponding to 3D (dimensional) display. 3D display enables the operator 11131 to grasp the depth of the living tissue in the surgical site more accurately. Note that when the imaging unit 11402 is of the multi-plate type, a plurality of lens units 11401 may be provided, one for each image sensor.
 The imaging unit 11402 need not necessarily be provided in the camera head 11102. For example, the imaging unit 11402 may be provided inside the lens barrel 11101, immediately behind the objective lens.
 The drive unit 11403 is composed of an actuator and, under control of the camera head control unit 11405, moves the zoom lens and the focus lens of the lens unit 11401 by predetermined distances along the optical axis. The magnification and focus of the image captured by the imaging unit 11402 can thereby be adjusted as appropriate.
 The communication unit 11404 is composed of a communication device for transmitting and receiving various kinds of information to and from the CCU 11201. The communication unit 11404 transmits the image signal obtained from the imaging unit 11402 to the CCU 11201 as RAW data via the transmission cable 11400.
 The communication unit 11404 also receives, from the CCU 11201, a control signal for controlling the driving of the camera head 11102, and supplies it to the camera head control unit 11405. The control signal includes information about imaging conditions, such as information specifying the frame rate of the captured image, information specifying the exposure value at the time of imaging, and/or information specifying the magnification and focus of the captured image.
 The imaging conditions such as the frame rate, exposure value, magnification, and focus may be specified by the user as appropriate, or may be set automatically by the control unit 11413 of the CCU 11201 based on the acquired image signal. In the latter case, the endoscope 11100 is equipped with so-called AE (Auto Exposure), AF (Auto Focus), and AWB (Auto White Balance) functions.
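 As a rough sketch of how an AE function of this kind might adjust the exposure value from the acquired image signal — a minimal illustration, not the actual CCU 11201 implementation; the target luminance, damping exponent, and limits are assumed values:

```python
def auto_exposure_step(mean_luma, exposure, target=0.18, step=1.25,
                       min_exp=1e-5, max_exp=1e-1):
    """One iteration of a simple AE loop: nudge the exposure value toward a
    target mean luminance (luminance normalized to the 0..1 range)."""
    if mean_luma <= 0.0:
        return min(exposure * step, max_exp)  # frame is black: open up
    error = target / mean_luma
    # Damp the correction (square root) so exposure converges
    # over successive frames instead of oscillating.
    new_exposure = exposure * (error ** 0.5)
    return max(min_exp, min(new_exposure, max_exp))
```

Calling this once per frame with the measured mean luminance converges the exposure toward the target under the stated assumptions.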
 The camera head control unit 11405 controls the driving of the camera head 11102 based on the control signal from the CCU 11201 received via the communication unit 11404.
 The communication unit 11411 is composed of a communication device for transmitting and receiving various kinds of information to and from the camera head 11102. The communication unit 11411 receives the image signal transmitted from the camera head 11102 via the transmission cable 11400.
 The communication unit 11411 also transmits, to the camera head 11102, a control signal for controlling the driving of the camera head 11102. The image signal and the control signal can be transmitted by electrical communication, optical communication, or the like.
 The image processing unit 11412 performs various kinds of image processing on the image signal, which is RAW data transmitted from the camera head 11102.
 The control unit 11413 performs various kinds of control related to the imaging of the surgical site and the like by the endoscope 11100 and to the display of the captured image obtained by that imaging. For example, the control unit 11413 generates a control signal for controlling the driving of the camera head 11102.
 The control unit 11413 also causes the display device 11202 to display a captured image showing the surgical site and the like, based on the image signal that has undergone image processing by the image processing unit 11412. In doing so, the control unit 11413 may recognize various objects in the captured image using various image recognition techniques. For example, by detecting the shapes, colors, and the like of the edges of objects included in the captured image, the control unit 11413 can recognize surgical tools such as forceps, specific body sites, bleeding, mist during use of the energy treatment tool 11112, and the like. When causing the display device 11202 to display the captured image, the control unit 11413 may use the recognition results to superimpose various kinds of surgery support information on the image of the surgical site. Superimposing the surgery support information and presenting it to the operator 11131 can reduce the burden on the operator 11131 and allow the operator 11131 to proceed with the surgery more reliably.
 The transmission cable 11400 connecting the camera head 11102 and the CCU 11201 is an electrical signal cable compatible with electrical signal communication, an optical fiber compatible with optical communication, or a composite cable of these.
 In the illustrated example, communication is performed by wire using the transmission cable 11400, but communication between the camera head 11102 and the CCU 11201 may be performed wirelessly.
 An example of an endoscopic surgery system to which the technology according to the present disclosure can be applied has been described above. Among the configurations described above, the technology according to the present disclosure can be applied to, for example, the endoscope 11100, the camera head 11102 (its imaging unit 11402), the CCU 11201 (its image processing unit 11412), and the like.
 Although an endoscopic surgery system has been described here as an example, the technology according to the present disclosure may also be applied to, for example, a microscopic surgery system.
 <<9. Examples of Application to Moving Bodies>>
 The technology according to the present disclosure (the present technology) can be applied to various products. For example, the technology according to the present disclosure may be realized as a device mounted on any type of moving body, such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
 FIG. 25 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a moving body control system to which the technology according to the present disclosure can be applied.
 The vehicle control system 12000 includes a plurality of electronic control units connected via a communication network 12001. In the example shown in FIG. 25, the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, a vehicle exterior information detection unit 12030, a vehicle interior information detection unit 12040, and an integrated control unit 12050. As the functional configuration of the integrated control unit 12050, a microcomputer 12051, an audio/image output unit 12052, and an in-vehicle network I/F (interface) 12053 are illustrated.
 The drive system control unit 12010 controls the operation of devices related to the drive system of the vehicle according to various programs. For example, the drive system control unit 12010 functions as a control device for a driving force generating device for generating the driving force of the vehicle, such as an internal combustion engine or a driving motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like.
 The body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs. For example, the body system control unit 12020 functions as a control device for a keyless entry system, a smart key system, a power window device, or various lamps such as headlamps, back lamps, brake lamps, turn signals, and fog lamps. In this case, radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, can be input to the body system control unit 12020. The body system control unit 12020 accepts the input of these radio waves or signals and controls the door lock device, power window device, lamps, and the like of the vehicle.
 The vehicle exterior information detection unit 12030 detects information outside the vehicle equipped with the vehicle control system 12000. For example, an imaging unit 12031 is connected to the vehicle exterior information detection unit 12030. The vehicle exterior information detection unit 12030 causes the imaging unit 12031 to capture an image outside the vehicle and receives the captured image. Based on the received image, the vehicle exterior information detection unit 12030 may perform object detection processing or distance detection processing for people, vehicles, obstacles, signs, characters on the road surface, and the like.
 The imaging unit 12031 is an optical sensor that receives light and outputs an electrical signal corresponding to the amount of received light. The imaging unit 12031 can output the electrical signal as an image or as distance measurement information. The light received by the imaging unit 12031 may be visible light or invisible light such as infrared rays.
 The vehicle interior information detection unit 12040 detects information inside the vehicle. For example, a driver state detection unit 12041 that detects the state of the driver is connected to the vehicle interior information detection unit 12040. The driver state detection unit 12041 includes, for example, a camera that images the driver, and based on the detection information input from the driver state detection unit 12041, the vehicle interior information detection unit 12040 may calculate the degree of fatigue or concentration of the driver, or may determine whether the driver is dozing off.
 The microcomputer 12051 can calculate control target values for the driving force generating device, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040, and can output control commands to the drive system control unit 12010. For example, the microcomputer 12051 can perform cooperative control aimed at realizing the functions of an ADAS (Advanced Driver Assistance System), including collision avoidance or impact mitigation for the vehicle, following traveling based on the inter-vehicle distance, vehicle speed maintenance traveling, vehicle collision warning, vehicle lane departure warning, and the like.
 The microcomputer 12051 can also perform cooperative control aimed at automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generating device, the steering mechanism, the braking device, and the like based on the information around the vehicle acquired by the vehicle exterior information detection unit 12030 or the vehicle interior information detection unit 12040.
 The microcomputer 12051 can also output control commands to the body system control unit 12020 based on the information outside the vehicle acquired by the vehicle exterior information detection unit 12030. For example, the microcomputer 12051 can perform cooperative control aimed at anti-glare, such as controlling the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030 and switching from high beam to low beam.
 The audio/image output unit 12052 transmits an output signal of at least one of audio and image to an output device capable of visually or audibly notifying the occupants of the vehicle or the outside of the vehicle of information. In the example of FIG. 25, an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are illustrated as output devices. The display unit 12062 may include, for example, at least one of an on-board display and a head-up display.
 FIG. 26 is a diagram showing an example of the installation positions of the imaging unit 12031.
 In FIG. 26, the vehicle 12100 has imaging units 12101, 12102, 12103, 12104, and 12105 as the imaging unit 12031.
 The imaging units 12101, 12102, 12103, 12104, and 12105 are provided, for example, at positions such as the front nose, the side mirrors, the rear bumper, the back door, and the upper part of the windshield inside the vehicle cabin of the vehicle 12100. The imaging unit 12101 provided on the front nose and the imaging unit 12105 provided at the upper part of the windshield inside the vehicle cabin mainly acquire images ahead of the vehicle 12100. The imaging units 12102 and 12103 provided on the side mirrors mainly acquire images of the sides of the vehicle 12100. The imaging unit 12104 provided on the rear bumper or the back door mainly acquires images behind the vehicle 12100. The forward images acquired by the imaging units 12101 and 12105 are mainly used for detecting preceding vehicles, pedestrians, obstacles, traffic lights, traffic signs, lanes, and the like.
 FIG. 26 also shows an example of the imaging ranges of the imaging units 12101 to 12104. The imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 viewed from above can be obtained.
 At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information. For example, at least one of the imaging units 12101 to 12104 may be a stereo camera composed of a plurality of imaging elements, or may be an imaging element having pixels for phase difference detection.
 For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 obtains the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (the relative speed with respect to the vehicle 12100), and can thereby extract, as the preceding vehicle, in particular the closest three-dimensional object on the traveling path of the vehicle 12100 that is traveling in substantially the same direction as the vehicle 12100 at a predetermined speed (for example, 0 km/h or more). Furthermore, the microcomputer 12051 can set an inter-vehicle distance to be secured in advance from the preceding vehicle, and can perform automatic brake control (including following stop control), automatic acceleration control (including following start control), and the like. In this way, cooperative control can be performed aimed at automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation.
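 The extraction rule described above — the closest three-dimensional object on the vehicle's own path, moving in substantially the same direction at or above a speed threshold — can be sketched as follows. The dictionary layout and key names are assumptions for illustration, not the actual on-board data format:

```python
def select_preceding_vehicle(objects, min_speed_kmh=0.0):
    """Pick the closest object on the ego path that moves in roughly the
    same direction at >= min_speed_kmh. Each object is a dict with keys:
    'distance_m', 'speed_kmh', 'on_path' (bool), 'same_direction' (bool)."""
    candidates = [
        o for o in objects
        if o["on_path"]
        and o["same_direction"]
        and o["speed_kmh"] >= min_speed_kmh
    ]
    if not candidates:
        return None  # no preceding vehicle: fall back to speed maintenance
    # The preceding vehicle is the nearest remaining candidate.
    return min(candidates, key=lambda o: o["distance_m"])
```

Under these assumptions, the selected object's distance would then feed the following-distance controller described in the text.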
 For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data on three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects, extract them, and use them for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult to see. The microcomputer 12051 then determines a collision risk indicating the degree of danger of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of collision, it can provide driving assistance for collision avoidance by outputting a warning to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
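 The threshold-based decision described here (warn first, then force deceleration when the collision risk exceeds a set value) might be organized as in the sketch below. The risk measure (inverse time-to-collision) and the two thresholds are illustrative assumptions, not values from the disclosure:

```python
def assess_collision(distance_m, closing_speed_ms,
                     warn_risk=0.2, brake_risk=0.5):
    """Return 'none', 'warn', or 'brake' from a simple inverse
    time-to-collision risk measure (1 / seconds until impact)."""
    if closing_speed_ms <= 0.0 or distance_m <= 0.0:
        return "none"  # not closing in on the obstacle
    risk = closing_speed_ms / distance_m  # 1 / TTC
    if risk >= brake_risk:
        return "brake"  # forced deceleration / avoidance steering
    if risk >= warn_risk:
        return "warn"   # alert via audio speaker 12061 / display unit 12062
    return "none"
```
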
 At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays. For example, the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian is present in the images captured by the imaging units 12101 to 12104. Such pedestrian recognition is performed, for example, by a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on a series of feature points indicating the contour of an object to determine whether or not it is a pedestrian. When the microcomputer 12051 determines that a pedestrian is present in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the audio/image output unit 12052 controls the display unit 12062 so that a rectangular contour line for emphasis is superimposed on the recognized pedestrian. The audio/image output unit 12052 may also control the display unit 12062 so that an icon or the like indicating the pedestrian is displayed at a desired position.
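 The two-step procedure described — feature-point extraction followed by contour pattern matching — could be wired together as in the sketch below. The two helper callables stand in for real detectors and are hypothetical, as is the score threshold; this is a pipeline shape, not the disclosed implementation:

```python
def recognize_pedestrians(frame, extract_features, match_pedestrian,
                          score_threshold=0.8):
    """Pipeline sketch: extract contour feature points from an (infrared)
    frame, then pattern-match each candidate contour against a pedestrian
    template. Returns bounding boxes for matches above the threshold.

    extract_features(frame) -> list of (contour_points, bbox) pairs
    match_pedestrian(contour_points) -> similarity score in 0..1
    """
    boxes = []
    for contour, bbox in extract_features(frame):
        if match_pedestrian(contour) >= score_threshold:
            boxes.append(bbox)  # later emphasized with a rectangle overlay
    return boxes
```

Each returned box would then be passed to the display controller for the superimposed rectangular contour described above.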
 An example of a vehicle control system to which the technology according to the present disclosure can be applied has been described above. Among the configurations described above, the technology according to the present disclosure can be applied to, for example, the imaging unit 12031 and the like.
 <<10. Supplement>>
 Although the preferred embodiments of the present disclosure have been described above in detail with reference to the accompanying drawings, the technical scope of the present disclosure is not limited to these examples. It is obvious that a person having ordinary knowledge in the technical field of the present disclosure can conceive of various changes or modifications within the scope of the technical ideas described in the claims, and it is understood that these also naturally belong to the technical scope of the present disclosure.
 The effects described in this specification are merely explanatory or illustrative, and are not limiting. That is, the technology according to the present disclosure may exhibit other effects that are obvious to those skilled in the art from the description of this specification, together with or instead of the above effects.
 Note that the present technology can also take the following configurations.
(1)
 An imaging device comprising:
 a semiconductor substrate; and
 a plurality of imaging elements arranged in a matrix along a row direction and a column direction on the semiconductor substrate, the imaging elements performing photoelectric conversion on incident light,
 wherein each of the imaging elements includes:
 a plurality of pixels provided adjacent to each other in the semiconductor substrate;
 an element isolation wall surrounding the plurality of pixels and penetrating the semiconductor substrate;
 an on-chip lens provided above a first surface of the semiconductor substrate so as to be shared by the plurality of pixels; and
 a first pixel separation section that separates the plurality of pixels,
 wherein, when viewed from above a second surface of the semiconductor substrate facing the first surface, the first pixel separation section has a narrow portion having a width narrower than a width of the element isolation wall, and
 the narrow portion penetrates partway through the semiconductor substrate from the second surface along a film thickness direction of the semiconductor substrate.
(2)
The imaging device according to (1) above, wherein an overflow path for exchanging saturated charges between the plurality of pixels is provided between the narrow portion and the first surface.
(3)
The imaging device according to (1) or (2) above, wherein a diffusion region containing an impurity is provided around the first pixel separation section.
(4)
 The imaging device according to any one of (1) to (3) above, wherein the first pixel separation section has a plurality of the narrow portions.
(5)
 The imaging device according to any one of (1) to (4) above, wherein, when viewed from above the second surface, the first pixel separation section has a wide portion having a width substantially equal to the width of the element isolation wall, and the wide portion penetrates the semiconductor substrate.
(6)
 The imaging device according to (5) above, wherein the first pixel separation section has a plurality of the wide portions.
(7)
 The imaging device according to (1) above, wherein, when viewed from above the second surface, the first pixel separation section has a width that gradually narrows along the row direction and the column direction.
(8)
 The imaging device according to (1) above, wherein, when viewed from above the second surface, the first pixel separation section has a width that gradually narrows toward the center of the imaging element.
(9)
 The imaging device according to (7) or (8) above, wherein a length of the first pixel separation section from the second surface in the film thickness direction of the semiconductor substrate increases as the width of the first pixel separation section increases.
(10)
 An imaging device comprising:
 a semiconductor substrate; and
 a plurality of imaging elements arranged in a matrix along a row direction and a column direction on the semiconductor substrate, the imaging elements performing photoelectric conversion on incident light,
 wherein each of the imaging elements includes:
 a plurality of pixels provided adjacent to each other in the semiconductor substrate;
 an on-chip lens provided above a first surface of the semiconductor substrate so as to be shared by the plurality of pixels; and
 one or more second pixel separation sections that separate the plurality of pixels,
 wherein, when viewed from above a second surface of the semiconductor substrate facing the first surface, the second pixel separation section has a substantially circular cross section.
(11)
 The imaging device according to (10) above, wherein the second pixel separation section penetrates partway through the semiconductor substrate from the second surface along the film thickness direction of the semiconductor substrate, and an overflow path for exchanging saturated charges between the plurality of pixels is provided between the second pixel separation section and the first surface.
(12)
 The imaging device according to (10) or (11) above, wherein a diffusion region containing an impurity is provided around the second pixel separation section.
(13)
 The imaging device according to any one of (10) to (12) above, wherein the second pixel separation section is made of any one material selected from silicon, a silicon oxide film, a silicon nitride film, and a titanium nitride film.
(14)
The imaging device according to any one of (10) to (13) above, wherein the second pixel separation section has an outer peripheral portion that covers the outer periphery of the second pixel separation section.
(15)
The imaging device according to (14) above, wherein the outer peripheral portion is made of any one material selected from a silicon oxide film, a silicon nitride film, and a titanium nitride film.
(16)
 The imaging device according to any one of (10) to (15) above, wherein the plurality of second pixel separation sections are provided so as to be arranged along the row direction and the column direction when viewed from above the second surface.
(17)
The imaging element further includes one or more third pixel separation sections each having a substantially rectangular cross section when viewed from above the second surface, and
the third pixel separation section extends partway into the semiconductor substrate from the second surface along the thickness direction of the semiconductor substrate.
The imaging device according to any one of (10) to (16) above.
(18)
A method of manufacturing an imaging device, comprising:
etching a semiconductor substrate using a mask having rectangular openings of different widths; and
embedding an insulating film in the trenches formed by the etching.
(19)
An electronic apparatus including an imaging device having a semiconductor substrate and a plurality of imaging elements arranged in a matrix along a row direction and a column direction on the semiconductor substrate, the imaging elements performing photoelectric conversion on incident light,
wherein each of the imaging elements includes:
a plurality of pixels provided adjacent to each other in the semiconductor substrate;
an element isolation wall surrounding the plurality of pixels and penetrating the semiconductor substrate;
an on-chip lens provided above a first surface of the semiconductor substrate so as to be shared by the plurality of pixels; and
a first pixel separation section that separates the plurality of pixels,
wherein, when viewed from above a second surface of the semiconductor substrate facing the first surface, the first pixel separation section has a narrow portion whose width is narrower than the width of the element isolation wall, and
the narrow portion extends partway into the semiconductor substrate from the second surface along the thickness direction of the semiconductor substrate.
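Aspects (2) and (11) above describe an overflow path through which saturated charge is exchanged between the pixels that share one on-chip lens. The following is a minimal behavioral sketch of that idea, not part of the disclosure: the full-well value, the function name, and the simple spill rule are assumptions for illustration only.

```python
# Illustrative sketch (assumption, not from the patent): an overflow path
# between two adjacent pixels under a shared on-chip lens. When one
# photodiode exceeds its full-well capacity, the excess (saturated) charge
# spills through the overflow path into the neighboring pixel instead of
# being lost or blooming into unrelated pixels.

FULL_WELL = 10000  # electrons per pixel (assumed value)

def apply_overflow(q_a: int, q_b: int, full_well: int = FULL_WELL) -> tuple[int, int]:
    """Move charge above full_well from one pixel to the other via the path."""
    if q_a > full_well:
        spill = q_a - full_well
        q_a, q_b = full_well, q_b + spill
    if q_b > full_well:
        spill = q_b - full_well
        q_b, q_a = full_well, q_a + spill
    return q_a, q_b

if __name__ == "__main__":
    # 2000 excess electrons spill from pixel A into pixel B.
    print(apply_overflow(12000, 3000))
```

Under this model the summed charge of the pixel pair is preserved, which is the property the overflow path is meant to provide for the shared-lens phase-detection pixels.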
Reference Signs List
  1  imaging device
  10  semiconductor substrate
  10a, 10b  surfaces
  30  pixel array section
  32  vertical drive circuit section
  34  column signal processing circuit section
  36  horizontal drive circuit section
  38  output circuit section
  40  control circuit section
  42  pixel drive wiring
  44  vertical signal line
  46  horizontal signal line
  48  input/output terminal
  100  imaging element
  200  on-chip lens
  202  color filter
  204  light-shielding section
  300a, 300b  pixels
  302  photoelectric conversion section
  304, 350  pixel separation sections
  304a  narrow portion
  304b  wide portion
  306, 308, 320  diffusion regions
  310  element isolation wall
  330, 352  central portions
  332, 354  outer peripheral portions
  400a, 400b  transfer gates
  402  gate electrode
  500  region
  600  mask
  610  trench
  612  BSG

Claims (19)

  1.  An imaging device comprising:
     a semiconductor substrate; and
     a plurality of imaging elements arranged in a matrix along a row direction and a column direction on the semiconductor substrate, the imaging elements performing photoelectric conversion on incident light,
     wherein each of the imaging elements includes:
     a plurality of pixels provided adjacent to each other in the semiconductor substrate;
     an element isolation wall surrounding the plurality of pixels and penetrating the semiconductor substrate;
     an on-chip lens provided above a first surface of the semiconductor substrate so as to be shared by the plurality of pixels; and
     a first pixel separation section that separates the plurality of pixels,
     wherein, when viewed from above a second surface of the semiconductor substrate facing the first surface, the first pixel separation section has a narrow portion whose width is narrower than the width of the element isolation wall, and
     the narrow portion extends partway into the semiconductor substrate from the second surface along the thickness direction of the semiconductor substrate.
  2.  The imaging device according to claim 1, wherein an overflow path for exchanging saturated charges between the plurality of pixels is provided between the narrow portion and the first surface.
  3.  The imaging device according to claim 1, wherein a diffusion region containing an impurity is provided around the first pixel separation section.
  4.  The imaging device according to claim 1, wherein the first pixel separation section has a plurality of the narrow portions.
  5.  The imaging device according to claim 1, wherein, when viewed from above the second surface, the first pixel separation section has a wide portion whose width is substantially equal to the width of the element isolation wall, and the wide portion penetrates the semiconductor substrate.
  6.  The imaging device according to claim 5, wherein the first pixel separation section has a plurality of the wide portions.
  7.  The imaging device according to claim 1, wherein, when viewed from above the second surface, the first pixel separation section has a width that gradually narrows along the row direction and the column direction.
  8.  The imaging device according to claim 1, wherein, when viewed from above the second surface, the first pixel separation section has a width that gradually narrows toward the center of the imaging element.
  9.  The imaging device according to claim 7, wherein the length of the first pixel separation section from the second surface in the thickness direction of the semiconductor substrate increases as the width of the first pixel separation section increases.
  10.  An imaging device comprising:
     a semiconductor substrate; and
     a plurality of imaging elements arranged in a matrix along a row direction and a column direction on the semiconductor substrate, the imaging elements performing photoelectric conversion on incident light,
     wherein each of the imaging elements includes:
     a plurality of pixels provided adjacent to each other in the semiconductor substrate;
     an on-chip lens provided above a first surface of the semiconductor substrate so as to be shared by the plurality of pixels; and
     one or more second pixel separation sections that separate the plurality of pixels,
     wherein, when viewed from above a second surface of the semiconductor substrate facing the first surface, the second pixel separation section has a substantially circular cross section.
  11.  The imaging device according to claim 10, wherein the second pixel separation section extends partway into the semiconductor substrate from the second surface along the thickness direction of the semiconductor substrate, and an overflow path for exchanging saturated charges between the plurality of pixels is provided between the second pixel separation section and the first surface.
  12.  The imaging device according to claim 10, wherein a diffusion region containing an impurity is provided around the second pixel separation section.
  13.  The imaging device according to claim 10, wherein the second pixel separation section is made of any one material selected from silicon, a silicon oxide film, a silicon nitride film, and a titanium nitride film.
  14.  The imaging device according to claim 10, wherein the second pixel separation section has an outer peripheral portion covering the outer periphery of the second pixel separation section.
  15.  The imaging device according to claim 14, wherein the outer peripheral portion is made of any one material selected from a silicon oxide film, a silicon nitride film, and a titanium nitride film.
  16.  The imaging device according to claim 10, wherein the plurality of second pixel separation sections are arranged along the row direction and the column direction when viewed from above the second surface.
  17.  The imaging device according to claim 10, wherein the imaging element further includes one or more third pixel separation sections each having a substantially rectangular cross section when viewed from above the second surface, and the third pixel separation section extends partway into the semiconductor substrate from the second surface along the thickness direction of the semiconductor substrate.
  18.  A method of manufacturing an imaging device, comprising:
     etching a semiconductor substrate using a mask having rectangular openings of different widths; and
     embedding an insulating film in the trenches formed by the etching.
  19.  An electronic apparatus including an imaging device having a semiconductor substrate and a plurality of imaging elements arranged in a matrix along a row direction and a column direction on the semiconductor substrate, the imaging elements performing photoelectric conversion on incident light,
     wherein each of the imaging elements includes:
     a plurality of pixels provided adjacent to each other in the semiconductor substrate;
     an element isolation wall surrounding the plurality of pixels and penetrating the semiconductor substrate;
     an on-chip lens provided above a first surface of the semiconductor substrate so as to be shared by the plurality of pixels; and
     a first pixel separation section that separates the plurality of pixels,
     wherein, when viewed from above a second surface of the semiconductor substrate facing the first surface, the first pixel separation section has a narrow portion whose width is narrower than the width of the element isolation wall, and
     the narrow portion extends partway into the semiconductor substrate from the second surface along the thickness direction of the semiconductor substrate.
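Claim 18 forms trenches of different depths in a single etch step by varying the mask opening width, consistent with claim 9, where the separation depth increases with its width. A common reason a single etch behaves this way is RIE lag (aspect-ratio-dependent etching), in which narrow openings etch more slowly than wide ones. The following sketch is an assumption for illustration only: the saturating depth model, function name, and all parameter values are invented, not taken from the patent.

```python
# Illustrative sketch (assumption, not from the patent): a simple RIE-lag
# model in which the trench depth reached in one etch step increases with
# the mask opening width. With such behavior, one mask whose openings have
# different widths can simultaneously form full-depth element isolation
# walls (wide openings) and partial-depth pixel separation sections
# (narrow openings).

def trench_depth(opening_width_nm: float,
                 max_depth_nm: float = 3000.0,
                 half_width_nm: float = 100.0) -> float:
    """Depth reached by the etch for a given mask opening width.

    Saturating model: depth approaches max_depth_nm as the opening widens,
    and narrower openings etch proportionally shallower (RIE lag).
    """
    return max_depth_nm * opening_width_nm / (opening_width_nm + half_width_nm)

if __name__ == "__main__":
    for width in (50.0, 100.0, 200.0, 400.0):
        print(f"opening {width:6.1f} nm -> depth {trench_depth(width):7.1f} nm")
```

The monotonic width-to-depth relationship is the only property this sketch relies on; real etch behavior depends on the chemistry, pattern density, and tool.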
PCT/JP2022/005777 2021-03-29 2022-02-15 Imaging device, method for manufacturing imaging device, and electronic apparatus WO2022209365A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021055479A JP2022152637A (en) 2021-03-29 2021-03-29 Imaging device, method for manufacturing imaging device, and electronic apparatus
JP2021-055479 2021-03-29

Publications (1)

Publication Number Publication Date
WO2022209365A1 true WO2022209365A1 (en) 2022-10-06

Family

ID=83458361

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/005777 WO2022209365A1 (en) 2021-03-29 2022-02-15 Imaging device, method for manufacturing imaging device, and electronic apparatus

Country Status (2)

Country Link
JP (1) JP2022152637A (en)
WO (1) WO2022209365A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017212351A (en) * 2016-05-26 2017-11-30 キヤノン株式会社 Imaging device
WO2020013130A1 (en) * 2018-07-10 2020-01-16 ソニーセミコンダクタソリューションズ株式会社 Solid-state imaging device, and electronic device
JP2020068289A (en) * 2018-10-24 2020-04-30 キヤノン株式会社 Photoelectric conversion device, imaging system, mobile body, and semiconductor chip for lamination
WO2020095689A1 (en) * 2018-11-06 2020-05-14 ソニーセミコンダクタソリューションズ株式会社 Imaging element and electronic apparatus

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017212351A (en) * 2016-05-26 2017-11-30 キヤノン株式会社 Imaging device
WO2020013130A1 (en) * 2018-07-10 2020-01-16 ソニーセミコンダクタソリューションズ株式会社 Solid-state imaging device, and electronic device
JP2020068289A (en) * 2018-10-24 2020-04-30 キヤノン株式会社 Photoelectric conversion device, imaging system, mobile body, and semiconductor chip for lamination
WO2020095689A1 (en) * 2018-11-06 2020-05-14 ソニーセミコンダクタソリューションズ株式会社 Imaging element and electronic apparatus

Also Published As

Publication number Publication date
JP2022152637A (en) 2022-10-12

Similar Documents

Publication Publication Date Title
WO2019159711A1 (en) Imaging element
TWI814902B (en) camera device
WO2021193254A1 (en) Imaging device and electronic apparatus
WO2021193915A1 (en) Imaging device and electronic apparatus
WO2019092988A1 (en) Imaging element and imaging device
WO2019003681A1 (en) Solid-state image capture element and image capture device
WO2020137203A1 (en) Imaging element and imaging device
WO2021125116A1 (en) Light-receiving device
WO2021124974A1 (en) Imaging device
WO2019239754A1 (en) Solid-state imaging element, method for manufacturing solid-state imaging element, and electronic device
WO2022220084A1 (en) Imaging device
WO2022209365A1 (en) Imaging device, method for manufacturing imaging device, and electronic apparatus
JP2019165066A (en) Imaging element and electronic device
WO2019176302A1 (en) Imaging element and method for manufacturing imaging element
WO2023047632A1 (en) Imaging device and electronic apparatus
US20200286936A1 (en) Semiconductor device and manufacturing method of semiconductor device
WO2023017650A1 (en) Imaging device and electronic apparatus
WO2023119840A1 (en) Imaging element, method for manufacturing imaging element, and electronic device
WO2023068172A1 (en) Imaging device
JP7275125B2 (en) Image sensor, electronic equipment
WO2023021740A1 (en) Imaging element, imaging device and production method
WO2023042462A1 (en) Light detecting device, method for manufacturing light detecting device, and electronic instrument
JP7316340B2 (en) Solid-state imaging device and electronic equipment
US20240170509A1 (en) Light receiving element, photodetector, electronic apparatus, and method of manufacturing light receiving element
WO2022124131A1 (en) Light-receiving element, light-receiving device, and electronic apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22779592

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22779592

Country of ref document: EP

Kind code of ref document: A1