WO2021166672A1 - Imaging apparatus and electronic device - Google Patents

Imaging apparatus and electronic device

Info

Publication number
WO2021166672A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
light
lens
unit
photoelectric conversion
Prior art date
Application number
PCT/JP2021/004221
Other languages
French (fr)
Japanese (ja)
Inventor
Masahiko Nakamizo
Atsushi Yamamoto
Kensaku Maeda
Original Assignee
Sony Semiconductor Solutions Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Semiconductor Solutions Corporation
Priority to JP2022501786A (publication JPWO2021166672A1)
Publication of WO2021166672A1

Classifications

    • G02B3/00 Simple or compound lenses
    • G02B5/00 Optical elements other than lenses
    • G02B7/02 Mountings, adjusting means, or light-tight connections, for lenses
    • G02B7/04 Mountings for lenses with mechanism for focusing or varying magnification
    • G03B11/00 Filters or other obturators specially adapted for photographic purposes
    • G03B17/02 Details of cameras or camera bodies; bodies
    • H04N25/70 SSIS architectures; circuits associated therewith

Definitions

  • This technology relates to an imaging device and an electronic device, and in particular to an imaging device and an electronic device that achieve a smaller, lower-profile device configuration while suppressing the occurrence of flare and ghosting during imaging.
  • In a typical device configuration, the lens and the solid-state image sensor are brought close to each other on the optical axis, and the infrared cut filter is arranged near the lens.
  • A technique has also been proposed that realizes miniaturization of the solid-state image sensor by forming the lens that is the lowest layer of a lens group composed of a plurality of lenses on the solid-state image sensor.
  • However, when the lowest-layer lens is formed on the solid-state image sensor, although this contributes to miniaturization and a lower device profile, the distance between the infrared cut filter and the lens becomes shorter, and flare and ghosting caused by internal stray-light reflection occur.
  • This disclosure has been made in view of such a situation, and in particular makes it possible, in a solid-state image sensor, to realize miniaturization and a low profile while suppressing the occurrence of flare and ghosting.
  • The imaging device according to one aspect of the present technology includes photoelectric conversion units that generate pixel signals by photoelectric conversion according to the amount of incident light, and a light-receiving surface in which a plurality of the photoelectric conversion units are arranged in an array.
  • It further includes a lens group composed of a plurality of lenses that focus the incident light, with the lens forming the lowest layer of the lens group with respect to the incident direction of the light arranged in a fixed state.
  • A color filter is provided between the lowest-layer lens and the light-receiving surface, and the color filters arranged on a plurality of adjacent photoelectric conversion units are of the same color.
  • The electronic device according to one aspect of the present technology includes an image pickup device having photoelectric conversion units that generate pixel signals by photoelectric conversion according to the amount of incident light, and a light-receiving surface in which a plurality of the photoelectric conversion units are arranged in an array.
  • The image pickup device further has a lens group composed of a plurality of lenses that focus the incident light, with the lens forming the lowest layer of the lens group with respect to the incident direction of the light arranged in a fixed state.
  • A color filter is provided between the lowest-layer lens and the light-receiving surface, the color filters arranged on a plurality of adjacent photoelectric conversion units are of the same color, and the electronic device includes a processing unit that processes signals from the image pickup device.
  • In one aspect of the present technology, photoelectric conversion units that generate pixel signals by photoelectric conversion according to the amount of incident light, and a light-receiving surface in which a plurality of the photoelectric conversion units are arranged in an array, are provided.
  • A lens group composed of a plurality of lenses that focus the incident light, and the lens forming the lowest layer of the lens group with respect to the incident direction of the light, are arranged in a fixed state.
  • A color filter is provided between the lowest-layer lens and the light-receiving surface. Further, the color filters arranged on a plurality of adjacent photoelectric conversion units are of the same color.
  • The electronic device according to one aspect of the present technology is configured to include the image pickup device.
  • The imaging device and the electronic device may be independent devices or internal blocks constituting one device.
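  • The arrangement above, in which color filters of the same color cover a plurality of adjacent photoelectric conversion units, is the kind of layout often combined by same-color binning. As a hedged illustration only (the array shape, function name, and values below are assumptions for this sketch, not part of the patent), each 2x2 same-color block can be averaged into one output value:

```python
# Sketch: average each 2x2 same-color pixel group (e.g., a quad layout
# where four adjacent photodiodes share one filter color) into one value.
# Shapes and names are illustrative assumptions, not from the patent.
def bin_2x2(raw):
    """raw: H x W list of rows, H and W even. Returns H/2 x W/2 binned frame."""
    h, w = len(raw), len(raw[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            block = raw[y][x] + raw[y][x + 1] + raw[y + 1][x] + raw[y + 1][x + 1]
            row.append(block / 4.0)  # average of the four same-color pixels
        out.append(row)
    return out

# Example: a 4x4 raw frame containing four 2x2 same-color blocks
frame = [
    [10, 12, 20, 22],
    [14, 16, 24, 26],
    [30, 32, 40, 42],
    [34, 36, 44, 46],
]
print(bin_2x2(frame))  # [[13.0, 23.0], [33.0, 43.0]]
```

Binning like this trades resolution for sensitivity; the same-color grouping is what makes the simple 2x2 average meaningful.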
  • Brief description of the drawings: a figure showing the configuration of an example of an electronic device; a figure showing an example of the schematic configuration of an endoscopic surgery system; a block diagram showing an example of the functional configuration of a camera head and a CCU; a block diagram showing an example of the schematic configuration of a vehicle control system; and an explanatory drawing showing an example of the installation positions of the vehicle-exterior information detection unit and the imaging unit.
  • FIG. 1 is a side sectional view of the image pickup apparatus.
  • The image pickup device 1 of FIG. 1 is composed of a solid-state image sensor 11, a glass substrate 12, an IRCF (infrared cut filter) 14, a lens group 16, a circuit board 17, an actuator 18, a connector 19, and a spacer 20.
  • The solid-state image sensor 11 is an image sensor of the so-called CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge Coupled Device) type, and is fixed on the circuit board 17 in an electrically connected state.
  • The solid-state image sensor 11 is composed of a plurality of pixels arranged in an array. Light focused through the lens group 16 enters the pixel unit from the upper part of the drawing; the sensor generates a pixel signal corresponding to the amount of incident light and outputs it as an image signal from the connector 19 to the outside via the circuit board 17.
  • A glass substrate 12 is provided on the upper surface of the solid-state image sensor 11 in FIG. 1, and is attached with a transparent adhesive (GLUE) 13 having a refractive index substantially the same as that of the glass substrate 12.
  • The IRCF 14, which cuts infrared light out of the incident light, is provided on the upper surface of the glass substrate 12 in FIG. 1, and is attached with a transparent adhesive (GLUE) 15 having a refractive index substantially the same as that of the glass substrate 12.
  • The IRCF 14 is made of, for example, blue plate glass, and cuts (removes) infrared light.
  • The solid-state image sensor 11, the glass substrate 12, and the IRCF 14 are laminated and bonded by the transparent adhesives 13 and 15 to form an integral structure, and are connected to the circuit board 17.
  • The solid-state image sensor 11, the glass substrate 12, and the IRCF 14, surrounded by the dash-dotted line in the figure, are bonded and integrated with the adhesives 13 and 15, which have substantially the same refractive index; hereinafter, this assembly is also referred to simply as the integrated structure portion 10.
  • The IRCF 14 may be attached onto the glass substrate 12 after the solid-state image sensors 11 have been singulated in the manufacturing process, or a large-format IRCF 14 may be attached to the entire upper surface of a wafer-level glass substrate 12 covering a plurality of solid-state image sensors 11 before singulation into individual sensors 11; either method may be adopted.
  • A spacer 20 is formed on the circuit board 17 so as to surround the integrally configured solid-state image sensor 11, glass substrate 12, and IRCF 14. An actuator 18 is provided on the spacer 20.
  • The actuator 18 has a cylindrical shape and houses the lens group 16, formed by stacking a plurality of lenses, inside the cylinder; it drives the lens group 16 in the vertical direction in FIG. 1.
  • By moving the lens group 16 in the vertical direction (back and forth along the optical axis) in FIG. 1, the actuator 18 adjusts the focus so that an image of the subject is formed on the imaging surface of the solid-state image sensor 11, thereby realizing autofocus.
  • FIG. 2 shows a schematic view of the appearance of the integrated structure portion 10.
  • The integrated structure portion 10 shown in FIG. 2 is a semiconductor package in which the solid-state image sensor 11, made of a laminated substrate in which a lower substrate 25 and an upper substrate 26 are stacked, is packaged.
  • An R (red), G (green), or B (blue) color filter 27 and an on-chip lens 28 are formed on the upper surface of the upper substrate 26. Further, the upper substrate 26 is connected to the glass substrate 12 for protecting the on-chip lens 28 in a cavityless structure via an adhesive 13 made of a glass seal resin.
  • On the upper substrate 26, a pixel region 21 in which pixel portions that perform photoelectric conversion are arranged two-dimensionally in an array, and a control circuit 22 that controls the pixel portions, are formed.
  • A logic circuit 23, such as a signal processing circuit that processes the pixel signals output from the pixel portions, is formed on the lower substrate 25.
  • Alternatively, only the pixel region 21 may be formed on the upper substrate 26, and both the control circuit 22 and the logic circuit 23 may be formed on the lower substrate 25.
  • By forming the logic circuit 23, or both the control circuit 22 and the logic circuit 23, on the lower substrate 25, separate from the upper substrate 26 carrying the pixel region 21, and laminating the two into one semiconductor substrate, the size of the image pickup apparatus 1 can be reduced compared with arranging the pixel region 21, the control circuit 22, and the logic circuit 23 side by side in the plane direction.
  • In the following, the upper substrate 26 on which at least the pixel region 21 is formed is referred to as the pixel sensor substrate 26, and the lower substrate 25 on which at least the logic circuit 23 is formed is referred to as the logic substrate 25.
  • FIG. 4 shows a configuration example of the solid-state image sensor 11.
  • The solid-state image sensor 11 includes a pixel array unit 33 in which pixels 32 are arranged in a two-dimensional array, a vertical drive circuit 34, a column signal processing circuit 35, a horizontal drive circuit 36, an output circuit 37, a control circuit 38, and an input/output terminal 39.
  • The pixel 32 includes a photodiode as a photoelectric conversion element and a plurality of pixel transistors. An example of the circuit configuration of the pixel 32 will be described later with reference to FIG. 5.
  • The pixel 32 may also have a shared pixel structure.
  • The shared-pixel structure is composed of a plurality of photodiodes, a plurality of transfer transistors, one shared floating diffusion (floating diffusion region), and one shared set of the other pixel transistors. That is, in a shared pixel, the photodiodes and transfer transistors of a plurality of unit pixels share the other pixel transistors.
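  • The idea above, several photodiodes time-multiplexing one floating diffusion and one readout chain, can be sketched as a rough software model. The class, sizes, and values below are illustrative assumptions, not circuitry from the patent:

```python
# Sketch: four photodiodes sharing one floating diffusion (FD) and one
# readout chain; each transfer transistor moves its photodiode's charge
# to the shared FD in turn. Purely illustrative.
class SharedPixel:
    def __init__(self, n_pd=4):
        self.pd = [0] * n_pd   # charge accumulated in each photodiode
        self.fd = 0            # the single shared floating diffusion

    def expose(self, photons):
        for i, p in enumerate(photons):
            self.pd[i] += p

    def read(self, i):
        """Reset the FD, transfer photodiode i's charge, return its level."""
        self.fd = 0            # reset transistor clears the shared FD
        self.fd = self.pd[i]   # transfer transistor i moves the charge
        self.pd[i] = 0
        return self.fd

px = SharedPixel()
px.expose([100, 50, 75, 25])
print([px.read(i) for i in range(4)])  # [100, 50, 75, 25]
```

The point of the structure is visible in the model: only the photodiodes and transfer steps are per-unit-pixel; the FD and everything downstream are shared.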
  • The control circuit 38 receives an input clock and data instructing the operation mode and the like, and outputs data such as internal information of the solid-state image sensor 11. That is, based on the vertical synchronization signal, the horizontal synchronization signal, and the master clock, the control circuit 38 generates clock signals and control signals that serve as references for the operation of the vertical drive circuit 34, the column signal processing circuit 35, the horizontal drive circuit 36, and the like, and outputs them to those circuits.
  • The vertical drive circuit 34 is composed of, for example, a shift register; it selects a predetermined pixel drive wiring 40, supplies a pulse for driving the pixels 32 to the selected pixel drive wiring 40, and drives the pixels 32 row by row. That is, the vertical drive circuit 34 selectively scans the pixels 32 of the pixel array unit 33 sequentially in the vertical direction row by row, and supplies pixel signals, based on the signal charge generated in the photoelectric conversion unit of each pixel 32 according to the amount of received light, to the column signal processing circuit 35 through the vertical signal lines 41.
  • A column signal processing circuit 35 is arranged for each column of pixels 32, and performs signal processing such as noise removal, column by column, on the signals output from one row of pixels 32.
  • For example, the column signal processing circuit 35 performs signal processing such as CDS (Correlated Double Sampling), which removes fixed pattern noise peculiar to each pixel, and AD conversion.
  • The horizontal drive circuit 36 is composed of, for example, a shift register; by sequentially outputting horizontal scanning pulses, it selects each of the column signal processing circuits 35 in turn and causes each of them to output its pixel signal to the horizontal signal line 42.
  • the output circuit 37 performs signal processing on the signals sequentially supplied from each of the column signal processing circuits 35 through the horizontal signal line 42 and outputs the signals.
  • The output circuit 37 may, for example, perform only buffering, or may perform black level adjustment, column variation correction, various digital signal processing, and the like.
  • the input / output terminal 39 exchanges signals with the outside.
  • The solid-state image sensor 11 configured as described above is a CMOS image sensor of the so-called column AD type, in which a column signal processing circuit 35 that performs CDS processing and AD conversion processing is arranged for each pixel column.
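  • The CDS operation mentioned above samples each pixel twice, once right after reset and once after charge transfer, and subtracts the two, cancelling the offset that is common to both samples. A minimal numerical sketch (the values and names are illustrative assumptions, not figures from the patent):

```python
# Sketch: correlated double sampling (CDS). Subtracting the reset-level
# sample from the signal-level sample removes each pixel's fixed offset.
def cds(reset_levels, signal_levels):
    return [s - r for r, s in zip(reset_levels, signal_levels)]

# Pixel-specific fixed offsets appear in both samples and cancel out:
offsets = [5.0, -3.0, 7.5]                              # per-pixel offsets
true_signal = [100.0, 100.0, 100.0]                     # uniform illumination
reset = list(offsets)                                   # reset sample = offset
signal = [o + s for o, s in zip(offsets, true_signal)]  # signal sample
print(cds(reset, signal))  # [100.0, 100.0, 100.0]
```

Without CDS, the per-pixel offsets would appear directly in the image as fixed pattern noise; with it, only the true signal remains.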
  • FIG. 5 shows an equivalent circuit of pixels 32.
  • The pixel 32 shown in FIG. 5 has a configuration that realizes an electronic global shutter function.
  • The pixel 32 includes a photodiode 51 as a photoelectric conversion element, a first transfer transistor 52, a memory unit (MEM) 53, a second transfer transistor 54, an FD (floating diffusion region) 55, a reset transistor 56, an amplification transistor 57, a selection transistor 58, and a discharge transistor 59.
  • the photodiode 51 is a photoelectric conversion unit that generates and stores an electric charge (signal charge) according to the amount of received light.
  • the anode terminal of the photodiode 51 is grounded, and the cathode terminal is connected to the memory unit 53 via the first transfer transistor 52. Further, the cathode terminal of the photodiode 51 is also connected to a discharge transistor 59 for discharging unnecessary electric charges.
  • the memory unit 53 is a charge holding unit that temporarily holds the electric charge until the electric charge is transferred to the FD 55.
  • When the second transfer transistor 54 is turned on by the transfer signal TRG, it reads out the charge held in the memory unit 53 and transfers it to the FD 55.
  • The FD 55 is a charge holding unit that holds the charge read out from the memory unit 53 so that it can be read as a signal.
  • When the reset transistor 56 is turned on by the reset signal RST, the charge stored in the FD 55 is discharged to the constant voltage source VDD, resetting the potential of the FD 55.
  • The amplification transistor 57 outputs a pixel signal according to the potential of the FD 55. That is, the amplification transistor 57 forms a source follower circuit with the load MOS 60, which serves as a constant current source, and a pixel signal indicating a level corresponding to the charge stored in the FD 55 is output from the amplification transistor 57 to the column signal processing circuit 35 (FIG. 4) via the selection transistor 58.
  • the load MOS 60 is arranged in, for example, the column signal processing circuit 35.
  • the selection transistor 58 is turned on when the pixel 32 is selected by the selection signal SEL, and outputs the pixel signal of the pixel 32 to the column signal processing circuit 35 via the vertical signal line 41.
  • When the discharge transistor 59 is turned on by the discharge signal OFG, it discharges the unnecessary charge stored in the photodiode 51 to the constant voltage source VDD.
  • the transfer signals TRX and TRG, the reset signal RST, the discharge signal OFG, and the selection signal SEL are supplied from the vertical drive circuit 34 via the pixel drive wiring 40.
  • When imaging starts, a high-level discharge signal OFG is supplied to the discharge transistors 59 to turn them on in all pixels, the charge stored in each photodiode 51 is discharged to the constant voltage source VDD, and the photodiodes 51 of all pixels are reset.
  • When a predetermined exposure time has elapsed, the first transfer transistor 52 is turned on by the transfer signal TRX in all the pixels of the pixel array unit 33, and the charge accumulated in each photodiode 51 is transferred to the memory unit 53.
  • After that, the charges held in the memory units 53 of the pixels 32 are read out sequentially to the column signal processing circuit 35 row by row.
  • In the read operation, the second transfer transistor 54 of each pixel 32 in the row being read is turned on by the transfer signal TRG, and the charge held in the memory unit 53 is transferred to the FD 55.
  • Then, when the selection transistor 58 is turned on by the selection signal SEL, a signal indicating the level corresponding to the charge stored in the FD 55 is output from the amplification transistor 57 to the column signal processing circuit 35 via the selection transistor 58.
  • In this way, the exposure time is the same for all pixels of the pixel array unit 33; after the exposure ends, the charge is temporarily held in the memory unit 53 and then read out from the memory units 53 sequentially row by row, enabling a global shutter operation (imaging).
  • The circuit configuration of the pixel 32 is not limited to the configuration shown in FIG. 5; for example, a circuit configuration that has no memory unit 53 and operates by the so-called rolling shutter method can also be adopted.
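  • The drive sequence described above (OFG reset of all photodiodes, simultaneous exposure, simultaneous TRX transfer to the memory units, then row-by-row TRG/SEL readout) can be summarized as a timeline sketch. The event model below is an illustrative assumption, not the patent's actual drive circuitry:

```python
# Sketch: global-shutter drive sequence for an n_rows-row array.
# All pixels are reset, exposed, and transferred together; only the
# readout from the memory units proceeds row by row.
def global_shutter_sequence(n_rows):
    events = []
    events.append("OFG: reset all photodiodes")             # start of exposure
    events.append("expose all pixels simultaneously")
    events.append("TRX: transfer PD -> MEM in all pixels")  # end of exposure
    for row in range(n_rows):
        events.append(f"row {row}: TRG transfer MEM -> FD, SEL read out")
    return events

seq = global_shutter_sequence(3)
for e in seq:
    print(e)
```

Because the charge already sits in each memory unit 53 when readout begins, the row-sequential part no longer affects the exposure window, which is what distinguishes this from a rolling shutter.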
  • FIG. 6 is an enlarged cross-sectional view showing a part of the solid-state image sensor 11.
  • a multilayer wiring layer 82 is formed on the upper side (pixel sensor substrate 26 side) of a semiconductor substrate 81 (hereinafter referred to as a silicon substrate 81) made of silicon (Si).
  • the multi-layer wiring layer 82 constitutes the control circuit 22 and the logic circuit 23 of FIG.
  • The multilayer wiring layer 82 is composed of a plurality of wiring layers 83, including an uppermost wiring layer 83a closest to the pixel sensor substrate 26, an intermediate wiring layer 83b, and a lowermost wiring layer 83c closest to the silicon substrate 81, together with an interlayer insulating film 84 formed between the wiring layers 83.
  • the plurality of wiring layers 83 are formed of, for example, copper (Cu), aluminum (Al), tungsten (W), etc.
  • The interlayer insulating film 84 is formed of, for example, a silicon oxide film, a silicon nitride film, or the like.
  • Each of the plurality of wiring layers 83 and the interlayer insulating film 84 may be formed of the same material in all layers, or two or more materials may be used properly depending on the layer.
  • Further, a silicon through hole 85 penetrating the silicon substrate 81 is formed at a predetermined position of the silicon substrate 81, and a through silicon via (TSV) 88 is formed by embedding a connecting conductor 87 on the inner wall of the silicon through hole 85 via an insulating film 86.
  • the insulating film 86 can be formed of, for example, a SiO2 film or a SiN film.
  • In the example shown in FIG. 6, the insulating film 86 and the connecting conductor 87 are formed along the inner wall surface, and the inside of the silicon through hole 85 is hollow.
  • Alternatively, the entire interior of the silicon through hole 85 may be filled with the connecting conductor 87; in other words, it does not matter whether the inside of the through hole is filled with a conductor or partly hollow. The same applies to the through chip via (TCV: Through Chip Via) 105 and the like.
  • the connecting conductor 87 of the through silicon via 88 is connected to the rewiring 90 formed on the lower surface side of the silicon substrate 81, and the rewiring 90 is connected to the solder ball 29.
  • The connecting conductor 87 and the rewiring 90 can be formed of, for example, copper (Cu), tungsten (W), polysilicon, or the like.
  • solder mask (solder resist) 91 is formed so as to cover the rewiring 90 and the insulating film 86, except for the region where the solder balls 29 are formed.
  • a multilayer wiring layer 102 is formed on the lower side (logic substrate 25 side) of the semiconductor substrate 101 (hereinafter referred to as silicon substrate 101) made of silicon (Si).
  • the multi-layer wiring layer 102 constitutes the pixel circuit of the pixel region 21 of FIG.
  • The multilayer wiring layer 102 is composed of a plurality of wiring layers 103, including an uppermost wiring layer 103a closest to the silicon substrate 101, an intermediate wiring layer 103b, and a lowermost wiring layer 103c closest to the logic substrate 25, together with an interlayer insulating film 104 formed between the wiring layers 103.
  • As the materials of the plurality of wiring layers 103 and the interlayer insulating film 104, the same materials as those of the wiring layers 83 and the interlayer insulating film 84 described above can be adopted. Like the wiring layers 83 and the interlayer insulating film 84, the plurality of wiring layers 103 and the interlayer insulating film 104 may be formed of one material or of two or more materials used selectively depending on the layer.
  • In the example of FIG. 6, the multilayer wiring layer 102 of the pixel sensor substrate 26 is composed of three wiring layers 103, and the multilayer wiring layer 82 of the logic substrate 25 is composed of four wiring layers 83.
  • the total number of wiring layers is not limited to this, and can be formed by any number of layers.
  • In the silicon substrate 101, a photodiode 51 formed by a PN junction is formed for each pixel 32.
  • In the multilayer wiring layer 102 and the silicon substrate 101, a plurality of pixel transistors, such as the first transfer transistor 52 and the second transfer transistor 54, the memory unit (MEM) 53, and the like are also formed.
  • Further, a through silicon via 109 connected to the wiring layer 103a of the pixel sensor substrate 26 and a through chip via 105 connected to the wiring layer 83a of the logic substrate 25 are formed.
  • The through chip via 105 and the through silicon via 109 are connected by a connection wiring 106 formed on the upper surface of the silicon substrate 101. An insulating film 107 is formed between the silicon substrate 101 and each of the through silicon via 109 and the through chip via 105. Further, the color filter 27 and the on-chip lens 28 are formed on the upper surface of the silicon substrate 101 via a flattening film (insulating film) 108.
  • The solid-state image sensor 11 shown in FIG. 2 has a laminated structure in which the multilayer wiring layer 82 side of the logic substrate 25 and the multilayer wiring layer 102 side of the pixel sensor substrate 26 are bonded together.
  • In FIG. 6, the surface where the multilayer wiring layer 82 side of the logic substrate 25 and the multilayer wiring layer 102 side of the pixel sensor substrate 26 are bonded is shown by a broken line.
  • In the solid-state image sensor 11, the wiring layer 103 of the pixel sensor substrate 26 and the wiring layer 83 of the logic substrate 25 are connected by two through electrodes, the through silicon via 109 and the through chip via 105, and the wiring layer 83 of the logic substrate 25 and the solder ball (back surface electrode) 29 are connected by the through silicon via 88 and the rewiring 90.
  • As a result, the height of the image pickup apparatus 1 can also be lowered.
  • In addition, since the IRCF 14 is provided on the solid-state image sensor 11 and the glass substrate 12, it is possible to suppress the generation of flare and ghosting due to the diffuse reflection of light.
  • That is, in a configuration in which the IRCF 14 is arranged at a position separated from the glass substrate 12, as shown in the left part of FIG. 7, the incident light shown by the solid line is condensed and enters the solid-state image sensor (CIS) 11 at position F0 via the IRCF 14, the glass substrate 12, and the adhesive 13; it is then reflected at position F0, as shown by the dotted line, generating reflected light.
  • Part of the reflected light from position F0 passes through, for example, the adhesive 13 and the glass substrate 12, is reflected by the back surface (lower surface in FIG. 7) R1 of the IRCF 14, which is arranged at a position separated from the glass substrate 12, and enters the solid-state image sensor 11 again at position F1 via the glass substrate 12 and the adhesive 13.
  • Another part of the reflected light from position F0 passes through, for example, the adhesive 13, the glass substrate 12, and the IRCF 14 arranged at a position separated from the glass substrate 12, is reflected by the upper surface (upper surface in FIG. 7) R2 of the IRCF 14, and enters the solid-state image sensor 11 again at position F2 via the IRCF 14, the glass substrate 12, and the adhesive 13.
  • The light that enters again in this way causes flare and ghosting due to internal stray reflection. More specifically, as shown in the image P1 of FIG. 8, when the illumination L is imaged by the solid-state image sensor 11, it appears as flare or ghosting, as indicated by the reflected light R21 and R22.
  • In contrast, when the IRCF 14 is arranged on the glass substrate 12 as shown in the right part of FIG. 7, which corresponds to the configuration of the image pickup device 1 in FIG. 1, the incident light shown by the solid line is condensed, enters the solid-state image sensor 11 at position F0 via the IRCF 14, the adhesive 15, the glass substrate 12, and the adhesive 13, and is then reflected as shown by the dotted line.
  • This reflected light passes through the adhesive 13, the glass substrate 12, the adhesive 15, and the IRCF 14 and is reflected by the lowest-layer lens surface R11 of the lens group 16; however, since the lens group 16 is located sufficiently far from the IRCF 14, the light is reflected into a range where it is hardly received by the solid-state image sensor 11.
  • Moreover, the solid-state image sensor 11, the glass substrate 12, and the IRCF 14, surrounded by the dash-dotted line in the figure, are bonded and integrated by the adhesives 13 and 15, which have substantially the same refractive index, forming the integrated structure portion 10.
  • In the integrated structure portion 10, unifying the refractive indices suppresses the internal stray reflection that occurs at boundaries between layers with different refractive indices; for example, re-incidence at positions F1 and F2 in the vicinity of position F0, as in the left part of FIG. 7, is suppressed.
  • Therefore, the configuration shown in FIG. 1 enables the device to be miniaturized and reduced in height while suppressing the occurrence of flare and ghosting due to internal stray reflection.
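  • The benefit of matching the adhesives' refractive index to the glass can be made concrete with the normal-incidence Fresnel reflectance R = ((n1 - n2) / (n1 + n2))^2: when the indices on both sides of a boundary are equal, R is zero and no internal reflection arises there. The index values below are illustrative assumptions, not figures from the patent:

```python
# Sketch: normal-incidence Fresnel reflectance at the boundary between
# two media with refractive indices n1 and n2. Matched indices give R = 0,
# which is why index-matched adhesives suppress internal reflections.
def fresnel_reflectance(n1, n2):
    return ((n1 - n2) / (n1 + n2)) ** 2

# Illustrative values: an air gap vs. an index-matched adhesive layer
print(fresnel_reflectance(1.0, 1.5))   # air/glass boundary: about 4% reflected
print(fresnel_reflectance(1.5, 1.5))   # matched adhesive/glass boundary: 0.0
```

Even a few percent of reflected light per boundary is enough to produce visible ghost images of a bright source, so eliminating the index steps inside the integrated structure portion 10 directly attacks the mechanism described above.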
  • Here, the image P1 of FIG. 8 is an image of the illumination L captured at night by an image pickup device 1 with the configuration in the left part of FIG. 7, and the image P2 is an image of the illumination L captured at night by the image pickup device 1 with the configuration in the right part of FIG. 7 (FIG. 1).
  • In the above, a configuration has been described as an example in which autofocus is realized by the actuator 18 moving the lens group 16 in the vertical direction in FIG. 1 to adjust the focal position according to the distance to the subject.
  • However, the actuator 18 may be omitted so that the focal position of the lens group 16 is not adjusted and the lens functions as a so-called fixed-focus lens.
  • FIG. 9 shows a configuration example of the image pickup apparatus 1 in which, of the lens group 16 composed of a plurality of lenses in FIG. 1, the lens that is the lowest layer with respect to the incident direction of light is separated from the lens group 16 and provided on the IRCF 14.
  • In FIG. 9, configurations having basically the same functions as those in FIG. 1 are designated by the same reference numerals, and their description is omitted as appropriate.
  • The image pickup device 1 of FIG. 9 differs from the image pickup device 1 of FIG. 1 in that the lens 131, which is the lowest layer with respect to the incident direction of light among the plurality of lenses constituting the lens group 16, is provided on the upper surface of the IRCF 14 in the drawing, separately from the lens group 16.
  • The lens group 16 of FIG. 9 bears the same reference numeral as the lens group 16 of FIG. 1 but, strictly speaking, differs from the lens group 16 of FIG. 1 in that it does not include the lens 131, which is the lowest layer with respect to the incident direction of light.
  • Since the IRCF 14 is provided on the glass substrate 12 provided on the solid-state image sensor 11, and the lowest-layer lens 131 constituting the lens group 16 is further provided on the IRCF 14, it is possible to suppress the occurrence of flare and ghost due to diffuse reflection of light.
  • Although the image pickup apparatus 1 shown in FIGS. 1 and 9 is illustrated with the glass substrate 12, the IRCF 14, the adhesive 15, and the like, the present technology can also be applied to configurations that do not include the glass substrate 12, the IRCF 14, the adhesive 15, and the like.
  • In the following, the image pickup device 1 provided with the lens 131 shown in FIG. 9 will be described as an example. Further, a more detailed configuration of the semiconductor package in which the solid-state image sensor 11 shown in FIG. 2 is packaged will be described.
  • FIG. 10 is an enlarged cross-sectional view of a part of the integrated structure portion 10 shown in FIG. 9, and FIG. 11 is a plan view of the pixel 32a when viewed from above (light receiving surface side).
  • a lens 131, an adhesive 13, an on-chip lens 28a, a flattening film 202, a color filter 27, and a photodiode 51 are laminated from the upper part of the drawing.
  • the glass substrate 12, IRCF 14, adhesive 15, and the like are not shown.
  • a photodiode 51 is formed in the pixel sensor substrate 26.
  • a color filter 27 is laminated on the photodiode 51.
  • An inter-pixel light-shielding portion 201 is formed between the color filters 27.
  • the inter-pixel light-shielding unit 201 is provided to block light so that light does not leak to adjacent pixels (photodiode 51).
  • the inter-pixel light-shielding portion 201 can be formed of a light-shielding member having a light-shielding function such as metal.
  • FIG. 11 shows 16 pixels in a 4×4 arrangement.
  • A green color filter 27 is arranged on the four 2×2 pixels located in the upper left part of the figure.
  • A blue color filter 27 is arranged on the four 2×2 pixels located in the upper right part of the figure.
  • A red color filter 27 is arranged on the four 2×2 pixels located at the lower left in the figure.
  • A green color filter 27 is arranged on the four 2×2 pixels located at the lower right in the figure.
  • the color arrangement of the color filter 27 is a Bayer arrangement.
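  The arrangement just described — one color shared by each 2×2 block, with the blocks themselves laid out in a Bayer pattern — can be sketched with a small helper. The function name and index convention below are illustrative assumptions for clarity, not terminology from this document.

```python
def quad_bayer_color(row: int, col: int) -> str:
    """Return the color filter at pixel (row, col) for a layout in which
    each 2x2 block shares one color and the blocks follow a Bayer pattern
    (G at upper-left, B at upper-right, R at lower-left, G at lower-right,
    as in FIG. 11)."""
    block_row, block_col = (row // 2) % 2, (col // 2) % 2
    return [["G", "B"],
            ["R", "G"]][block_row][block_col]

# The 4x4 region of FIG. 11: four same-color 2x2 blocks in a Bayer layout.
pattern = [[quad_bayer_color(r, c) for c in range(4)] for r in range(4)]
```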
  • As the color arrangement, an arrangement other than the Bayer arrangement can also be applied.
  • The color of the color filter 27 may be cyan (Cy), magenta (Mg), yellow (Ye), or the like, and white (transparent) may also be included. This also applies to the following embodiments.
  • the on-chip lens 28a is provided for each photodiode 51.
  • Since the four 2×2 pixels are configured to have the same color, it becomes possible, for example, to vary the accumulation time of each of the four 2×2 pixels and to perform processing such as selecting, from among the four pixels, the pixel from which a signal is extracted depending on the amount of light. As a result, the pixels in which electric charge is accumulated can be appropriately selected with an accumulation time according to the amount of light, and the dynamic range can be expanded.
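  The dynamic-range expansion described above — varying the accumulation time within a same-color 2×2 block and picking the pixel suited to the light level — might be sketched as follows. The selection rule (longest unsaturated exposure, rescaled to a common exposure) and all names are illustrative assumptions, not taken from this document.

```python
def select_hdr_value(raw_values, exposure_times, full_scale=1023):
    """Given the raw values of a same-color 2x2 block captured with
    different accumulation times, pick the longest-exposure pixel that is
    not saturated and normalize it to the longest exposure's scale.
    Assumed rule: the longest unsaturated exposure gives the best SNR."""
    # Sort pixels by exposure time, longest first.
    pairs = sorted(zip(exposure_times, raw_values), reverse=True)
    for t, v in pairs:
        if v < full_scale:                    # not saturated
            return v * (pairs[0][0] / t)      # rescale to longest exposure
    # All pixels saturated: report full scale at the longest exposure.
    return float(full_scale)

# Bright scene: the long exposures clip, the short ones still carry signal.
value = select_hdr_value([1023, 1023, 800, 400], [8.0, 4.0, 2.0, 1.0])
```

  The rescaling step extends the representable range beyond the ADC's full scale, which is what "expanding the dynamic range" amounts to here.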
  • However, if the sensitivity differs among the four 2×2 pixels, the image quality may deteriorate.
  • the incident light is indicated by an arrow.
  • Let the angle of incidence of the incident light be the angle θ.
  • The angle θ is, for example, an angle measured from a perpendicular to the surface of the flattening film 202.
  • The light incident at the angle θ passes through the lens 131 and enters the on-chip lens 28a at an angle smaller than the angle θ.
  • That is, the light incident on the on-chip lens 28a enters at an angle close to vertical, at least at an angle smaller than the angle θ at which the light originally entered.
  • Since the incident light entering the image pickup apparatus 1 passes through the lens group 16 before reaching the lens 131, even light incident from an oblique direction is focused and corrected to an angle closer to vertical than its original incident angle when it passes through the lens group 16. Therefore, the light that reaches the lens 131 has already been corrected to an angle θ close to vertical when it enters the lens 131.
  • The angle is further corrected by passing through the lens 131, and the light is further focused by the on-chip lens 28a, so that light converted to be almost vertical is incident on the photodiode 51.
  • According to the present technology, light leakage to the adjacent photodiode 51 can be reduced, so crosstalk caused by light leaking to adjacent pixels can be reduced. Therefore, as described with reference to FIGS. 10 and 11, even when the four 2×2 pixels are made pixels of the same color, the occurrence of a sensitivity difference between the same colors can be reduced, and deterioration of image quality can be prevented.
  • the width of the inter-pixel light-shielding portion 201 provided for reducing crosstalk may be narrowed.
  • the correction amount for pupil correction can be reduced.
  • If the pixel 32a at the center of the angle of view and the pixel 32a at the end of the angle of view have exactly the same structure, light cannot be collected efficiently at the end, and a sensitivity difference occurs between the pixel 32a at the center of the angle of view and the pixel 32a at the end of the angle of view.
  • To address this, there is a technique called pupil correction or the like, in which the aperture of the photodiode 51 is adjusted with respect to the optical axis of the lens group 16 and the position of the photodiode 51 is shifted according to the direction of the principal ray toward the end of the angle of view.
  • FIG. 12 shows the structure of the pixel 32a at the pixel end, and shows the structure of the pixel 32a after pupil correction.
  • The structure of the pixel 32a shown in FIG. 12 is the same as the structure of the pixel 32a shown in FIG. 11, except that the on-chip lens 28a and the color filter 27 are arranged shifted toward the center of the angle of view by pupil correction.
  • the pixel 32a shown in FIG. 12 has the center of the angle of view on the right side of the figure, and the on-chip lens 28a and the color filter 27 are shifted in the direction closer to the center of the angle of view.
  • The amount of this deviation is defined as the deviation amount H1.
  • As described above, the lens group 16 and the lens 131 provide a structure in which light converted to be nearly vertical is incident on the photodiode 51, and this effect is obtained even at the end of the angle of view. Therefore, the deviation amount H1 may be smaller than the deviation amount when the present technology is not applied. Even if the deviation amount H1 is set to 0, in other words even if pupil correction is not performed, the image pickup apparatus 1 to which the present technology is applied can reduce crosstalk at the end of the angle of view more than an image pickup apparatus to which the present technology is not applied.
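  As a rough illustration of why the deviation amount H1 can shrink: the lateral shift needed by pupil correction grows with the chief-ray angle at the pixel, and the lens group 16 plus lens 131 reduce that angle. The geometry (shift ≈ stack height × tan of chief-ray angle) and the numeric values below are illustrative assumptions, not figures from this document.

```python
import math

def pupil_shift_um(chief_ray_angle_deg: float, stack_height_um: float) -> float:
    """Approximate lateral shift of the on-chip lens / color filter needed
    so that an oblique chief ray still lands on the photodiode:
    shift ~ stack height x tan(chief-ray angle)."""
    return stack_height_um * math.tan(math.radians(chief_ray_angle_deg))

# If the optics reduce the chief-ray angle at the edge of the angle of view
# from, say, 30 degrees to 10 degrees over an assumed 3 um stack, the
# required deviation amount H1 shrinks accordingly (and can approach 0).
h1_without = pupil_shift_um(30.0, 3.0)
h1_with = pupil_shift_um(10.0, 3.0)
```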
  • According to the first embodiment, it is possible to prevent color mixing, improve the sensitivity difference between the same colors, and prevent deterioration of image quality.
  • FIG. 13 is a diagram showing a configuration example of the pixel 32b according to the second embodiment.
  • the same parts as the pixels 32a in the first embodiment shown in FIG. 10 are designated by the same reference numerals, and the description thereof will be omitted as appropriate.
  • The pixel 32b shown in FIG. 13 has the same configuration as the pixel 32a in the first embodiment, except that the trench 221b is added.
  • Although FIG. 13 shows a cross-sectional configuration example of the pixel 32b in the second embodiment, the plan-view configuration is the same as that of the pixel 32a in the first embodiment shown in FIG. 11.
  • That is, the four 2×2 pixels 32b have the same color, and an on-chip lens 28a is provided for each pixel 32b.
  • the trench 221b is formed between the photodiodes 51.
  • the inside of the trench 221b may be hollow or may be filled with metal.
  • When the inside of the trench 221b is filled with metal, it can be integrated with the inter-pixel light-shielding portion 201.
  • FIG. 13 shows a case where the integrated configuration is used.
  • The trench 221b is provided between the photodiodes 51 to prevent light from leaking from the photodiode 51 of the adjacent pixel 32b and to electrically separate the photodiodes 51. By providing the trench 221b in this way, crosstalk between the pixels 32b can be further reduced.
  • FIG. 14 shows the structure of the pixel 32b at the pixel end, and shows the structure of the pixel 32b for which pupil correction has been performed.
  • The structure of the pixel 32b shown in FIG. 14 is the same as the structure of the pixel 32b shown in FIG. 13, except that the on-chip lens 28a and the color filter 27 are arranged shifted toward the center of the angle of view by pupil correction.
  • the lens group 16 and the lens 131 have a structure in which light converted into near-vertical light is incident on the photodiode 51. Therefore, even in the pixel 32b shown in FIG. 14, the deviation amount H1 may be smaller than the deviation amount when the present technology is not applied.
  • According to the second embodiment, it is possible to prevent color mixing, improve the sensitivity difference between the same colors, and prevent deterioration of image quality.
  • FIG. 15 is a diagram showing a cross-sectional configuration example of the pixel 32c in the third embodiment
  • FIG. 16 is a diagram showing a plane configuration example.
  • the same parts as the pixels 32a in the first embodiment shown in FIGS. 10 and 11 are designated by the same reference numerals, and the description thereof will be omitted as appropriate.
  • Whereas the pixel 32a in the first embodiment shown in FIGS. 10 and 11 includes an on-chip lens 28a for each photodiode 51, the third embodiment differs in that one on-chip lens 28c is provided for the four 2×2 pixels 32c; the other points are the same.
  • FIG. 16 shows 16 pixels in a 4×4 arrangement.
  • Green color filters 27 are arranged on the 2×2 photodiodes 51 located in the upper left portion of the drawing.
  • A blue color filter 27 is arranged on each of the 2×2 photodiodes 51 located in the upper right part of the drawing.
  • A red color filter 27 is arranged on each of the 2×2 photodiodes 51 located at the lower left in the figure, and a green color filter 27 is arranged on each of the 2×2 photodiodes 51 located at the lower right in the figure.
  • That is, the color arrangement of the color filter 27 is a Bayer arrangement. Such a configuration is the same as in the first and second embodiments.
  • The on-chip lens 28c is provided for each set of four 2×2 photodiodes 51. That is, one on-chip lens 28c is laminated on the 2×2 photodiodes 51 on which the green color filter 27 located in the upper left portion of the drawing is arranged, and one on-chip lens 28c is laminated on the 2×2 photodiodes 51 on which the blue color filter 27 located in the upper right portion of the figure is arranged.
  • Similarly, one on-chip lens 28c is laminated on the 2×2 photodiodes 51 on which the red color filter 27 located at the lower left in the figure is arranged, and one on-chip lens 28c is laminated on the 2×2 photodiodes 51 on which the green color filter 27 located at the lower right in the figure is arranged.
  • In this way, the on-chip lens 28c is shared by four pixels, and by configuring the four pixels as pixels of the same color, it becomes possible, for example, to vary the accumulation time of each of the four 2×2 pixels and to perform processing such as selecting, from among the four pixels, the pixel from which a signal is extracted. As a result, the pixels in which electric charge is accumulated can be appropriately selected with an accumulation time according to the amount of light, and the dynamic range can be expanded.
  • the width of the inter-pixel light-shielding portion 201 provided for reducing crosstalk may be narrowed.
  • the photodiode 51 can be formed larger, and the sensitivity can be improved.
  • FIG. 17 shows the structure of the pixel 32c at the pixel end, and shows the structure of the pixel 32c with pupil correction.
  • The structure of the pixel 32c shown in FIG. 17 is the same as the structure of the pixel 32c shown in FIG. 15, except that the on-chip lens 28c and the color filter 27 are arranged shifted toward the center of the angle of view by pupil correction.
  • the lens group 16 and the lens 131 have a structure in which light converted into near-vertical light is incident on the photodiode 51. Therefore, even in the pixel 32c shown in FIG. 17, the deviation amount H1 may be smaller than the deviation amount when the present technology is not applied.
  • According to the third embodiment, it is possible to prevent color mixing, improve the sensitivity difference between the same colors, and prevent deterioration of image quality.
  • FIG. 18 is a diagram showing a configuration example of the pixel 32d according to the fourth embodiment.
  • the same parts as the pixels 32c in the third embodiment shown in FIG. 15 are designated by the same reference numerals, and the description thereof will be omitted as appropriate.
  • The pixel 32d shown in FIG. 18 has the same configuration as the pixel 32c in the third embodiment, except that the trench 221d is added.
  • Although FIG. 18 shows a cross-sectional configuration example of the pixel 32d in the fourth embodiment, the plan-view configuration is the same as that of the pixel 32c in the third embodiment shown in FIG. 16: the four 2×2 pixels 32d have the same color, and one on-chip lens 28c is laminated on them.
  • the trench 221d is formed between the photodiodes 51.
  • the inside of the trench 221d may be hollow or may be filled with metal.
  • When the inside of the trench 221d is filled with metal, it can be integrated with the inter-pixel light-shielding portion 201.
  • FIG. 18 illustrates a case where the configuration is integrated.
  • The trench 221d is provided between the photodiodes 51 to prevent light from leaking from the adjacent photodiodes 51 and to electrically separate the photodiodes 51. By providing the trench 221d in this way, crosstalk between the pixels 32d can be further reduced.
  • FIG. 19 shows the structure of the pixel 32d at the pixel end, and shows the structure of the pixel 32d with pupil correction.
  • The structure of the pixel 32d shown in FIG. 19 is the same as the structure of the pixel 32d shown in FIG. 18, except that the on-chip lens 28c and the color filter 27 are arranged shifted toward the center of the angle of view by pupil correction.
  • the lens group 16 and the lens 131 have a structure in which light converted into near-vertical light is incident on the photodiode 51. Therefore, even in the pixel 32d shown in FIG. 19, the deviation amount H1 may be smaller than the deviation amount when the present technology is not applied.
  • According to the fourth embodiment, it is possible to prevent color mixing, improve the sensitivity difference between the same colors, and prevent deterioration of image quality.
  • FIG. 20 is a diagram showing a cross-sectional configuration example of the pixel 32e in the fifth embodiment
  • FIG. 21 is a diagram showing a plane configuration example.
  • the same parts as the pixels 32c in the third embodiment shown in FIGS. 15 and 16 are designated by the same reference numerals, and the description thereof will be omitted as appropriate.
  • The pixel 32e shown in FIG. 20 has the same configuration as the pixel 32c in the third embodiment, except that the line width of its inter-pixel light-shielding portion 201 differs from that of the inter-pixel light-shielding portion 201 of the pixel 32c in the third embodiment.
  • The inter-pixel light-shielding portion 201 of the pixel 32e shown in FIGS. 20 and 21 is composed of inter-pixel light-shielding portions 201e-1 and 201e-2 having different line widths.
  • The inter-pixel light-shielding portion 201e-2 is formed thinner than the inter-pixel light-shielding portion 201e-1.
  • The inter-pixel light-shielding portion 201e-1 has, for example, the same line width as the inter-pixel light-shielding portion 201 of the pixel 32c in the third embodiment.
  • The inter-pixel light-shielding portion 201e-1 is a light-shielding portion that surrounds the outer peripheral portion of the four 2×2 pixels 32e under one on-chip lens 28c.
  • In other words, the inter-pixel light-shielding portion 201e-1 is provided at portions that shield light between pixels on which different colors are arranged. If light leaks to an adjacent pixel between pixels of different colors, light of a different color leaks in; therefore, in order to prevent such a situation, the inter-pixel light-shielding portion 201e-1 is formed with a predetermined line width so that such influence does not occur.
  • The inter-pixel light-shielding portion 201e-2 is a light-shielding portion formed between the four 2×2 pixels 32e under one on-chip lens 28c.
  • In other words, the inter-pixel light-shielding portion 201e-2 is provided at portions that shield light between pixels on which the same color is arranged. If light leaks to an adjacent pixel between pixels of the same color, the leaking light is of the same color, which is considered to have less influence than the leakage of light of a different color described above; therefore, the inter-pixel light-shielding portion 201e-2 may be formed with a line width narrower than that of the inter-pixel light-shielding portion 201e-1.
  • In the pixel 32e, the line width of the inter-pixel light-shielding portion 201e-1 is formed thicker than the line width of the inter-pixel light-shielding portion 201e-2. In this way, the line width of the inter-pixel light-shielding portion 201 may differ depending on the portion where the inter-pixel light-shielding portion 201 is formed.
  • The line width of the inter-pixel light-shielding portion 201e-1 can also be made narrower than the line width of the inter-pixel light-shielding portion provided in pixels to which the present technology is not applied.
  • Even with a narrow line width, the inter-pixel light-shielding portion 201e-1 in this portion can sufficiently suppress light leakage (crosstalk is suppressed).
  • Since the present technology provides a structure in which crosstalk can be reduced, the line width of the inter-pixel light-shielding portion 201e-1 may even be made as thin as that of the inter-pixel light-shielding portion 201e-2.
  • By narrowing the line width of the inter-pixel light-shielding portion 201e-2, the aperture area of the photodiode 51 can be widened, so the sensitivity of the pixel 32e can be improved.
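  The trade-off between grid line width and aperture area can be illustrated with simple arithmetic: each pixel loses half the grid line width along each of its four sides, so the open-area fraction is ((pitch − width) / pitch)². The pixel pitch and line widths below are purely illustrative numbers, not values from this document.

```python
def aperture_fraction(pixel_pitch_um: float, grid_width_um: float) -> float:
    """Fraction of the pixel area left open by an inter-pixel
    light-shielding grid of the given line width, assuming the grid runs
    along all four pixel boundaries (half the line width on each side)."""
    open_width = pixel_pitch_um - grid_width_um
    return (open_width / pixel_pitch_um) ** 2

# Narrowing the same-color grid from an assumed 0.2 um to 0.1 um at a
# 1.0 um pitch raises the open-area fraction from 0.64 to 0.81.
before = aperture_fraction(1.0, 0.2)
after = aperture_fraction(1.0, 0.1)
```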
  • The pixel 32e in the fifth embodiment can also be configured such that, at the pixel end, pupil correction is applied and the on-chip lens 28c and the color filter 27 are shifted toward the center of the angle of view.
  • FIG. 22 is a diagram showing a cross-sectional configuration example of the pixel 32f in the sixth embodiment
  • FIG. 23 is a diagram showing a plane configuration example.
  • the same parts as the pixels 32c in the third embodiment shown in FIGS. 15 and 16 are designated by the same reference numerals, and the description thereof will be omitted as appropriate.
  • The pixel 32f in the sixth embodiment differs from the pixel 32c in the third embodiment in that no inter-pixel light-shielding portion is formed between the four 2×2 pixels 32f under one on-chip lens 28c; the other points are the same.
  • Comparing the pixel 32f in the sixth embodiment with the pixel 32c in the third embodiment (FIG. 15), the pixel 32f does not have the inter-pixel light-shielding portion 201 that, in the pixel 32c, is provided between pixels arranged under the color filter 27 of the same color.
  • Similarly, compared with the pixel 32e in the fifth embodiment, the pixel 32f in the sixth embodiment does not have the inter-pixel light-shielding portion 201e-2 that is formed in the pixel 32e.
  • The pixel 32e in the fifth embodiment is an embodiment in which the line width of the inter-pixel light-shielding portion 201e-2 is made thin, and crosstalk can be suppressed even with the thin line width. Going one step further, the pixel 32f in the sixth embodiment shows the case where that line width is set to 0, that is, a configuration in which no such light-shielding portion is formed.
  • The pixel 32f in the sixth embodiment has an inter-pixel light-shielding portion 201f that surrounds the outer peripheral portion of the four 2×2 pixels 32f under one on-chip lens 28c.
  • In other words, the inter-pixel light-shielding portion 201f is provided at portions that shield light between pixels on which different colors are arranged.
  • Since no light-shielding portion is provided between the four 2×2 pixels 32f under one on-chip lens 28c, the color filter 27f is formed so as to cover the four 2×2 pixels 32f.
  • By adopting a configuration in which no light-shielding portion is provided between the four 2×2 pixels 32f under one on-chip lens 28c, the aperture area of the photodiode 51 can be widened, so the sensitivity of the pixel 32f can be improved.
  • In the pupil-corrected pixel 32f at the pixel end, the on-chip lens 28c and the color filter 27f are arranged shifted toward the center of the angle of view.
  • According to the sixth embodiment, it is possible to prevent color mixing, improve the sensitivity difference between the same colors, and prevent deterioration of image quality.
  • FIG. 24 is a diagram showing a configuration example of the pixel 32g according to the seventh embodiment.
  • the same parts as the pixels 32f in the sixth embodiment shown in FIG. 22 are designated by the same reference numerals, and the description thereof will be omitted as appropriate.
  • The pixel 32g shown in FIG. 24 has the same configuration as the pixel 32f in the sixth embodiment, except that the trench 221g is added.
  • Although FIG. 24 shows a cross-sectional configuration example of the pixel 32g in the seventh embodiment, the plan-view configuration is the same as that of the pixel 32f in the sixth embodiment shown in FIG. 23: the four 2×2 pixels 32g have the same color, and one on-chip lens 28c and a color filter 27f are laminated on them.
  • the trench 221g is formed between the photodiodes 51.
  • the inside of the trench 221g may be hollow or may be filled with metal.
  • When the inside of the trench 221g is filled with metal, it can be integrated with the inter-pixel light-shielding portion 201f in the portion where the inter-pixel light-shielding portion 201f is formed.
  • the trench 221g is also formed between the photodiodes 51 on the lower side of the color filter 27f in which the inter-pixel light-shielding portion 201 is not provided.
  • The trench 221g is provided between the photodiodes 51 to prevent light from leaking from the adjacent photodiodes 51 and to electrically separate the photodiodes 51. By providing the trench 221g in this way, crosstalk between the pixels 32g can be further reduced.
  • In the pupil-corrected pixel 32g at the pixel end, the on-chip lens 28c and the color filter 27f are arranged shifted toward the center of the angle of view.
  • According to the seventh embodiment, it is possible to prevent color mixing, improve the sensitivity difference between the same colors, and prevent deterioration of image quality.
  • FIG. 25 is a diagram showing a cross-sectional configuration example of the pixel 32h according to the eighth embodiment.
  • the same parts as the pixels 32g in the seventh embodiment shown in FIG. 24 are designated by the same reference numerals, and the description thereof will be omitted as appropriate.
  • The pixel 32h shown in FIG. 25 has the same configuration as the pixel 32g in the seventh embodiment, except that the line width of the trench 221h differs from that of the trench 221g of the pixel 32g in the seventh embodiment.
  • The trench 221h of the pixel 32h shown in FIG. 25 is composed of trenches 221h-1 and 221h-2 having different line widths.
  • The trench 221h-2 is formed thinner than the trench 221h-1.
  • The trench 221h-1 has, for example, the same line width as the trench 221g of the pixel 32g in the seventh embodiment.
  • The trench 221h-1 is a light-shielding portion that surrounds the outer peripheral portion of the four 2×2 pixels 32h under one on-chip lens 28c.
  • In other words, the trench 221h-1 is provided at portions that shield light between pixels on which different colors are arranged. If light leaks to an adjacent pixel between pixels of different colors, light of a different color leaks in; therefore, in order to prevent such a situation, the trench 221h-1 is formed with a predetermined line width so that such influence does not occur.
  • The trench 221h-2 is a light-shielding portion formed between the four 2×2 pixels 32h under one on-chip lens 28c.
  • In other words, the trench 221h-2 is provided at portions that shield light between pixels on which the same color is arranged. If light leaks to an adjacent pixel between pixels of the same color, the leaking light is of the same color.
  • Since this is considered to have less influence than the leakage of light of a different color, the trench 221h-2 may be formed with a line width narrower than that of the trench 221h-1.
  • the line width of the trench 221h-1 is formed to be thicker than the line width of the trench 221h-2.
  • the line width of the trench 221h may be different depending on the portion where the trench 221h is formed.
  • According to the present technology, the structure is such that crosstalk can be reduced; therefore, even if the line width of the trench 221h-2 is made thin, crosstalk between the pixels 32h can be suppressed without deterioration.
  • the line width of the trench 221h-1 can also be made narrower than the line width of the inter-pixel light-shielding portion provided in the pixels to which the present technology is not applied.
  • the trench 221h-1 in this portion can sufficiently suppress light leakage (crosstalk can be suppressed).
  • Since the present technology provides a structure in which crosstalk can be reduced, the line width of the trench 221h-1 may also be made as thin as the line width of the trench 221h-2.
  • the opening area of the photodiode 51 can be widened. Therefore, the sensitivity of the pixel 32h can be improved by narrowing the line width of the trench 221h-2.
  • The pixel 32h in the eighth embodiment can also be configured such that, at the pixel end, pupil correction is applied and the on-chip lens 28c and the color filter 27f are shifted toward the center of the angle of view.
  • FIG. 26 is a diagram showing a cross-sectional configuration example of the pixel 32i according to the ninth embodiment.
  • In the embodiments described above, the case where four 2×2 pixels 32 (photodiodes 51) are treated as one unit and a color filter 27 of the same color is arranged on this basic unit of four pixels 32 (photodiodes 51) has been described as an example.
  • In the ninth embodiment, two photodiodes 51 are included in the pixel 32i; the case where the two photodiodes 51 are treated as one unit and a color filter 27f of the same color is arranged on the two photodiodes 51 constituting this basic unit will be described as an example.
  • The pixel 32i in the ninth embodiment shown in FIG. 26 is illustrated, for example, in combination with the pixel 32f in the sixth embodiment shown in FIG. 22, but it can also be combined with embodiments other than the sixth embodiment.
  • The pixel 32i in the ninth embodiment shown in FIG. 26 is shown combined with the pixel 32f in the sixth embodiment shown in FIG. 22, and its cross-sectional structure is as shown in FIG. 22.
  • the pixel 32i shown in FIG. 26 has two photodiodes 51 as a basic unit, and a color filter 27f of the same color is laminated on the two photodiodes 51, and one on-chip lens 28c is laminated.
  • Although the pixel 32i shown in FIG. 26 shows an example in which the two photodiodes 51 constituting the basic unit are arranged in the horizontal direction (left-right direction in the figure), they may be arranged in the vertical direction (up-down direction in the figure).
  • The pixel 32i, in which two photodiodes 51 are arranged in the left-right or up-down direction under one on-chip lens 28c as shown in FIG. 26, can be applied to a system that acquires a phase difference by what is called the image-plane phase-difference method or the like and realizes autofocus using that phase difference.
  • When the two photodiodes 51 are arranged in the left-right direction, the phase difference in the left-right direction can be acquired.
  • When the two photodiodes 51 are arranged in the up-down direction, the phase difference in the up-down direction can be acquired.
  • A pixel 32i in which two photodiodes 51 are arranged in the left-right direction under one on-chip lens 28c and a pixel 32i in which two photodiodes 51 are arranged in the up-down direction may both be arranged inside or outside the pixel array unit 33, so that both the phase difference in the left-right direction and the phase difference in the up-down direction can be obtained.
  • When the pixel 32i in which two photodiodes 51 are arranged in the left-right direction is arranged under one on-chip lens 28c, the phase difference in the up-down direction cannot be obtained, but the number of readout gates can be reduced and the saturation performance of the photodiode 51 can be improved.
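  The image-plane phase-difference principle — comparing the signals seen by the left and right photodiodes under one on-chip lens and finding their relative displacement — might be sketched as a toy correlation search. The sum-of-absolute-differences criterion and all names below are illustrative assumptions, not a description of any particular autofocus implementation.

```python
def disparity(left, right, max_shift=3):
    """Find the shift (in pixels) that best aligns the left- and
    right-photodiode line signals, using a mean absolute difference over
    the overlapping samples. The sign and magnitude of the best shift
    indicate the defocus direction and amount for autofocus."""
    best_shift, best_cost = 0, float("inf")
    n = len(left)
    for s in range(-max_shift, max_shift + 1):
        # Compare only the overlapping portion of the two signals at shift s.
        overlap = [(left[i], right[i + s]) for i in range(n)
                   if 0 <= i + s < n]
        cost = sum(abs(a - b) for a, b in overlap) / len(overlap)
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

# A defocused edge appears displaced by 2 samples between the two views.
left_sig = [0, 0, 1, 5, 9, 9, 9, 9]
right_sig = [0, 0, 0, 0, 1, 5, 9, 9]
shift = disparity(left_sig, right_sig)
```

  With left-right photodiode pairs this search recovers horizontal disparity only, which is why the text notes that vertical phase differences require vertically split pixels.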
  • The ninth embodiment can also be combined with embodiments other than the sixth embodiment, for example, a configuration in which a trench 221 is provided or a configuration to which pupil correction is applied.
  • the pixels 32 of the first to ninth embodiments may be applied to the pixels in the OPB (Optical Black) region.
  • In an image pickup device that uses an image sensor such as a CMOS (Complementary Metal Oxide Semiconductor) image sensor or a CCD (Charge Coupled Device) image sensor, a clamping process is performed to correct the black level of the image signal obtained by the image sensor to a reference value.
  • Specifically, an OPB region for detecting a reference black level is provided outside the effective pixel region of the image sensor, and each pixel value in the effective pixel region is corrected using the pixel values of the OPB region.
  • the structure of each pixel in the OPB region is the same as that of the pixel in the effective pixel region, except that the light incident from the outside is blocked by the light-shielding film. Therefore, the pixels 32 of the first to ninth embodiments can be applied to the pixels arranged in the OPB region. Further, since the OPB region is usually provided outside the effective pixel region, the pixels in the OPB region may be arranged with pupil correction.
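  The clamping process described above might look like the following minimal sketch: estimate the black level from the light-shielded OPB pixels, then subtract it from the effective pixels. The averaging rule, clipping behavior, and names are illustrative assumptions, not from this document.

```python
def clamp_black_level(effective, opb, target_black=0):
    """Estimate the black level as the mean of the light-shielded OPB
    pixel values and subtract it from every effective pixel, clipping at
    the target so the corrected black level sits at the reference value."""
    black = sum(opb) / len(opb)          # reference black from the OPB region
    return [max(target_black, round(v - black)) for v in effective]

# An offset of ~64 LSB (e.g. dark current) measured in the OPB region is
# removed from the effective pixels.
corrected = clamp_black_level([64, 80, 300, 1023], [63, 64, 65, 64])
```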
  • the present technology is not limited to application to an image sensor. That is, the present technology can be applied to all electronic devices that use an image sensor in an image capture unit (photoelectric conversion unit), such as image pickup apparatuses including digital still cameras and video cameras, portable terminal devices having an imaging function, and copiers that use an image sensor as an image reader.
  • the image pickup device may be in the form of one chip, or may be in the form of a module having an image pickup function in which the image pickup unit and the signal processing unit or the optical system are packaged together.
  • FIG. 27 is a block diagram showing a configuration example of an imaging device as an electronic device to which the present technology is applied.
  • the imaging device 1000 of FIG. 27 includes an optical unit 1001 including a lens group, an image sensor (imaging device) 1002 adopting the configuration of the image sensor 1 of FIG. 1, and a DSP (Digital Signal Processor) circuit 1003, which is a camera signal processing circuit.
  • the imaging device 1000 also includes a frame memory 1004, a display unit 1005, a recording unit 1006, an operation unit 1007, and a power supply unit 1008.
  • the DSP circuit 1003, the frame memory 1004, the display unit 1005, the recording unit 1006, the operation unit 1007, and the power supply unit 1008 are connected to each other via the bus line 1009.
  • the optical unit 1001 captures incident light (image light) from the subject and forms an image on the image pickup surface of the image pickup device 1002.
  • the image sensor 1002 converts the amount of incident light imaged on the image pickup surface by the optical unit 1001 into an electric signal in pixel units and outputs it as a pixel signal.
  • as the image pickup device 1002, the image sensor 1 of FIG. 1 can be used.
  • the display unit 1005 is composed of a thin display such as an LCD (Liquid Crystal Display) or an organic EL (Electro Luminescence) display, and displays a moving image or a still image captured by the image pickup element 1002.
  • the recording unit 1006 records a moving image or a still image captured by the image sensor 1002 on a recording medium such as a hard disk or a semiconductor memory.
  • the operation unit 1007 issues operation commands for the various functions of the imaging device 1000 in response to user operations.
  • the power supply unit 1008 appropriately supplies operating power to the DSP circuit 1003, the frame memory 1004, the display unit 1005, the recording unit 1006, and the operation unit 1007.
  • FIG. 28 is a diagram showing an example of a schematic configuration of an endoscopic surgery system to which the technique according to the present disclosure (the present technique) can be applied.
  • FIG. 28 shows a surgeon (doctor) 11131 performing surgery on patient 11132 on patient bed 11133 using the endoscopic surgery system 11000.
  • the endoscopic surgery system 11000 includes an endoscope 11100, other surgical tools 11110 such as a pneumoperitoneum tube 11111 and an energy treatment tool 11112, a support arm device 11120 that supports the endoscope 11100, and a cart 11200 on which various devices for endoscopic surgery are mounted.
  • the endoscope 11100 is composed of a lens barrel 11101, of which a region of predetermined length from the tip is inserted into the body cavity of the patient 11132, and a camera head 11102 connected to the base end of the lens barrel 11101.
  • in the illustrated example, the endoscope 11100 is configured as a so-called rigid endoscope having a rigid lens barrel 11101, but the endoscope 11100 may instead be configured as a so-called flexible endoscope having a flexible lens barrel.
  • An opening in which an objective lens is fitted is provided at the tip of the lens barrel 11101.
  • a light source device 11203 is connected to the endoscope 11100, and the light generated by the light source device 11203 is guided to the tip of the lens barrel by a light guide extending inside the lens barrel 11101, and is emitted through the objective lens toward the observation target in the body cavity of the patient 11132.
  • the endoscope 11100 may be a forward-viewing endoscope, an oblique-viewing endoscope, or a side-viewing endoscope.
  • An optical system and an image pickup element are provided inside the camera head 11102, and the reflected light (observation light) from the observation target is focused on the image pickup element by the optical system.
  • the observation light is photoelectrically converted by the image sensor, and an electric signal corresponding to the observation light, that is, an image signal corresponding to the observation image is generated.
  • the image signal is transmitted as RAW data to the camera control unit (CCU: Camera Control Unit) 11201.
  • CCU Camera Control Unit
  • the CCU 11201 is composed of a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and the like, and comprehensively controls the operations of the endoscope 11100 and the display device 11202. Further, the CCU 11201 receives an image signal from the camera head 11102 and performs various kinds of image processing on the image signal, such as development processing (demosaic processing), for displaying an image based on the image signal.
  • the display device 11202 displays an image based on the image signal processed by the CCU 11201 under the control of the CCU 11201.
  • the light source device 11203 is composed of a light source such as an LED (Light Emitting Diode), for example, and supplies irradiation light to the endoscope 11100 when photographing the surgical site or the like.
  • the input device 11204 is an input interface for the endoscopic surgery system 11000.
  • the user can input various information and input instructions to the endoscopic surgery system 11000 via the input device 11204.
  • for example, the user inputs an instruction to change the imaging conditions of the endoscope 11100 (type of irradiation light, magnification, focal length, etc.).
  • the treatment tool control device 11205 controls the drive of the energy treatment tool 11112 for ablation of tissue, incision, sealing of blood vessels, and the like.
  • the pneumoperitoneum device 11206 sends gas into the body cavity of the patient 11132 through the pneumoperitoneum tube 11111 to inflate it, for the purpose of securing the field of view of the endoscope 11100 and securing working space for the operator.
  • the recorder 11207 is a device capable of recording various information related to surgery.
  • the printer 11208 is a device capable of printing various information related to surgery in various formats such as texts, images, and graphs.
  • the light source device 11203, which supplies irradiation light to the endoscope 11100 when photographing the surgical site, can be composed of, for example, an LED, a laser light source, or a white light source composed of a combination thereof.
  • when a white light source is configured by combining RGB laser light sources, the output intensity and output timing of each color (each wavelength) can be controlled with high accuracy, so the light source device 11203 can adjust the white balance of the captured image.
  • in this case, the laser light from each of the RGB laser light sources may be irradiated onto the observation target in a time-division manner, and the drive of the image sensor of the camera head 11102 may be controlled in synchronization with the irradiation timing, so that images corresponding to each of R, G, and B are captured in a time-division manner. According to this method, a color image can be obtained without providing a color filter on the image sensor.
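The time-division RGB capture described above can be sketched as follows: since each frame is exposed under a single-color laser illuminant, the three monochrome frames are themselves the R, G, and B planes and only need to be stacked. A minimal Python/NumPy illustration; the function name is my own, not from the publication.

```python
import numpy as np

def merge_time_division_rgb(frames: list) -> np.ndarray:
    """Build a color image from three monochrome frames captured while
    the R, G, and B laser sources were fired one after another.

    Because each frame was exposed under a single-color illuminant, no
    color filter is needed on the sensor: the frames themselves are the
    R, G, and B planes of the output image."""
    r, g, b = frames
    return np.stack([r, g, b], axis=-1)

# Example with tiny 2x2 synthetic frames.
r = np.full((2, 2), 200, dtype=np.uint8)
g = np.full((2, 2), 120, dtype=np.uint8)
b = np.full((2, 2), 30, dtype=np.uint8)
rgb = merge_time_division_rgb([r, g, b])
assert rgb.shape == (2, 2, 3)
assert tuple(rgb[0, 0]) == (200, 120, 30)
```

In practice the three exposures are separated in time, so this simple stacking assumes a static scene or motion compensation between frames.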
  • the drive of the light source device 11203 may be controlled so as to change the intensity of the output light at predetermined time intervals.
  • by controlling the drive of the image sensor of the camera head 11102 in synchronization with the timing of the change in light intensity, acquiring images in a time-division manner, and synthesizing those images, a so-called high-dynamic-range image free of blocked-up shadows and blown-out highlights can be generated.
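The high-dynamic-range synthesis described above can be sketched in the same spirit: where the frame captured at high light intensity saturates, the co-registered frame captured at low intensity, scaled by the known intensity ratio, is used instead. A hedged Python/NumPy illustration; the threshold-based merge and all names are assumptions, not the publication's method.

```python
import numpy as np

def merge_hdr(short_exp: np.ndarray, long_exp: np.ndarray,
              ratio: float, saturation: int = 250) -> np.ndarray:
    """Merge two frames taken while the light intensity (or exposure)
    was switched at a known ratio.

    Where the long (bright) exposure is saturated, the short (dark)
    exposure scaled by `ratio` is used instead, recovering highlights
    without losing shadow detail."""
    long_f = long_exp.astype(np.float64)
    short_f = short_exp.astype(np.float64) * ratio
    return np.where(long_exp >= saturation, short_f, long_f)

# Example: 4x intensity ratio; one highlight pixel saturates.
long_exp = np.array([[40, 255]], dtype=np.uint8)
short_exp = np.array([[10, 90]], dtype=np.uint8)
hdr = merge_hdr(short_exp, long_exp, ratio=4.0)
assert hdr[0, 0] == 40.0    # shadow: long exposure kept
assert hdr[0, 1] == 360.0   # highlight: recovered from short exposure
```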
  • the light source device 11203 may be configured to be able to supply light in a predetermined wavelength band corresponding to special light observation.
  • in special light observation, for example, so-called narrow band imaging is performed, in which the wavelength dependence of light absorption in body tissue is utilized, and a predetermined tissue such as a blood vessel in the mucosal surface layer is photographed with high contrast by irradiating light in a narrower band than the irradiation light (that is, white light) used in normal observation.
  • alternatively, in special light observation, fluorescence observation may be performed, in which an image is obtained from the fluorescence generated by irradiation with excitation light.
  • in fluorescence observation, the body tissue can be irradiated with excitation light and the fluorescence from the body tissue observed (autofluorescence observation), or a reagent such as indocyanine green (ICG) can be locally injected into the body tissue and the tissue irradiated with excitation light corresponding to the fluorescence wavelength of the reagent, to obtain a fluorescence image.
  • the light source device 11203 may be configured to be capable of supplying narrow band light and / or excitation light corresponding to such special light observation.
  • FIG. 29 is a block diagram showing an example of the functional configuration of the camera head 11102 and the CCU 11201 shown in FIG. 28.
  • the camera head 11102 includes a lens unit 11401, an imaging unit 11402, a driving unit 11403, a communication unit 11404, and a camera head control unit 11405.
  • the CCU 11201 has a communication unit 11411, an image processing unit 11412, and a control unit 11413.
  • the camera head 11102 and the CCU 11201 are communicatively connected to each other by a transmission cable 11400.
  • the lens unit 11401 is an optical system provided at a connection portion with the lens barrel 11101.
  • the observation light taken in from the tip of the lens barrel 11101 is guided to the camera head 11102 and incident on the lens unit 11401.
  • the lens unit 11401 is configured by combining a plurality of lenses including a zoom lens and a focus lens.
  • the image sensor constituting the image pickup unit 11402 may be one (so-called single plate type) or a plurality (so-called multi-plate type).
  • each image pickup element may generate an image signal corresponding to each of RGB, and a color image may be obtained by synthesizing them.
  • the image pickup unit 11402 may be configured to have a pair of image pickup elements for acquiring image signals for the right eye and the left eye, respectively, corresponding to 3D (three-dimensional) display.
  • the 3D display enables the operator 11131 to more accurately grasp the depth of the biological tissue in the surgical site.
  • a plurality of lens units 11401 may be provided corresponding to each image pickup element.
  • the imaging unit 11402 does not necessarily have to be provided on the camera head 11102.
  • the imaging unit 11402 may be provided inside the lens barrel 11101 immediately after the objective lens.
  • the drive unit 11403 is composed of an actuator, and moves the zoom lens and the focus lens of the lens unit 11401 by a predetermined distance along the optical axis under the control of the camera head control unit 11405. As a result, the magnification and focus of the image captured by the imaging unit 11402 can be adjusted as appropriate.
  • the communication unit 11404 is composed of a communication device for transmitting and receiving various information to and from the CCU 11201.
  • the communication unit 11404 transmits the image signal obtained from the image pickup unit 11402 as RAW data to the CCU 11201 via the transmission cable 11400.
  • the communication unit 11404 receives a control signal for controlling the drive of the camera head 11102 from the CCU 11201 and supplies the control signal to the camera head control unit 11405.
  • the control signal contains information about the imaging conditions, such as information specifying the frame rate of the captured image, information specifying the exposure value at the time of imaging, and/or information specifying the magnification and focus of the captured image.
  • the imaging conditions such as the frame rate, exposure value, magnification, and focus may be appropriately specified by the user, or may be automatically set by the control unit 11413 of the CCU 11201 based on the acquired image signal.
  • in the latter case, the endoscope 11100 is equipped with a so-called AE (Auto Exposure) function, AF (Auto Focus) function, and AWB (Auto White Balance) function.
  • the camera head control unit 11405 controls the drive of the camera head 11102 based on the control signal from the CCU 11201 received via the communication unit 11404.
  • the communication unit 11411 is composed of a communication device for transmitting and receiving various information to and from the camera head 11102.
  • the communication unit 11411 receives an image signal transmitted from the camera head 11102 via the transmission cable 11400.
  • the communication unit 11411 transmits a control signal for controlling the drive of the camera head 11102 to the camera head 11102.
  • Image signals and control signals can be transmitted by telecommunications, optical communication, or the like.
  • the image processing unit 11412 performs various image processing on the image signal which is the RAW data transmitted from the camera head 11102.
  • the control unit 11413 performs various controls related to the imaging of the surgical site and the like by the endoscope 11100 and the display of the captured image obtained by the imaging of the surgical site and the like. For example, the control unit 11413 generates a control signal for controlling the drive of the camera head 11102.
  • the control unit 11413 causes the display device 11202 to display a captured image of the surgical site or the like based on the image signal processed by the image processing unit 11412.
  • the control unit 11413 may recognize various objects in the captured image by using various image recognition techniques. For example, by detecting the shape, color, and the like of the edges of objects included in the captured image, the control unit 11413 can recognize surgical tools such as forceps, specific body sites, bleeding, mist during the use of the energy treatment tool 11112, and the like.
  • the control unit 11413 may superimpose various kinds of surgical support information on the image of the surgical site by using the recognition results. By superimposing the surgical support information and presenting it to the surgeon 11131, the burden on the surgeon 11131 can be reduced and the surgeon 11131 can proceed with the surgery reliably.
  • the transmission cable 11400 that connects the camera head 11102 and the CCU 11201 is an electric signal cable that supports electric signal communication, an optical fiber that supports optical communication, or a composite cable thereof.
  • in the illustrated example, communication is performed by wire using the transmission cable 11400, but the communication between the camera head 11102 and the CCU 11201 may be performed wirelessly.
  • the technology according to the present disclosure can be applied to various products.
  • the technology according to the present disclosure may be realized as a device mounted on any kind of moving body, such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility device, an airplane, a drone, a ship, or a robot.
  • FIG. 30 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile control system to which the technology according to the present disclosure can be applied.
  • the vehicle control system 12000 includes a plurality of electronic control units connected via the communication network 12001.
  • the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, an outside information detection unit 12030, an in-vehicle information detection unit 12040, and an integrated control unit 12050.
  • a microcomputer 12051, an audio image output unit 12052, and an in-vehicle network I / F (Interface) 12053 are shown as a functional configuration of the integrated control unit 12050.
  • the drive system control unit 12010 controls the operation of the device related to the drive system of the vehicle according to various programs.
  • the drive system control unit 12010 functions as a control device for a driving force generator for generating the driving force of the vehicle, such as an internal combustion engine or a drive motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating a braking force for the vehicle, and the like.
  • the body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs.
  • the body system control unit 12020 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as headlamps, back lamps, brake lamps, blinkers or fog lamps.
  • radio waves transmitted from a portable device that substitutes for a key, or signals from various switches, can be input to the body system control unit 12020. The body system control unit 12020 receives the input of these radio waves or signals and controls the door lock device, power window device, lamps, and the like of the vehicle.
  • the vehicle outside information detection unit 12030 detects information outside the vehicle equipped with the vehicle control system 12000.
  • the imaging unit 12031 is connected to the vehicle exterior information detection unit 12030.
  • the vehicle outside information detection unit 12030 causes the image pickup unit 12031 to capture an image of the outside of the vehicle and receives the captured image.
  • the vehicle exterior information detection unit 12030 may perform object detection processing or distance detection processing for a person, a vehicle, an obstacle, a sign, characters on the road surface, or the like based on the received image.
  • the imaging unit 12031 is an optical sensor that receives light and outputs an electric signal according to the amount of the light received.
  • the image pickup unit 12031 can output an electric signal as an image or can output it as distance measurement information. Further, the light received by the imaging unit 12031 may be visible light or invisible light such as infrared light.
  • the in-vehicle information detection unit 12040 detects the in-vehicle information.
  • a driver state detection unit 12041 that detects the driver's state is connected to the vehicle interior information detection unit 12040.
  • the driver state detection unit 12041 includes, for example, a camera that images the driver, and based on the detection information input from the driver state detection unit 12041, the in-vehicle information detection unit 12040 may calculate the degree of fatigue or concentration of the driver, or may determine whether the driver is dozing.
  • the microcomputer 12051 can calculate control target values for the driving force generator, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040, and can output control commands to the drive system control unit 12010.
  • for example, the microcomputer 12051 can perform cooperative control for the purpose of realizing ADAS (Advanced Driver Assistance System) functions, including collision avoidance or impact mitigation of the vehicle, follow-up driving based on the inter-vehicle distance, vehicle speed maintenance driving, vehicle collision warning, lane departure warning, and the like.
  • the microcomputer 12051 can also perform cooperative control for the purpose of automatic driving, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generator, the steering mechanism, the braking device, and the like based on information around the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040.
  • the microcomputer 12051 can output a control command to the body system control unit 12020 based on information outside the vehicle acquired by the vehicle exterior information detection unit 12030.
  • for example, the microcomputer 12051 can perform cooperative control for the purpose of preventing glare, such as controlling the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030 and switching from high beam to low beam.
  • the audio image output unit 12052 transmits the output signal of at least one of the audio and the image to the output device capable of visually or audibly notifying the passenger or the outside of the vehicle of the information.
  • an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are exemplified as output devices.
  • the display unit 12062 may include, for example, at least one of an on-board display and a head-up display.
  • FIG. 31 is a diagram showing an example of the installation position of the imaging unit 12031.
  • the imaging unit 12031 includes imaging units 12101, 12102, 12103, 12104, and 12105.
  • the imaging units 12101, 12102, 12103, 12104, and 12105 are provided at positions such as, for example, the front nose, the side mirrors, the rear bumper, the back door, and the upper part of the windshield inside the vehicle cabin of the vehicle 12100.
  • the image pickup unit 12101 provided on the front nose and the image pickup section 12105 provided on the upper part of the windshield in the vehicle interior mainly acquire an image in front of the vehicle 12100.
  • the imaging units 12102 and 12103 provided in the side mirrors mainly acquire images of the side of the vehicle 12100.
  • the imaging unit 12104 provided on the rear bumper or the back door mainly acquires an image of the rear of the vehicle 12100.
  • the imaging unit 12105 provided on the upper part of the windshield in the vehicle interior is mainly used for detecting a preceding vehicle, a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
  • FIG. 31 shows an example of the photographing range of the imaging units 12101 to 12104.
  • the imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 as viewed from above can be obtained.
  • At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information.
  • at least one of the image pickup units 12101 to 12104 may be a stereo camera composed of a plurality of image pickup elements, or may be an image pickup device having pixels for phase difference detection.
  • the microcomputer 12051 can obtain the distance to each three-dimensional object within the imaging ranges 12111 to 12114, and the temporal change of this distance (the relative velocity with respect to the vehicle 12100), based on the distance information obtained from the imaging units 12101 to 12104. Further, the microcomputer 12051 can set in advance an inter-vehicle distance to be secured to the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, cooperative control can be performed for the purpose of automatic driving, in which the vehicle travels autonomously without depending on the driver's operation.
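The quantities mentioned above can be illustrated with a toy calculation: the relative velocity follows from two distance samples, and a simple proportional law on the gap error and relative velocity yields an accelerate/brake command. This Python sketch is purely illustrative; the function names, gains, and control law are my own assumptions, not the publication's.

```python
def relative_velocity(d0: float, d1: float, dt: float) -> float:
    """Relative velocity of a preceding object from two distance samples
    taken dt seconds apart (positive = pulling away)."""
    return (d1 - d0) / dt

def follow_command(distance: float, target_gap: float, rel_vel: float,
                   kp: float = 0.5, kv: float = 1.0) -> float:
    """Toy longitudinal controller: positive output requests
    acceleration, negative output requests braking, to hold
    `target_gap` to the preceding vehicle."""
    return kp * (distance - target_gap) + kv * rel_vel

# Preceding vehicle closes from 32 m to 30 m over 0.5 s.
v = relative_velocity(32.0, 30.0, 0.5)
assert v == -4.0            # closing at 4 m/s
cmd = follow_command(30.0, 30.0, v)
assert cmd < 0              # gap is at target but closing -> brake
```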
  • based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify three-dimensional object data relating to three-dimensional objects into two-wheeled vehicles, ordinary vehicles, large vehicles, pedestrians, utility poles, and other three-dimensional objects, extract them, and use them for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that are visible to the driver of the vehicle 12100 and obstacles that are difficult to see. The microcomputer 12051 then determines a collision risk indicating the degree of risk of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of collision, it can provide driving support for collision avoidance by outputting an alarm to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
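The collision-risk decision described above can be sketched with a time-to-collision (TTC) heuristic: risk grows as TTC drops, and an alarm is raised when the risk reaches the set value. TTC is my own stand-in for the risk metric, which the text leaves unspecified; a hedged Python illustration.

```python
def collision_risk(distance: float, closing_speed: float,
                   ttc_threshold: float = 2.0) -> float:
    """Collision risk from time-to-collision (TTC).

    Returns a value in [0, 1]: 0 when the obstacle is not being
    approached, rising toward 1 as TTC falls below `ttc_threshold`
    seconds."""
    if closing_speed <= 0.0:           # not closing in on the obstacle
        return 0.0
    ttc = distance / closing_speed
    return min(1.0, ttc_threshold / ttc) if ttc > 0 else 1.0

def should_warn(risk: float, set_value: float = 0.8) -> bool:
    """Raise a driver alarm when the risk reaches the set value."""
    return risk >= set_value

assert collision_risk(40.0, -5.0) == 0.0            # obstacle moving away
assert should_warn(collision_risk(8.0, 8.0))        # TTC = 1 s -> risk 1.0
assert not should_warn(collision_risk(100.0, 5.0))  # TTC = 20 s -> risk 0.1
```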
  • At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays.
  • the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian is present in the captured image of the imaging units 12101 to 12104.
  • such pedestrian recognition is performed, for example, by a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on the series of feature points indicating the outline of an object to determine whether or not it is a pedestrian.
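The pattern-matching stage described above can be illustrated with a toy sliding-window matcher: a template is compared against every position of the image, and a detection is reported only when the best score passes a threshold. A hedged Python/NumPy sketch; the SAD score and all names are assumptions, not the publication's algorithm.

```python
import numpy as np

def match_template(image: np.ndarray, template: np.ndarray,
                   threshold: float = 10.0):
    """Slide `template` over `image` and return the top-left position of
    the best match if its SAD score is under `threshold`, else None.

    This stands in for the pattern-matching stage run on the series of
    feature points outlining a candidate pedestrian."""
    th, tw = template.shape
    ih, iw = image.shape
    best_pos, best_sad = None, float("inf")
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            sad = np.abs(image[y:y + th, x:x + tw].astype(int)
                         - template.astype(int)).sum()
            if sad < best_sad:
                best_sad, best_pos = sad, (y, x)
    return best_pos if best_sad < threshold else None

# A tiny "pedestrian" template embedded in an otherwise empty IR frame.
template = np.array([[0, 9, 0],
                     [9, 9, 9],
                     [0, 9, 0]], dtype=np.uint8)
frame = np.zeros((8, 8), dtype=np.uint8)
frame[2:5, 3:6] = template
assert match_template(frame, template) == (2, 3)
assert match_template(np.zeros((8, 8), dtype=np.uint8), template) is None
```

The returned position is what the audio image output unit would use to place the emphasizing contour line around the recognized pedestrian.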
  • when the microcomputer 12051 determines that a pedestrian is present in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the audio image output unit 12052 controls the display unit 12062 so as to superimpose and display a rectangular contour line for emphasizing the recognized pedestrian. Further, the audio image output unit 12052 may control the display unit 12062 so as to display an icon or the like indicating a pedestrian at a desired position.
  • in this specification, the term "system" refers to an entire apparatus composed of a plurality of devices.
  • the present technology can also have the following configurations.
  • (1) An imaging device including: a photoelectric conversion unit that generates a pixel signal by photoelectric conversion according to the amount of incident light; a lens group composed of a plurality of lenses that focuses the incident light on a light receiving surface in which a plurality of the photoelectric conversion units are arranged in an array; a lowest-layer lens that is, of the lens group, the lens constituting the lowest layer with respect to the incidence direction of the incident light, and that is arranged in a fixed state; and color filters between the light receiving surface and the lowest-layer lens, wherein the colors of the color filters arranged on a plurality of adjacent photoelectric conversion units are the same.
  • (2) The imaging device according to (1), further including an on-chip lens for each photoelectric conversion unit.
  • (3) The imaging device according to (1), further including an on-chip lens for each set of four 2 × 2 photoelectric conversion units.
  • (4) The imaging device according to (1), further including an on-chip lens for each pair of photoelectric conversion units adjacent to each other in the up-down or left-right direction.
  • (5) The imaging device according to any one of (1) to (4), wherein the layer of the color filters includes a light-shielding portion formed of a light-shielding member.
  • (6) The imaging device according to (5), wherein the light-shielding portion is also provided between the photoelectric conversion units.
  • (7) The imaging device according to (5) or (6), wherein the line width of the light-shielding portion between color filters of different colors is greater than the line width of the light-shielding portion between color filters of the same color.
  • (8) The imaging device according to (5) or (6), wherein the light-shielding portion is provided between color filters of different colors.
  • (9) The imaging device according to (8), further including a light-shielding portion formed of a light-shielding member between the photoelectric conversion units.
  • (10) An electronic device including: an imaging device having a photoelectric conversion unit that generates a pixel signal by photoelectric conversion according to the amount of incident light, a lens group composed of a plurality of lenses that focuses the incident light on a light receiving surface in which a plurality of the photoelectric conversion units are arranged in an array, a lowest-layer lens that is, of the lens group, the lens constituting the lowest layer with respect to the incidence direction of the incident light, and that is arranged in a fixed state, and color filters between the light receiving surface and the lowest-layer lens, the colors of the color filters arranged on a plurality of adjacent photoelectric conversion units being the same; and a processing unit that processes a signal from the imaging device.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Solid State Image Pick-Up Elements (AREA)

Abstract

The present invention pertains to an imaging apparatus and an electronic device, configured so as to be capable of realizing reduced size or lower profile in the apparatus configuration and of suppressing flare or ghosting. The present invention comprises photoelectric conversion units for generating pixel signals by photoelectric conversion in accordance with a quantity of incident light, a lens group comprising a plurality of lenses for focusing incident light on a light-receiving surface on which a plurality of photoelectric conversion units are positioned in an array, a lowest-layer lens that is a lens constituting the lowest layer with respect to the incidence direction of the incident light from among the lens group and that is positioned in a fixed state, and color filters located between the light-receiving surface and the lowest-layer lens, the color filters positioned on a plurality of adjacent photoelectric conversion units being the same color. The present technique is applicable to an imaging device, for example.

Description

Imaging apparatus and electronic device

The present technology relates to an imaging apparatus and an electronic device, and for example, to an imaging apparatus and an electronic device that realize a smaller and lower-profile device configuration while suppressing the occurrence of flare and ghosting during imaging.

In recent years, solid-state image sensors used in camera-equipped mobile terminal devices, digital still cameras, and the like have advanced toward higher pixel counts, smaller sizes, and lower profiles.

As cameras gain more pixels and become smaller, the lens and the solid-state image sensor come closer together on the optical axis, and it has become common to arrange the infrared cut filter near the lens. For example, a technique has been proposed that realizes miniaturization of the solid-state image sensor by forming the lowest-layer lens of a lens group composed of a plurality of lenses on the solid-state image sensor.
Japanese Unexamined Patent Application Publication No. 2015-061193
However, although forming the lowest-layer lens on the solid-state image sensor contributes to a smaller, lower-profile device configuration, it brings the infrared cut filter closer to the lens; the resulting internal stray reflections of light cause flare and ghosting.
The present disclosure has been made in view of such circumstances, and in particular enables a solid-state image sensor to be made smaller and lower in profile while suppressing the occurrence of flare and ghosting.
An imaging apparatus according to one aspect of the present technology includes: photoelectric conversion units that generate pixel signals by photoelectric conversion according to the amount of incident light; a lens group, composed of a plurality of lenses, that focuses the incident light onto a light-receiving surface on which a plurality of the photoelectric conversion units are arranged in an array; a lowest-layer lens, which is the lens of the lens group constituting the lowest layer with respect to the incidence direction of the incident light and is arranged in a fixed state; and color filters between the lowest-layer lens and the light-receiving surface, the color filters arranged over a plurality of adjacent photoelectric conversion units being of the same color.
An electronic device according to one aspect of the present technology includes: an imaging apparatus having photoelectric conversion units that generate pixel signals by photoelectric conversion according to the amount of incident light, a lens group, composed of a plurality of lenses, that focuses the incident light onto a light-receiving surface on which a plurality of the photoelectric conversion units are arranged in an array, a lowest-layer lens, which is the lens of the lens group constituting the lowest layer with respect to the incidence direction of the incident light and is arranged in a fixed state, and color filters between the lowest-layer lens and the light-receiving surface, the color filters arranged over a plurality of adjacent photoelectric conversion units being of the same color; and a processing unit that processes signals from the imaging apparatus.
 本技術の一側面の撮像装置においては、入射光の光量に応じて光電変換により画素信号を生成する光電変換部と、複数の前記光電変換部がアレイ状に配置されている受光面に対して、前記入射光を合焦させる、複数のレンズからなるレンズ群と、前記レンズ群のうち、前記入射光の入射方向に対して最下位層を構成するレンズであり、固定された状態で配置されている最下位層レンズと、前記受光面との間にカラーフィルタとが備えられている。また隣接する複数の前記光電変換部上に配置されている前記カラーフィルタの色は同色とされている。 In the imaging device on one aspect of the present technology, with respect to a photoelectric conversion unit that generates a pixel signal by photoelectric conversion according to the amount of incident light and a light receiving surface in which a plurality of the photoelectric conversion units are arranged in an array. , A lens group composed of a plurality of lenses for focusing the incident light, and a lens forming the lowest layer of the lens group with respect to the incident direction of the incident light, and are arranged in a fixed state. A color filter is provided between the lowest layer lens and the light receiving surface. Further, the colors of the color filters arranged on the plurality of adjacent photoelectric conversion units are the same.
An electronic device according to one aspect of the present technology is configured to include the imaging apparatus described above.
The imaging apparatus and the electronic device may each be an independent device, or may be an internal block constituting a single device.
A diagram illustrating a configuration example of the imaging apparatus of the present disclosure.
A schematic external view of the integrated structure portion including the solid-state image sensor.
A diagram illustrating the substrate configuration of the integrated structure portion.
A diagram showing a circuit configuration example of the laminated substrate.
A diagram showing an equivalent circuit of a pixel.
A diagram showing the detailed structure of the laminated substrate.
A diagram explaining that ghosting and flare due to internal stray reflection do not occur.
A diagram explaining that ghosting and flare due to internal stray reflection do not occur.
A diagram illustrating another configuration example of the imaging apparatus.
A diagram showing a configuration example of pixels in the first embodiment.
A diagram showing a configuration example of pixels in the first embodiment.
A diagram showing a configuration example of pixels in the first embodiment.
A diagram showing a configuration example of pixels in the second embodiment.
A diagram showing a configuration example of pixels in the second embodiment.
A diagram showing a configuration example of pixels in the third embodiment.
A diagram showing a configuration example of pixels in the third embodiment.
A diagram showing a configuration example of pixels in the third embodiment.
A diagram showing a configuration example of pixels in the fourth embodiment.
A diagram showing a configuration example of pixels in the fourth embodiment.
A diagram showing a configuration example of pixels in the fifth embodiment.
A diagram showing a configuration example of pixels in the fifth embodiment.
A diagram showing a configuration example of pixels in the sixth embodiment.
A diagram showing a configuration example of pixels in the sixth embodiment.
A diagram showing a configuration example of pixels in the seventh embodiment.
A diagram showing a configuration example of pixels in the eighth embodiment.
A diagram showing a configuration example of pixels in the ninth embodiment.
A diagram showing the configuration of an example of an electronic device.
A diagram showing an example of the schematic configuration of an endoscopic surgery system.
A block diagram showing an example of the functional configuration of a camera head and a CCU.
A block diagram showing an example of the schematic configuration of a vehicle control system.
An explanatory diagram showing an example of the installation positions of a vehicle exterior information detection unit and an imaging unit.
Modes for carrying out the present technology (hereinafter referred to as embodiments) will be described below.
<Configuration example of imaging apparatus>
With reference to FIG. 1, a configuration example of the imaging apparatus according to the first embodiment of the present disclosure, which suppresses the occurrence of ghosting and flare while achieving a smaller, lower-profile device configuration, will be described. FIG. 1 is a side sectional view of the imaging apparatus.
The imaging apparatus 1 of FIG. 1 includes a solid-state image sensor 11, a glass substrate 12, an IRCF (infrared cut filter) 14, a lens group 16, a circuit board 17, an actuator 18, a connector 19, and a spacer 20.
The solid-state image sensor 11 is an image sensor such as a so-called CMOS (Complementary Metal Oxide Semiconductor) or CCD (Charge Coupled Device) sensor, and is fixed on the circuit board 17 in an electrically connected state. As will be described later with reference to FIG. 4, the solid-state image sensor 11 is composed of a plurality of pixels arranged in an array; pixel by pixel, it generates pixel signals according to the amount of incident light collected from above in the figure through the lens group 16, and outputs them as an image signal to the outside from the connector 19 via the circuit board 17.
A glass substrate 12 is provided on the upper surface of the solid-state image sensor 11 in FIG. 1, bonded to it with a transparent adhesive (GLUE) 13 whose refractive index is substantially the same as that of the glass substrate 12.
An IRCF 14, which cuts infrared light out of the incident light, is provided on the upper surface of the glass substrate 12 in FIG. 1, bonded with a transparent adhesive (GLUE) 15 whose refractive index is substantially the same as that of the glass substrate 12. The IRCF 14 is made of, for example, blue plate glass, and cuts (removes) infrared light.
That is, the solid-state image sensor 11, the glass substrate 12, and the IRCF 14 are laminated and bonded together by the transparent adhesives 13 and 15 into an integral structure, which is connected to the circuit board 17. Since the solid-state image sensor 11, the glass substrate 12, and the IRCF 14, enclosed by the dash-dotted line in the figure, are bonded and integrated by the adhesives 13 and 15 of substantially the same refractive index, this assembly is hereinafter also referred to simply as the integrated structure portion 10.
The IRCF 14 may be attached onto the glass substrate 12 after the solid-state image sensors 11 have been singulated in the manufacturing process; alternatively, a large-format IRCF 14 may be attached over the entire wafer-shaped glass substrate 12 carrying a plurality of solid-state image sensors 11 and then singulated into individual solid-state image sensors 11. Either method may be adopted.
A spacer 20 is arranged on the circuit board 17 so as to surround the whole of the integrally configured solid-state image sensor 11, glass substrate 12, and IRCF 14. An actuator 18 is provided on the spacer 20. The actuator 18 is cylindrical, houses inside the cylinder the lens group 16 formed by stacking a plurality of lenses, and drives it in the vertical direction in FIG. 1.
With this configuration, the actuator 18 moves the lens group 16 in the vertical direction in FIG. 1 (back and forth along the optical axis), thereby adjusting the focus so that an image of a subject (not shown, located above in the figure) is formed on the imaging surface of the solid-state image sensor 11 according to the distance to that subject, realizing autofocus.
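As background, the amount of lens travel such focusing requires can be estimated with the thin-lens equation, 1/f = 1/do + 1/di. The sketch below is purely illustrative and is not taken from the present disclosure; the focal length (4 mm) and subject distance (100 mm) are arbitrary example values.

```python
def image_distance(f_mm, subject_mm):
    # Thin-lens equation: 1/f = 1/do + 1/di  =>  di = f * do / (do - f)
    if subject_mm <= f_mm:
        raise ValueError("subject must lie beyond the focal length")
    return f_mm * subject_mm / (subject_mm - f_mm)

def required_extension(f_mm, subject_mm):
    # Extra lens-to-sensor distance relative to infinity focus (di -> f)
    return image_distance(f_mm, subject_mm) - f_mm

# A hypothetical 4 mm lens focusing on a subject 100 mm away:
print(round(required_extension(4.0, 100.0), 3))  # → 0.167 (mm)
```

An actuator like the one described above would move the lens group along the optical axis by roughly this amount as the subject approaches.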
<Schematic external view>
Next, the configuration of the integrated structure portion 10 will be described with reference to FIGS. 2 to 6. FIG. 2 shows a schematic external view of the integrated structure portion 10.
The integrated structure portion 10 shown in FIG. 2 is a semiconductor package in which the solid-state image sensor 11, a laminated substrate formed by stacking a lower substrate 25 and an upper substrate 26, is packaged.
A plurality of solder balls 29, which are back-surface electrodes for electrical connection to the circuit board 17 of FIG. 1, are formed on the lower substrate 25 of the laminated substrate constituting the solid-state image sensor 11.
R (red), G (green), or B (blue) color filters 27 and on-chip lenses 28 are formed on the upper surface of the upper substrate 26. The upper substrate 26 is also connected, in a cavityless structure, to the glass substrate 12 that protects the on-chip lenses 28, via the adhesive 13 made of a glass seal resin.
For example, as shown in A of FIG. 3, the upper substrate 26 carries a pixel region 21, in which pixel units performing photoelectric conversion are arranged two-dimensionally in an array, and a control circuit 22 that controls the pixel units, while a logic circuit 23 such as a signal processing circuit that processes the pixel signals output from the pixel units is formed on the lower substrate 25.
Alternatively, as shown in B of FIG. 3, only the pixel region 21 may be formed on the upper substrate 26, and the control circuit 22 and the logic circuit 23 may be formed on the lower substrate 25.
As described above, forming the logic circuit 23, or both the control circuit 22 and the logic circuit 23, on the lower substrate 25 separate from the upper substrate 26 carrying the pixel region 21, and stacking the two, makes the imaging apparatus 1 smaller than when the pixel region 21, the control circuit 22, and the logic circuit 23 are laid out in the planar direction on a single semiconductor substrate.
In the following description, the upper substrate 26, on which at least the pixel region 21 is formed, is referred to as the pixel sensor substrate 26, and the lower substrate 25, on which at least the logic circuit 23 is formed, is referred to as the logic substrate 25.
<Configuration example of laminated substrate>
FIG. 4 shows a configuration example of the solid-state image sensor 11.
The solid-state image sensor 11 includes a pixel array unit 33 in which pixels 32 are arranged in a two-dimensional array, a vertical drive circuit 34, column signal processing circuits 35, a horizontal drive circuit 36, an output circuit 37, a control circuit 38, and input/output terminals 39.
A pixel 32 has a photodiode as a photoelectric conversion element and a plurality of pixel transistors. A circuit configuration example of the pixel 32 will be described later with reference to FIG. 5.
The pixels 32 may also have a shared pixel structure. This pixel sharing structure is composed of a plurality of photodiodes, a plurality of transfer transistors, one shared floating diffusion (floating diffusion region), and one each of the other shared pixel transistors. That is, in shared pixels, the photodiodes and transfer transistors constituting a plurality of unit pixels share the remaining pixel transistors, one of each.
The control circuit 38 receives an input clock and data instructing the operation mode and the like, and outputs data such as internal information of the solid-state image sensor 11. That is, based on a vertical synchronization signal, a horizontal synchronization signal, and a master clock, the control circuit 38 generates clock signals and control signals that serve as references for the operation of the vertical drive circuit 34, the column signal processing circuits 35, the horizontal drive circuit 36, and so on. The control circuit 38 then outputs the generated clock signals and control signals to the vertical drive circuit 34, the column signal processing circuits 35, the horizontal drive circuit 36, and the like.
The vertical drive circuit 34 is formed by, for example, a shift register; it selects a predetermined pixel drive wiring 40, supplies the selected pixel drive wiring 40 with pulses for driving the pixels 32, and drives the pixels 32 row by row. That is, the vertical drive circuit 34 sequentially selects and scans the pixels 32 of the pixel array unit 33 row by row in the vertical direction, and supplies pixel signals, based on the signal charges generated in the photoelectric conversion units of the pixels 32 according to the amount of light received, to the column signal processing circuits 35 through vertical signal lines 41.
The column signal processing circuits 35 are arranged one per column of pixels 32, and perform signal processing such as noise removal, per pixel column, on the signals output from one row of pixels 32. For example, the column signal processing circuits 35 perform signal processing such as CDS (Correlated Double Sampling), which removes fixed-pattern noise peculiar to the pixels, and AD conversion.
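The offset-cancelling effect of the CDS step can be illustrated numerically. The following sketch is illustrative only (the offset and scene values are arbitrary and not from the present disclosure): each pixel's output is sampled once at the reset level and once at the signal level, and subtracting the two correlated samples removes the per-pixel fixed-pattern offset.

```python
# Hypothetical per-pixel fixed-pattern offsets (DN) and true scene signal.
offsets = [4.2, -1.7, 3.1, 0.6, -2.9, 5.0, -0.4, 1.8]
scene = [10.0, 10.0, 50.0, 50.0, 90.0, 90.0, 120.0, 120.0]

# Sample 1: output level just after reset (offset only).
reset_sample = list(offsets)
# Sample 2: output level after charge transfer (offset + photo signal).
signal_sample = [o + s for o, s in zip(offsets, scene)]

# Correlated double sampling: subtract the two samples, pixel by pixel.
cds_out = [sig - rst for sig, rst in zip(signal_sample, reset_sample)]
assert all(abs(c - s) < 1e-9 for c, s in zip(cds_out, scene))
```

Because both samples come from the same pixel within one readout, the offset term is identical in the two samples and cancels exactly, leaving only the photo-generated signal.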
The horizontal drive circuit 36 is formed by, for example, a shift register; by sequentially outputting horizontal scanning pulses, it selects the column signal processing circuits 35 in turn and causes each of them to output its pixel signal to a horizontal signal line 42.
The output circuit 37 performs signal processing on the signals sequentially supplied from the column signal processing circuits 35 through the horizontal signal line 42, and outputs them. The output circuit 37 may, for example, only perform buffering, or may perform black-level adjustment, column variation correction, various kinds of digital signal processing, and the like. The input/output terminals 39 exchange signals with the outside.
The solid-state image sensor 11 configured as described above is a CMOS image sensor of the so-called column AD type, in which a column signal processing circuit 35 performing CDS processing and AD conversion processing is arranged for each pixel column.
<Circuit configuration example of pixel>
FIG. 5 shows an equivalent circuit of the pixel 32.
The pixel 32 shown in FIG. 5 has a configuration that realizes an electronic global shutter function.
The pixel 32 has a photodiode 51 as a photoelectric conversion element, a first transfer transistor 52, a memory unit (MEM) 53, a second transfer transistor 54, an FD (floating diffusion region) 55, a reset transistor 56, an amplification transistor 57, a selection transistor 58, and a discharge transistor 59.
The photodiode 51 is a photoelectric conversion unit that generates and accumulates charge (signal charge) according to the amount of light received. The anode terminal of the photodiode 51 is grounded, and its cathode terminal is connected to the memory unit 53 via the first transfer transistor 52. The cathode terminal of the photodiode 51 is also connected to the discharge transistor 59, which discharges unnecessary charge.
When turned on by a transfer signal TRX, the first transfer transistor 52 reads out the charge generated by the photodiode 51 and transfers it to the memory unit 53. The memory unit 53 is a charge holding unit that temporarily holds the charge until it is transferred to the FD 55.
When turned on by a transfer signal TRG, the second transfer transistor 54 reads out the charge held in the memory unit 53 and transfers it to the FD 55.
The FD 55 is a charge holding unit that holds the charge read out from the memory unit 53 so that it can be read out as a signal. When the reset transistor 56 is turned on by a reset signal RST, the charge accumulated in the FD 55 is discharged to a constant voltage source VDD, resetting the potential of the FD 55.
The amplification transistor 57 outputs a pixel signal according to the potential of the FD 55. That is, the amplification transistor 57 forms a source follower circuit with a load MOS 60 serving as a constant current source, and a pixel signal indicating a level corresponding to the charge accumulated in the FD 55 is output from the amplification transistor 57 to the column signal processing circuit 35 (FIG. 4) via the selection transistor 58. The load MOS 60 is arranged, for example, in the column signal processing circuit 35.
The selection transistor 58 is turned on when the pixel 32 is selected by a selection signal SEL, and outputs the pixel signal of the pixel 32 to the column signal processing circuit 35 via the vertical signal line 41.
When turned on by a discharge signal OFG, the discharge transistor 59 discharges the unnecessary charge accumulated in the photodiode 51 to the constant voltage source VDD.
The transfer signals TRX and TRG, the reset signal RST, the discharge signal OFG, and the selection signal SEL are supplied from the vertical drive circuit 34 via the pixel drive wiring 40.
The operation of the pixel 32 will be briefly described.
First, before exposure starts, a High-level discharge signal OFG is supplied to the discharge transistor 59, turning it on; the charge accumulated in the photodiode 51 is discharged to the constant voltage source VDD, and the photodiodes 51 of all pixels are reset.
After the photodiodes 51 are reset, when the discharge transistor 59 is turned off by a Low-level discharge signal OFG, exposure starts in all pixels of the pixel array unit 33.
When a predetermined exposure time has elapsed, the first transfer transistor 52 is turned on by the transfer signal TRX in all pixels of the pixel array unit 33, and the charge accumulated in the photodiode 51 is transferred to the memory unit 53.
After the first transfer transistor 52 is turned off, the charge held in the memory unit 53 of each pixel 32 is read out to the column signal processing circuits 35 sequentially, row by row. In the read operation, the second transfer transistor 54 of the pixels 32 in the row being read is turned on by the transfer signal TRG, and the charge held in the memory unit 53 is transferred to the FD 55. Then, when the selection transistor 58 is turned on by the selection signal SEL, a signal indicating a level corresponding to the charge accumulated in the FD 55 is output from the amplification transistor 57 to the column signal processing circuit 35 via the selection transistor 58.
As described above, the pixel 32 having the pixel circuit of FIG. 5 can perform a global shutter operation (imaging) in which the exposure time is set identically for all pixels of the pixel array unit 33, the charge is temporarily held in the memory unit 53 after the exposure ends, and the charge is then read out from the memory units 53 sequentially, row by row.
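As an illustration only (a toy model, not part of the present disclosure), the sequence above — simultaneous reset and exposure of all pixels, simultaneous transfer to MEM, then row-by-row readout — can be sketched as:

```python
def global_shutter_frame(photon_flux, exposure):
    # 1) OFG: reset every photodiode, then expose all pixels simultaneously.
    pd = [[flux * exposure for flux in row] for row in photon_flux]
    # 2) TRX: transfer every photodiode's charge to its MEM at the same instant.
    mem = [row[:] for row in pd]
    # 3) TRG/SEL: read MEM out row by row; unlike a rolling shutter,
    #    the readout order no longer affects the captured charge.
    return [row[:] for row in mem]

flux = [[1.0, 2.0], [3.0, 4.0]]  # arbitrary per-pixel illumination
print(global_shutter_frame(flux, 0.5))  # → [[0.5, 1.0], [1.5, 2.0]]
```

Because every pixel's charge is latched into MEM at the same moment, the sequential readout that follows introduces no motion skew across rows.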
The circuit configuration of the pixel 32 is not limited to that shown in FIG. 5; for example, a circuit configuration that has no memory unit 53 and operates by the so-called rolling shutter method may also be adopted.
<Basic structure example of solid-state image sensor>
Next, the detailed structure of the solid-state image sensor 11 will be described with reference to FIG. 6. FIG. 6 is an enlarged cross-sectional view showing part of the solid-state image sensor 11.
In the logic substrate 25, a multilayer wiring layer 82 is formed on the upper side (pixel sensor substrate 26 side) of a semiconductor substrate 81 made of, for example, silicon (Si) (hereinafter referred to as the silicon substrate 81). The multilayer wiring layer 82 constitutes the control circuit 22 and the logic circuit 23 of FIG. 3.
The multilayer wiring layer 82 is composed of a plurality of wiring layers 83, including an uppermost wiring layer 83a closest to the pixel sensor substrate 26, intermediate wiring layers 83b, and a lowermost wiring layer 83c closest to the silicon substrate 81, and an interlayer insulating film 84 formed between the wiring layers 83.
The wiring layers 83 are formed using, for example, copper (Cu), aluminum (Al), or tungsten (W), and the interlayer insulating film 84 is formed of, for example, a silicon oxide film or a silicon nitride film. Each of the wiring layers 83 and the interlayer insulating film 84 may be formed of the same material in all layers, or two or more materials may be used depending on the layer.
A silicon through hole 85 penetrating the silicon substrate 81 is formed at a predetermined position in the silicon substrate 81, and a through silicon via (TSV) 88 is formed by embedding a connection conductor 87 in the inner wall of the silicon through hole 85 via an insulating film 86. The insulating film 86 can be formed of, for example, a SiO2 film or a SiN film.
In the through silicon via 88 shown in FIG. 6, the insulating film 86 and the connection conductor 87 are deposited along the inner wall surface, leaving the inside of the silicon through hole 85 hollow; however, depending on the inner diameter, the entire inside of the silicon through hole 85 may be filled with the connection conductor 87. In other words, the inside of the through hole may be filled with a conductor or may be partly hollow. The same applies to a through chip via (TCV) 105 and the like.
The connection conductor 87 of the through silicon via 88 is connected to a rewiring 90 formed on the lower surface side of the silicon substrate 81, and the rewiring 90 is connected to the solder ball 29. The connection conductor 87 and the rewiring 90 can be formed of, for example, copper (Cu), tungsten (W), or polysilicon.
 また、シリコン基板81の下面側には、はんだボール29が形成されている領域を除いて、再配線90と絶縁膜86を覆うように、ソルダマスク(ソルダレジスト)91が形成されている。 Further, on the lower surface side of the silicon substrate 81, a solder mask (solder resist) 91 is formed so as to cover the rewiring 90 and the insulating film 86, except for the region where the solder balls 29 are formed.
In the pixel sensor substrate 26, on the other hand, a multilayer wiring layer 102 is formed on the lower side (logic substrate 25 side) of a semiconductor substrate 101 made of silicon (Si) (hereinafter referred to as the silicon substrate 101). The multilayer wiring layer 102 constitutes the pixel circuits of the pixel region 21 of FIG. 3.
The multilayer wiring layer 102 is composed of a plurality of wiring layers 103, including an uppermost wiring layer 103a closest to the silicon substrate 101, an intermediate wiring layer 103b, and a lowermost wiring layer 103c closest to the logic substrate 25, and an interlayer insulating film 104 formed between the wiring layers 103.
The materials used for the wiring layers 103 and the interlayer insulating film 104 can be of the same kinds as those of the wiring layers 83 and the interlayer insulating film 84 described above. Likewise, as with the wiring layers 83 and the interlayer insulating film 84, the wiring layers 103 and the interlayer insulating film 104 may be formed using one material or two or more materials depending on the level.
In the example of FIG. 6, the multilayer wiring layer 102 of the pixel sensor substrate 26 is composed of three wiring layers 103, and the multilayer wiring layer 82 of the logic substrate 25 is composed of four wiring layers 83; however, the total number of wiring layers is not limited to these, and any number of layers may be used.
In the silicon substrate 101, a photodiode 51 formed by a PN junction is provided for each pixel 32.
Although not shown, a plurality of pixel transistors such as the first transfer transistor 52 and the second transfer transistor 54, a memory unit (MEM) 53, and the like are also formed in the multilayer wiring layer 102 and the silicon substrate 101.
At predetermined positions of the silicon substrate 101 where the color filter 27 and the on-chip lens 28 are not formed, a through silicon via 109 connected to the wiring layer 103a of the pixel sensor substrate 26 and a through chip via 105 connected to the wiring layer 83a of the logic substrate 25 are formed.
The through chip via 105 and the through silicon via 109 are connected by a connection wiring 106 formed on the upper surface of the silicon substrate 101. An insulating film 107 is formed between the silicon substrate 101 and each of the through silicon via 109 and the through chip via 105. Furthermore, the color filter 27 and the on-chip lens 28 are formed on the upper surface of the silicon substrate 101 via a planarization film (insulating film) 108.
As described above, the solid-state image sensor 11 shown in FIG. 2 has a stacked structure in which the multilayer wiring layer 82 side of the logic substrate 25 and the multilayer wiring layer 102 side of the pixel sensor substrate 26 are bonded together. In FIG. 6, the bonding surface between the multilayer wiring layer 82 side of the logic substrate 25 and the multilayer wiring layer 102 side of the pixel sensor substrate 26 is indicated by a broken line.
Further, in the solid-state image sensor 11 of the imaging device 1, the wiring layer 103 of the pixel sensor substrate 26 and the wiring layer 83 of the logic substrate 25 are connected by two through electrodes, namely the through silicon via 109 and the through chip via 105, and the wiring layer 83 of the logic substrate 25 and the solder balls (back surface electrodes) 29 are connected by the through silicon via 88 and the rewiring 90. This makes it possible to reduce the planar area of the imaging device 1 to a minimum.
Furthermore, by forming a cavityless structure between the solid-state image sensor 11 and the glass substrate 12 and bonding them with the adhesive 13, the height can also be reduced.
Therefore, according to the imaging device 1 shown in FIG. 1, a more miniaturized semiconductor device (semiconductor package) can be realized.
With the configuration of the imaging device 1 described above, the IRCF 14 is provided on the solid-state image sensor 11 and the glass substrate 12, so that the occurrence of flare and ghosts due to internal diffuse reflection of light can be suppressed.
That is, as shown in the left part of FIG. 7, when the IRCF 14 is arranged apart from the glass substrate (Glass) 12, near the middle between the lens (Lens) 16 and the glass substrate 12, the incident light is condensed as indicated by the solid line, enters the solid-state image sensor (CIS) 11 at a position F0 via the IRCF 14, the glass substrate 12, and the adhesive 13, and is then reflected at the position F0 as indicated by the dotted line, generating reflected light.
As indicated by the dotted line, part of the light reflected at the position F0 passes through, for example, the adhesive 13 and the glass substrate 12, is reflected by the back surface R1 (the lower surface in FIG. 7) of the IRCF 14 arranged apart from the glass substrate 12, and enters the solid-state image sensor 11 again at a position F1 via the glass substrate 12 and the adhesive 13.
Another part of the light reflected at the position F0 passes through, for example, the adhesive 13, the glass substrate 12, and the IRCF 14 arranged apart from the glass substrate 12, is reflected by the upper surface R2 (the upper surface in FIG. 7) of the IRCF 14, and enters the solid-state image sensor 11 again at a position F2 via the IRCF 14, the glass substrate 12, and the adhesive 13.
The light re-entering at these positions F1 and F2 causes flare and ghosts due to internal diffuse reflection. More specifically, as shown in the image P1 of FIG. 8, when the solid-state image sensor 11 images an illumination L, the reflections appear as flare and ghosts, as indicated by the reflected light R21 and R22.
In contrast, when the IRCF 14 is provided on the glass substrate 12 as in the imaging device 1 shown in the right part of FIG. 7, which corresponds to the configuration of the imaging device 1 of FIG. 1, the incident light indicated by the solid line is condensed, enters the solid-state image sensor 11 at the position F0 via the IRCF 14, the adhesive 15, the glass substrate 12, and the adhesive 13, and is then reflected as indicated by the dotted line. The reflected light passes through the adhesive 13, the glass substrate 12, the adhesive 15, and the IRCF 14 and is reflected by the surface R11 of the lowermost lens of the lens group 16; since the lens group 16 is sufficiently far from the IRCF 14, the light is reflected into a range where it is hardly received by the solid-state image sensor 11.
Here, the solid-state image sensor 11, the glass substrate 12, and the IRCF 14 surrounded by the dash-dotted line in the figure are bonded and integrated by the adhesives 13 and 15, which have substantially the same refractive index, forming an integrated structure 10. In the integrated structure 10, because the refractive indexes are unified, internal diffuse reflection occurring at boundaries between layers of different refractive indexes is suppressed, and re-entry of light at, for example, the positions F1 and F2 near the position F0 in the left part of FIG. 7 is suppressed.
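The benefit of matching refractive indexes across the bonded layers can be illustrated with the normal-incidence Fresnel reflectance formula. This is a minimal sketch by the editor; the index values below are illustrative assumptions, not values taken from the embodiment:

```python
def fresnel_reflectance(n1: float, n2: float) -> float:
    """Fraction of light reflected at normal incidence at a boundary
    between media with refractive indices n1 and n2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

# Mismatched boundary, e.g. glass (~1.5) against an air gap (~1.0):
r_air_gap = fresnel_reflectance(1.5, 1.0)   # 0.04 -> ~4% reflected per surface

# Nearly matched boundary, e.g. glass bonded with an index-matched adhesive:
r_matched = fresnel_reflectance(1.5, 1.51)  # ~1e-5 -> essentially no reflection
```

Each eliminated air gap thus removes a few percent of stray reflected light that could otherwise re-enter the sensor as flare or ghosts.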
As a result, when the imaging device 1 of FIG. 1 images the illumination L, it can capture an image in which flare and ghosts caused by internal diffuse reflection, such as the reflected light R21 and R22 in the image P1, are suppressed, as shown in the image P2 of FIG. 8.
Consequently, the configuration shown in FIG. 1 makes it possible to miniaturize the device and reduce its height, while suppressing the occurrence of flare and ghosts due to internal diffuse reflection.
The image P1 of FIG. 8 is an image of the illumination L captured at night by an imaging device 1 having the configuration of the left part of FIG. 7, and the image P2 is an image of the illumination L captured at night by the imaging device 1 (of FIG. 1) having the configuration of the right part of FIG. 7.
In the above description, autofocus is realized by moving the lens group 16 vertically in FIG. 1 with the actuator 18 to adjust the focal length according to the distance to the subject; however, the actuator 18 may be omitted and the focal length of the lens group 16 left unadjusted, so that the lens group functions as a so-called fixed-focus lens.
<Other configurations of the imaging device>
In the imaging device 1 shown in FIG. 1, the IRCF 14 is attached onto the glass substrate 12 bonded to the imaging-surface side of the solid-state image sensor 11. In addition, the lowermost lens of the lens group 16 may be provided on the IRCF 14.
FIG. 9 shows a configuration example of the imaging device 1 in which, among the plurality of lenses constituting the lens group 16 of the imaging device 1 in FIG. 1, the lens that is lowermost with respect to the light incidence direction is separated from the lens group 16 and provided on the IRCF 14. In FIG. 9, components having basically the same functions as those in FIG. 1 are denoted by the same reference numerals, and their description is omitted as appropriate.
That is, the imaging device 1 of FIG. 9 differs from the imaging device 1 of FIG. 1 in that, on the upper surface of the IRCF 14 in the figure, the lens 131, which is the lowermost of the plurality of lenses constituting the lens group 16 with respect to the light incidence direction, is provided separately from the lens group 16. Although the lens group 16 of FIG. 9 is denoted by the same reference numeral as the lens group 16 of FIG. 1, it strictly differs from the lens group 16 of FIG. 1 in that it does not include the lowermost lens 131.
With the configuration of the imaging device 1 as shown in FIG. 9, the IRCF 14 is provided on the glass substrate 12 provided on the solid-state image sensor 11, and the lowermost lens 131 of the lens group 16 is further provided on the IRCF 14, so that the occurrence of flare and ghosts due to internal diffuse reflection of light can be suppressed.
Although the imaging device 1 shown in FIGS. 1 and 9 includes the glass substrate 12, the IRCF 14, the adhesive 15, and the like, the present technology can also be applied to configurations that do not include them.
In the following description, the imaging device 1 including the lens 131 shown in FIG. 9 is taken as an example. A more detailed configuration of the semiconductor package in which the solid-state image sensor 11 shown in FIG. 2 is packaged will also be described, using enlarged and simplified drawings of the parts necessary for the description.
<First embodiment>
The configuration of the pixel 32a in the first embodiment will be described with reference to FIGS. 10 and 11. FIG. 10 is an enlarged cross-sectional view of a part of the integrated structure 10 shown in FIG. 9, and FIG. 11 is a plan view of the pixels 32a viewed from above (the light-receiving surface side).
Referring to the integrated structure 10 shown in FIG. 10, a lens 131, an adhesive 13, an on-chip lens 28a, a planarization film 202, a color filter 27, and a photodiode 51 are stacked from the top of the figure. In FIG. 10, the glass substrate 12, the IRCF 14, the adhesive 15, and the like are not shown.
A photodiode 51 is formed in the pixel sensor substrate 26, and a color filter 27 is stacked on the photodiode 51. An inter-pixel light-shielding portion 201 is formed between the color filters 27; it is provided to block light so that it does not leak into adjacent pixels (photodiodes 51), and can be formed of a light-shielding member such as a metal.
As shown in FIG. 11, the same color is assigned to each 2×2 group of four pixels (four photodiodes 51). FIG. 11 shows 4×4 = 16 pixels. Of these, green color filters 27 are arranged on the 2×2 group of four pixels located in the upper left of the figure, and blue color filters 27 are arranged on the 2×2 group of four pixels located in the upper right of the figure.
Red color filters 27 are arranged on the 2×2 group of four pixels located in the lower left of the figure, and green color filters 27 are arranged on the 2×2 group of four pixels located in the lower right of the figure. The color arrangement of the color filters 27 is a Bayer arrangement.
An arrangement other than the Bayer arrangement can also be applied. The colors of the color filters 27 may also be cyan (Cy), magenta (Mg), yellow (Ye), or the like, and white (transparent) may be included. The same applies to the following embodiments.
The on-chip lens 28a is provided for each photodiode 51.
By giving the same color to the 2×2 group of four pixels in this way, it becomes possible, for example, to vary the accumulation time of each of the four pixels and to select, from among the four pixels, the pixel from which the signal is read out according to the amount of light. This makes it possible to appropriately select a pixel whose charge has been accumulated for an accumulation time suited to the amount of light, and the dynamic range can thereby be expanded.
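The readout selection described above can be sketched as follows. This is an illustrative sketch by the editor, not the circuit of the embodiment; the saturation level, exposure times, and normalization are assumed values:

```python
def select_signal(samples):
    """Given same-color pixel readings taken with different accumulation
    times, pick the longest-exposure sample that is not saturated and
    normalize it to a common exposure scale.

    samples: list of (raw_value, exposure_ms) tuples.
    """
    FULL_WELL = 1023          # illustrative 10-bit saturation level
    REFERENCE_EXPOSURE = 8.0  # normalize to the longest exposure (ms)

    # Try exposures from longest to shortest; longer exposures have the
    # best signal-to-noise ratio as long as they have not clipped.
    for value, exposure_ms in sorted(samples, key=lambda s: s[1], reverse=True):
        if value < FULL_WELL:
            return value * (REFERENCE_EXPOSURE / exposure_ms)

    # All samples saturated: fall back to the shortest exposure.
    value, exposure_ms = min(samples, key=lambda s: s[1])
    return value * (REFERENCE_EXPOSURE / exposure_ms)

# Bright scene: the 8 ms and 4 ms samples clip, so the 2 ms sample is used.
bright = select_signal([(1023, 8.0), (1023, 4.0), (800, 2.0), (400, 1.0)])
# bright == 800 * (8.0 / 2.0) == 3200.0
```

Dim scenes simply return the longest (8 ms) sample unscaled, so the output range spans far beyond the 10-bit raw range, which is the dynamic-range expansion described above.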
When the 2×2 group of four pixels has the same color in this way, a sensitivity difference among the four pixels may degrade image quality. For example, when light is incident on the 2×2 group of four pixels from an oblique direction, a sensitivity difference may arise among the four pixels, and the image quality may deteriorate.
Refer to FIG. 10, in which the incident light is indicated by an arrow. Let the angle of incidence of the incident light be θ; as shown in FIG. 10, θ is, for example, the angle with respect to the normal to the surface of the planarization film 202. Light incident at the angle θ passes through the lens 131 and enters the on-chip lens 28a at an angle smaller than θ. The light entering the on-chip lens 28a is thus incident at an angle close to vertical, or at least at an angle smaller than the original incidence angle θ.
Further, since the light incident on the imaging device 1 (FIG. 9) passes through the lens group 16 before reaching the lens 131, even light incident from an oblique direction is condensed by the lens group 16 and corrected to an angle closer to vertical than its original incidence angle. Therefore, the light reaching the lens 131 has already been corrected to an angle θ close to vertical by the time it enters the lens 131. Furthermore, as described above, the angle is further corrected by passing through the lens 131, and the light is further condensed by the on-chip lens 28a, so that light converted into nearly vertical light enters the photodiode 51.
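The bending of a ray toward the normal when it enters a denser medium follows Snell's law, which is why the angle inside the lens stack is smaller than the external incidence angle θ. A minimal sketch, assuming an illustrative lens-material index of about 1.5 (not a value from the description):

```python
import math

def refracted_angle_deg(theta_in_deg: float, n1: float, n2: float) -> float:
    """Snell's law, n1*sin(theta1) = n2*sin(theta2), solved for theta2."""
    s = n1 * math.sin(math.radians(theta_in_deg)) / n2
    return math.degrees(math.asin(s))

# A ray arriving from air (n ~1.0) at 30 degrees enters a lens material
# (illustrative n ~1.5) at roughly 19.5 degrees, closer to the normal.
angle_inside = refracted_angle_deg(30.0, 1.0, 1.5)
```

Each refracting surface in the path (lens group 16, lens 131, on-chip lens 28a) repeats this effect, which is how the ray reaching the photodiode 51 ends up nearly vertical.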
Therefore, according to the present technology, light leakage into adjacent photodiodes 51 can be reduced, and thus crosstalk due to light leakage into adjacent pixels can be reduced. Accordingly, as described with reference to FIGS. 10 and 11, even when the 2×2 group of four pixels has the same color, the occurrence of sensitivity differences among same-color pixels can be reduced, and deterioration of image quality can be prevented.
Since this structure reduces crosstalk, the width of the inter-pixel light-shielding portion 201 provided to reduce crosstalk may be narrowed. Narrowing the inter-pixel light-shielding portion 201 makes it possible to form a larger opening for the photodiode 51, improving sensitivity.
In addition, as shown in FIG. 12, the correction amount for pupil correction can be reduced.
Since light enters the imaging device 1 (lens group 16) at various angles with respect to the imaging surface, if the pixels 32a at the center of the angle of view and the pixels 32a at the edge of the angle of view had the same structure, the light could not be condensed efficiently, and a sensitivity difference would arise between them. To keep the sensitivity uniform between the pixels 32a at the center and at the edge of the angle of view, a technique called pupil correction is used: for example, at the center of the imaging surface (the center of the angle of view), the optical axis of the lens group 16 is aligned with the opening of the photodiode 51, and toward the edge of the angle of view, the position of the photodiode 51 is shifted to match the direction of the chief ray.
FIG. 12 shows the structure of a pixel 32a at the edge of the angle of view, after pupil correction has been applied. The structure of the pixel 32a shown in FIG. 12 is the same as that of the pixel 32a shown in FIG. 11, except that, as a result of pupil correction, the on-chip lens 28a and the color filter 27 are shifted toward the center of the angle of view.
For the pixel 32a shown in FIG. 12, the center of the angle of view is on the right side of the figure, and the on-chip lens 28a and the color filter 27 are shifted in the direction approaching that center. The amount of this shift is denoted as the shift amount H1.
In the present embodiment, as described above, the lens group 16 and the lens 131 provide a structure in which light converted into nearly vertical light enters the photodiode 51, and this effect is also obtained at the edge of the angle of view. Therefore, the shift amount H1 may be smaller than the shift amount required when the present technology is not applied. Even if the shift amount H1 is 0, in other words even if pupil correction is not performed, the imaging device 1 to which the present technology is applied can reduce crosstalk at the edge of the angle of view compared with an imaging device to which the present technology is not applied.
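A common first-order model relates the pupil-correction shift to the chief-ray angle and the height of the optical stack above the photodiode; a smaller chief-ray angle at the edge of the angle of view therefore needs a smaller shift H1. This sketch and its stack-height value are editorial assumptions for illustration, not taken from the description:

```python
import math

def ocl_shift_um(stack_height_um: float, chief_ray_angle_deg: float) -> float:
    """First-order on-chip-lens shift toward the image center:
    shift = stack height * tan(chief ray angle)."""
    return stack_height_um * math.tan(math.radians(chief_ray_angle_deg))

# At the image center the chief ray is vertical, so no shift is needed.
center_shift = ocl_shift_um(3.0, 0.0)       # 0.0 um

# At the image edge, reducing the chief-ray angle (as lens 131 does)
# directly reduces the required shift H1.
edge_shift_steep   = ocl_shift_um(3.0, 25.0)  # ~1.40 um
edge_shift_shallow = ocl_shift_um(3.0, 10.0)  # ~0.53 um
```

In the limit of a fully verticalized ray the required shift goes to zero, consistent with the statement that H1 may even be set to 0.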
According to the first embodiment, color mixing can be prevented, sensitivity differences among same-color pixels can be improved, and deterioration of image quality can be prevented.
<Configuration of the second pixel>
FIG. 13 is a diagram showing a configuration example of the pixel 32b according to the second embodiment. The same parts as those of the pixel 32a in the first embodiment shown in FIG. 10 are denoted by the same reference numerals, and their description is omitted as appropriate.
The pixel 32b shown in FIG. 13 has the same configuration as the pixel 32a in the first embodiment, except that a trench 221b is added.
FIG. 13 shows a cross-sectional configuration example of the pixel 32b in the second embodiment; in plan view, the configuration is the same as that shown in FIG. 11 for the pixel 32a in the first embodiment: the 2×2 group of four pixels 32b has the same color, and an on-chip lens 28a is provided for each pixel 32b.
The trench 221b is formed between the photodiodes 51. The inside of the trench 221b may be hollow or may be filled with metal. When the trench 221b is filled with metal, it can be formed integrally with the inter-pixel light-shielding portion 201; FIG. 13 shows this integrated configuration.
The trench 221b is provided between the photodiodes 51 to prevent light from leaking in from the photodiodes 51 of adjacent pixels 32b and to electrically isolate the photodiodes 51. Providing the trench 221b in this way can further reduce crosstalk between the pixels 32b.
Pupil correction can also be applied when such a trench 221b is provided. FIG. 14 shows the structure of a pixel 32b at the edge of the angle of view, after pupil correction has been applied. The structure of the pixel 32b shown in FIG. 14 is the same as that of the pixel 32b shown in FIG. 13, except that, as a result of pupil correction, the on-chip lens 28a and the color filter 27 are shifted toward the center of the angle of view.
In the present embodiment, as described above, the lens group 16 and the lens 131 provide a structure in which light converted into nearly vertical light enters the photodiode 51. Therefore, also for the pixel 32b shown in FIG. 14, the shift amount H1 may be smaller than the shift amount required when the present technology is not applied.
According to the second embodiment, color mixing can be prevented, sensitivity differences among same-color pixels can be improved, and deterioration of image quality can be prevented.
<Configuration of the third pixel>
FIG. 15 is a diagram showing a cross-sectional configuration example of the pixel 32c in the third embodiment, and FIG. 16 is a diagram showing a planar configuration example. The same parts as those of the pixel 32a in the first embodiment shown in FIGS. 10 and 11 are denoted by the same reference numerals, and their description is omitted as appropriate.
The pixel 32c in the third embodiment shown in FIGS. 15 and 16 differs from the pixel 32a in the first embodiment shown in FIGS. 10 and 11 in that, whereas the latter has an on-chip lens 28 for each photodiode 51, the 2×2 group of four pixels 32c shares one on-chip lens 28c; the other points are the same.
For the pixels 32c in the third embodiment, the same color is assigned to each 2×2 group of four pixels 32c (four photodiodes 51). FIG. 16 shows 4×4 = 16 pixels. Of these, green color filters 27 are arranged on the 2×2 photodiodes 51 located in the upper left of the figure, and blue color filters 27 are arranged on the 2×2 photodiodes 51 located in the upper right of the figure.
Red color filters 27 are arranged on the 2×2 photodiodes 51 located in the lower left of the figure, and green color filters 27 are arranged on the 2×2 photodiodes 51 located in the lower right of the figure. The color arrangement of the color filters 27 is a Bayer arrangement. This configuration is the same as in the first and second embodiments.
 オンチップレンズ28Cは、2×2の4個のフォトダイオード51毎に設けられている。すなわち、図中左上部に位置する緑色のカラーフィルタ27が配置されている2×2のフォトダイオード51上に1つのオンチップレンズ28cが積層されている。また図中左上部に位置する青色のカラーフィルタ27が配置されている2×2のフォトダイオード51上に1つのオンチップレンズ28cが積層されている。 The on-chip lens 28C is provided for each of four 2 × 2 photodiodes 51. That is, one on-chip lens 28c is laminated on the 2 × 2 photodiode 51 in which the green color filter 27 located in the upper left portion of the drawing is arranged. Further, one on-chip lens 28c is laminated on a 2 × 2 photodiode 51 in which a blue color filter 27 located in the upper left portion of the figure is arranged.
 Similarly, one on-chip lens 28c is laminated on the 2×2 photodiodes 51 on which the red color filters 27 located in the lower left portion of the figure are arranged, and one on-chip lens 28c is laminated on the 2×2 photodiodes 51 on which the green color filters 27 located in the lower right portion are arranged.
 Because the on-chip lens 28c is shared by four pixels of the same color in this way, it becomes possible, for example, to vary the accumulation time of each of the four 2×2 pixels and to select, according to the amount of light, which of the four pixels the signal is taken from. As a result, a pixel whose charge was accumulated for a time suited to the amount of light can be selected appropriately, and the dynamic range can be expanded.
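The selection scheme described above can be sketched as follows. This is a hypothetical illustration under assumed numbers (the saturation level, the exposure ratios, and the fallback rule are not specified by the patent): each of the four same-color pixels is read out with a different accumulation time, and the signal is taken from the longest-exposed pixel that is not saturated.

```python
# Hypothetical sketch of per-pixel exposure selection for dynamic-range
# expansion. FULL_WELL and the sample values are assumed, not from the patent.

FULL_WELL = 1000  # assumed saturation level in digital numbers

def select_signal(samples):
    """samples: list of (accumulation_time, value), one per same-color pixel.
    Returns the chosen value normalized to a common exposure."""
    usable = [(t, v) for t, v in samples if v < FULL_WELL]
    if not usable:                       # all saturated: fall back to shortest
        t, v = min(samples, key=lambda s: s[0])
    else:                                # longest unsaturated exposure: best SNR
        t, v = max(usable, key=lambda s: s[0])
    return v / t                         # signal per unit accumulation time

# Bright scene: the two longest exposures saturate; the signal is taken
# from the longest exposure that still carries unclipped information.
print(select_signal([(8, 1000), (4, 1000), (2, 900), (1, 450)]))
```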
 Also in the pixel 32c of the third embodiment, as in the first embodiment described with reference to FIG. 10, light leakage into the adjacent photodiodes 51 can be reduced, so that crosstalk caused by light leaking into adjacent pixels can be reduced. Therefore, as described with reference to FIGS. 15 and 16, even in the configuration in which the four 2×2 pixels are of the same color and share one on-chip lens 28c, sensitivity differences between pixels of the same color can be suppressed, and deterioration of image quality can be prevented.
 Further, because the structure itself reduces crosstalk, the width of the inter-pixel light-shielding portion 201, which is provided to reduce crosstalk, may be narrowed. Narrowing the inter-pixel light-shielding portion 201 allows the photodiodes 51 to be formed larger, improving sensitivity.
 Pupil correction can also be applied to the pixel 32c of the third embodiment. FIG. 17 shows the structure of a pixel 32c at the pixel-array edge to which pupil correction has been applied. The structure of the pixel 32c shown in FIG. 17 is the same as that shown in FIG. 15, except that, as a result of pupil correction, the on-chip lens 28c and the color filter 27 are shifted toward the center of the angle of view.
 In the present embodiment, as described above, the lens group 16 and the lens 131 convert the incident light so that light close to vertical enters the photodiodes 51. Therefore, also in the pixel 32c shown in FIG. 17, the shift amount H1 may be smaller than the shift amount required when the present technology is not applied.
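The effect of pupil correction on the shift amount H1 can be sketched as follows. The linear shift model and the coefficients are assumptions made for illustration; the patent does not specify how H1 is computed, only that it can be smaller when the incident light is closer to vertical.

```python
# Hypothetical sketch: pupil correction shifts the on-chip lens and color
# filter toward the center of the angle of view by an amount that grows
# with image height. The coefficients below are assumed, not from the patent.

def lens_shift(image_height, coeff):
    """Shift toward the image center, modeled as proportional to image height."""
    return coeff * image_height

conventional = lens_shift(1.0, 0.10)  # assumed coefficient without the technology
with_tech = lens_shift(1.0, 0.04)     # near-vertical incidence allows smaller H1
print(with_tech < conventional)
```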
 According to the third embodiment, color mixing can be prevented, sensitivity differences between pixels of the same color can be improved, and deterioration of image quality can be prevented.
 <Structure of 4th pixel>
 FIG. 18 is a diagram showing a configuration example of the pixel 32d according to the fourth embodiment. The same parts as in the pixel 32c of the third embodiment shown in FIG. 15 are denoted by the same reference numerals, and their description is omitted as appropriate.
 The pixel 32d shown in FIG. 18 has the same configuration as the pixel 32c of the third embodiment, except that a trench 221d is added.
 Although FIG. 18 shows a cross-sectional configuration example of the pixel 32d of the fourth embodiment, its planar configuration is the same as that of the pixel 32c of the third embodiment shown in FIG. 16: the four 2×2 pixels 32d are of the same color, and one on-chip lens 28c is laminated over them.
 The trench 221d is formed between the photodiodes 51. The inside of the trench 221d may be hollow or may be filled with metal. When the trench 221d is filled with metal, it can be integrated with the inter-pixel light-shielding portion 201; FIG. 18 illustrates the integrated configuration.
 The trench 221d is provided between the photodiodes 51 to prevent light from leaking in from adjacent photodiodes 51 and to electrically isolate the photodiodes 51. Providing the trench 221d in this way further reduces crosstalk between the pixels 32d.
 Pupil correction can also be applied when such a trench 221d is provided. FIG. 19 shows the structure of a pixel 32d at the pixel-array edge to which pupil correction has been applied. The structure of the pixel 32d shown in FIG. 19 is the same as that shown in FIG. 18, except that, as a result of pupil correction, the on-chip lens 28c and the color filter 27 are shifted toward the center of the angle of view.
 In the present embodiment, as described above, the lens group 16 and the lens 131 convert the incident light so that light close to vertical enters the photodiodes 51. Therefore, also in the pixel 32d shown in FIG. 19, the shift amount H1 may be smaller than the shift amount required when the present technology is not applied.
 According to the fourth embodiment, color mixing can be prevented, sensitivity differences between pixels of the same color can be improved, and deterioration of image quality can be prevented.
 <Structure of 5th pixel>
 FIG. 20 is a diagram showing a cross-sectional configuration example of the pixel 32e according to the fifth embodiment, and FIG. 21 is a diagram showing a planar configuration example. The same parts as in the pixel 32c of the third embodiment shown in FIGS. 15 and 16 are denoted by the same reference numerals, and their description is omitted as appropriate.
 The pixel 32e shown in FIG. 20 has the same configuration as the pixel 32c of the third embodiment, except that the line width of the inter-pixel light-shielding portion 201 differs from that of the inter-pixel light-shielding portion 201 of the pixel 32c.
 The inter-pixel light-shielding portion 201 of the pixel 32e shown in FIGS. 20 and 21 is composed of an inter-pixel light-shielding portion 201e-1 and an inter-pixel light-shielding portion 201e-2 having different line widths. The inter-pixel light-shielding portion 201e-1 is formed wider than the inter-pixel light-shielding portion 201e-2. The inter-pixel light-shielding portion 201e-1 has, for example, about the same width as the inter-pixel light-shielding portion 201 of the pixel 32c of the third embodiment.
 The inter-pixel light-shielding portion 201e-1 is a light-shielding portion surrounding the outer periphery of the four 2×2 pixels 32e under one on-chip lens 28c. In other words, the inter-pixel light-shielding portion 201e-1 is provided at portions that shield light between pixels on which different colors are arranged. If light leaks into an adjacent pixel across such a boundary, light of a different color leaks in; to prevent this, the inter-pixel light-shielding portion 201e-1 is formed with a line width large enough that such leakage has no effect.
 The inter-pixel light-shielding portion 201e-2 is a light-shielding portion formed between the four 2×2 pixels 32e under one on-chip lens 28c. In other words, the inter-pixel light-shielding portion 201e-2 is provided at portions that shield light between pixels on which the same color is arranged. If light leaks into an adjacent pixel across such a boundary, light of the same color leaks in, which is considered to have less influence than the leakage of light of a different color described above; therefore, the inter-pixel light-shielding portion 201e-2 may be formed with a narrower line width than the inter-pixel light-shielding portion 201e-1.
 As described above, the line width of the inter-pixel light-shielding portion 201e-1 is formed wider than that of the inter-pixel light-shielding portion 201e-2. In this way, the line width of the inter-pixel light-shielding portion 201 may vary depending on where it is formed.
 In general, narrowing the line width of a light-shielding portion provided between pixels worsens crosstalk. According to the present technology, as described above, the incident angle of the light entering the photodiodes 51 is close to vertical, so the structure itself reduces crosstalk. Therefore, even if the inter-pixel light-shielding portion 201e-2 is formed narrow, crosstalk between the pixels 32e can be suppressed without deterioration.
 The line width of the inter-pixel light-shielding portion 201e-1 can also be made narrower than the line width of the inter-pixel light-shielding portion provided in pixels to which the present technology is not applied.
 Here, because light leakage between pixels on which different colors are arranged has a large influence, an example has been shown in which the inter-pixel light-shielding portion 201e-1 is formed wide enough to sufficiently suppress such leakage (to suppress crosstalk). However, since the present technology provides a structure that reduces crosstalk, the width of the inter-pixel light-shielding portion 201e-1 may also be narrowed to about the same width as the inter-pixel light-shielding portion 201e-2.
 Narrowing the line width of the inter-pixel light-shielding portion 201e-2 widens the aperture area of the photodiodes 51. Therefore, narrowing the line width of the inter-pixel light-shielding portion 201e-2 can also improve the sensitivity of the pixel 32e.
 Although not shown, pupil correction may also be applied to the pixel 32e of the fifth embodiment, with the on-chip lens 28c and the color filter 27 shifted toward the center of the angle of view at the pixel-array edge.
 <Structure of 6th pixel>
 FIG. 22 is a diagram showing a cross-sectional configuration example of the pixel 32f according to the sixth embodiment, and FIG. 23 is a diagram showing a planar configuration example. The same parts as in the pixel 32c of the third embodiment shown in FIGS. 15 and 16 are denoted by the same reference numerals, and their description is omitted as appropriate.
 The pixel 32f of the sixth embodiment differs from the pixel 32c of the third embodiment in that no inter-pixel light-shielding portion is formed between the four 2×2 pixels 32f under one on-chip lens 28c; it is otherwise the same.
 Comparing the pixel 32f of the sixth embodiment with the pixel 32c of the third embodiment (FIG. 15), the pixel 32f lacks the inter-pixel light-shielding portion 201 that was provided between the pixels 32c arranged under color filters 27 of the same color.
 Compared with the pixel 32e of the fifth embodiment shown in FIG. 20, the pixel 32f of the sixth embodiment lacks the inter-pixel light-shielding portion 201e-2 that the pixel 32e has.
 The pixel 32e of the fifth embodiment is an embodiment in which the line width of the inter-pixel light-shielding portion 201e-2 is formed narrow, and crosstalk can be suppressed even so. The pixel 32f of the sixth embodiment goes a step further and shows the case where the line width of the inter-pixel light-shielding portion 201e-2 is reduced to zero, that is, it is not formed at all.
 The pixel 32f of the sixth embodiment has an inter-pixel light-shielding portion 201f surrounding the outer periphery of the four 2×2 pixels 32f under one on-chip lens 28c. In other words, the inter-pixel light-shielding portion 201f is provided at portions that shield light between pixels on which different colors are arranged.
 Because no light-shielding portion is formed between the four 2×2 pixels 32f under one on-chip lens 28c, the color filter 27f is formed in a shape that covers the four 2×2 pixels 32f.
 Omitting the light-shielding portion between the four 2×2 pixels 32f under one on-chip lens 28c widens the aperture area of the photodiodes 51, so the sensitivity of the pixel 32f can also be improved.
 Pupil correction can also be applied when such an inter-pixel light-shielding portion 201f (color filter 27f) is provided. Although not shown, in a pupil-corrected pixel 32f at the pixel-array edge, the on-chip lens 28c and the color filter 27f are shifted toward the center of the angle of view.
 According to the sixth embodiment, color mixing can be prevented, sensitivity differences between pixels of the same color can be improved, and deterioration of image quality can be prevented.
 <Structure of 7th pixel>
 FIG. 24 is a diagram showing a configuration example of the pixel 32g according to the seventh embodiment. The same parts as in the pixel 32f of the sixth embodiment shown in FIG. 22 are denoted by the same reference numerals, and their description is omitted as appropriate.
 The pixel 32g shown in FIG. 24 has the same configuration as the pixel 32f of the sixth embodiment, except that a trench 221g is added.
 Although FIG. 24 shows a cross-sectional configuration example of the pixel 32g of the seventh embodiment, its planar configuration is the same as that of the pixel 32f of the sixth embodiment shown in FIG. 23: the four 2×2 pixels 32g are of the same color, and one on-chip lens 28c and a color filter 27f are laminated over them.
 The trench 221g is formed between the photodiodes 51. The inside of the trench 221g may be hollow or may be filled with metal. When the trench 221g is filled with metal, it can be integrated with the inter-pixel light-shielding portion 201f in the portions where the inter-pixel light-shielding portion 201f is formed. The trench 221g is also formed between the photodiodes 51 below the color filter 27f, where no inter-pixel light-shielding portion 201 is provided.
 The trench 221g is provided between the photodiodes 51 to prevent light from leaking in from adjacent photodiodes 51 and to electrically isolate the photodiodes 51. Providing the trench 221g in this way further reduces crosstalk between the pixels 32g.
 Pupil correction can also be applied when such a trench 221g is provided. Although not shown, in a pupil-corrected pixel 32g at the pixel-array edge, the on-chip lens 28c and the color filter 27f are shifted toward the center of the angle of view.
 According to the seventh embodiment, color mixing can be prevented, sensitivity differences between pixels of the same color can be improved, and deterioration of image quality can be prevented.
 <Structure of 8th pixel>
 FIG. 25 is a diagram showing a cross-sectional configuration example of the pixel 32h according to the eighth embodiment. The same parts as in the pixel 32g of the seventh embodiment shown in FIG. 24 are denoted by the same reference numerals, and their description is omitted as appropriate.
 The pixel 32h shown in FIG. 25 has the same configuration as the pixel 32g of the seventh embodiment, except that the width of the trench 221h differs from that of the trench 221g of the pixel 32g.
 The trench 221h of the pixel 32h shown in FIG. 25 is composed of a trench 221h-1 and a trench 221h-2 having different widths. The trench 221h-2 is formed narrower than the trench 221h-1. The trench 221h-1 has, for example, about the same width as the trench 221g of the pixel 32g of the seventh embodiment.
 The trench 221h-1 is a light-shielding portion surrounding the outer periphery of the four 2×2 pixels 32h under one on-chip lens 28c. In other words, the trench 221h-1 is provided at portions that shield light between pixels on which different colors are arranged. If light leaks into an adjacent pixel across such a boundary, light of a different color leaks in; to prevent this, the trench 221h-1 is formed with a line width large enough that such leakage has no effect.
 The trench 221h-2 is a light-shielding portion formed between the four 2×2 pixels 32h under one on-chip lens 28c. In other words, the trench 221h-2 is provided at portions that shield light between pixels on which the same color is arranged. If light leaks into an adjacent pixel across such a boundary, light of the same color leaks in, which is considered to have less influence than the leakage of light of a different color described above; therefore, the trench 221h-2 may be formed with a narrower line width than the trench 221h-1.
 As described above, the line width of the trench 221h-1 is formed wider than that of the trench 221h-2. In this way, the line width of the trench 221h may vary depending on where it is formed.
 In general, narrowing the line width of a trench functioning as a light-shielding portion between pixels worsens crosstalk. According to the present technology, as described above, the incident angle of the light entering the photodiodes 51 is close to vertical, so the structure itself reduces crosstalk. Therefore, even if the trench 221h-2 is formed narrow, crosstalk between the pixels 32h can be suppressed without deterioration.
 The line width of the trench 221h-1 can also be made narrower than the line width of the inter-pixel light-shielding portion provided in pixels to which the present technology is not applied.
 Here, because light leakage between pixels on which different colors are arranged has a large influence, an example has been shown in which the trench 221h-1 is formed wide enough to sufficiently suppress such leakage (to suppress crosstalk). However, since the present technology provides a structure that reduces crosstalk, the width of the trench 221h-1 may also be narrowed to about the same width as the trench 221h-2.
 Narrowing the line width of the trench 221h-2 widens the aperture area of the photodiodes 51. Therefore, narrowing the line width of the trench 221h-2 can also improve the sensitivity of the pixel 32h.
 Although not shown, pupil correction may also be applied to the pixel 32h of the eighth embodiment, with the on-chip lens 28c and the color filter 27f shifted toward the center of the angle of view at the pixel-array edge.
 <Structure of 9th pixel>
 FIG. 26 is a diagram showing a cross-sectional configuration example of the pixel 32i according to the ninth embodiment. In the first to eighth embodiments, four 2×2 pixels 32 (photodiodes 51) were treated as one unit, and a color filter 27 of a single color was arranged on this basic unit of four pixels 32 (photodiodes 51).
 In the ninth embodiment, a pixel 32i includes two photodiodes 51; these two photodiodes 51 are treated as one unit, and a color filter 27f of a single color is arranged on this basic unit of two photodiodes 51.
 The pixel 32i of the ninth embodiment shown in FIG. 26 is illustrated as combined with, for example, the pixel 32f of the sixth embodiment shown in FIG. 22, but it can also be combined with embodiments other than the sixth.
 Since the pixel 32i of the ninth embodiment shown in FIG. 26 is combined with the pixel 32f of the sixth embodiment shown in FIG. 22, its cross-sectional structure is as shown in FIG. 22.
 The pixel 32i shown in FIG. 26 has two photodiodes 51 as its basic unit; a color filter 27f of a single color and one on-chip lens 28c are laminated on the two photodiodes 51. In the pixel 32i shown in FIG. 26, the two photodiodes 51 constituting the basic unit are arranged in the horizontal direction (the left-right direction in the figure), but they may instead be arranged in the vertical direction (the up-down direction in the figure).
 The pixel 32i, in which two photodiodes 51 are arranged horizontally or vertically under one on-chip lens 28c as shown in FIG. 26, can be applied to a system that acquires a phase difference by the so-called image-plane phase-difference method and uses the phase difference to realize autofocus.
 In the pixel 32i of FIG. 26, in which two photodiodes 51 are arranged horizontally under one on-chip lens 28c, the phase difference in the horizontal direction can be acquired. In a pixel 32i in which two photodiodes 51 are arranged vertically under one on-chip lens 28c, the phase difference in the vertical direction can be acquired.
 Pixels 32i with two horizontally arranged photodiodes 51 and pixels 32i with two vertically arranged photodiodes 51, each under one on-chip lens 28c, may be arranged inside or outside the pixel array unit 33 so that both the horizontal and vertical phase differences can be acquired.
 If only pixels 32i with two horizontally arranged photodiodes 51 under one on-chip lens 28c are arranged, the vertical phase difference cannot be acquired, but the number of readout gates can be reduced and the saturation performance of the photodiodes 51 can be improved.
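The image-plane phase-difference principle above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: with two photodiodes under one on-chip lens, defocus appears as a lateral shift between the signal sampled by the left diodes and the signal sampled by the right diodes, and the shift can be estimated by minimizing the mean absolute difference over candidate displacements. The function name, the signal values, and the search range are assumptions.

```python
# Hypothetical sketch of phase-difference estimation between the left-diode
# and right-diode signals of a row of 2-photodiode pixels.

def phase_difference(left, right, max_shift=3):
    """Return the displacement (in samples) that best aligns the two signals."""
    n = len(left)
    best_shift, best_cost = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(left[i], right[i + s]) for i in range(n) if 0 <= i + s < n]
        cost = sum(abs(a - b) for a, b in pairs) / len(pairs)
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift

left = [0, 0, 5, 9, 5, 0, 0, 0]
right = [0, 0, 0, 0, 5, 9, 5, 0]  # same profile displaced by two samples
print(phase_difference(left, right))
```

The sign and magnitude of the recovered shift would, in an actual autofocus system, be mapped to a lens drive direction and distance.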
 As described above, the ninth embodiment can also be combined with embodiments other than the sixth; for example, it may be configured with a trench 221 or with pupil correction applied.
 The pixels 32 of the first to ninth embodiments may also be applied to the pixels of an OPB (Optical Black) region. In an imaging apparatus using an image sensor such as a CMOS (Complementary Metal Oxide Semiconductor) image sensor or a CCD (Charge Coupled Device) image sensor, clamp processing is performed on the image signal obtained by the image sensor to correct its black level to a reference value.
 For example, an OPB region for detecting the reference black level is provided outside the effective pixel region of the image sensor, and its pixel values are used to correct the pixel values in the effective pixel region. The structure of each pixel in the OPB region is the same as that of the pixels in the effective pixel region, except that incident light from the outside is blocked by a light-shielding film. Therefore, the pixels 32 of the first to ninth embodiments can also be applied to the pixels arranged in the OPB region. Further, since the OPB region is usually provided outside the effective pixel region, the pixels in the OPB region may be arranged with pupil correction applied.
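The clamp processing described above can be sketched as follows. This is a hypothetical illustration (the averaging rule, clipping at zero, and the sample values are assumptions, not taken from the patent): the light-shielded OPB pixels provide the reference black level, which is subtracted from every effective pixel.

```python
# Hypothetical sketch of black-level clamping using OPB reference pixels.

def clamp_black_level(effective, opb):
    """Subtract the mean OPB level from the effective pixels, clipping at zero
    so that true black maps to the reference value of zero."""
    black = sum(opb) / len(opb)
    return [max(0, v - black) for v in effective]

opb = [64, 66, 62, 64]          # light-shielded reference pixels
effective = [64, 100, 300, 60]  # raw effective-pixel values
print(clamp_black_level(effective, opb))
```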
 本技術によれば、混色を防ぐことができ、同色間の感度の差を改善することができる。よって、画質劣化を防ぐことができる。また像面位相差方式の画素に適用した場合、像面位相差の取得精度を向上させることもできる。 According to the present technology, color mixing can be prevented and the difference in sensitivity between pixels of the same color can be reduced. Therefore, deterioration of image quality can be prevented. Further, when the technology is applied to pixels of the image-plane phase difference method, the acquisition accuracy of the image-plane phase difference can also be improved.
 <電子機器への適用例>
 本技術は、撮像素子への適用に限られるものではない。即ち、本技術は、デジタルスチルカメラやビデオカメラ等の撮像装置や、撮像機能を有する携帯端末装置や、画像読取部に撮像素子を用いる複写機など、画像取込部(光電変換部)に撮像素子を用いる電子機器全般に対して適用可能である。撮像素子は、ワンチップとして形成された形態であってもよいし、撮像部と信号処理部または光学系とがまとめてパッケージングされた撮像機能を有するモジュール状の形態であってもよい。
<Example of application to electronic devices>
The present technology is not limited to application to an image sensor. That is, the present technology is applicable to electronic devices in general that use an image sensor in an image capture unit (photoelectric conversion unit), such as imaging apparatuses including digital still cameras and video cameras, portable terminal devices having an imaging function, and copiers that use an image sensor in an image reading unit. The image sensor may be formed as a single chip, or may take the form of a module having an imaging function in which an imaging unit and a signal processing unit or an optical system are packaged together.
 図27は、本技術を適用した電子機器としての、撮像装置の構成例を示すブロック図である。 FIG. 27 is a block diagram showing a configuration example of an imaging device as an electronic device to which the present technology is applied.
 図27の撮像素子1000は、レンズ群などからなる光学部1001、図1の撮像装置1の構成が採用される撮像素子(撮像デバイス)1002、およびカメラ信号処理回路であるDSP(Digital Signal Processor)回路1003を備える。また、撮像素子1000は、フレームメモリ1004、表示部1005、記録部1006、操作部1007、および電源部1008も備える。DSP回路1003、フレームメモリ1004、表示部1005、記録部1006、操作部1007および電源部1008は、バスライン1009を介して相互に接続されている。 The imaging apparatus 1000 of FIG. 27 includes an optical unit 1001 including a lens group, an image sensor (imaging device) 1002 adopting the configuration of the imaging device 1 of FIG. 1, and a DSP (Digital Signal Processor) circuit 1003, which is a camera signal processing circuit. The imaging apparatus 1000 also includes a frame memory 1004, a display unit 1005, a recording unit 1006, an operation unit 1007, and a power supply unit 1008. The DSP circuit 1003, the frame memory 1004, the display unit 1005, the recording unit 1006, the operation unit 1007, and the power supply unit 1008 are connected to one another via a bus line 1009.
 光学部1001は、被写体からの入射光(像光)を取り込んで撮像素子1002の撮像面上に結像する。撮像素子1002は、光学部1001によって撮像面上に結像された入射光の光量を画素単位で電気信号に変換して画素信号として出力する。この撮像素子1002として、図1の撮像装置1を用いることができる。 The optical unit 1001 captures incident light (image light) from the subject and forms an image on the image pickup surface of the image pickup device 1002. The image sensor 1002 converts the amount of incident light imaged on the image pickup surface by the optical unit 1001 into an electric signal in pixel units and outputs it as a pixel signal. As the image pickup device 1002, the image pickup device 1 of FIG. 1 can be used.
 表示部1005は、例えば、LCD(Liquid Crystal Display)や有機EL(Electro Luminescence)ディスプレイ等の薄型ディスプレイで構成され、撮像素子1002で撮像された動画または静止画を表示する。記録部1006は、撮像素子1002で撮像された動画または静止画を、ハードディスクや半導体メモリ等の記録媒体に記録する。 The display unit 1005 is composed of a thin display such as an LCD (Liquid Crystal Display) or an organic EL (Electro Luminescence) display, and displays a moving image or a still image captured by the image pickup element 1002. The recording unit 1006 records a moving image or a still image captured by the image sensor 1002 on a recording medium such as a hard disk or a semiconductor memory.
 操作部1007は、ユーザによる操作の下に、撮像素子1000が持つ様々な機能について操作指令を発する。電源部1008は、DSP回路1003、フレームメモリ1004、表示部1005、記録部1006および操作部1007の動作電源となる各種の電源を、これら供給対象に対して適宜供給する。 The operation unit 1007 issues operation commands for the various functions of the imaging apparatus 1000 under operation by the user. The power supply unit 1008 appropriately supplies the various power sources serving as operating power for the DSP circuit 1003, the frame memory 1004, the display unit 1005, the recording unit 1006, and the operation unit 1007 to these supply targets.
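Purely as an illustration (all names here are hypothetical, not from the disclosure), the signal flow of FIG. 27 can be modeled as a DSP stage whose processed output is fanned out over the bus to the units that consume it:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ImagingApparatus:
    """Toy model of the signal flow in FIG. 27: the sensor output is
    processed by the DSP stage and then distributed over the bus to the
    consuming units (frame memory, display, recording)."""
    dsp: Callable[[list], list]
    bus_units: List[Callable[[list], None]] = field(default_factory=list)

    def capture(self, raw_pixels: list) -> None:
        processed = self.dsp(raw_pixels)   # camera signal processing
        for unit in self.bus_units:        # fan out via the bus line
            unit(processed)
```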
 <内視鏡手術システムへの応用例>
 本開示に係る技術(本技術)は、様々な製品へ応用することができる。例えば、本開示に係る技術は、内視鏡手術システムに適用されてもよい。
<Example of application to endoscopic surgery system>
The technology according to the present disclosure (the present technology) can be applied to various products. For example, the techniques according to the present disclosure may be applied to endoscopic surgery systems.
 図28は、本開示に係る技術(本技術)が適用され得る内視鏡手術システムの概略的な構成の一例を示す図である。 FIG. 28 is a diagram showing an example of a schematic configuration of an endoscopic surgery system to which the technique according to the present disclosure (the present technique) can be applied.
 図28では、術者(医師)11131が、内視鏡手術システム11000を用いて、患者ベッド11133上の患者11132に手術を行っている様子が図示されている。図示するように、内視鏡手術システム11000は、内視鏡11100と、気腹チューブ11111やエネルギー処置具11112等の、その他の術具11110と、内視鏡11100を支持する支持アーム装置11120と、内視鏡下手術のための各種の装置が搭載されたカート11200と、から構成される。 FIG. 28 shows a state in which an operator (surgeon) 11131 is performing surgery on a patient 11132 on a patient bed 11133 using the endoscopic surgery system 11000. As shown, the endoscopic surgery system 11000 includes an endoscope 11100, other surgical tools 11110 such as a pneumoperitoneum tube 11111 and an energy treatment tool 11112, a support arm device 11120 that supports the endoscope 11100, and a cart 11200 on which various devices for endoscopic surgery are mounted.
 内視鏡11100は、先端から所定の長さの領域が患者11132の体腔内に挿入される鏡筒11101と、鏡筒11101の基端に接続されるカメラヘッド11102と、から構成される。図示する例では、硬性の鏡筒11101を有するいわゆる硬性鏡として構成される内視鏡11100を図示しているが、内視鏡11100は、軟性の鏡筒を有するいわゆる軟性鏡として構成されてもよい。 The endoscope 11100 includes a lens barrel 11101, a region of which extends a predetermined length from the tip and is inserted into the body cavity of the patient 11132, and a camera head 11102 connected to the base end of the lens barrel 11101. In the illustrated example, the endoscope 11100 is configured as a so-called rigid endoscope having a rigid lens barrel 11101, but the endoscope 11100 may instead be configured as a so-called flexible endoscope having a flexible lens barrel.
 鏡筒11101の先端には、対物レンズが嵌め込まれた開口部が設けられている。内視鏡11100には光源装置11203が接続されており、当該光源装置11203によって生成された光が、鏡筒11101の内部に延設されるライトガイドによって当該鏡筒の先端まで導光され、対物レンズを介して患者11132の体腔内の観察対象に向かって照射される。なお、内視鏡11100は、直視鏡であってもよいし、斜視鏡又は側視鏡であってもよい。 An opening into which an objective lens is fitted is provided at the tip of the lens barrel 11101. A light source device 11203 is connected to the endoscope 11100; light generated by the light source device 11203 is guided to the tip of the lens barrel by a light guide extending inside the lens barrel 11101, and is emitted through the objective lens toward the observation target in the body cavity of the patient 11132. The endoscope 11100 may be a forward-viewing endoscope, an oblique-viewing endoscope, or a side-viewing endoscope.
 カメラヘッド11102の内部には光学系及び撮像素子が設けられており、観察対象からの反射光(観察光)は当該光学系によって当該撮像素子に集光される。当該撮像素子によって観察光が光電変換され、観察光に対応する電気信号、すなわち観察像に対応する画像信号が生成される。当該画像信号は、RAWデータとしてカメラコントロールユニット(CCU: Camera Control Unit)11201に送信される。 An optical system and an image pickup element are provided inside the camera head 11102, and the reflected light (observation light) from the observation target is focused on the image pickup element by the optical system. The observation light is photoelectrically converted by the image sensor, and an electric signal corresponding to the observation light, that is, an image signal corresponding to the observation image is generated. The image signal is transmitted as RAW data to the camera control unit (CCU: Camera Control Unit) 11201.
 CCU11201は、CPU(Central Processing Unit)やGPU(Graphics Processing Unit)等によって構成され、内視鏡11100及び表示装置11202の動作を統括的に制御する。さらに、CCU11201は、カメラヘッド11102から画像信号を受け取り、その画像信号に対して、例えば現像処理(デモザイク処理)等の、当該画像信号に基づく画像を表示するための各種の画像処理を施す。 The CCU 11201 includes a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), and the like, and comprehensively controls the operations of the endoscope 11100 and the display device 11202. Further, the CCU 11201 receives an image signal from the camera head 11102 and applies to that image signal various kinds of image processing for displaying an image based on the image signal, such as development processing (demosaic processing).
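As a hedged illustration of the development (demosaic) processing mentioned above — not the CCU's actual algorithm — a RAW frame with an assumed RGGB Bayer layout can be reduced to RGB as follows:

```python
import numpy as np

def demosaic_rggb(raw):
    """Very simplified development (demosaic) step: collapse each 2x2
    RGGB Bayer cell of the RAW frame into one RGB pixel, averaging the
    two green samples."""
    r = raw[0::2, 0::2].astype(float)
    g = (raw[0::2, 1::2].astype(float) + raw[1::2, 0::2]) / 2.0
    b = raw[1::2, 1::2].astype(float)
    return np.stack([r, g, b], axis=-1)
```

Real demosaic pipelines interpolate at full resolution; the 2x2 binning above is only the simplest possible sketch.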
 表示装置11202は、CCU11201からの制御により、当該CCU11201によって画像処理が施された画像信号に基づく画像を表示する。 The display device 11202 displays an image based on the image signal processed by the CCU 11201 under the control of the CCU 11201.
 光源装置11203は、例えばLED(light emitting diode)等の光源から構成され、術部等を撮影する際の照射光を内視鏡11100に供給する。 The light source device 11203 includes a light source such as an LED (Light Emitting Diode), and supplies the endoscope 11100 with irradiation light for imaging the surgical site or the like.
 入力装置11204は、内視鏡手術システム11000に対する入力インタフェースである。ユーザは、入力装置11204を介して、内視鏡手術システム11000に対して各種の情報の入力や指示入力を行うことができる。例えば、ユーザは、内視鏡11100による撮像条件(照射光の種類、倍率及び焦点距離等)を変更する旨の指示等を入力する。 The input device 11204 is an input interface for the endoscopic surgery system 11000. The user can input various information and input instructions to the endoscopic surgery system 11000 via the input device 11204. For example, the user inputs an instruction to change the imaging conditions (type of irradiation light, magnification, focal length, etc.) by the endoscope 11100.
 処置具制御装置11205は、組織の焼灼、切開又は血管の封止等のためのエネルギー処置具11112の駆動を制御する。気腹装置11206は、内視鏡11100による視野の確保及び術者の作業空間の確保の目的で、患者11132の体腔を膨らめるために、気腹チューブ11111を介して当該体腔内にガスを送り込む。レコーダ11207は、手術に関する各種の情報を記録可能な装置である。プリンタ11208は、手術に関する各種の情報を、テキスト、画像又はグラフ等各種の形式で印刷可能な装置である。 The treatment tool control device 11205 controls the driving of the energy treatment tool 11112 for cauterization and incision of tissue, sealing of blood vessels, and the like. The pneumoperitoneum device 11206 sends gas into the body cavity of the patient 11132 through the pneumoperitoneum tube 11111 in order to inflate the body cavity for the purpose of securing the field of view of the endoscope 11100 and securing the operator's working space. The recorder 11207 is a device capable of recording various kinds of information related to the surgery. The printer 11208 is a device capable of printing various kinds of information related to the surgery in various formats such as text, images, and graphs.
 なお、内視鏡11100に術部を撮影する際の照射光を供給する光源装置11203は、例えばLED、レーザ光源又はこれらの組み合わせによって構成される白色光源から構成することができる。RGBレーザ光源の組み合わせにより白色光源が構成される場合には、各色(各波長)の出力強度及び出力タイミングを高精度に制御することができるため、光源装置11203において撮像画像のホワイトバランスの調整を行うことができる。また、この場合には、RGBレーザ光源それぞれからのレーザ光を時分割で観察対象に照射し、その照射タイミングに同期してカメラヘッド11102の撮像素子の駆動を制御することにより、RGBそれぞれに対応した画像を時分割で撮像することも可能である。当該方法によれば、当該撮像素子にカラーフィルタを設けなくても、カラー画像を得ることができる。 The light source device 11203, which supplies the endoscope 11100 with irradiation light for imaging the surgical site, can be configured from a white light source composed of, for example, an LED, a laser light source, or a combination thereof. When the white light source is configured from a combination of RGB laser light sources, the output intensity and output timing of each color (each wavelength) can be controlled with high accuracy, so the white balance of the captured image can be adjusted in the light source device 11203. Further, in this case, it is also possible to capture images corresponding to R, G, and B in a time-division manner by irradiating the observation target with the laser light from each of the RGB laser light sources in time division and controlling the driving of the image sensor of the camera head 11102 in synchronization with the irradiation timing. According to this method, a color image can be obtained without providing a color filter on the image sensor.
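The time-division capture described above can be sketched as follows; the per-channel gains stand in for the white-balance adjustment performed by controlling each laser's output intensity, and the 8-bit range is an assumption of this sketch:

```python
import numpy as np

def compose_time_division(r, g, b, gains=(1.0, 1.0, 1.0)):
    """Compose a color frame from three monochrome frames captured in
    time division under R, G and B illumination (no color filter on the
    sensor); per-channel gains model the white-balance adjustment done
    by controlling each laser's output intensity."""
    channels = [np.clip(f.astype(float) * gain, 0, 255)
                for f, gain in zip((r, g, b), gains)]
    return np.stack(channels, axis=-1).astype(np.uint8)
```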
 また、光源装置11203は、出力する光の強度を所定の時間ごとに変更するようにその駆動が制御されてもよい。その光の強度の変更のタイミングに同期してカメラヘッド11102の撮像素子の駆動を制御して時分割で画像を取得し、その画像を合成することにより、いわゆる黒つぶれ及び白とびのない高ダイナミックレンジの画像を生成することができる。 Further, the driving of the light source device 11203 may be controlled so as to change the intensity of the output light at predetermined time intervals. By controlling the driving of the image sensor of the camera head 11102 in synchronization with the timing of the change in light intensity to acquire images in a time-division manner and then combining those images, an image with a high dynamic range free of so-called crushed blacks and blown-out highlights can be generated.
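A minimal sketch of the compositing step, assuming two frames per cycle (one under strong and one under weak illumination), 8-bit clipping at 255, and a known intensity ratio `gain` — all assumptions of this illustration:

```python
import numpy as np

def fuse_hdr(bright_frame, dark_frame, gain):
    """Merge a frame captured under strong illumination with one
    captured under weak illumination: where the bright frame clipped
    (blown-out highlights), substitute the dark frame scaled by `gain`
    to a common brightness scale."""
    bright = bright_frame.astype(float)
    dark = dark_frame.astype(float) * gain
    return np.where(bright_frame >= 255, dark, bright)
```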
 また、光源装置11203は、特殊光観察に対応した所定の波長帯域の光を供給可能に構成されてもよい。特殊光観察では、例えば、体組織における光の吸収の波長依存性を利用して、通常の観察時における照射光(すなわち、白色光)に比べて狭帯域の光を照射することにより、粘膜表層の血管等の所定の組織を高コントラストで撮影する、いわゆる狭帯域光観察(Narrow Band Imaging)が行われる。あるいは、特殊光観察では、励起光を照射することにより発生する蛍光により画像を得る蛍光観察が行われてもよい。蛍光観察では、体組織に励起光を照射し当該体組織からの蛍光を観察すること(自家蛍光観察)、又はインドシアニングリーン(ICG)等の試薬を体組織に局注するとともに当該体組織にその試薬の蛍光波長に対応した励起光を照射し蛍光像を得ること等を行うことができる。光源装置11203は、このような特殊光観察に対応した狭帯域光及び/又は励起光を供給可能に構成され得る。 Further, the light source device 11203 may be configured to be able to supply light in a predetermined wavelength band corresponding to special light observation. In special light observation, for example, so-called narrow band imaging is performed, in which a predetermined tissue such as a blood vessel in the mucosal surface layer is imaged with high contrast by irradiating light in a narrower band than the irradiation light used in normal observation (that is, white light), utilizing the wavelength dependence of light absorption in body tissue. Alternatively, in special light observation, fluorescence observation may be performed, in which an image is obtained from the fluorescence generated by irradiation with excitation light. In fluorescence observation, it is possible, for example, to irradiate body tissue with excitation light and observe the fluorescence from that body tissue (autofluorescence observation), or to locally inject a reagent such as indocyanine green (ICG) into body tissue and irradiate the tissue with excitation light corresponding to the fluorescence wavelength of the reagent to obtain a fluorescence image. The light source device 11203 can be configured to be able to supply narrow band light and/or excitation light corresponding to such special light observation.
 図29は、図28に示すカメラヘッド11102及びCCU11201の機能構成の一例を示すブロック図である。 FIG. 29 is a block diagram showing an example of the functional configuration of the camera head 11102 and CCU11201 shown in FIG. 28.
 カメラヘッド11102は、レンズユニット11401と、撮像部11402と、駆動部11403と、通信部11404と、カメラヘッド制御部11405と、を有する。CCU11201は、通信部11411と、画像処理部11412と、制御部11413と、を有する。カメラヘッド11102とCCU11201とは、伝送ケーブル11400によって互いに通信可能に接続されている。 The camera head 11102 includes a lens unit 11401, an imaging unit 11402, a driving unit 11403, a communication unit 11404, and a camera head control unit 11405. CCU11201 has a communication unit 11411, an image processing unit 11412, and a control unit 11413. The camera head 11102 and CCU11201 are communicatively connected to each other by a transmission cable 11400.
 レンズユニット11401は、鏡筒11101との接続部に設けられる光学系である。鏡筒11101の先端から取り込まれた観察光は、カメラヘッド11102まで導光され、当該レンズユニット11401に入射する。レンズユニット11401は、ズームレンズ及びフォーカスレンズを含む複数のレンズが組み合わされて構成される。 The lens unit 11401 is an optical system provided at a connection portion with the lens barrel 11101. The observation light taken in from the tip of the lens barrel 11101 is guided to the camera head 11102 and incident on the lens unit 11401. The lens unit 11401 is configured by combining a plurality of lenses including a zoom lens and a focus lens.
 撮像部11402を構成する撮像素子は、1つ(いわゆる単板式)であってもよいし、複数(いわゆる多板式)であってもよい。撮像部11402が多板式で構成される場合には、例えば各撮像素子によってRGBそれぞれに対応する画像信号が生成され、それらが合成されることによりカラー画像が得られてもよい。あるいは、撮像部11402は、3D(dimensional)表示に対応する右目用及び左目用の画像信号をそれぞれ取得するための1対の撮像素子を有するように構成されてもよい。3D表示が行われることにより、術者11131は術部における生体組織の奥行きをより正確に把握することが可能になる。なお、撮像部11402が多板式で構成される場合には、各撮像素子に対応して、レンズユニット11401も複数系統設けられ得る。 The image sensor constituting the image pickup unit 11402 may be one (so-called single plate type) or a plurality (so-called multi-plate type). When the image pickup unit 11402 is composed of a multi-plate type, for example, each image pickup element may generate an image signal corresponding to each of RGB, and a color image may be obtained by synthesizing them. Alternatively, the image pickup unit 11402 may be configured to have a pair of image pickup elements for acquiring image signals for the right eye and the left eye corresponding to 3D (dimensional) display, respectively. The 3D display enables the operator 11131 to more accurately grasp the depth of the biological tissue in the surgical site. When the image pickup unit 11402 is composed of a multi-plate type, a plurality of lens units 11401 may be provided corresponding to each image pickup element.
 また、撮像部11402は、必ずしもカメラヘッド11102に設けられなくてもよい。例えば、撮像部11402は、鏡筒11101の内部に、対物レンズの直後に設けられてもよい。 Further, the imaging unit 11402 does not necessarily have to be provided on the camera head 11102. For example, the imaging unit 11402 may be provided inside the lens barrel 11101 immediately after the objective lens.
 駆動部11403は、アクチュエータによって構成され、カメラヘッド制御部11405からの制御により、レンズユニット11401のズームレンズ及びフォーカスレンズを光軸に沿って所定の距離だけ移動させる。これにより、撮像部11402による撮像画像の倍率及び焦点が適宜調整され得る。 The drive unit 11403 is composed of an actuator, and the zoom lens and focus lens of the lens unit 11401 are moved by a predetermined distance along the optical axis under the control of the camera head control unit 11405. As a result, the magnification and focus of the image captured by the imaging unit 11402 can be adjusted as appropriate.
 通信部11404は、CCU11201との間で各種の情報を送受信するための通信装置によって構成される。通信部11404は、撮像部11402から得た画像信号をRAWデータとして伝送ケーブル11400を介してCCU11201に送信する。 The communication unit 11404 is composed of a communication device for transmitting and receiving various information to and from the CCU11201. The communication unit 11404 transmits the image signal obtained from the image pickup unit 11402 as RAW data to the CCU 11201 via the transmission cable 11400.
 また、通信部11404は、CCU11201から、カメラヘッド11102の駆動を制御するための制御信号を受信し、カメラヘッド制御部11405に供給する。当該制御信号には、例えば、撮像画像のフレームレートを指定する旨の情報、撮像時の露出値を指定する旨の情報、並びに/又は撮像画像の倍率及び焦点を指定する旨の情報等、撮像条件に関する情報が含まれる。 Further, the communication unit 11404 receives a control signal for controlling the drive of the camera head 11102 from the CCU 11201 and supplies the control signal to the camera head control unit 11405. The control signal includes, for example, information to specify the frame rate of the captured image, information to specify the exposure value at the time of imaging, and / or information to specify the magnification and focus of the captured image, and the like. Contains information about the condition.
 なお、上記のフレームレートや露出値、倍率、焦点等の撮像条件は、ユーザによって適宜指定されてもよいし、取得された画像信号に基づいてCCU11201の制御部11413によって自動的に設定されてもよい。後者の場合には、いわゆるAE(Auto Exposure)機能、AF(Auto Focus)機能及びAWB(Auto White Balance)機能が内視鏡11100に搭載されていることになる。 The imaging conditions described above, such as the frame rate, exposure value, magnification, and focus, may be appropriately specified by the user, or may be automatically set by the control unit 11413 of the CCU 11201 based on the acquired image signal. In the latter case, the endoscope 11100 is equipped with a so-called AE (Auto Exposure) function, AF (Auto Focus) function, and AWB (Auto White Balance) function.
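As one hedged illustration of how an AE function might adjust the exposure from the acquired image signal (the target level, control gain, and function name are assumptions, not values from the disclosure):

```python
import math

def auto_exposure_step(mean_luma, ev, target=118.0, k=0.5):
    """One AE iteration: nudge the exposure value toward the setting
    that brings the mean luminance of the captured image to `target`
    (simple proportional control in log2/EV space)."""
    if mean_luma <= 0:
        return ev + 1.0  # frame fully dark: open up by a full stop
    return ev + k * math.log2(target / mean_luma)
```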
 カメラヘッド制御部11405は、通信部11404を介して受信したCCU11201からの制御信号に基づいて、カメラヘッド11102の駆動を制御する。 The camera head control unit 11405 controls the drive of the camera head 11102 based on the control signal from the CCU 11201 received via the communication unit 11404.
 通信部11411は、カメラヘッド11102との間で各種の情報を送受信するための通信装置によって構成される。通信部11411は、カメラヘッド11102から、伝送ケーブル11400を介して送信される画像信号を受信する。 The communication unit 11411 is composed of a communication device for transmitting and receiving various information to and from the camera head 11102. The communication unit 11411 receives an image signal transmitted from the camera head 11102 via the transmission cable 11400.
 また、通信部11411は、カメラヘッド11102に対して、カメラヘッド11102の駆動を制御するための制御信号を送信する。画像信号や制御信号は、電気通信や光通信等によって送信することができる。 Further, the communication unit 11411 transmits a control signal for controlling the drive of the camera head 11102 to the camera head 11102. Image signals and control signals can be transmitted by telecommunications, optical communication, or the like.
 画像処理部11412は、カメラヘッド11102から送信されたRAWデータである画像信号に対して各種の画像処理を施す。 The image processing unit 11412 performs various image processing on the image signal which is the RAW data transmitted from the camera head 11102.
 制御部11413は、内視鏡11100による術部等の撮像、及び、術部等の撮像により得られる撮像画像の表示に関する各種の制御を行う。例えば、制御部11413は、カメラヘッド11102の駆動を制御するための制御信号を生成する。 The control unit 11413 performs various controls related to the imaging of the surgical site and the like by the endoscope 11100 and the display of the captured image obtained by the imaging of the surgical site and the like. For example, the control unit 11413 generates a control signal for controlling the drive of the camera head 11102.
 また、制御部11413は、画像処理部11412によって画像処理が施された画像信号に基づいて、術部等が映った撮像画像を表示装置11202に表示させる。この際、制御部11413は、各種の画像認識技術を用いて撮像画像内における各種の物体を認識してもよい。例えば、制御部11413は、撮像画像に含まれる物体のエッジの形状や色等を検出することにより、鉗子等の術具、特定の生体部位、出血、エネルギー処置具11112の使用時のミスト等を認識することができる。制御部11413は、表示装置11202に撮像画像を表示させる際に、その認識結果を用いて、各種の手術支援情報を当該術部の画像に重畳表示させてもよい。手術支援情報が重畳表示され、術者11131に提示されることにより、術者11131の負担を軽減することや、術者11131が確実に手術を進めることが可能になる。 Further, the control unit 11413 causes the display device 11202 to display a captured image showing the surgical site or the like, based on the image signal processed by the image processing unit 11412. At this time, the control unit 11413 may recognize various objects in the captured image using various image recognition techniques. For example, by detecting the shapes, colors, and the like of the edges of objects included in the captured image, the control unit 11413 can recognize surgical tools such as forceps, specific body parts, bleeding, mist during use of the energy treatment tool 11112, and so on. When displaying the captured image on the display device 11202, the control unit 11413 may use the recognition results to superimpose various kinds of surgery support information on the image of the surgical site. By superimposing the surgery support information and presenting it to the operator 11131, the burden on the operator 11131 can be reduced, and the operator 11131 can proceed with the surgery reliably.
 カメラヘッド11102及びCCU11201を接続する伝送ケーブル11400は、電気信号の通信に対応した電気信号ケーブル、光通信に対応した光ファイバ、又はこれらの複合ケーブルである。 The transmission cable 11400 that connects the camera head 11102 and CCU11201 is an electric signal cable that supports electric signal communication, an optical fiber that supports optical communication, or a composite cable thereof.
 ここで、図示する例では、伝送ケーブル11400を用いて有線で通信が行われていたが、カメラヘッド11102とCCU11201との間の通信は無線で行われてもよい。 Here, in the illustrated example, the communication was performed by wire using the transmission cable 11400, but the communication between the camera head 11102 and the CCU11201 may be performed wirelessly.
 <移動体への応用例>
 本開示に係る技術(本技術)は、様々な製品へ応用することができる。例えば、本開示に係る技術は、自動車、電気自動車、ハイブリッド電気自動車、自動二輪車、自転車、パーソナルモビリティ、飛行機、ドローン、船舶、ロボット等のいずれかの種類の移動体に搭載される装置として実現されてもよい。
<Example of application to mobiles>
The technology according to the present disclosure (the present technology) can be applied to various products. For example, the technology according to the present disclosure is realized as a device mounted on a moving body of any kind such as an automobile, an electric vehicle, a hybrid electric vehicle, a motorcycle, a bicycle, a personal mobility, an airplane, a drone, a ship, and a robot. You may.
 図30は、本開示に係る技術が適用され得る移動体制御システムの一例である車両制御システムの概略的な構成例を示すブロック図である。 FIG. 30 is a block diagram showing a schematic configuration example of a vehicle control system, which is an example of a mobile control system to which the technology according to the present disclosure can be applied.
 車両制御システム12000は、通信ネットワーク12001を介して接続された複数の電子制御ユニットを備える。図30に示した例では、車両制御システム12000は、駆動系制御ユニット12010、ボディ系制御ユニット12020、車外情報検出ユニット12030、車内情報検出ユニット12040、及び統合制御ユニット12050を備える。また、統合制御ユニット12050の機能構成として、マイクロコンピュータ12051、音声画像出力部12052、及び車載ネットワークI/F(Interface)12053が図示されている。 The vehicle control system 12000 includes a plurality of electronic control units connected via the communication network 12001. In the example shown in FIG. 30, the vehicle control system 12000 includes a drive system control unit 12010, a body system control unit 12020, an outside information detection unit 12030, an in-vehicle information detection unit 12040, and an integrated control unit 12050. Further, as a functional configuration of the integrated control unit 12050, a microcomputer 12051, an audio image output unit 12052, and an in-vehicle network I / F (Interface) 12053 are shown.
 駆動系制御ユニット12010は、各種プログラムにしたがって車両の駆動系に関連する装置の動作を制御する。例えば、駆動系制御ユニット12010は、内燃機関又は駆動用モータ等の車両の駆動力を発生させるための駆動力発生装置、駆動力を車輪に伝達するための駆動力伝達機構、車両の舵角を調節するステアリング機構、及び、車両の制動力を発生させる制動装置等の制御装置として機能する。 The drive system control unit 12010 controls the operation of devices related to the drive system of the vehicle according to various programs. For example, the drive system control unit 12010 functions as a control device for a driving force generation device for generating the driving force of the vehicle, such as an internal combustion engine or a drive motor, a driving force transmission mechanism for transmitting the driving force to the wheels, a steering mechanism for adjusting the steering angle of the vehicle, a braking device for generating the braking force of the vehicle, and the like.
 ボディ系制御ユニット12020は、各種プログラムにしたがって車体に装備された各種装置の動作を制御する。例えば、ボディ系制御ユニット12020は、キーレスエントリシステム、スマートキーシステム、パワーウィンドウ装置、あるいは、ヘッドランプ、バックランプ、ブレーキランプ、ウィンカー又はフォグランプ等の各種ランプの制御装置として機能する。この場合、ボディ系制御ユニット12020には、鍵を代替する携帯機から発信される電波又は各種スイッチの信号が入力され得る。ボディ系制御ユニット12020は、これらの電波又は信号の入力を受け付け、車両のドアロック装置、パワーウィンドウ装置、ランプ等を制御する。 The body system control unit 12020 controls the operation of various devices mounted on the vehicle body according to various programs. For example, the body system control unit 12020 functions as a keyless entry system, a smart key system, a power window device, or a control device for various lamps such as headlamps, back lamps, brake lamps, blinkers or fog lamps. In this case, the body system control unit 12020 may be input with radio waves transmitted from a portable device that substitutes for the key or signals of various switches. The body system control unit 12020 receives inputs of these radio waves or signals and controls a vehicle door lock device, a power window device, a lamp, and the like.
 車外情報検出ユニット12030は、車両制御システム12000を搭載した車両の外部の情報を検出する。例えば、車外情報検出ユニット12030には、撮像部12031が接続される。車外情報検出ユニット12030は、撮像部12031に車外の画像を撮像させるとともに、撮像された画像を受信する。車外情報検出ユニット12030は、受信した画像に基づいて、人、車、障害物、標識又は路面上の文字等の物体検出処理又は距離検出処理を行ってもよい。 The vehicle exterior information detection unit 12030 detects information outside the vehicle on which the vehicle control system 12000 is mounted. For example, an imaging unit 12031 is connected to the vehicle exterior information detection unit 12030. The vehicle exterior information detection unit 12030 causes the imaging unit 12031 to capture an image of the outside of the vehicle and receives the captured image. Based on the received image, the vehicle exterior information detection unit 12030 may perform detection processing or distance detection processing for objects such as people, vehicles, obstacles, signs, and characters on the road surface.
 撮像部12031は、光を受光し、その光の受光量に応じた電気信号を出力する光センサである。撮像部12031は、電気信号を画像として出力することもできるし、測距の情報として出力することもできる。また、撮像部12031が受光する光は、可視光であっても良いし、赤外線等の非可視光であっても良い。 The imaging unit 12031 is an optical sensor that receives light and outputs an electric signal according to the amount of the light received. The image pickup unit 12031 can output an electric signal as an image or can output it as distance measurement information. Further, the light received by the imaging unit 12031 may be visible light or invisible light such as infrared light.
 車内情報検出ユニット12040は、車内の情報を検出する。車内情報検出ユニット12040には、例えば、運転者の状態を検出する運転者状態検出部12041が接続される。運転者状態検出部12041は、例えば運転者を撮像するカメラを含み、車内情報検出ユニット12040は、運転者状態検出部12041から入力される検出情報に基づいて、運転者の疲労度合い又は集中度合いを算出してもよいし、運転者が居眠りをしていないかを判別してもよい。 The in-vehicle information detection unit 12040 detects information inside the vehicle. For example, a driver state detection unit 12041 that detects the state of the driver is connected to the in-vehicle information detection unit 12040. The driver state detection unit 12041 includes, for example, a camera that images the driver, and based on the detection information input from the driver state detection unit 12041, the in-vehicle information detection unit 12040 may calculate the degree of fatigue or concentration of the driver, or may determine whether the driver is dozing off.
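One common proxy for dozing detection with a driver-facing camera is a PERCLOS-style measure; the sketch below is an illustration under the assumption that an upstream detector already labels the eye state of each frame, and is not the method of the disclosure:

```python
def perclos(eye_open_flags):
    """PERCLOS-style dozing indicator: the fraction of recent frames,
    taken by a driver-facing camera, in which the eyes were judged
    closed. A sustained high value suggests the driver is dozing."""
    closed = sum(1 for is_open in eye_open_flags if not is_open)
    return closed / len(eye_open_flags)
```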
 マイクロコンピュータ12051は、車外情報検出ユニット12030又は車内情報検出ユニット12040で取得される車内外の情報に基づいて、駆動力発生装置、ステアリング機構又は制動装置の制御目標値を演算し、駆動系制御ユニット12010に対して制御指令を出力することができる。例えば、マイクロコンピュータ12051は、車両の衝突回避あるいは衝撃緩和、車間距離に基づく追従走行、車速維持走行、車両の衝突警告、又は車両のレーン逸脱警告等を含むADAS(Advanced Driver Assistance System)の機能実現を目的とした協調制御を行うことができる。 The microcomputer 12051 can calculate control target values for the driving force generation device, the steering mechanism, or the braking device based on the information inside and outside the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040, and can output control commands to the drive system control unit 12010. For example, the microcomputer 12051 can perform cooperative control aimed at realizing the functions of an ADAS (Advanced Driver Assistance System), including vehicle collision avoidance or impact mitigation, following driving based on the inter-vehicle distance, vehicle speed maintenance driving, vehicle collision warning, vehicle lane departure warning, and the like.
 また、マイクロコンピュータ12051は、車外情報検出ユニット12030又は車内情報検出ユニット12040で取得される車両の周囲の情報に基づいて駆動力発生装置、ステアリング機構又は制動装置等を制御することにより、運転者の操作に拠らずに自律的に走行する自動運転等を目的とした協調制御を行うことができる。 Further, the microcomputer 12051 can perform cooperative control aimed at automated driving or the like, in which the vehicle travels autonomously without depending on the driver's operation, by controlling the driving force generation device, the steering mechanism, the braking device, and the like based on information around the vehicle acquired by the vehicle exterior information detection unit 12030 or the in-vehicle information detection unit 12040.
 また、マイクロコンピュータ12051は、車外情報検出ユニット12030で取得される車外の情報に基づいて、ボディ系制御ユニット12030に対して制御指令を出力することができる。例えば、マイクロコンピュータ12051は、車外情報検出ユニット12030で検知した先行車又は対向車の位置に応じてヘッドランプを制御し、ハイビームをロービームに切り替える等の防眩を図ることを目的とした協調制御を行うことができる。 Further, the microcomputer 12051 can output a control command to the body system control unit 12030 based on information outside the vehicle acquired by the vehicle exterior information detection unit 12030. For example, the microcomputer 12051 can perform cooperative control for anti-glare purposes, such as controlling the headlamps according to the position of a preceding vehicle or an oncoming vehicle detected by the vehicle exterior information detection unit 12030 and switching the high beam to the low beam.
 音声画像出力部12052は、車両の搭乗者又は車外に対して、視覚的又は聴覚的に情報を通知することが可能な出力装置へ音声及び画像のうちの少なくとも一方の出力信号を送信する。図30の例では、出力装置として、オーディオスピーカ12061、表示部12062及びインストルメントパネル12063が例示されている。表示部12062は、例えば、オンボードディスプレイ及びヘッドアップディスプレイの少なくとも一つを含んでいてもよい。 The audio image output unit 12052 transmits the output signal of at least one of the audio and the image to the output device capable of visually or audibly notifying the passenger or the outside of the vehicle of the information. In the example of FIG. 30, an audio speaker 12061, a display unit 12062, and an instrument panel 12063 are exemplified as output devices. The display unit 12062 may include, for example, at least one of an onboard display and a heads-up display.
 図31は、撮像部12031の設置位置の例を示す図である。 FIG. 31 is a diagram showing an example of the installation position of the imaging unit 12031.
 図31では、撮像部12031として、撮像部12101、12102、12103、12104、12105を有する。 In FIG. 31, the imaging unit 12031 includes imaging units 12101, 12102, 12103, 12104, and 12105.
 撮像部12101、12102、12103、12104、12105は、例えば、車両12100のフロントノーズ、サイドミラー、リアバンパ、バックドア及び車室内のフロントガラスの上部等の位置に設けられる。フロントノーズに備えられる撮像部12101及び車室内のフロントガラスの上部に備えられる撮像部12105は、主として車両12100の前方の画像を取得する。サイドミラーに備えられる撮像部12102、12103は、主として車両12100の側方の画像を取得する。リアバンパ又はバックドアに備えられる撮像部12104は、主として車両12100の後方の画像を取得する。車室内のフロントガラスの上部に備えられる撮像部12105は、主として先行車両又は、歩行者、障害物、信号機、交通標識又は車線等の検出に用いられる。 The imaging units 12101, 12102, 12103, 12104, 12105 are provided at positions such as, for example, the front nose, side mirrors, rear bumpers, back doors, and the upper part of the windshield in the vehicle interior of the vehicle 12100. The image pickup unit 12101 provided on the front nose and the image pickup section 12105 provided on the upper part of the windshield in the vehicle interior mainly acquire an image in front of the vehicle 12100. The imaging units 12102 and 12103 provided in the side mirrors mainly acquire images of the side of the vehicle 12100. The imaging unit 12104 provided on the rear bumper or the back door mainly acquires an image of the rear of the vehicle 12100. The imaging unit 12105 provided on the upper part of the windshield in the vehicle interior is mainly used for detecting a preceding vehicle, a pedestrian, an obstacle, a traffic light, a traffic sign, a lane, or the like.
 なお、図31には、撮像部12101ないし12104の撮影範囲の一例が示されている。撮像範囲12111は、フロントノーズに設けられた撮像部12101の撮像範囲を示し、撮像範囲12112,12113は、それぞれサイドミラーに設けられた撮像部12102,12103の撮像範囲を示し、撮像範囲12114は、リアバンパ又はバックドアに設けられた撮像部12104の撮像範囲を示す。例えば、撮像部12101ないし12104で撮像された画像データが重ね合わせられることにより、車両12100を上方から見た俯瞰画像が得られる。 Note that FIG. 31 shows an example of the imaging ranges of the imaging units 12101 to 12104. The imaging range 12111 indicates the imaging range of the imaging unit 12101 provided on the front nose, the imaging ranges 12112 and 12113 indicate the imaging ranges of the imaging units 12102 and 12103 provided on the side mirrors, respectively, and the imaging range 12114 indicates the imaging range of the imaging unit 12104 provided on the rear bumper or the back door. For example, by superimposing the image data captured by the imaging units 12101 to 12104, a bird's-eye view image of the vehicle 12100 as viewed from above can be obtained.
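Combining the four camera views into the bird's-eye view described above amounts to warping each image onto a common ground plane and overlaying the results. As a minimal sketch of only the overlay step (the per-camera ground-plane warping is assumed already done; the grid representation and the first-camera-wins rule are illustrative assumptions, not details from the publication):

```python
def composite_birds_eye(warped_views):
    """Overlay ground-plane-warped camera views into one top-down grid.

    warped_views: list of equal-sized 2D lists; None marks pixels the
    camera does not cover.  The first view covering a pixel wins.
    """
    h = len(warped_views[0])
    w = len(warped_views[0][0])
    out = [[None] * w for _ in range(h)]
    for view in warped_views:
        for y in range(h):
            for x in range(w):
                if out[y][x] is None and view[y][x] is not None:
                    out[y][x] = view[y][x]
    return out

# Two toy 2x2 views covering complementary areas around the vehicle
front = [[1, None], [None, None]]
rear = [[None, 2], [3, None]]
print(composite_birds_eye([front, rear]))  # [[1, 2], [3, None]]
```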
 撮像部12101ないし12104の少なくとも1つは、距離情報を取得する機能を有していてもよい。例えば、撮像部12101ないし12104の少なくとも1つは、複数の撮像素子からなるステレオカメラであってもよいし、位相差検出用の画素を有する撮像素子であってもよい。 At least one of the imaging units 12101 to 12104 may have a function of acquiring distance information. For example, at least one of the image pickup units 12101 to 12104 may be a stereo camera composed of a plurality of image pickup elements, or may be an image pickup device having pixels for phase difference detection.
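For the stereo-camera case mentioned above, distance can be recovered from the disparity between matched pixels of a rectified pair. A sketch of the standard pinhole-stereo relation (the publication does not specify a formula; the focal length, baseline, and disparity values here are purely illustrative):

```python
def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair.

    focal_px     -- focal length expressed in pixels
    baseline_m   -- distance between the two cameras, in meters
    disparity_px -- horizontal pixel shift of the same scene point
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 1000 px focal length, 0.2 m baseline, 25 px disparity
print(stereo_depth(1000.0, 0.2, 25.0))  # 8.0 (meters)
```

Phase-difference detection pixels yield the same kind of disparity signal, just measured within a single sensor rather than between two cameras.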
 例えば、マイクロコンピュータ12051は、撮像部12101ないし12104から得られた距離情報を基に、撮像範囲12111ないし12114内における各立体物までの距離と、この距離の時間的変化(車両12100に対する相対速度)を求めることにより、特に車両12100の進行路上にある最も近い立体物で、車両12100と略同じ方向に所定の速度(例えば、0km/h以上)で走行する立体物を先行車として抽出することができる。さらに、マイクロコンピュータ12051は、先行車の手前に予め確保すべき車間距離を設定し、自動ブレーキ制御(追従停止制御も含む)や自動加速制御(追従発進制御も含む)等を行うことができる。このように運転者の操作に拠らずに自律的に走行する自動運転等を目的とした協調制御を行うことができる。 For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 obtains the distance to each three-dimensional object within the imaging ranges 12111 to 12114 and the temporal change of this distance (the relative speed with respect to the vehicle 12100), and can thereby extract, as the preceding vehicle, the nearest three-dimensional object on the traveling path of the vehicle 12100 that is traveling at a predetermined speed (for example, 0 km/h or more) in substantially the same direction as the vehicle 12100. Further, the microcomputer 12051 can set in advance an inter-vehicle distance to be maintained from the preceding vehicle, and can perform automatic brake control (including follow-up stop control), automatic acceleration control (including follow-up start control), and the like. In this way, it is possible to perform cooperative control aimed at automatic driving or the like in which the vehicle travels autonomously without depending on the driver's operation.
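The preceding-vehicle extraction described here can be sketched as: derive each object's speed from the time change of its distance, keep on-path objects moving in roughly the same direction at or above the threshold speed, and take the nearest one. The object structure, sampling interval, and thresholds below are illustrative assumptions, not details from the publication:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrackedObject:
    distance_m: float        # current distance to the object
    prev_distance_m: float   # distance one sample earlier
    on_path: bool            # lies on the ego vehicle's traveling path

def relative_speed_mps(obj: TrackedObject, dt_s: float) -> float:
    # Positive when the gap to the object is opening.
    return (obj.distance_m - obj.prev_distance_m) / dt_s

def pick_preceding_vehicle(objs, dt_s: float, ego_speed_mps: float,
                           min_speed_mps: float = 0.0) -> Optional[TrackedObject]:
    candidates = []
    for o in objs:
        if not o.on_path:
            continue
        # Object's absolute speed = ego speed + rate of change of the gap.
        obj_speed = ego_speed_mps + relative_speed_mps(o, dt_s)
        if obj_speed >= min_speed_mps:  # e.g. 0 km/h or more, same direction
            candidates.append(o)
    # The nearest qualifying on-path object is treated as the preceding vehicle.
    return min(candidates, key=lambda o: o.distance_m, default=None)
```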
 例えば、マイクロコンピュータ12051は、撮像部12101ないし12104から得られた距離情報を元に、立体物に関する立体物データを、2輪車、普通車両、大型車両、歩行者、電柱等その他の立体物に分類して抽出し、障害物の自動回避に用いることができる。例えば、マイクロコンピュータ12051は、車両12100の周辺の障害物を、車両12100のドライバが視認可能な障害物と視認困難な障害物とに識別する。そして、マイクロコンピュータ12051は、各障害物との衝突の危険度を示す衝突リスクを判断し、衝突リスクが設定値以上で衝突可能性がある状況であるときには、オーディオスピーカ12061や表示部12062を介してドライバに警報を出力することや、駆動系制御ユニット12010を介して強制減速や回避操舵を行うことで、衝突回避のための運転支援を行うことができる。 For example, based on the distance information obtained from the imaging units 12101 to 12104, the microcomputer 12051 can classify and extract three-dimensional object data on three-dimensional objects into motorcycles, ordinary vehicles, large vehicles, pedestrians, and other three-dimensional objects such as utility poles, and use the data for automatic avoidance of obstacles. For example, the microcomputer 12051 distinguishes obstacles around the vehicle 12100 into obstacles that the driver of the vehicle 12100 can see and obstacles that are difficult to see. Then, the microcomputer 12051 determines the collision risk indicating the degree of danger of collision with each obstacle, and when the collision risk is equal to or higher than a set value and there is a possibility of collision, it can provide driving assistance for collision avoidance by outputting an alarm to the driver via the audio speaker 12061 or the display unit 12062, or by performing forced deceleration or avoidance steering via the drive system control unit 12010.
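The warn-or-intervene logic above, comparing a collision-risk value against a set threshold, can be sketched using time-to-collision (TTC) as one common proxy for the "collision risk" the text mentions. The TTC thresholds and the action names are illustrative assumptions, not values from the publication:

```python
def time_to_collision_s(distance_m: float, closing_speed_mps: float) -> float:
    """Seconds until contact if the closing speed stays constant."""
    if closing_speed_mps <= 0:     # gap not closing: no collision on this course
        return float("inf")
    return distance_m / closing_speed_mps

def collision_response(distance_m: float, closing_speed_mps: float,
                       warn_ttc_s: float = 3.0, brake_ttc_s: float = 1.5) -> str:
    ttc = time_to_collision_s(distance_m, closing_speed_mps)
    if ttc <= brake_ttc_s:
        return "forced_deceleration"  # intervene via the drive system control unit
    if ttc <= warn_ttc_s:
        return "driver_alarm"         # warn via the speaker or display
    return "no_action"

print(collision_response(30.0, 15.0))  # TTC = 2.0 s -> "driver_alarm"
```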
 撮像部12101ないし12104の少なくとも1つは、赤外線を検出する赤外線カメラであってもよい。例えば、マイクロコンピュータ12051は、撮像部12101ないし12104の撮像画像中に歩行者が存在するか否かを判定することで歩行者を認識することができる。かかる歩行者の認識は、例えば赤外線カメラとしての撮像部12101ないし12104の撮像画像における特徴点を抽出する手順と、物体の輪郭を示す一連の特徴点にパターンマッチング処理を行って歩行者か否かを判別する手順によって行われる。マイクロコンピュータ12051が、撮像部12101ないし12104の撮像画像中に歩行者が存在すると判定し、歩行者を認識すると、音声画像出力部12052は、当該認識された歩行者に強調のための方形輪郭線を重畳表示するように、表示部12062を制御する。また、音声画像出力部12052は、歩行者を示すアイコン等を所望の位置に表示するように表示部12062を制御してもよい。 At least one of the imaging units 12101 to 12104 may be an infrared camera that detects infrared rays. For example, the microcomputer 12051 can recognize a pedestrian by determining whether or not a pedestrian is present in the images captured by the imaging units 12101 to 12104. Such pedestrian recognition is performed by, for example, a procedure of extracting feature points in the images captured by the imaging units 12101 to 12104 as infrared cameras, and a procedure of performing pattern matching processing on the series of feature points indicating the contour of an object to determine whether or not the object is a pedestrian. When the microcomputer 12051 determines that a pedestrian is present in the images captured by the imaging units 12101 to 12104 and recognizes the pedestrian, the audio image output unit 12052 controls the display unit 12062 so as to superimpose a rectangular contour line for emphasis on the recognized pedestrian. The audio image output unit 12052 may also control the display unit 12062 so as to display an icon or the like indicating the pedestrian at a desired position.
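The pattern-matching step of the recognition procedure above can be sketched as a plain normalized-correlation check of an image patch against a contour template. Real pipelines use far richer features and classifiers; the template representation and the 0.8 threshold below are illustrative assumptions, not details from the publication:

```python
def normalized_correlation(patch, template):
    """Normalized cross-correlation of two equal-sized 2D grayscale grids."""
    flat_p = [v for row in patch for v in row]
    flat_t = [v for row in template for v in row]
    mp = sum(flat_p) / len(flat_p)
    mt = sum(flat_t) / len(flat_t)
    num = sum((p - mp) * (t - mt) for p, t in zip(flat_p, flat_t))
    den = (sum((p - mp) ** 2 for p in flat_p)
           * sum((t - mt) ** 2 for t in flat_t)) ** 0.5
    return num / den if den else 0.0

def is_pedestrian(patch, template, threshold: float = 0.8) -> bool:
    # A patch matching the pedestrian-contour template closely enough
    # is classified as a pedestrian.
    return normalized_correlation(patch, template) >= threshold
```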
 本明細書において、システムとは、複数の装置により構成される装置全体を表すものである。 In the present specification, the system represents the entire device composed of a plurality of devices.
 なお、本明細書に記載された効果はあくまで例示であって限定されるものでは無く、また他の効果があってもよい。 Note that the effects described in this specification are merely examples and are not limited, and other effects may be obtained.
 なお、本技術の実施の形態は、上述した実施の形態に限定されるものではなく、本技術の要旨を逸脱しない範囲において種々の変更が可能である。 The embodiment of the present technology is not limited to the above-described embodiment, and various changes can be made without departing from the gist of the present technology.
 なお、本技術は以下のような構成も取ることができる。
(1)
 入射光の光量に応じて光電変換により画素信号を生成する光電変換部と、
 複数の前記光電変換部がアレイ状に配置されている受光面に対して、前記入射光を合焦させる、複数のレンズからなるレンズ群と、
 前記レンズ群のうち、前記入射光の入射方向に対して最下位層を構成するレンズであり、固定された状態で配置されている最下位層レンズと、前記受光面との間にカラーフィルタと
 を備え、
 隣接する複数の前記光電変換部上に配置されている前記カラーフィルタの色は同色とされている
 撮像装置。
(2)
 前記光電変換部毎にオンチップレンズをさらに備える
 前記(1)に記載の撮像装置。
(3)
 2×2の4個の前記光電変換部毎にオンチップレンズをさらに備える
 前記(1)に記載の撮像装置。
(4)
 上下または左右に隣接する2個の前記光電変換部毎に、オンチップレンズをさらに備える
 前記(1)に記載の撮像装置。
(5)
 前記カラーフィルタの層には、遮光部材で形成された遮光部が含まれる
 前記(1)乃至(4)のいずれかに記載の撮像装置。
(6)
 前記遮光部は、前記光電変換部の間にも設けられている
 前記(5)に記載の撮像装置。
(7)
 異なる色が配置されているカラーフィルタ間の前記遮光部の線幅は、同一色が配置されているカラーフィルタ間の前記遮光部の線幅よりも太い
 前記(5)または(6)に記載の撮像装置。
(8)
 前記遮光部は、異なる色が配置されているカラーフィルタ間に設けられている
 前記(5)または(6)に記載の撮像装置。
(9)
 前記光電変換部間に、遮光部材で形成された遮光部をさらに備える
 前記(8)に記載の撮像装置。
(10)
 入射光の光量に応じて光電変換により画素信号を生成する光電変換部と、
 複数の前記光電変換部がアレイ状に配置されている受光面に対して、前記入射光を合焦させる、複数のレンズからなるレンズ群と、
 前記レンズ群のうち、前記入射光の入射方向に対して最下位層を構成するレンズであり、固定された状態で配置されている最下位層レンズと、前記受光面との間にカラーフィルタと
 を備え、
 隣接する複数の前記光電変換部上に配置されている前記カラーフィルタの色は同色とされている
 撮像装置と、
 前記撮像装置からの信号を処理する処理部と
 を備える電子機器。
The present technology can also have the following configurations.
(1)
A photoelectric conversion unit that generates a pixel signal by photoelectric conversion according to the amount of incident light,
A lens group composed of a plurality of lenses that focuses the incident light on a light receiving surface in which the plurality of photoelectric conversion units are arranged in an array.
A color filter provided between the light receiving surface and a lowest-layer lens, the lowest-layer lens being the lens that constitutes the lowest layer of the lens group with respect to the incident direction of the incident light and being arranged in a fixed state,
An imaging device in which the colors of the color filters arranged on a plurality of adjacent photoelectric conversion units are the same.
(2)
The imaging device according to (1), further including an on-chip lens for each photoelectric conversion unit.
(3)
The imaging device according to (1), further comprising an on-chip lens for each of the four 2 × 2 photoelectric conversion units.
(4)
The imaging device according to (1), further including an on-chip lens for each pair of the photoelectric conversion units adjacent to each other vertically or horizontally.
(5)
The imaging device according to any one of (1) to (4) above, wherein the layer of the color filter includes a light-shielding portion formed of a light-shielding member.
(6)
The imaging device according to (5) above, wherein the light-shielding unit is also provided between the photoelectric conversion units.
(7)
The imaging device according to (5) or (6) above, wherein the line width of the light-shielding portion between color filters in which different colors are arranged is wider than the line width of the light-shielding portion between color filters in which the same color is arranged.
(8)
The imaging device according to (5) or (6) above, wherein the light-shielding portion is provided between color filters in which different colors are arranged.
(9)
The imaging device according to (8) above, further comprising a light-shielding unit formed of a light-shielding member between the photoelectric conversion units.
(10)
A photoelectric conversion unit that generates a pixel signal by photoelectric conversion according to the amount of incident light,
A lens group composed of a plurality of lenses that focuses the incident light on a light receiving surface in which the plurality of photoelectric conversion units are arranged in an array.
A color filter provided between the light receiving surface and a lowest-layer lens, the lowest-layer lens being the lens that constitutes the lowest layer of the lens group with respect to the incident direction of the incident light and being arranged in a fixed state,
An image pickup device in which the colors of the color filters arranged on a plurality of adjacent photoelectric conversion units are the same, and
An electronic device including a processing unit that processes a signal from the imaging device.
 1 撮像装置, 10 一体化構造部, 11 固体撮像素子, 12 ガラス基板, 13 接着剤, 15 接着剤, 16 レンズ群, 17 回路基板, 18 アクチュエータ, 19 コネクタ, 20 スペーサ, 21 画素領域, 22 制御回路, 23 ロジック回路, 25 ロジック基板, 26 画素センサ基板, 27 カラーフィルタ, 28 オンチップレンズ, 29 はんだボール, 32 画素, 33 画素アレイ部, 34 垂直駆動回路, 35 カラム信号処理回路, 36 水平駆動回路, 37 出力回路, 38 制御回路, 39 入出力端子, 40 画素駆動配線, 41 垂直信号線, 42 水平信号線, 51 フォトダイオード, 52 第1転送トランジスタ, 53 メモリ部, 54 第2転送トランジスタ, 56 リセットトランジスタ, 57 増幅トランジスタ, 58 選択トランジスタ, 59 排出トランジスタ, 81 半導体基板, 82 多層配線層, 83 配線層, 84 層間絶縁膜, 85 シリコン貫通孔, 86 絶縁膜, 87 接続導体, 88 シリコン貫通電極, 90 再配線, 101 半導体基板, 102 多層配線層, 103 配線層, 104 層間絶縁膜, 105 チップ貫通電極, 106 接続用配線, 107 絶縁膜, 109 シリコン貫通電極, 131 レンズ, 201 画素間遮光部, 202 平坦化膜, 221 トレンチ 1 imaging device, 10 integrated structure unit, 11 solid-state imaging element, 12 glass substrate, 13 adhesive, 15 adhesive, 16 lens group, 17 circuit board, 18 actuator, 19 connector, 20 spacer, 21 pixel region, 22 control circuit, 23 logic circuit, 25 logic board, 26 pixel sensor board, 27 color filter, 28 on-chip lens, 29 solder ball, 32 pixel, 33 pixel array unit, 34 vertical drive circuit, 35 column signal processing circuit, 36 horizontal drive circuit, 37 output circuit, 38 control circuit, 39 input/output terminal, 40 pixel drive wiring, 41 vertical signal line, 42 horizontal signal line, 51 photodiode, 52 first transfer transistor, 53 memory unit, 54 second transfer transistor, 56 reset transistor, 57 amplification transistor, 58 selection transistor, 59 discharge transistor, 81 semiconductor substrate, 82 multilayer wiring layer, 83 wiring layer, 84 interlayer insulating film, 85 through-silicon hole, 86 insulating film, 87 connection conductor, 88 through-silicon electrode, 90 rewiring, 101 semiconductor substrate, 102 multilayer wiring layer, 103 wiring layer, 104 interlayer insulating film, 105 through-chip electrode, 106 connection wiring, 107 insulating film, 109 through-silicon electrode, 131 lens, 201 inter-pixel light-shielding portion, 202 flattening film, 221 trench

Claims (10)

  1.  入射光の光量に応じて光電変換により画素信号を生成する光電変換部と、
     複数の前記光電変換部がアレイ状に配置されている受光面に対して、前記入射光を合焦させる、複数のレンズからなるレンズ群と、
     前記レンズ群のうち、前記入射光の入射方向に対して最下位層を構成するレンズであり、固定された状態で配置されている最下位層レンズと、前記受光面との間にカラーフィルタと
     を備え、
     隣接する複数の前記光電変換部上に配置されている前記カラーフィルタの色は同色とされている
     撮像装置。
    A photoelectric conversion unit that generates a pixel signal by photoelectric conversion according to the amount of incident light,
    A lens group composed of a plurality of lenses that focuses the incident light on a light receiving surface in which the plurality of photoelectric conversion units are arranged in an array.
    A color filter provided between the light receiving surface and a lowest-layer lens, the lowest-layer lens being the lens that constitutes the lowest layer of the lens group with respect to the incident direction of the incident light and being arranged in a fixed state,
    An imaging device in which the colors of the color filters arranged on a plurality of adjacent photoelectric conversion units are the same.
  2.  前記光電変換部毎にオンチップレンズをさらに備える
     請求項1に記載の撮像装置。
    The imaging device according to claim 1, further comprising an on-chip lens for each photoelectric conversion unit.
  3.  2×2の4個の前記光電変換部毎にオンチップレンズをさらに備える
     請求項1に記載の撮像装置。
    The imaging device according to claim 1, further comprising an on-chip lens for each of the four 2 × 2 photoelectric conversion units.
  4.  上下または左右に隣接する2個の前記光電変換部毎に、オンチップレンズをさらに備える
     請求項1に記載の撮像装置。
    The imaging device according to claim 1, further comprising an on-chip lens for each pair of the photoelectric conversion units adjacent to each other vertically or horizontally.
  5.  前記カラーフィルタには、遮光部材で形成された遮光部が含まれる
     請求項1に記載の撮像装置。
    The imaging device according to claim 1, wherein the color filter includes a light-shielding portion formed of a light-shielding member.
  6.  前記遮光部は、前記光電変換部の間にも設けられている
     請求項5に記載の撮像装置。
    The imaging device according to claim 5, wherein the light-shielding unit is also provided between the photoelectric conversion units.
  7.  異なる色が配置されているカラーフィルタ間の前記遮光部の線幅は、同一色が配置されているカラーフィルタ間の前記遮光部の線幅よりも太い
     請求項5に記載の撮像装置。
    The imaging device according to claim 5, wherein the line width of the light-shielding portion between the color filters in which different colors are arranged is wider than the line width of the light-shielding portion between the color filters in which the same color is arranged.
  8.  前記遮光部は、異なる色が配置されているカラーフィルタ間に設けられている
     請求項5に記載の撮像装置。
    The imaging device according to claim 5, wherein the light-shielding portion is provided between color filters in which different colors are arranged.
  9.  前記光電変換部間に、遮光部材で形成された遮光部をさらに備える
     請求項8に記載の撮像装置。
    The imaging device according to claim 8, further comprising a light-shielding unit formed of a light-shielding member between the photoelectric conversion units.
  10.  入射光の光量に応じて光電変換により画素信号を生成する光電変換部と、
     複数の前記光電変換部がアレイ状に配置されている受光面に対して、前記入射光を合焦させる、複数のレンズからなるレンズ群と、
     前記レンズ群のうち、前記入射光の入射方向に対して最下位層を構成するレンズであり、固定された状態で配置されている最下位層レンズと、前記受光面との間にカラーフィルタと
     を備え、
     隣接する複数の前記光電変換部上に配置されている前記カラーフィルタの色は同色とされている
     撮像装置と、
     前記撮像装置からの信号を処理する処理部と
     を備える電子機器。
    A photoelectric conversion unit that generates a pixel signal by photoelectric conversion according to the amount of incident light,
    A lens group composed of a plurality of lenses that focuses the incident light on a light receiving surface in which the plurality of photoelectric conversion units are arranged in an array.
    A color filter provided between the light receiving surface and a lowest-layer lens, the lowest-layer lens being the lens that constitutes the lowest layer of the lens group with respect to the incident direction of the incident light and being arranged in a fixed state,
    An image pickup device in which the colors of the color filters arranged on a plurality of adjacent photoelectric conversion units are the same, and
    An electronic device including a processing unit that processes a signal from the imaging device.
PCT/JP2021/004221 2020-02-20 2021-02-05 Imaging apparatus and electronic device WO2021166672A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2022501786A JPWO2021166672A1 (en) 2020-02-20 2021-02-05

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-027192 2020-02-20
JP2020027192 2020-02-20

Publications (1)

Publication Number Publication Date
WO2021166672A1 true WO2021166672A1 (en) 2021-08-26

Family

ID=77391960

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/004221 WO2021166672A1 (en) 2020-02-20 2021-02-05 Imaging apparatus and electronic device

Country Status (2)

Country Link
JP (1) JPWO2021166672A1 (en)
WO (1) WO2021166672A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014016537A (en) * 2012-07-10 2014-01-30 Sony Corp Imaging device
WO2017169479A1 (en) * 2016-03-31 2017-10-05 株式会社ニコン Image capturing element and image capturing device
WO2019035374A1 (en) * 2017-08-18 2019-02-21 ソニー株式会社 Image pickup element and image pickup device

Also Published As

Publication number Publication date
JPWO2021166672A1 (en) 2021-08-26


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21756190

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022501786

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21756190

Country of ref document: EP

Kind code of ref document: A1